sentences (sequence) | labels (sequence)
---|---
[
"Idioms are unlike most phrases in two important ways.",
"First, words in an idiom have non-canonical meanings.",
"Second, the noncanonical meanings of words in an idiom are contingent on the presence of other words in the idiom.",
"Linguistic theories differ on whether these properties depend on one another, as well as whether special theoretical machinery is needed to accommodate idioms.",
"We define two measures that correspond to the properties above, and we implement them using BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019).",
"We show that English idioms fall at the expected intersection of the two dimensions, but that the dimensions themselves are not correlated.",
"Our results suggest that special machinery to handle idioms may not be warranted.",
"Idioms, expressions like rock the boat , bring together two phenomena which are of fundamental interest in understanding language.",
"First, they exemplify non-conventional word meaning (Weinreich, 1969; Nunberg et al., 1994).",
"The words rock and boat in this idiom seem to carry particular meanings, something like destabilize and situation , respectively, which are different from the conventional meanings of these words in other contexts.",
"Second, unlike other kinds of nonconventional word use such as novel metaphor, there is a contingency relationship between words in an idiom (Wood, 1986; Pulman, 1993).",
"It is the specific combination of the words rock and boat that has come to carry the idiomatic meaning.",
"Shake the canoe does not have the same accepted meaning.",
"In the literature, most discussions of idioms make use of prototypical examples such as rock the boat .",
"This obscures an important fact: There is no generally agreed-upon definition of idiom ; phrase types such as light verb constructions (e.g., take a walk ) and semantically transparent collocations (e.g., now or never ) are sometimes included in the class (e.g., Palmer, 1981) and sometimes not (e.g., Cowie, 1981).",
"This lack of homogeneity among idiomatic phrases has been recognized as a challenge in the domain of NLP, with Sag et al. (2002) suggesting that a variety of techniques are needed to deal with different kinds of multi-word expressions.",
"What does seem clear is that prototypical cases of idiomatic phrases tend to have higher levels of both non-conventional meaning and contingency between words.",
"This combination of non-conventionality and contingency has led to a number of theories that treat idioms as exceptions to the mechanisms that build phrases compositionally.",
"These theories posit special machinery for handling idioms (e.g., Weinreich, 1969; Bobrow and Bell, 1973; Swinney and Cutler, 1979).",
"An early but representative example of this position is Weinreich (1969), who posits the addition of two structures to linguistic theory: (1) an idiom list , where each entry contains a string of morphemes, its associated syntactic structure, and its sense description, and (2) an idiom comparison rule , which matches strings against the idiom list.",
"Such theories must of course provide principles for addressing the difficult problem of distinguishing idioms from other instances of non-conventionality or contingency.",
"We propose an alternative approach, which views idioms not as exceptional, but as merely the result of the interaction of two independently motivated cognitive mechanisms.",
"The first allows words to be interpreted in non-canonical ways depending on context.",
"The second allows for the storage and reuse of linguistic structures, not just words but larger phrases as well (e.g., Di Sciullo and Williams, 1987; Jackendoff, 2002; O'Donnell, 2015).",
"There is disagreement in the literature about the relationship between these two properties; some theories of representation predict that the only elements that get stored are those with non-canonical meanings (e.g., Bloomfield, 1933; Pinker and Prince, 1988), whereas others predict that storage can happen no matter what (e.g., O'Donnell, 2015; Tremblay and Baayen, 2010).",
"We predict that, consistent with the latter set of theories, neither mechanism should depend on the other.",
"This paper presents evidence that prototypical idioms occupy a particular region of the space of these two mechanisms, but are not otherwise exceptional.",
"We define two measures: conventionality , meant to measure the degree to which words are interpreted in a canonical way, and contingency , a statistical association measure meant to capture the degree to which the presence of one word form depends on the presence of another.",
"Our implementations make use of the pre-trained language models BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019).",
"We construct a novel corpus of English phrases typically called idioms, and show that these phrases fall at the intersection of low conventionality and high contingency, but that the two measures are not correlated and there are no clear discontinuities that separate idioms from other types of phrases.",
"Our experiments also reveal hitherto unnoticed asymmetries in the behavior of head and non-head words of idioms.",
"In idioms, the dependent word (e.g., boat in rock the boat ) shows greater deviation from its conventional meaning than the head.",
"In this section we describe the motivation behind our two measures and lay out our predictions about their interaction.",
"Our first measure, conventionality , captures the extent to which subparts of a phrase contribute their normal meaning to the phrase.",
"Most of language is highly conventional; we can combine a relatively small set of units in novel ways, precisely because we can trust that those units will have similar meanings across contexts.",
"At the same time, the linguistic system allows structures like metaphors and idioms, which use words in non-conventional ways.",
"Our conventionality measure is intended to distinguish phrases based on how conventional the meanings of their words are.",
"Our second measure, contingency , captures the extent to which words occur together in a phrase and, thus, measures the degree to which there is a statistical contingency: the presence of one or more words strongly signals the likely presence of the others.",
"This notion of contingency has also been argued to be a critical piece of evidence used by language learners in deciding which linguistic structures to store (e.g., Hay, 2003; O'Donnell, 2015).",
"To aid in visualizing the space of phrase types we expect to find in language, we place our two dimensions on the axes of a 2x2 matrix, where each cell contains phrases that are either high or low on the conventionality scale, and high or low on the contingency scale.",
"The matrix is given in Figure 1, with the types of phrases we expect in each cell.",
"We expect our measures to place idioms primarily in the top left corner of the space.",
"At the same time, we predict a lack of correlation between the measures and a lack of major discontinuities in the space.",
"We take these predictions to be consistent with theories that factorize the problem into two mechanisms (captured by our dimensions of conventionality and contingency).",
"We contend that this factorization provides a natural way of characterizing not just idioms, but also collocations and novel metaphors, alongside regular language use.",
"In this section, we describe the creation of our corpus of idioms and define measures of conventionality and contingency.",
"Given that definitions of idioms differ in which phrases in our dataset count as idioms (some would include semantically transparent collocations, others would not), we do not want to commit to any particular definition a priori, while still acknowledging that people share somewhat weak but broad intuitions about idiomaticity.",
"As we discuss below, our idiom dataset consists of phrases that have at some point been called idioms in the linguistics literature.",
"We built a corpus of sentences containing idioms and non-idioms, all gathered from the British National Corpus (BNC; Burnard, 2000), which is a 100 million word collection of written and spoken English from the late twentieth century.",
"The corpus we construct is made up of sentences containing target phrases and matched phrases , which we detail below.",
"The target phrases in our corpus consist of 207 English phrasal expressions, some of which are prototypical idioms (e.g., rock the boat ) and some of which are boundary cases that are sometimes considered idioms, such as collocations (e.g., bits and pieces ).",
"These expressions are divided into four categories based on their syntax: verb object (VO), adjective noun (AN), noun noun (NN), and binomial (B) expressions.",
"Binomial expressions are fixed pairs of words joined by and or or (e.g., wear and tear ).",
"The phrases were selected from lists of idioms published in linguistics papers (Riehemann, 2001; Morgan and Levy, 2016; Stone, 2016; Bruening et al., 2018; Bruening, 2019; Titone et al., 2019).",
"We added the lists to our dataset one-by-one until we had at least 30 phrases of each syntactic type.",
"We chose these four types in advance in order to investigate a variety of syntactic structures, preventing our results from being too heavily skewed by potential syntactic confounds in any particular construction.",
"The full list of target phrases is given in Appendix A.",
"The numerical distribution of phrases is given in Table 1.",
"The BNC was constituency parsed using the Stanford Parser (Manning et al., 2014), then Tregex (Levy and Andrew, 2006) expressions were used to find instances of each target phrase.",
"Matched, non-idiomatic sentences were also extracted in order to allow for direct comparison of conventionality scores for the same word in idiomatic and non-idiomatic contexts.",
"To obtain these matches, we used Tregex to find sentences that included a phrase with the same syntactic structure as the target phrase.",
"Each target phrase was used to obtain two sets of matched phrases: one set where the head word remained constant and one where the non-head word remained constant.",
"For example, to get head word matches of the adjective noun combination sour grapes , we found sentences where the lemma grape was modified with an adjective other than sour .",
"Below is an example of a sentence found by this method: Not a special grape for winemaking, nor a hidden architectural treasure, but hot steam gushing out of the earth.",
"The number of instances of the matched phrases ranged from 29 (the number of verb object phrases with the object logs and a verb other than saw ) to the tens of thousands (e.g., for verb object phrases beginning with have ), with the majority falling in the range of a few hundred to a few thousand.",
"Issues of sparsity were more pronounced among the target phrases, which ranged from one instance ( word salad ) to 2287 ( up and down ).",
"Because of this sparsity, some of the analyses described below focus on a subset of the phrases.",
"The syntactic consistency between the target and matched phrases is an important feature of our corpus, as it allows us to compare conventionality across semantic contexts while controlling for syntactic structure.",
"Our measure of conventionality is built on the idea that a word being used in a conventional way should have similar or related meanings across contexts, whereas a non-conventional word meaning can be idiosyncratic to particular contexts.",
"In the case of idioms, we expect that the difference between a word's meaning in an idiom and the word's conventional meaning should be large.",
"On the other hand, there should be little difference between the word's meaning in a non-idiom and the word's conventional meaning.",
"Our measure makes use of the language model BERT (Devlin et al., 2019) to obtain contextualized embeddings for the words in our dataset.",
"BERT was trained on a corpus of English text, both nonfiction and fiction, with the objectives of masked language modeling and next sentence prediction.",
"To obtain matched phrases, we follow work such as Gazdar (1981), Rothstein (1991), and Kayne (1994) in treating the first element in a binomial as the head.",
"We discuss this further in Section 6.",
"For each of our phrases, we compute the conventionality measure separately for the head and non-head words.",
"For each case (head and non-head), we first take the average embedding for the word across sentences not containing the phrase.",
"That is, for rock in rock the boat , we get the embeddings for the word rock in sentences where it does not occur with the direct object boat .",
"Let $O$ be a set of instances $w_1, w_2, \ldots, w_n$ of a particular word used in contexts other than the context of the target phrase.",
"Each instance has an embedding $u_{w_1}, u_{w_2}, \ldots, u_{w_n}$.",
"The average embedding for the word among these sentences is: $\mu_O = \frac{1}{n} \sum_{i=1}^{n} u_{w_i}$ (1).",
"We take this quantity to be a proxy for the prototypical, or conventional, meaning of the word.",
"The conventionality score is the negative of the average distance between $\mu_O$ and the embeddings for uses of the word across instances of the phrase in question.",
"We compute this as follows: $\mathrm{conv}(\mathrm{phrase}) = -\frac{1}{m} \sum_{i=1}^{m} \left\lVert \frac{T_i - \mu_O}{\sigma_O} \right\rVert_2$ (2), where $T_i$ is the embedding corresponding to a particular use of the word in the target phrase, $\sigma_O$ is the component-wise standard deviation of the set of embeddings $u_{w_i}$, and $m$ is the number of sentences in which the target phrase is used.",
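The conventionality computation described above can be sketched in a few lines. This is our illustrative reconstruction, not the authors' code: the function and variable names are ours, and we assume the contextual embeddings have already been extracted (e.g., from BERT) as NumPy arrays.

```python
import numpy as np

def conventionality(target_embs, other_embs, eps=1e-8):
    """Sketch of the conventionality score (Equations 1 and 2).

    target_embs: (m, d) embeddings of the word in instances of the target phrase
    other_embs:  (n, d) embeddings of the same word in other contexts
    """
    target_embs = np.asarray(target_embs, dtype=float)
    other_embs = np.asarray(other_embs, dtype=float)
    mu = other_embs.mean(axis=0)           # Eq. 1: proxy for the conventional meaning
    sigma = other_embs.std(axis=0) + eps   # component-wise std (epsilon avoids /0)
    # Eq. 2: negative mean L2 norm of the standardized differences
    return -np.linalg.norm((target_embs - mu) / sigma, axis=1).mean()
```

Higher (less negative) scores indicate more conventional usage; a word whose in-phrase embeddings sit far from its out-of-phrase average receives a strongly negative score.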
"Our second measure, which we have termed contingency , refers to whether a particular set of words appears within the same phrase at an unexpectedly high rate.",
"The measure is based on the notion of pointwise mutual information (PMI), which is a measure of the strength of association between two events.",
"We use a generalization of PMI that extends it to sets of more than two events, allowing us to capture the association between phrases that contain more than two words.",
"The specific generalization of PMI that we use has at various times been called total correlation (Watanabe, 1960), multi-information (Studený and Vejnarová, 1998), and specific correlation (Van de Cruys, 2011).",
"For the case of three variables, we get: $\mathrm{cont}(x, y, z) = \log \frac{p(x, y, z)}{p(x)\, p(y)\, p(z)}$",
"To estimate the contingency of a phrase, we use word probabilities given by XLNet (Yang et al., 2019), an auto-regressive language model that gives estimates for the conditional probabilities of words given their context.",
"Like BERT, XLNet was trained on a mix of fiction and nonfiction data.",
"To estimate the joint probability of the words in rock the boat in some particular context (the numerator of the expression above), we use XLNet to obtain the product of the conditional probabilities in the chain rule decomposition of the joint.",
"We get the relevant marginal probabilities by using attention masks over particular words, as shown below, where c refers to the context, that is, the rest of the words in the sentence containing rock the boat .",
"The denominator is the product of the probabilities of each individual word in the phrase, with both of the other words masked out: Pr( boat | c ) = ... [ ___ ] [ ___ ] boat ...",
"The conditional probabilities were computed right to left, and included the sentence to the left and the sentence to the right of the target sentence for context.",
"Note that in order to have an interpretable chain rule decomposition for each sequence, we calculate the XLNet-based generalized PMI for the entire string bounded by the two words of the idiom; this means, for example, that the phrase rock the fragile boat will return the PMI score for the entire phrase, adjective included.",
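Once the chain-rule conditionals and the masked marginals have been obtained from the language model, the generalized PMI reduces to a difference of summed log probabilities. The sketch below is our own illustration with made-up probabilities rather than real XLNet outputs:

```python
import math

def contingency(chain_conditionals, masked_marginals):
    """Generalized PMI (total correlation) in bits.

    chain_conditionals: chain-rule factors p(w_i | following words, context),
                        e.g. computed right to left with an autoregressive LM
    masked_marginals:   p(w_i | context) with the other phrase words masked out
    """
    log_joint = sum(math.log2(p) for p in chain_conditionals)
    log_indep = sum(math.log2(p) for p in masked_marginals)
    return log_joint - log_indep

# Hypothetical numbers: each word is far more likely given the others
# than on its own, so the phrase scores high on contingency.
score = contingency([0.6, 0.5, 0.4], [0.01, 0.3, 0.02])
```

When the joint factors equal the marginals (no association), the score is zero; strongly associated word combinations such as idioms come out positive.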
"Our conventionality measure provides an indirect way of looking at how canonical a word's meaning is in context.",
"In order to validate that the measure corresponds to an intuitive notion of unusual word meaning, we carried out an online experiment to see whether human judgments of conventionality correlated with our automatically-computed conventionality scores.",
"The experimental design and results are described below.",
"(Note that our contingency measure directly computes the statistical quantity we want, so validation is not necessary.)",
"The experiment asked participants to rate the literalness of a word or phrase in context.",
"We used twenty-two verb object target phrases and their corresponding matched phrases.",
"For each target phrase (e.g., rock the boat ), there were ten items, each of which consisted of the target phrase used in the context of a (different) sentence.",
"Each sentence was presented with the preceding sentence and the following sentence as context, which is the same amount of context that the automatic measure was given.",
"In each item, a word or phrase was highlighted, and the participant was asked to rate the literalness of the highlighted element.",
"We obtained judgments of the literalness of the head word, non-head word, and entire phrase for ten different sentences containing each target phrase.",
"We also obtained literalness judgments of the head word and entire phrase for phrases matched on the head of the idiom (e.g., verb object phrases with rock as the verb and a noun other than boat as the object).",
"Similarly, we obtained literalness judgments of the non-head word and the entire phrase for phrases matched on the non-head word of the idiom (e.g., verb object phrases with boat as the object and a verb other than rock ).",
"Participants were asked to rate literalness on a scale from 1 ('Not literal at all') to 6 ('Completely literal').",
"We chose to use an even number of points on the scale to discourage participants from imposing a three-way partition into 'low', 'neutral', and 'high'.",
"Items were presented using a Latin square design.",
"The experiment was run online using the Prosodylab Experimenter (Wagner, 2021), a JavaScript tool building on jsPsych (De Leeuw, 2015).",
"Participants were adult native English speakers who gave written informed consent to participate.",
"Participants were recruited on Amazon Mechanical Turk and compensated at a rate of $15/hour.",
"The study was carried out with REB approval.",
"We excluded one target phrase from the analyses ( spill the beans ) based on examination of the BERT-based conventionality scores.",
"The verb spill used in spill the beans scored anomalously high on conventionality; investigation of the target and matched sentences revealed that roughly half of the matched sentences included a different idiom: spill X's guts .",
"We checked the rest of our dataset and did not find other instances of this confound.",
"The experiment took about 10 minutes to complete.",
"The data were recorded using anonymized participant codes, and none of the results included any identifying information.",
"There were 150 participants total.",
"The data from 10 of those participants were excluded due to failure to follow the instructions (assessed with catch trials).",
"To explore whether our conventionality measure correlates with human judgments of literalness, we compare the scores to the results from the rating experiment.",
"Ratings were between 1 and 6, with 6 being the highest level of conventionality.",
"We predicted that the literalness ratings should increase as conventionality scores increased.",
"To assess whether our prediction was borne out, a linear mixed model was fit using the lmerTest package (Kuznetsova et al., 2017) in R (R Core Team, 2017), with conventionality score, highlighted word (head versus non-head), and their interaction as predictors, plus random effects of participant and item.",
"All random effects were maximal up to convergence.",
"Results are shown in Table 2 in Appendix B.",
"The results confirm our prediction that words that receive higher conventionality scores are rated as highly literal by humans ( β = 0.185, SE( β ) = 0.050, p < 0.001; see Row 2 of Table 2 in Appendix B).",
"We carried out a nested model comparison to see whether including the BERT conventionality score as a predictor significantly improved the model, and we found that it did.",
"A likelihood ratio test comparing the above model with one lacking the BERT conventionality score as a predictor yielded a significantly higher log likelihood for the full model ( χ² = 80.043, p < 0.001 ).",
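The nested model comparison described above amounts to a standard likelihood ratio test: twice the gap in maximized log likelihoods is compared against a chi-squared distribution. A minimal sketch (our own, using SciPy, with hypothetical log-likelihood values rather than the paper's fitted models):

```python
from scipy.stats import chi2

def likelihood_ratio_test(ll_reduced, ll_full, df_diff=1):
    """Compare nested models: 2 * (log-likelihood gap) is asymptotically
    chi-squared with df_diff degrees of freedom (the parameter difference)."""
    stat = 2.0 * (ll_full - ll_reduced)
    p_value = chi2.sf(stat, df_diff)
    return stat, p_value

# Hypothetical log likelihoods for a reduced and a full model.
stat, p = likelihood_ratio_test(-110.0, -100.0)
```

A small p-value indicates that the extra predictor (here, the conventionality score) significantly improves the model fit.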
"In this section we present analyses of our two measures individually, showing that they capture the properties they were intended to capture.",
"We then investigate the interaction between the measures.",
"Section 5.3 evaluates our central predictions.",
"We predict that the target phrases will score lower on conventionality than the matched phrases, since we expect these phrases to contain words with (often highly) unconventional meanings.",
"We further predict that the target phrases will have higher contingency scores than the matched phrases, due to all of the target phrases being expressions that are frequently reused.",
"(The rating model above was specified as: Rating ~ Conv*Head + (1|Item) + (1+Conv||Partp).)",
"Putting the two measures together, we expect idioms to fall at the intersection of low conventionality and high contingency, but not to show major discontinuities that qualitatively distinguish them from phrases that fall at other areas of intersection.",
"We find that the target phrases have lower average conventionality scores than the matched phrases, with a difference of -1.654 ( t (145) = -5.829, p < 0.001).",
"This is consistent with idioms having unconventional word meanings.",
"We find that, averaged across contexts, the target phrases had higher contingency scores, with a difference of 2.25 bits ( t (159) = 8.807, p < 0.001).",
"Figure 2 shows boxplots of the average contingency score for each phrase type.",
"Since many of the target phrases only occurred in a handful of sentences, we have excluded phrases for which the target or matched sets contain fewer than 30 sentences.",
"This threshold was chosen to strike a balance between having enough instances contributing to the average score for each datapoint, and having a large enough sample of phrases.",
"We considered thresholds at every multiple of 10 until we reached one that left at least 100 datapoints remaining.",
"For the most part, there were fewer sentences containing the target phrase than there were sentences containing only the head or only the non-head word in the relevant structural position.",
"This likely explains the greater variance among the target phrases.",
"For all syntactic structures, the median contingency score was higher for target phrases than matched phrases.",
"The greatest differences were observed for verb object and binomial phrases.",
"We fit another mixed effects model to test whether target idioms have higher contingency scores than matched phrases across syntactic classes (AN, B, NN, VO).",
"The model predicts the contingencies for each instance of a phrase used in context, with the target-matched contrast and syntactic class as fixed effects, and random effects for the target-matched pairs.",
"6 We find that target phrases have significantly higher contingency scores than matched phrases (see Row 2 of Table 3 of Appendix B).",
"Here we show that idioms fall in the expected area of our two-dimensional space, with no evidence of correlation between the measures.",
"Our results provide evidence against the notion of a special mechanism for idioms, whereby conventionality and contingency are expected to covary.",
"Recall the 2x2 matrix of contingency versus conventionality (Figure 1), where idioms were expected to be in the top left quadrant.",
"Figure 3 shows our results.",
"Since the conventionality scores were for individual words, we averaged the scores of the head word and the primary non-head word (i.e., the verb and the object for verb object phrases, the adjective and the noun for adjective noun phrases, the two nouns in noun noun phrases, and the two words of the same category in binomial phrases).",
"The plot shows the average values of the target and matched phrases.",
"As discussed above, the target phrases came from lists of idioms in the literature, and thus include a mix of canonical idioms and (seemingly) compositional collocations.",
"We predicted that the target phrases would be distributed between the top two quadrants, with obvious idioms on the top left and collocations on the top right.",
"As a sample, our results placed the following phrases in the top left quadrant: clear the air , bread and butter , nuts and bolts , red tape , and cut corners .",
"For each of these phrases, the idiomatic meaning cannot be derived by straightforwardly composing the meaning of the parts.",
"(The mixed effects model of contingency described earlier was specified as: Cont ~ Target*Class + (1+Target|Idiom).)",
"Figure 3: Contingency versus conventionality values of target and matched phrases.",
"In the top right quadrant (high conventionality, high contingency), we have more or less , rise and fall , back and forth , and deliver the goods .",
"The bottom left quadrant was predicted to contain non-literal phrases whose words are not as strongly associated with one another as those in the most well-known idioms.",
"The phrases in our dataset that fall into this quadrant include hard sell , hit man , and cold feet .",
"A list of which target phrases landed in each quadrant is given in Appendix D.",
"For the matched phrases, we assumed that the majority were instances of regular language use, so we predicted them to cluster in the bottom right quadrant.",
"Our results are consistent with this prediction.",
"The horizontal and vertical black lines on the plot were placed at the mean values for each measure.",
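The quadrant assignment implied by these mean-valued cut lines can be expressed as a tiny helper. This is a sketch of ours, not the paper's code, and the threshold values in the usage are hypothetical:

```python
def quadrant(conv, cont, conv_mean, cont_mean):
    """Place a phrase in the 2x2 conventionality/contingency space,
    splitting each axis at its mean value (as in Figure 3)."""
    horiz = "top" if cont > cont_mean else "bottom"
    vert = "left" if conv < conv_mean else "right"
    return f"{horiz} {vert}"

# Hypothetical scores: low conventionality, high contingency -> idiom region.
region = quadrant(-2.0, 5.0, 0.0, 0.0)
```

Idioms are expected in "top left", collocations in "top right", and regular language use in "bottom right".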
"Recall that our examples of regular language use consist of head-dependent constructions that share one word with an existing idiom.",
"Although obtaining the phrases in this way may have biased our sample of regular language use toward similarity with target phrases, the fact that we still see a clear difference between target and matched average values is all the more striking.",
"Figure 4 shows only the target phrases that received a human annotation of 1 or 2 for head word literality, that is, the phrases judged to be most non-compositional.",
"As expected, the average score for the target phrases moved more solidly into the idiom quadrant.",
"Our experiments revealed an unexpected but interesting asymmetry between heads and their dependents.",
"Based on conventionality scores, the head word of the target phrases was more conventional on average than the primary non-head word.",
"A two-sample t-test revealed that this difference was significant ( t = 3.029, df = 252.45, p = 0.0027).",
"The matched phrases did not show a significant difference between heads and non-heads ( t = 1.506, df = 277.42, p = 0.1332).",
"Figure 5 presents the data in a different way, with target and matched phrases plotted together.",
"The plots show that the variability in overall phrase conventionality, which helps to distinguish idioms and non-idioms, is largely driven by the dependent word (as indicated by the steeper slopes for the non-head effects).",
"This interaction between phrase conventionality and head/non-head is significant (see Row 10 of Table 4 of Appendix B).",
"In addition, Figure 5 illustrates that this discrepancy between heads and non-heads is largest for verb object phrases.",
"We confirm this by fitting a linear model of word conventionality with predictors for phrase conventionality (average of the component words), head versus non-head word, and syntactic class, plus all interactions, using sum coding to compare factor levels of syntactic class.",
"(The model was specified as: WordConv ~ PhraseConv*Class*Head.)",
"The effect of headedness on conventionality scores is significantly greater for verb object phrases than the global effect of headedness (see Panel 4 of Figure 5; Row 14 of Table 4 of Appendix B).",
"We raise the possibility that there is an additive effect of linear order, with conventionality decreasing from left to right through the phrase.",
"For verb object phrases, the two effects go in the same direction, whereas for adjective noun and noun noun phrases, the linear order effect counteracts the headedness effect.",
"We are not aware of any other theory positing the attribution of idiomatic meaning to incremental chunks in this way.",
"Our results suggest that syntactic constituency alone is not enough to explain the observed patterns.",
"We note that there is disagreement in the literature about whether binomial phrases (which are coordinate structures) contain a head at all.",
"Some proposals treat the first conjunct as the head (e.g., Rothstein, 1991; Kayne, 1994; Gazdar, 1981), while others treat the conjunction as the head or claim that there is no head (e.g., Bloomfield, 1933).",
"We find that in the binomial case, the first conjunct patterns like the heads of the other phrase types, though how much of this effect may be driven by linear order remains unclear.",
"This may provide suggestive converging evidence for the first-conjunct-as-head theory, though further exploration of this idea is needed.",
"Many idiom detection models build on insights about unconventional meaning in metaphor.",
"A number of approaches use distributional models, such as Kintsch (2000), Utsumi (2011), Sa-Pereira (2016), and Shutova et al. (2012), the last of which was one of the first to implement a fully unsupervised approach for encoding relationships between words, their contexts, and their dependencies.",
"A related line of work aims to automatically determine whether potentially idiomatic expressions are being used idiomatically or literally, based on contextual information (Katz and Giesbrecht, 2006; Fazly et al., 2009; Sporleder and Li, 2009, 2014).",
"Our measure of conventionality is inspired by the insights of these models; as described in Section 3.2, our measure uses differences in embeddings across contexts.",
"Meanwhile, approaches to collocation detection have taken a probabilistic or information-theoretic approach that seeks to identify collocations using word combination probabilities.",
"Figure 5: Change in head versus non-head conventionality scores as phrase conventionality increases, for all phrases (target and matched), separated by phrase type (adjective noun, binomial, noun noun, and verb object).",
"PMI is a frequently-used quantity for measuring co-occurrence probabilities (Fano, 1961; Church and Hanks, 1990).",
"Other implementations include selectional association (Resnik, 1996), symmetric conditional probability (Ferreira and Pereira Lopes, 1999), and log-likelihood (Dunning, 1993; Daille, 1996).",
"Like our study, most previous work on idiom and collocation detection focuses specifically on English.",
"While much of the literature in NLP recognizes that idioms share a cluster of properties, including semantic idiosyncrasy, syntactic inflexibility, and institutionalization (e.g., Sag et al., 2002; Fazly and Stevenson, 2006; Fazly et al., 2009), our approach is novel in attempting to characterize idioms along two orthogonal dimensions that correspond to specific proposals from the cognitive science literature.",
"Our measures may offer a new avenue for tackling automatic idiom detection.",
"We investigated whether idioms could be characterized as occupying the intersection between contingency and conventionality, without needing to appeal to idiom-specific machinery that associates the storage of multi-word expressions with the property of unconventional meaning, as has been proposed in previous work.",
"When we plotted conventionality and contingency scores against each other, we found that idioms fell, on average, in the area of low conventionality and high contingency, as expected.",
"Regular, non-idiomatic phrases fell in the high conventionality, low contingency area, also as expected.",
"The lack of correlation between the two measures provides support for theories that divorce the notions of conventionality and contingency.",
"Our results suggest that idioms represent just one of the ways that conventionality and contingency can interact, analogous to collocations or metaphor.",
"We also presented the novel finding that the locus of non-conventionality in idioms resides primarily in the dependent, rather than the head, of the phrase, a result that merits further study.",
"This paper uses computational tools to argue for a theoretical position about idioms.",
"Our idiom dataset was automatically generated from an existing corpus, and so did not involve data collection from human participants on our part.",
"To validate our conventionality measure, we conducted an additional online experiment with crowdworkers on Amazon Mechanical Turk, for which we obtained REB approval.",
"Details about the participants, recruitment, and consent process are given in Section 4.",
"We note that one limitation of this work is that it only investigates English idioms, potentially contributing to an over-focus on English in this domain.",
"We thank Reuben Cohn-Gordon, Jacob Hoover, Alessandro Sordoni, and the Montreal Computational and Quantitative Linguistics Lab at McGill University for helpful feedback.",
"We also gratefully acknowledge the support of the Natural Sciences and Engineering Research Council of Canada, the Fonds de Recherche du Qubec, and the Canada CIFAR AI Chairs Program."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Recent years have brought about an interest in the challenging task of summarizing conversation threads (meetings, online discussions, etc.).",
"Such summaries help analysis of the long text to quickly catch up with the decisions made and thus improve our work or communication efficiency.",
"To spur research in thread summarization, we have developed an abstractive Email Thread Sum marization (EMAILSUM ) dataset, which contains human-annotated short ( < 30 words) and long ( < 100 words) summaries of 2,549 email threads (each containing 3 to 10 emails) over a wide variety of topics.",
"We perform a comprehensive empirical study to explore different summarization techniques (including extractive and abstractive methods, single-document and hierarchical models, as well as transfer and semi-supervised learning) and conduct human evaluations on both short and long summary generation tasks.",
"Our results reveal the key challenges of current abstractive summarization models in this task, such as understanding the sender's intent and identifying the roles of sender and receiver.",
"Furthermore, we find that widely used automatic evaluation metrics (ROUGE, BERTScore) are weakly correlated with human judgments on this email thread summarization task.",
"Hence, we emphasize the importance of human evaluation and the development of better metrics by the community.",
"1 1 Introduction As one of the major natural language generation tasks, automatic summarization has been studied for decades.",
"Most research efforts were focused on single-document summarization tasks, e.g., news document summarization (Hermann et al., 2015; Narayan et al., 2018).",
"However, living in an information era, we are facing with diverse content 1 Our code and summary data have been made available at: https://github.com/ZhangShiyue/EmailSum Email Thread : Subject : lunch this week Susan : All, Regarding our lunch this week to celebrate the one year anniversaries for Michelle & David, and Mark's birthday, I have a request to make it Wednesday instead of Tuesday.",
"Does anyone have an objection to this?",
"Susan David : I have another lunch engagement Wed, but I will skip it if everyone else wants to move our lunch.",
"David Tamra : Susan, Wednesday works out better for me as well.",
"I have a doctor's appointment tomorrow during lunch.",
"Tamra Short Summary : Susan emails everyone about an anniversary and offers to change the date.",
"Long Summary : Susan emails everyone about a lunch to celebrate a one year anniversary as well as Mark's birthday.",
"She says she would change the date to a different day.",
"David says he is busy that day with his own appointment but is willing to go with the majority and cancel that appointment to make this one.",
"Tamra agrees with Susan's date as she is busy Tuesday with an appointment.",
"in different structures.",
"The summarization need is varied along with different application scenarios.",
"Recently, there is an increasing research interest in diverse summarization tasks (Gao et al., 2020), e.g., timeline (Allan et al., 2001), query-based (Li and Li, 2014), multi-modal (Zhu et al., 2018), meeting (Carletta et al., 2006), dialogue or discussion thread (Misra et al., 2015; Gliwa et al., 2019; Rameshkumar and Bailey, 2020), etc.",
"Following the branch of dialogue or thread summarization, we introduce a new abstractive Email Thread Sum marization (EMAILSUM ) dataset.",
"Email threads are widely used at work.",
"An email thread is a special type of dialogue that usually has a specific structure (sender, receiver, greeting line, main body, and the signature), contains technical information, and involves multiple speakers.",
"Unlike a conversational dialog turn, an email in a thread is much longer with longer sentences, multiple action items or requests, and stylistically similar to written text.",
"Studies have shown that on average a worker sends/receives 122 business emails (Radicati, 2015) and spends more than 3 hours on those emails (Adobe, 2019) per day.",
"One possible reason is that sometimes people have to read through the entire conversation before replying to the latest email.",
"This happens when you forget the main points of previous discussions or you are newly included in a discussion thread.",
"Therefore, automatically summarizing email threads can improve our work efficiency and provides practical benefits.",
"Email Thread Summarization is not a new task.",
"Carenini et al. (2007) collected extractive summaries of 39 email threads from Enron email corpus (Klimt and Yang, 2004) and proposed to use a fragment quotation graph and clue words to conduct summarization.",
"Ulrich et al. (2008) collected both extractive and abstractive summaries of 40 threads from W3C email corpus (Craswell et al., 2006) plus speech acts, meta sentences, etc.",
"However, this task has been much less studied compared to other summarization tasks, partially due to the lack of large labeled email thread datasets.",
"In this paper, we collect human-written short ( < 30 words) and long ( < 100 words) abstractive summaries of 2,549 email threads constructed from Avocado Research Email Collection (Oard et al., 2015), which is 64 the size of previously labeled email thread datasets (Carenini et al., 2007; Craswell et al., 2006).",
"We limit each thread to a minimum of 3 and a maximum of 10 emails, an example is given in Table",
"1. We also extract 8,594 unlabeled email threads from both Avocado and W3C to facilitate semi-supervised learning.",
"2 See Section 2 for details of data collection.",
"Next, we present comprehensive baselines from different learning paradigms as a benchmark for our new email summarization dataset.",
"Specifically, we explore different summarization techniques, including extractive and abstractive summarization methods, single-document and hierarchical models, transfer learning, and semi-supervised learning for both short and long summary generation.",
"Experiments demonstrate that utilizing pretrained language model (e.g., T5 (Raffel et al., 2020)) is critical due to the small size of our data; taking the email thread as a single document sets up a 2 We apply strict criteria for thread extraction (see Section 2).",
"More threads can be extracted by relaxing those constraints.",
"good baseline; transferring from news or dialogue datasets barely improve the performance; using hierarchical encoders only marginally improves it; while semi-supervised learning by using unlabelled email threads significantly ( p < 0 . 01 ) improves ROUGE (Lin, 2004) scores in some cases.",
"Lastly, to better understand how well the email thread summarization models perform and investigate the correlation between automatic metrics and human judgment, we ask humans to rate the salience (how well the model summarizes salient points) and faithfulness (how well the model stays true to the email thread) of model-generated summaries, as well as to perform a pairwise comparison between our best and base models.",
"We find that even though semi-supervised learning improves ROUGE scores, human judges still favor the summary generated by the baseline model (T5 base ).",
"Two frequent errors made by the model are (1) failing to understand the sender's intent and (2) failing to identify the roles of the sender and receiver .",
"Relatedly, human correlation analysis reveals that automatic metrics (ROUGE (Lin, 2004), BERTScore (Zhang et al., 2019)) are poorly correlated with human judgment, which stresses the importance of human evaluation in this task and the requirement for better metrics to be proposed.",
"Overall, in this work, we propose the new EMAILSUM dataset that provides a larger resource for studying the email thread summarization task.",
"We conduct a comprehensive empirical model study and human evaluation analysis, which will serve as an important starting point for future studies.",
"To collect email thread summarization data, we first need to obtain unlabelled email threads.",
"We resort to existing email collections: Enron (Klimt and Yang, 2004), W3C (Craswell et al., 2006), and Avocado (Oard et al., 2015).",
"However, none of them provides explicit thread structure.",
"Therefore, in this section, we will introduce our email thread preprocessing and summary collection procedures.",
"We extract email threads from the flat email collections in the following steps: (1) we give every email a normalized subject by removing the reply or forward tags (e.g., Re:, Fwd:, etc.) from its original subject; (2) we group emails by the normalized subjects and sort emails in the same group (i.e.,",
"thread) by timestamp; (3) we de-duplicate emails in every thread by sender's email plus timestamp; (4) we traverse emails in every thread in temporal order and cut off the thread when none of the senders plus receivers of the current email appears in previous emails; (5) we filter out threads that only contain single repeated content.",
"To obtain a cleaner dataset, we remove threads that do not comply with the following constraints: (1) 3 the number of emails 10; (2) 5 < the number of words in each email < 200; (3) 30 < the total number of words < 1000; (4) does not contain non-English (e.g., German) tokens; (5) does not contain reply or forward tags in the subject of the first email.",
"Emails often contain personal information such as full name, email/physical address, phone number, etc.",
"To protect privacy, we anonymize all email threads before annotation: (1) only keep first names; (2) remove threads that have password, pwd, confidential,",
"etc.; (3) replace email address, physical address, phone number, URL, IP address, local path, and other sensitive numbers with [email protected], ADDRESS, PHONENUMBER, HTTP://LINK, IPADDRESS, PATH, and NUMBER, respectively.",
"We conduct an extensive manual quality scan to make sure that the extracted threads are truly threads (instead of random emails grouped) and properly anonymized.",
"Finally, we obtain 8,116 threads from Avocado and 3,478 threads from W3C.",
"3 We randomly sample 3K Avocado threads for summary annotation, and the remaining threads are used as unlabelled data.",
"use several quality control strategies: (1) We select annotators that are located in the US, have an approval rate greater than 97%, and have at least 10,000 approved HITs; (2) During annotation, we periodically sample summaries, manually check their quality, and reject or block poor-quality annotators; (3) After annotation, we randomly sample 2 examples per annotator and manually categorize annotators into good, fair, and bad groups, then filter examples written by bad annotators.",
"Email threads oftentimes contain technical information, we instruct annotators not to get stuck on technical details, instead, focus on the major concerns, decisions, and consensus.",
"We collect both short ( < 30 words) and long ( < 100 words) abstractive summaries per thread.",
"For the short summary, we instruct annotators to write a concise description of what the thread is mainly talking about ; while for the long summary, we instruct them to write a a narrative of what happens .",
"We are intent to provide summaries with two different levels of abstractiveness, length, and concreteness.",
"We show annotators an example written by an expert (a CS graduate student).",
"More summary collection details can be found in Appendix A. 2.3 Final Dataset Description The summary collection and filtering process yield 2,549 email threads each with a long and a short summary.",
"We randomly sample 500 examples from the good annotator group as our testing set and split the remaining examples into training (1,800 threads) and development (249 threads) sets.",
"Table 2 shows the statistics of EMAILSUM .",
"4 .",
"For ease of benchmarking, we also include statistics on other 4 Since comparing the model-generated summary to only one human-written reference may not be fully informative, recently we have also collected one more reference for each email thread in our test set, i.e., each test example will have two gold references now in our final dataset.",
"The results in the paper are all still based on the original one-reference setup but we will release the updated two-reference results for our best baselines on Github.",
"commonly used summarization datasets: CNN/DM (Hermann et al., 2015) and XSum (Narayan et al., 2018) are about news summarization; SAMSum (Gliwa et al., 2019) is about chit-chat summarization; CRD3 (Rameshkumar and Bailey, 2020) is a role-play dialogue summarization dataset; BC3 (Ulrich et al., 2008) is another email thread summarization with 40 threads from W3C.",
"Compared to the other datasets, the average document length in the EMAILSUM dataset is not very long, containing 233 words; long summaries are more than twice as longer than short summaries.",
"Ext-Oracle-R1 in Table 2 indicates how abstractive the summaries are.",
"It computes the ROUGE-1 scores of an oracle extractive method (see Section 3.1 for details of the oracle extractive method).",
"The lower it is, the more abstractive the dataset is.",
"According to this score, the abstractiveness of the EMAILSUM summaries is lower than the XSum summaries, while higher than the CNNDM summaries.",
"Furthermore, the short summaries of EMAILSUM dataset are more abstractive than its long summaries.",
"The summarization models we explore in this work take the email thread as input and generate the summary as output.",
"We experiment on EMAILSUM short and EMAILSUM long tasks separately.",
"Oracle.",
"This method maximize an evaluation metric w.r.t. the gold summary.",
"Ext-Oracle-R1 in Table 2 is computed from an oracle summary that maximizes ROUGE-1 (Lin, 2004).",
"Lead.",
"This model simply picks the first sentence from the source document as the summary, which has surprisingly good performance on CNN/DM dataset (Narayan et al., 2018).",
"We test two variants by selecting: (1) the first sentence of the email thread, which is usually the subject (see the example in Table 1), referred as Lead-1 ; (2) the first sentence of the email thread (the subject) plus the first sentences of every email, named Lead-1-Email .",
"5 TextRank.",
"This is a graph-based method (Mihal-cea and Tarau, 2004).",
"It first builds a graph between sentences by their embedding similarities; then the PageRank algorithm is applied to obtain the rank 5 We also tested some other heuristics: e.g., the first sentence of the last email, the last 3-5 sentences of the email thread, etc.",
"However, none of them perform better than Lead-1-Email.",
"BertSumExt.",
"Liu and Lapata (2019b) propose to build a sentence extractor upon BERT (Devlin et al., 2019) to perform extractive summarization, which achieves a good performance on CNN/DM.",
"Fast Abs RL.",
"As the simple non-pretrained abstractive baseline, we use Chen and Bansal (2018), which is a hybrid model that first extracts sentences from the source document, then rewrites the extracted sentences by an abstractive rewriter.",
"They pair summary sentences with the extracted sentences to train the abstractive rewriter.",
"Adapting their model to our email thread summarization task, we make two adjustments: (1) We extract emails instead of sentences, which is a natural unit for email thread; (2) Since summary sentences usually follow the temporal order of the emails, we enhance this pairing procedure by using the Neeleman-Wunsch algorithm (Needleman and Wunsch, 1970; Rameshkumar and Bailey, 2020) to impose the order constraint to the alignment (see description and comparison in Appendix B).",
"T5.",
"T5 (Raffel et al., 2020) is a Transformer (Vaswani et al., 2017) based seq-to-seq model pretrained with large-scale English data.",
"It achieves state-of-the-art performances on a lot of NLP tasks including the CNN/DM summarization task.",
"As our main baseline, we take the email thread as a single document and finetune a T5 base to generate the summary ( T5 base ).",
"A similar setup is also used in transfer and semi-supervised learning.",
"Since our training dataset is small, we find that using the pretrained knowledge transfer is crucial.",
"Training a T5 model from scratch performs poorly (see the results in Appendix Table 7).",
"Transfer Learning.",
"To analyze how information from other summarization datasets (listed in Table 2) can be transferred to this new task and its impact on the performance, we investigate two simple transfer learning methods: (1) Pre-finetuning , in which we first finetune T5 on a bigger summarization dataset (e.g., CNN/DM) then continue the finetuning on our dataset, referred as X pre ( X is the bigger dataset's name, e.g., CNNDM pre ) in our result tables.",
"This is analogous to the continual training method proposed for multilingual transfer learning of machine translation (Kocmi and Bojar, Self-Attn Norm Feedforward x N Token-levelEncoder Norm e 1 e 2 e 3 e 4 H 1 H 2 H 3 H 4 Email-levelEncoder H 1 H 2 H 3 H 4 mean pooling Self-Attn Norm Feedforward Norm Cross-Attn Norm Cross-Attn Norm Self-Attn Norm Feedforward Norm x N Decoder x N Figure 1: The architecture of our hierarchical T5. 2018).",
"(2) Joint-training , in which we upsample EMAILSUM data and mix it with another dataset, then use the combined data to finetune T5, similarly denoted as X joint .",
"This is analogous to the multilingual joint training method used in machine translation (Johnson et al., 2017).",
"Semi-supervised learning.",
"Since we only have 2.5K labeled email threads, another important technique to improve the performance is to utilize unlabelled data (i.e., email threads without labeled summaries).",
"As introduced in Section 2.1, in addition to the 3K email threads used for summary collection, we have 8,594 unlabelled email threads (5,116 from Avocado; 3,478 from W3C).",
"We explore semi-supervised learning via the simple self-training technique (Scudder, 1965).",
"We use a trained model (a finetuned T5) to generate summaries for unlabelled threads, then mix the model-labeled and human-labeled data to finetune T5 again, referred as SemiSup x ( x stands for the unlabelled data source we use, i.e., W3C, Avocado, or together).",
"Hierarchical T5.",
"Hierarchical summarization models have been shown to improve the performance of multi-document summarization task (Liu and Lapata, 2019a).",
"Although an email thread can be treated as a single document due to the temporal dependency between consecutive emails, it also has a clear turn structure that encourages using of the hierarchical encoders.",
"Recently, Zhu et al. (2020) proposed a hierarchical model (HMNet) for meeting summarization.",
"Inspired by their work, we propose a hierarchical model that is similar to HMNet in structure but uses T5 as the backbone, therefore, it can take advantage of both the hierarchical structure and the pre-trained knowledge.",
"As shown in Figure 1, this model contains two encoders: the token-level encodes the whole email thread (e.g., e 1 , e 2 , e 3 , e 4 ) while the email-level receives mean-pooled email-level representations as input.",
"The decoder has two cross attentions that attend to the outputs of the email-level and the token-level encoders respectively.",
"Both token-level and email-level encoders are sharing the weights of the T5 encoder.",
"We add a small number of new parameters by adding new cross attention between the decoder and the email-level encoder.",
"ROUGE (Lin, 2004) is a commonly used automatic metric for summarization tasks.",
"It has several variants: (1) ROUGE-1 (R1) measures the unigram overlap between the generated and reference summaries; (2) ROUGE-2 (R2) measures the bi-gram overlap; (2) ROUGE-L (RL) computes the longest common subsequence (LCS); (4) summary-level ROUGE-L (RLsum) computes LCS between each pair of reference and candidate sentences and returns the union-LCS.",
"We use the rouge score package 7 and report F1 scores.",
"BERTScore (Zhang et al., 2019) goes beyond n-gram overlap to provide contextualized semantic similarity.",
"Specifically, it uses BERT (Devlin et al., 2019) (or RoBERTa (Liu et al., 2019)) representations to softly align the words in candidate and reference summaries and then computes a soft uni-gram F1 score.",
"We use the bert score package 8 and report rescaled numbers with a baseline.",
"Table 3 shows the evaluation results on the testing set of different models (the corresponding results on the development set can be found in Appendix Table 7).",
"It can be observed that the Oracle extractive model sets up a high upper bound on all metrics except for BERTScore (BertS).",
"Among non-oracle extractive methods, the Lead-1-Email heuristic works best and even better than the deep extractive method, BertSumExt.",
"The hybrid Fast Abs RL model outperforms purely extractive methods but works worse than purely abstractive methods with large-scale pretraining (e.g., T5).",
"6 The significance test is following the bootstrap test setup (Efron and Tibshirani, 1994) and sample for 100k times.",
"Taking the email thread as one single document and finetuning T5 (i.e., T5 base in Table 3) sets up a strong baseline.",
"Upon this baseline model, we test the transfer learning from four different summarization datasets (CNN/DM, XSum, SAMSum, and CRD3).",
"However, as shown in Table 3, transfer learning barely improves over baseline, and transferring by pre-finetuning always works better than joint-training .",
"Since our EMAILSUM has a quite different domain as existing news or dialogue datasets, we conjecture that it is hard to transfer knowledge between them or better transferring techniques need to be applied.",
"Similarly, we test the semi-supervised learning with unlabelled data from W3C, Avocado, and both of them (together).",
"This method can mostly (or significantly in some cases) outperform the baseline's performance for both EMAILSUM short and EMAILSUM long .",
"Lastly, the hierarchical T5 base model only marginally outperforms the non-hierarchical Figure 2: The impact of the number of emails in the thread on summarization performance (ROUGE-1).",
"Since we focus on generating abstractive summaries for email threads and the human-written summaries are fairly abstractive (as shown in Table 2), we further investigate the abstractiveness of model-generated summaries.",
"We take summaries generated by the baseline (T5 base ) and the best ROUGE-1 models (SemiSup together for EMAILSUM short , SemiSup w 3 c for EMAILSUM long ) as the pseudo ground-truth, respectively.",
"Then, we evaluate the ROUGE-1 of extractive Oracle and Lead-1-Email models; higher scores means more extractive summaries.",
"As shown in Table 4, compared EMAILSUM short EMAILSUM long SemiSup together vs T5 base SemiSup w 3 c vs T5 base Win Lose Tie Win Lose Tie Salience 109 133 55 109 130 50 Faithfulness 116 123 58 126 122 41 Overall quality 120 138 39 125 140 24 Table 5: Pairwise comparison between summaries generated by the best ROUGE-1 models and T5 base .",
"to humans, models generate much more extractive summaries.",
"Moreover, the semi-supervised models (R1-best) are even more extractive than the baseline, which is probably because the self-training procedure amplifies the extraction tendency.",
"Lastly, for both base and best models as well as for both short and long summaries, the model performance (ROUGE-1) decreases as the number of emails in the thread increases (shown in Figure 2).",
"To better understand where the model still falls short and investigate if the automatic metrics correlate well with human judgments, we conduct a human evaluation on Amazon Mechanical Turk.",
"Initially, by manually checking the quality of model-generated summaries, we find that models can mostly generate grammatical, relevant, and flu-ent summaries; however, they often fail to be salient and faithful, i.e., models tend to be overdetailed or do not stay true to the source thread.",
"Therefore, we ask human annotators to rate the salience and faithfulness of model-generated summaries.",
"We choose the best ROUGE-1 models, SemiSup together for EMAILSUM short , SemiSup w 3 c for EMAILSUM long , to evaluate, then we sample 100 examples, and collect 3 responses for each example.",
"Human judges are asked to rate on a 5-point Likert scale for salience and faithfulness respectively and annotate which summary sentences are not salient or unfaithful.",
"We explain the meaning of salience and faithfulness to annotators and instruct them how to rate from 1 to 5.",
"Meanwhile, to verify the improvement obtained by best R1 models over T5 base , we ask them to compare the summaries generated by these models and those from T5 base , and judge which one is more salient, more faithful, and has overall higher quality.",
"More collection details can be found in the Appendix D. We check the average inter-rater agreement (Krippendorff's alpha (Krippendorff, 2011)) of salience and faithfulness ratings.",
"It is around 0.09 to 0.23, i.e., slight to fair agreement (Fleiss and Cohen, 1973).",
"However, when we convert the ratings to 3-point by taking { 3 } , { 4 and 5 } , { 1 and 2 } as 3 classes, the agreement increases to 0.36 to 0.63, i.e., fair to substantial agreement.",
"This indicates that humans' subjectivity affects the ratings and people have a hard time distinguishing bad' from very bad' as well as good' from very good'.",
"Meanwhile, the ratings for short summaries are always less agreed across raters (0.36-0.38) than that for long summaries (0.58-0.63).",
"This indicates that there might be multiple different ways of summarizing an email thread into a short summary.",
"The agreement of pairwise comparison is around 0.20 to 0.24 (fair agreement), which is because the baseline and the best models have non-distinguishable performance (shown in Table 5).",
"Finally, we take the 3-rater average as the final human rating for each example.",
"In addition, we evaluate the correlations ( Pearson Correlation (Benesty et al., 2009)) among different human ratings.",
"The correlation between salience and faithfulness ratings is 0.36/0.45 for short/long summarization.",
"And the correlations among salience, faithfulness, and overall quality pairwise preferences are around 0.53 to 0.79.",
"Overall, moderate to large (Cohen, 2013) correlations are observed.",
"Surprisingly, human evaluators are mostly satisfied with the salience and faithfulness of model-generated summaries, ratings are around 4 out of 5.",
"On average, humans rate 3.89 and 4.04 for the salience and faithfulness of SemiSup together generated short summaries, respectively; and they rate 4.22 and 4.29 for the salience and faithfulness of SemiSup w 3 c generated long summaries, respectively.",
"Examples with low or high ratings are shown in Table 6 or Appendix Table 8.",
"Humans rate higher for model-generated long summaries, which is correlated to the trend of ROUGE, and they are more satisfied with faithfulness than salience.",
"son between the best ROUGE-1 models and T5 base .",
"Except for the faithfulness of EMAILSUM long , the best ROUGE-1 models mostly lose to the baseline (though the loss and win are mostly marginal).",
"Together with Table 4, we conjecture that the improvement obtained by semi-supervised learning exploits n-gram matching accuracy by making the summary more extractive, while humans prefer more abstractive summaries.",
"Lastly, we analyze the non-salient and unfaithful sentences labeled by the human evaluators.",
"We find that two errors are frequently made by the summarization model: (1) Failing to understand the sender's intent.",
"Usually, when we send an email, there is a high-level intention behind the detailed content we write, e.g., start up a discussion, bring up a concern, broadcast a decision, etc.",
"However, models are oftentimes unable to capture the intention and thus overly focus on details.",
"As shown in the first example of Table 6, Om intends to summarize the important points from a meeting, while the model only picks the first piece of detail in that email as the summary.",
"This problem is also related to the over-extractive issue (shown in Table 4).",
"The model tends to extract details from the source thread and the extraction is biased to the first sentence of each email.",
"(2) Failing to identify the roles of the sender and receiver.",
"An email thread is a special type of conversation with multiple speakers involved.",
"One challenge for the model is to identify the roles of different speakers and their relations, i.e., who does what to whom.",
"As shown in the second example of Table 6, the model wrongly takes 2 fixes in 382 are in the patch installer as information provided by Nilesh , whereas it is supposed to be by Diana .",
"The same issue can also be observed in the first example: Om is just summarizing what Nihar said instead of telling Nihar .",
"This is considered a type of unfaithfulness, which has been widely identified as a common issue of abstractive summarization models (Wang et al., 2020; Durmus et al., 2020; Maynez et al., 2020).",
"ROUGE (Lin, 2004) measures n-gram overlap and BERTScore (Zhang et al., 2019) is essentially based on soft uni-gram matching.",
"However, according to our analysis above, email thread summarization models mainly fail to be abstractive, salient, and faithful, properties that are hard to evaluate via n-gram overlap.",
"Furthermore, as pointed out by Bhandari et al. (2020), different datasets usually require different evaluation metrics.",
"Therefore, here, we study the correlation between automatic metrics and human judgments.",
"Specifically, we evaluate the Pearson Correlation between human ratings and automatic metric scores on the 100 examples used in the human evaluation.",
"Besides, as described above, we conduct a pairwise model comparison between the best ROUGE-1 models and T5 base for salience, faithfulness, and overall quality.",
"We convert them to a pairwise ranking score, i.e., -1 if T5 base is better, 1 if T5 base is worse, and 0 if the two models are indistinguishable.",
"In the same way, we convert different metric scores to ranking scores.",
"Then, we also evaluate the Pearson Correlation between human and metric ranking scores.",
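The correlation procedure described above can be sketched as follows. This is an illustrative reconstruction with made-up scores, not the paper's data: pairwise preferences against the baseline are mapped to {-1, 0, 1}, and a plain Pearson coefficient is computed between the human and metric rankings.

```python
# Sketch of the ranking-score correlation described above (illustrative data,
# not the paper's annotations).
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def to_ranking(baseline_score, other_score, tie_eps=0.0):
    """-1 if the baseline wins, 1 if it loses, 0 on a tie."""
    diff = other_score - baseline_score
    if abs(diff) <= tie_eps:
        return 0
    return 1 if diff > 0 else -1

# Hypothetical per-example scores: (baseline, comparison model).
human = [to_ranking(hb, ho) for hb, ho in [(4, 5), (4, 4), (5, 3)]]
metric = [to_ranking(mb, mo) for mb, mo in [(0.30, 0.35), (0.31, 0.31), (0.40, 0.28)]]
r = pearson(human, metric)
```

In practice a library routine such as `scipy.stats.pearsonr` would also report the p-value alongside the coefficient.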
"Figure 3 illustrates the results.",
"Overall, the correlations are fairly poor.",
"The best correlation is between ROUGE-1 and human overall quality ranking for short summary generation (coefficient=0.14, p=0.16).",
"There is little or negative correlation between metrics and human judgment for the long summary generation.",
"Therefore, we emphasize the importance of human evaluation; better automatic proxies need to be proposed in the future.",
"In this work, we propose an abstractive email thread summarization dataset, EMAILSUM , that contains 2,549 email threads with human-written short and long summaries.",
"We explore different summarization paradigms and find that taking the email thread as a single document and finetuning T5 (Raffel et al., 2020) sets up a good baseline.",
"Transferring from other summarization datasets barely improves it.",
"Using hierarchical structure also only marginally improves the performance.",
"Semi-supervised learning by using unlabelled email threads improves automatic metrics (ROUGE) but still loses to the baseline in human evaluation.",
"Finally, our human evaluation reveals that the model fails to understand the sender's main intention and the roles of different speakers.",
"Automatic metrics are poorly correlated with human judgment, which emphasizes the importance of human evaluation and designing new metrics for this task in the future.",
"We use two email collections in this work: Avocado (Oard et al., 2015) and W3C (Craswell et al., 2006).",
"W3C is derived from the W3C Public Mailing List, which is openly available online.",
"Avocado consists of emails and attachments taken from 279 accounts of a defunct information technology company referred to as Avocado.",
"Its copyright is held by the Linguistic Data Consortium.",
"Based on the license agreement, we will only open-source our collected summaries and provide scripts to obtain email threads from the original Avocado email collection.",
"To further protect copyright and the privacy of the persons involved in the emails, as introduced in Section 2, we carefully anonymize all the email threads we construct from both email collections.",
"We fairly pay crowdsource workers $1.37 (for threads with 5 or fewer emails) or $2 (for threads with more than 5 emails) for writing the short and long summaries, and $0.60 for human rating, such that the pay rate exceeds the federal minimum wage.",
"We thank the reviewers for their helpful comments and Xiang Zhou for useful discussions.",
"We thank Saadia Gabriel, Yichen Jiang, Tom McCoy, and Yixin Nie for helping write summary examples (to show as initial examples to MTurk annotators) and estimate the workload for deciding the fair payment.",
"This work was partially done while SZ was interning at MSR and later extended at UNC, where it was supported by NSF-CAREER Award 1846185, ONR Grant N00014-18-1-2871, and a Microsoft Investigator Fellowship.",
"The views contained in this article are those of the authors and not of the funding agency."
] | [
"abstain",
"result",
"objective",
"objective",
"result",
"result",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"result",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Continuity of care is crucial to ensuring positive health outcomes for patients discharged from an inpatient hospital setting, and improved information sharing can help.",
"To share information, caregivers write discharge notes containing action items to share with patients and their future caregivers, but these action items are easily lost due to the lengthiness of the documents.",
"In this work, we describe our creation of a dataset of clinical action items annotated over MIMIC-III, the largest publicly available dataset of real clinical notes.",
"This dataset, which we call CLIP, is annotated by physicians and covers 718 documents representing 100K sentences.",
"We describe the task of extracting the action items from these documents as multi-aspect extractive summarization, with each aspect representing a type of action to be taken.",
"We evaluate several machine learning models on this task, and show that the best models exploit in-domain language model pre-training on 59K unannotated documents, and incorporate context from neighboring sentences.",
"We also propose an approach to pre-training data selection that allows us to explore the trade-off between size and domain-specificity of pre-training datasets for this task.",
"Transitioning patient care from hospitals to primary care providers (PCPs) can frequently result in medical errors (Kripalani et al., 2007).",
"When patients are discharged, they often require further actions to be taken by their PCP, who manages their long-term health, such as reviewing results for lab tests once they are available (Moore et al., 2007).",
"Yet PCPs often have many patients and little time to review new clinical documents related to a recent hospitalization. (Equal contribution. Work done while at ASAPP.)",
"Discharge notes are typically lengthy (Weis and Levy, 2014) and written as free text, so PCPs may fail to identify important pending actions, which inadvertently leads patients to poor outcomes.",
"Spencer et al. (2019) found that PCPs considered the lack of a standardized follow-up section to be a key driver in missing follow-up action items.",
"While discharge notes may include follow-up sections, they are typically aimed at the patient and not curated for PCP use.",
"Jackson et al. (2015) found that following up on pending clinical actions is critical for minimizing risk of medical error during care transitions, especially for patients with complex treatment plans.",
"Automatic extraction of action items can make physicians more efficient by reducing the high cognitive load and time-consuming burden of using electronic health records (Tai-Seale et al., 2017; Sinsky et al., 2016; Singh et al., 2013; Farri et al., 2013).",
"To our knowledge, there has been little previous work using machine learning to address this important clinical problem.",
"Potential impact Successful automatic extraction of action items can have several direct benefits.",
"First, it can improve patient safety by fostering more comprehensive and complete care by PCPs.",
"Second, it might make physicians more efficient at performing a comprehensive review of action items, which is critical as physicians spend an increasing amount of time interacting with electronic health record (EHR) systems (Tai-Seale et al., 2017; Sinsky et al., 2016).",
"Further, reviewing and synthesizing lengthy or complicated patient histories places a significant cognitive load on physicians, which has been associated with increased medical error (Singh et al., 2013; Farri et al., 2013), so reducing this cognitive load is an area of opportunity.",
"Finally, a working system might integrate with EHRs to automatically address certain action items, such as scheduling appointments.",
"This would improve EHR usability and further reduce medical error.",
"Contributions We introduce a new clinical natural language processing task that accomplishes focused information extraction from intensive care unit (ICU) discharge notes by selecting sentences that contain action items for PCPs or patients.",
"An action item is a statement in a discharge note that explicitly or implicitly directs the reader to an action that should be taken as a result of the hospital stay described in the document.",
"Given a discharge note, the task is to extract all action items in the note.",
"We cast this task as a special case of multi-aspect document summarization, with each aspect representing an area of patient care to monitor or on which to take action (see examples in Table 1).",
"We create the first annotated dataset for this new task, CLIP, a dataset of 718 annotated notes from MIMIC-III (Johnson et al., 2016), comprising over 100K annotated sentences.",
"This will be, to our knowledge, one of the largest annotated datasets for clinical NLP, which tend to be smaller due to the expense of expert annotators.",
"We evaluate machine learning methods to tackle this task.",
"Similar to prior work on multi-aspect extractive summarization, we employ sentence-level multi-label classification techniques (Hayashi et al., 2020).",
"Our proposed architecture consists of passing a sentence, and its neighboring sentences on its left and right, through a pre-trained BERT model (Devlin et al., 2019) with minor modifications.",
"Since there is limited annotated data but a wealth of unlabeled in-domain clinical notes, we also explore the impact of unsupervised learning on this task.",
"We develop a method for task-targeted pre-training data selection, in which a model trained on the downstream task selects unlabeled document segments for fine-tuning a BERT model.",
"We find that this focused pre-training is much faster than pre-training on all available data and achieves competitive results.",
"Our results show that unsupervised pre-training of any form is critical to improving results.",
"Our code is available as open-source software at https://github.com/asappresearch/clip, and our annotations are available via PhysioNet, to fully enable reproduction of our results and to provide a benchmark for evaluating future advances in clinical NLP on this clinically important task; because the annotations are built on top of MIMIC-III, which PhysioNet maintains, access requires completion of an ethics course and a Data Use Agreement.",
"Clinical information extraction There has been a wealth of previous work on extracting information from clinical notes, much of which also follows an extractive summarization approach.",
"For example, Were et al. (2010) extracts items such as patient smoking status and obesity comorbidities from discharge notes.",
"Liang et al. (2019) created a hybrid system of regex-based heuristics, neural network models trained on pre-existing datasets, and models such as support vector machines for disease-specific extractive summarization.",
"Liu et al. (2018b) developed a pseudo-labelling, semi-supervised approach, using intrinsic correlation between notes, to train extractive summarization models for disease-specific summaries.",
"We differ from these efforts in that we do not aim to generate general-purpose or disease-specific summaries; rather, we focus on extracting specific action items from discharge notes to facilitate care transfer.",
"Clinical datasets Datasets and challenges on the extraction of medication, tests, and procedure mentions in clinical text (Uzuner et al., 2010, 2011; Jagannatha et al., 2019) have been released, but without the focus on providing actionable insight to PCPs.",
"Additionally, multiple datasets (Uzuner et al., 2012; Sun et al., 2013) have been introduced for detecting temporal and co-reference relations between parts of a note.",
"While it may be useful for a model to have a good grasp of co-reference and temporal dependencies to understand what constitutes actionable information for a PCP, we choose to optimize directly for the end task, noting recent work demonstrating that modern pre-trained neural networks will identify and exploit such information as needed (Tenney et al., 2019).",
"Although on different tasks, we note that our dataset of 718 annotated documents is larger than recently released datasets, such as those from the n2c2 shared tasks.",
"For comparison, 500 documents were annotated for adverse drug event extraction (Henry et al., 2020a), 150 documents for family history extraction (Liu et al., 2018a), and 100 documents for clinical concept normalization (Henry et al., 2020b).",
"One of the largest annotated clinical datasets, emrQA, is built on 2,425 clinical notes (Pampari et al., 2018).",
"Summarization Prior summarization work, which we build on, uses pre-trained transformer models to construct sentence representations that are contextualized with the entire document (Liu and Lapata, 2019; Hayashi et al., 2020).",
"Liu and Lapata (2019) evaluate on three benchmark summarization datasets consisting of news articles.",
"Those articles are shorter, with average document lengths of 400-800 words, whereas MIMIC-III discharge notes average over 1,400 words.",
"Liu and Lapata (2019) evaluate with ROUGE scores standard in summarization, whereas we take advantage of having ground truth extracted sentences and evaluate with classification metrics, providing a substantially different task.",
"Liang et al. (2019) develop a disease-specific summary dataset, but it is not public and their methods involve combining a mix of outputs from models performing auxiliary tasks such as concept recognition, adverse drug event extraction, and medication change detection, each of which have to be individually developed and maintained.",
"In this section, we describe the process of creating our CLIP dataset, short for CLINICALFOLLOWUP, and report statistics on the dataset.",
"CLIP is created on top of the popular clinical dataset MIMIC-III (Johnson et al., 2016).",
"The MIMIC-III dataset contains 59,652 critical care discharge notes from the Beth Israel Deaconess Medical Center over the period of 2001 to 2012, among millions of other notes and structured data.",
"We annotated 718 randomly sampled discharge notes from the set of patients that were discharged from the ICU (i.e., survived) and thus brought back to the care of their primary care physician or relevant specialists.",
"Though this dataset is orders of magnitude smaller than general summarization datasets such as Nallapati et al. (2016), we note the relatively large expense associated with clinical annotation, due to both the length of the documents (about 160 sentences on average) and the requirement of domain experts.",
"This dataset is also the first of its kind in the clinical space.",
"The total number of sentences is 107,494, of which 12,079 have at least one label.",
"The sampled MIMIC-III data is further split randomly into training, validation, and test sets, such that all sentences from a document go to the same split; Table 2 gives the prevalence of each label type in the CLIP training set: Patient Instructions 6.55%, Appointments 4.59%, Medications 1.88%, Lab tests 0.69%, Procedures 0.28%, Imaging 0.18%, Other 0.05%.",
"Our dataset was annotated by four physicians and one resident over the course of three months.",
"We underwent several rounds of initial annotations with calibration processes and instruction refinement in between.",
"Additional annotation details are provided in the appendix and the full guidelines are available on our public repository.",
"We estimated inter-rater reliability by having two physician annotators independently annotate a set of 13 documents comprising 2600 sentences.",
"Comparing predictions on a binary reduction of the task, in which a match indicates that both annotators labeled a sentence (regardless of chosen label types), we measured a Cohen's kappa statistic of 0.925.",
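The binary agreement statistic used above can be sketched in a few lines. The annotations here are invented for illustration; a 1 means the annotator gave the sentence some label, regardless of which label type.

```python
# Minimal Cohen's kappa for the binary agreement check described above.
# Illustrative labels only, not the paper's 2,600 annotated sentences.
def cohens_kappa(a, b):
    n = len(a)
    po = sum(1 for x, y in zip(a, b) if x == y) / n  # observed agreement
    # expected chance agreement from each rater's marginals
    pa1 = sum(a) / n
    pb1 = sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (po - pe) / (1 - pe)

ann1 = [1, 0, 0, 1, 0, 0, 0, 1]
ann2 = [1, 0, 0, 1, 0, 0, 1, 1]
k = cohens_kappa(ann1, ann2)
```

The same result is available via `sklearn.metrics.cohen_kappa_score` when that dependency is acceptable.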
"The seven action item aspects that we labeled in the dataset, along with example discharge note snippets for each one, are presented in Table 1.",
"To emphasize the subtlety and complexity of this task, we highlight here some example rules that state what should not be annotated.",
"For the appointment label, we exclude sentences that refer to 'as needed' appointments (e.g., 'See your endocrinologist as needed.'); these describe no deviation from status quo behavior and thus do not warrant follow-up action.",
"For the medication label, we specifically exclude sentences describing simple additions to the medication list (e.g., 'Discharged on glargine 10u at bedtime'), as these typically do not require further action.",
"However we include instructions to hold and restart medications, new medications with an end date (e.g. antibiotics), and medications requiring dosage adjustment (e.g. ...the plan is to keep patient off diuretics with monitoring of his labs and reinstitution once the kidney function improves), as these are likely to require PCP action.",
"Due to the large amount of discharge note text containing information not directly actionable for follow-up, most sentences remain without a label after the annotation process; 11.2% of training set sentences have a label.",
"Of the sentences with labels, 28.6% have multiple labels.",
"Table 2 shows the frequency of each label type at the sentence level in the training set.",
"To distinguish the contribution of our dataset in the context of existing text summarization datasets, we performed a manual quantitative comparison between CLIP and the summarization datasets CNN (Hermann et al., 2015) and WikiASP (Hayashi et al., 2020).",
"For WikiASP, we chose sentences from the Event genre of summary, as our dataset describes hospital stays which could be considered events.",
"Inspired by Suhr et al. (2017), we identified five phenomena to compare across datasets: quantification (in the numerical sense, as in '300 mg' or 'twenty-three people'), temporal expressions, conditional expressions, imperative mood or second-person statements, and out-of-vocabulary (OOV) terms.",
"We gathered 100 sentences from the summaries of each dataset and counted the occurrences of each phenomenon.",
"We see that CLIP has a relative wealth of imperative and second-person statements, which is not surprising due to the prevalence of patient-directed language in Patient instructions-labeled sentences.",
"CLIP and WikiASP both have more temporal expressions than CNN, which are contained in around half of the sample sentences of each.",
"Despite the prevalence of clinical jargon in CLIP, WikiASP actually contained the most OOV words, perhaps due to the diversity of sources of that dataset.",
"Conditional language, such as 'If you miss any doses of this medication, your stents could clot off again...', was uncommon in all datasets but occurred most often in CLIP.",
"By OOV, we mean any token that must be split into multiple WordPiece tokens given the vanilla BERT vocabulary.",
"For example, the common abbreviation for patient, pt, becomes p, ##t.",
"With a discharge note as input, the task is to output the clinically actionable follow-up items found within the note.",
"We evaluate our experiments on this multi-label classification formulation, as well as on a binary reduction of the problem in which the objective is simply to identify which sentences have any type of label.",
"This binary framing is still useful, as surfacing the sentences for a reader is the primary objective that will save time and effort, with classification of the sentence being a secondary benefit.",
"There are many summarization methods that could appropriately handle this problem.",
"The length of these documents and the high relative risk of missing information in a clinical setting discourages the option of truncating documents to fit into modern neural network models which may have maximum length requirements, so we develop methods that approach the task as multilabel sentence classification.",
"Summarization of a full document can then be accomplished with the resulting model by feeding each sentence into the model in sequence and aggregating the sentences that the model labels.",
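The aggregation step just described can be sketched as a small pipeline. The classifier here is a hypothetical rule-based stand-in (`toy_classify`), not the paper's BERT model; only the control flow reflects the approach.

```python
# Sketch of extraction-by-classification: apply a sentence-level multi-label
# classifier to each sentence in turn and aggregate the labeled sentences.
from typing import Callable, List, Tuple

def extract_action_items(
    sentences: List[str],
    classify: Callable[[str], List[str]],
) -> List[Tuple[str, List[str]]]:
    """Return (sentence, labels) pairs for sentences given at least one label."""
    summary = []
    for sent in sentences:
        labels = classify(sent)
        if labels:  # keep only sentences carrying an action-item label
            summary.append((sent, labels))
    return summary

# Toy keyword stand-in for the trained model.
def toy_classify(sent: str) -> List[str]:
    labels = []
    if "appointment" in sent.lower():
        labels.append("Appointment")
    if "medication" in sent.lower():
        labels.append("Medication")
    return labels

doc = ["Patient was stable overnight.",
       "Please schedule a follow-up appointment in two weeks.",
       "Hold all blood-pressure medication until labs return."]
items = extract_action_items(doc, toy_classify)
```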
"The BERT architecture (Devlin et al., 2019) has been widely used within clinical NLP in the past year with successful results (Lee et al., 2020; Alsentzer et al., 2019; Mulyar et al., 2019; Johnson et al., 2020; McDermott et al., 2020; Zhang et al., 2020).",
"In particular, Si et al. (2020) has shown the effectiveness of BERT for use on small annotated clinical datasets, such as the one we develop.",
"We use BERT as the basis for our proposed model.",
"BERT-based baselines To demonstrate baseline BERT performance, we fine-tune pre-trained BERT models on our task.",
"As the simplest approach, we feed a sentence into BERT, take the hidden state of the [CLS] token as the sentence-level representation, and train a linear layer over that representation.",
"To adapt BERT to our domain, we also experiment with a previously released version of BERT which has been further pre-trained on MIMIC-III discharge notes (Alsentzer et al., 2019), and fine-tune it on our task in the same way.",
"We refer to this variant as MIMIC-DNOTE-BERT.",
"Alsentzer et al. (2019) also release a version pretrained on all MIMIC-III notes, which we refer to as MIMIC-FULL-BERT.",
"Both MIMIC-Full-BERT and MIMIC-DNote-BERT are initialized with BioBERT (Lee et al., 2020), which is pretrained on a corpus of biomedical research articles.",
"Incorporating neighboring context Surrounding context is critical for this task, for two reasons: 1) an individual sentence may not give the full picture of the action type; 2) neighboring sentences tend to share the same label (this occurs for 27% of sentences).",
"So, we incorporate context beyond an individual sentence into our BERT-based sentence representations, by concatenating the two sentences each that immediately precede and follow the sentence to the input.",
"To do this, we follow the encoder architecture of Liu and Lapata (2019), which concatenates sentences with special tokens and applies alternating segment embeddings to alternating sentences.",
"We make the following modifications: we exclude the additional transformer layers on top of the BERT output, use only SEP tokens to separate sentences, and apply the segment embedding SA to the tokens in the focus sentence and SB to all other tokens, as pictured in Figure 1.",
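The input construction just described can be sketched without a real BERT model. Whitespace tokenization stands in for WordPiece, and segment id 0 plays the role of SA (focus sentence) while 1 plays SB (everything else); the exact placement of special tokens is an assumption of this sketch.

```python
# Rough sketch of the context window: the focus sentence plus its two
# neighbors on each side, sentences separated by [SEP], with segment id 0
# on the focus sentence and 1 on all other tokens.
from typing import List, Tuple

def build_input(sentences: List[str], focus_idx: int,
                window: int = 2) -> Tuple[List[str], List[int]]:
    lo = max(0, focus_idx - window)
    hi = min(len(sentences), focus_idx + window + 1)
    tokens, segments = ["[CLS]"], [1]
    for i in range(lo, hi):
        seg = 0 if i == focus_idx else 1  # SA on focus, SB elsewhere
        for tok in sentences[i].split():
            tokens.append(tok)
            segments.append(seg)
        tokens.append("[SEP]")
        segments.append(seg)
    return tokens, segments

doc = ["a b", "c", "d e", "f", "g"]
toks, segs = build_input(doc, focus_idx=2)
```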
"We initialize models of this architecture with various pre-trained BERT parameters in experiments.",
"Given the limited amount of annotated data, we are motivated to pursue semi-supervised approaches.",
"We seek to explore the trade-off between generalized and domain- or task-specific data for language model pre-training by introducing a technique for targeted pre-training, which we call Task-Targeted Pre-training (TTP).",
"TTP requires less data and computation, yet attains performance comparable to pre-training on the large in-domain datasets studied in prior work (Alsentzer et al., 2019).",
"The goal of this approach is to surface unlabeled sentences that may be positive examples, in the vein of self-supervision techniques such as Snorkel (Ratner et al., 2017).",
"In contrast to Snorkel, which uses model predictions to generate pseudolabels to train with, TTP uses model predictions to select sentences for pre -training, using auxiliary tasks.",
"To create a task-targeted dataset, we first fine-tune a vanilla BERT model on our task, and then we use the learned model to classify all unlabeled sentences.",
"We select all sentences that the model predicts as having action items, using a fixed threshold.",
"Due to the multi-label nature of our task, we apply the threshold across all labels and select sentences in which at least 1 label score is above the threshold.",
"The threshold used to select the task-targeted sentences can be tweaked to create datasets for pre-training that are smaller and more task-focused (for higher thresholds), or larger and more general (for lower thresholds), which we experiment with.",
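The selection step above reduces to a max-over-labels threshold test. The scores below are made up for illustration; in the actual pipeline they would come from the vanilla BERT model fine-tuned on the task.

```python
# Sketch of task-targeted selection: keep an unlabeled sentence for
# pre-training when at least one label score clears a fixed threshold.
def select_for_pretraining(scored_sentences, threshold):
    """scored_sentences: list of (sentence, per-label score list) pairs."""
    return [sent for sent, scores in scored_sentences
            if max(scores) >= threshold]

pool = [("See your PCP next week.",     [0.90, 0.10, 0.05]),
        ("The weather was nice.",       [0.02, 0.01, 0.03]),
        ("Restart aspirin in 3 days.",  [0.20, 0.70, 0.10])]
# A higher threshold yields a smaller, more task-focused pre-training set;
# a lower one yields a larger, more general set.
small = select_for_pretraining(pool, threshold=0.8)
large = select_for_pretraining(pool, threshold=0.5)
```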
"This approach is inspired by and similar to task-adaptive pre-training (TAPT), introduced by Gururangan et al. (2020).",
"In that work, a pre-trained bag-of-words language model encodes sentences in labeled and unlabeled datasets, and for each labeled sentence selects its nearest neighbor unlabeled sentences according to the model.",
"In this paper, we select data points using the full prediction model (rather than just an encoder), and use thresholding, which provides maximal control over the size of the selected dataset.",
"Further, directly applying TAPT to our case may not work well as it does not distinguish positive and negative samples in the in-domain dataset, so the surfaced sentences from TAPT may be less relevant.",
"Our approach benefits from using an encoding method that is trained on the task we are targeting.",
"After selecting data, we pre-train a BERT-Context model on the targeted dataset, pulling in neighboring sentences of the targeted sentences.",
"As auxiliary tasks, we used masked language modeling (MLM) and a sentence switching task (Wang et al., 2019).",
"For MLM, we mask tokens in the context sentence only, independently with probability 0.15.",
"For sentence switching, with probability 0.25 we swap the focus sentence with another randomly chosen sentence from the same document, and predict whether the focus sentence was swapped using the context sentences.",
"Cross entropy losses for both tasks are computed and summed to compute the total loss for an instance.",
"These tasks encourage the model to learn how to incorporate information from the context sentences into its representation.",
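The sentence-switching example construction can be sketched as below. This is a simplified, self-contained reconstruction (the real pipeline operates on tokenized BERT inputs and combines this target with the MLM loss); the seed exists only to make the toy run reproducible.

```python
# Sketch of the sentence-switching auxiliary task: with probability 0.25,
# swap the focus sentence with another random sentence from the same
# document; the model must predict from context whether a swap occurred.
import random

def make_switch_example(doc_sentences, focus_idx, rng, p_switch=0.25):
    focus = doc_sentences[focus_idx]
    switched = 0
    if rng.random() < p_switch:
        # swap in a different, randomly chosen sentence from this document
        candidates = [i for i in range(len(doc_sentences)) if i != focus_idx]
        focus = doc_sentences[rng.choice(candidates)]
        switched = 1
    return focus, switched  # (possibly swapped focus, binary target)

rng = random.Random(0)
doc = ["s0", "s1", "s2", "s3"]
examples = [make_switch_example(doc, 1, rng) for _ in range(1000)]
switch_rate = sum(y for _, y in examples) / len(examples)
```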
"Figure 1 depicts the entire process.",
"This process can be repeated by using the final resulting model to select a new set of sentences for pre-training; however, we did not experiment with this, as one iteration was enough to produce competitive results.",
"We first generate synthetic surrogates for entities redacted during de-identification, apply a custom sentence tokenizer adapted from open-source software to tokenize the document into sentences, and lowercase every sentence.",
"Discharge notes in MIMIC often have semi-structured sections, with headers denoting them, e.g. BRIEF HOSPITAL COURSE: , which the tokenizer is built to identify.",
"Using TTP, we select pre-training datasets of sizes 250K, 500K, 1M, and 2M sentences from the set of MIMIC-III discharge notes.",
"As baselines, we train a TF-IDF-weighted bag-of-words logistic regression model with L1 regularization and a max-pooling 1-D convolutional neural network (CNN).",
"The CNN is initialized with BioWordVec vectors (Zhang et al., 2019; Chen et al., 2019), which are trained on PubMed and MIMIC-III notes, and the CNN is trained with the binary cross-entropy (BCE) loss.",
"All BERT-based models are loaded, pre-trained as appropriate, and fine-tuned using the transformers library (Wolf et al., 2019), using BCE loss, and backpropagating and applying gradient updates through all of BERT's parameters.",
"We used library default parameters, except for the batch size which we adjusted to 32 based on validation set performance and training stability.",
"All neural models are trained with early stopping on the macro-averaged AUROC metric (the sentence tokenizer is adapted from https://github.com/fnl/syntok and https://github.com/wboag/mimic-tokenize).",
"Early stopping is also applied to the pre-training step, using the loss on an unlabeled held-out set as the criterion.",
"We report results on the test set using microand macro-averaged metrics common in multilabel classification, and F1 for the binary reduction of the task.",
"Micro-averaged metrics treat each (sentence, label) pair as an individual binary prediction, and macro-averaged metrics compute the metric per-label and then average these results across labels.",
"For binary F1, we transform the label and model predictions into binary variables indicating whether any type of label was predicted for the sentence, and then calculate metrics, ignoring whether the types of the predicted labels were accurate.",
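The binary reduction just described can be sketched directly. The label sets below are toy data; note that the first example counts as a true positive even though the predicted label type is wrong, exactly because the reduction ignores label types.

```python
# Sketch of the binary reduction: collapse multi-label predictions to
# "has any label?" before computing F1.
def binary_f1(true_sets, pred_sets):
    t = [1 if s else 0 for s in true_sets]
    p = [1 if s else 0 for s in pred_sets]
    tp = sum(1 for a, b in zip(t, p) if a == 1 and b == 1)
    fp = sum(1 for a, b in zip(t, p) if a == 0 and b == 1)
    fn = sum(1 for a, b in zip(t, p) if a == 1 and b == 0)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

true = [{"Appointment"}, set(), {"Medication", "Lab"}, set()]
pred = [{"Medication"},  set(), {"Medication"},        {"Imaging"}]
f1 = binary_f1(true, pred)
```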
"To ensure the fairest comparison between models and eliminate some arbitrariness in results that may arise when training on imbalanced data and evaluating with a fixed 0.5 threshold, we also tune thresholds for each label such that its F1 score on the validation set is maximized.",
"For micro metrics, we choose the threshold that provides the highest micro F1 score.",
"We then apply these thresholds when evaluating on the test set.",
"The main set of results is reported in Table 4.",
"Models pre-trained with TTP have the size of their [Table 5: Average balanced F1 scores on the test set for each label across 10 runs. Columns: Patient, Appt, Medication, Lab, Procedure, Imaging, Other. Bag-of-words: 0.741, 0.792, 0.546, 0.625, 0.302, 0.343, 0.236. CNN: 0.759, 0.824, 0.595, 0.629, 0.315, 0.431, 0.228. BERT: 0.780, 0.855, 0.635, 0.719, 0.415, 0.474, 0.275. MIMIC-DNote-BERT: 0.783, 0.854, 0.656, 0.741, 0.524, 0.567, 0.294. MIMIC-DNote-BERT+Context: 0.830, 0.882, 0.659, 0.744, 0.597, 0.567, 0.349. TTP-BERT+Context (250k): 0.841, 0.887, 0.668, 0.745, 0.548, 0.566, 0.365.]",
"pre-training dataset denoted in parentheses.",
"BERT and both MIMIC-BERT models outperform the logistic regression and CNN baselines.",
"The results using MIMIC-DNote-BERT demonstrate the importance of domain-specific pre-training; it improves in all metrics over BERT.",
"Using neighboring sentences, as we do in the +Context models, also provides a performance boost across all metrics save for macro AUC, comparing MIMIC-DNote-BERT to MIMIC-DNote-BERT+Context.",
"To compare with human performance: our inter-annotator agreement on the binary task, measured in terms of F1, was 0.930, while the highest mean binary F1 from the model evaluations approaches 0.86.",
"When using just 250,000 sentences from the MIMIC discharge notes for pre-training (TTP-BERT+Context (250k)), task results are competitive with, and in some cases exceed, those of MIMIC-DNote-BERT+Context, which is pre-trained on all 9M sentences of the MIMIC discharge notes.",
"Our TTP approach is able to complete domain-specific pre-training within 12 hours, while Alsentzer et al. (2019) report a pre-training time of 17-18 days for MIMIC-Full-BERT.",
"We next investigate results on each label (see Table 5), for a subset of models.",
"The in-domain pre-training for MIMIC-DNote-BERT models provides gains for nearly all label types, and including context also gives a boost to the F1 score of most labels.",
"All models perform poorly predicting the Other label, which encompasses a long tail of many different types of follow-ups which we did not further categorize, making modeling difficult.",
"Imaging and Procedure label performance lags others, likely due to their lower prevalence (Table 2).",
"We examine errors made by TTP-BERT-Context (1M), focusing on false negatives, the most costly type of error in this use case.",
"Inspection of the test set with physician input yields two high-level phenomena of the data that occur repeatedly in error cases: clinical jargon / knowledge, and temporal expressions / conditional language.",
"Clinical jargon: Perhaps the most obvious drawback of applying general-purpose language models to clinical language data is that clinical language is heavily laden with clinical jargon, abbreviations, and misspellings.",
"Although the WordPiece tokenization used by BERT-based models can tokenize any input, the more clinical terms are split apart, the more model capacity is consumed, as lower layers in BERT must learn to combine the meanings of the WordPieces into word-level representations.",
"We observed several cases in which even common clinical jargon may have interfered with the model's performance in an otherwise unambiguous sentence.",
"Bolded words are OOVs: <-please take medications as directed -follow up with pcp mark carter using> , <plan for repeat chest xray pa/ lat and lordotic view to reevaluate when returns 12-18 for wound check> .",
"Many cases of this type of error also suggest that a lack of explicit clinical knowledge could be a barrier, in addition to the technical issue of WordPiece tokenization.",
"In this example, promethazine is a drug that can be prescribed for a short, defined period:",
"promethazine 25 mg tablet sig : 0.5 tablet po q6h ( every 6 hours ) as needed for nausea .",
"In the following example, the procedures described are required but do not need an appointment, and the model erroneously applied the Appointment label: however , the patient will need aggressive pulmonary toilet including good oral suctioning care and chest pt as pt is at risk for aspiration .",
"Temporal expressions: The model may also struggle with temporal expressions, which are especially common in the Medication label type.",
"This label is intended to surface cases of medications that need to be tweaked, started, or stopped after a specified time period.",
"Example: ...you should go back to your regular home dosing of 20 units in the morning and 24 units at dinner time after completing your prednisone .",
"While many training examples gave explicit durations (e.g. for 14 days), many of the false negative examples described dependencies between future patient actions, including with conditional if statements.",
"Example: if he needs further management he may do well with clonidine .",
"Our results show that the common regime of fine-tuning a large pre-trained model is a useful method for our task of extracting clinical action items.",
"Additionally, we investigated the trade-off between task-specificity and pre-training data size, and found our task-targeted pre-training method enables one to navigate this trade-off, producing models with comparable performance on the end task that require less data for pre-training.",
"While trading off these concerns may not be needed if effective public models exist for a given task, we believe this technique is useful in scenarios in which users have large, domain-specific, private datasets and specific tasks in mind.",
"This is often the case for healthcare institutions and developers of clinical machine learning software, as privacy concerns tend to preclude data sharing between institutions.",
"From a modeling perspective, there are many possible avenues for future work.",
"Taking a structured prediction lens and leveraging sentence-level label dependencies or applying structured prediction models could be helpful, although Cohan et al. (2019) note that CRF layers did not improve their performance for a sequential sentence classification task.",
"We acknowledge that our sentence classification approach is a simplification of the more general span detection problem, and this approach could bring improved precision by focusing on which parts of sentences matter, which may be important as we found that sentence tokenization was non-trivial for clinical notes.",
"Finally, the question of whether such an approach to follow-up workflow augmentation is successful in increasing patient safety, clinician efficiency, or EHR usability is an empirical one.",
"We hope to evaluate in the future whether a highlighted note, such as the one these models could provide, will reduce the time a physician takes to, for example, answer certain questions about a patient's hospital stay.",
"In alignment with recent calls for increased rigor in the evaluation of machine learning-derived clinical decision support systems (Kelly et al., 2019), future work should include further prospective, controlled evaluation of the generalizability, stability, interpretability, unbiasedness, usability, and efficacy of this approach.",
"We hope that our dataset and initial model development can lay the groundwork for future investigation.",
"We introduce the task of detecting clinical action items from discharge notes to help primary care physicians more quickly and comprehensively identify actionable information, and present the CLIP dataset, which we will release to the community.",
"Given perfect performance, this would reduce the number of sentences a PCP may need to read by 88%.",
"The best model's binary F1 is near 0.9, compared to the human benchmark of 0.93.",
"These models could additionally be used for clinical research.",
"For example, a calibrated model could derive statistics for how often each type of action item is seen for different patient populations, which can provide insight into typical patient or PCP burden after hospital discharge.",
"We evaluated BERT-based models that incorporate multi-sentence context, and introduced a novel task-targeted pre-training approach that can reduce pre-training time while maintaining similar performance to models pre-trained on much larger in-domain datasets.",
"The models have promising results, however we anticipate there is still room for improvement, particularly for the rare labels.",
"We encourage the clinical NLP community to further investigate the problem of detecting action items from hospital discharge notes, which can help improve reliably safe transitions of care for the most vulnerable patients.",
"We thank our team of physician annotators for their fruitful collaboration and the reviewers for their comments which improved this paper."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"other"
] |
[
"To address the challenge of policy learning in open-domain multi-turn conversation, we propose to represent prior information about dialog transitions as a graph and learn a graph grounded dialog policy, aimed at fostering a more coherent and controllable dialog.",
"To this end, we first construct a conversational graph (CG) from dialog corpora, in which there are vertices to represent what to say and how to say, and edges to represent natural transition between a message (the last utterance in a dialog context) and its response.",
"We then present a novel CG grounded policy learning framework that conducts dialog flow planning by graph traversal, which learns to identify a what-vertex and a how-vertex from the CG at each turn to guide response generation.",
"In this way, we effectively leverage the CG to facilitate policy learning as follows: (1) it enables more effective long-term reward design, (2) it provides high-quality candidate actions, and (3) it gives us more control over the policy.",
"Results on two benchmark corpora demonstrate the effectiveness of this framework.",
"How to effectively learn dialog strategies is an enduring challenge for open-domain multi-turn conversation generation.",
"To address this challenge, previous works investigate word-level policy models that simultaneously learn dialog policy and language generation from dialog corpora (Li et al., 2016b; Zhang et al., 2018b).",
"But these word-level policy models often lead to a degeneration issue where the utterances become ungrammatical or repetitive (Lewis et al., 2017).",
"To alleviate this issue, utterance-level policy models have been proposed to decouple policy learning from response generation, and they focus on how to incorporate",
"high-level utterance representations, e.g., latent variables or keywords, to facilitate policy learning (He et al., 2018; Yao et al., 2018; Zhao et al., 2019). (This work was done at Baidu.)",
"However, these utterance-level methods tend to produce less coherent multi-turn dialogs since it is quite challenging to learn semantic transitions in a dialog flow merely from dialog data without the help of prior information.",
"In this paper, we propose to represent prior information about dialog transition (between a message and its response) as a graph, and optimize dialog policy based on the graph, to foster a more coherent dialog.",
"To this end, we propose a novel conversational graph (CG) grounded policy learning framework for open-domain multi-turn conversation generation ( CG-Policy ).",
"It consists of two key components, (1) a CG that captures both local-appropriateness and global-coherence information, (2) a reinforcement learning (RL) based policy model that learns to leverage the CG to foster a more coherent dialog.",
"In Figure 1, given a user message, our system selects a what-vertex (sleepy) and a how-vertex (responding mechanism $M_3$) to produce a coherent response.",
"We first construct the CG based on dialog data.",
"We use vertices to represent utterance content, and edges to represent dialog transitions between utterances.",
"Specifically, there are two types of vertices: (1) a what-vertex that contains a keyword, and (2) a how-vertex that contains a responding mechanism (from a multi-mapping based generator in Section 3.1) to capture rich variability of expressions.",
"We also use this multi-mapping based method to build edges between two what-vertices to capture the local-appropriateness between the two keywords as a message and a response respectively.",
"It can be seen that the what-vertices from the same highly connected region are more likely to constitute coherent dialog.",
"We then present a novel graph grounded policy model to plan a long-term success oriented vertex sequence to guide response generation.",
"Specifically, as illustrated by the three pink lines in Figure 1, given a user message, CG-Policy first links its keywords to the CG to obtain hit what-vertices.",
"Next, the policy model learns to select a what-vertex from one-hop what-vertex neighbors of all hit what-vertices, and then select a how-vertex from how-vertex neighbors of the chosen what-vertex.",
"Finally, the two selected vertices are utilized to guide response generation.",
"Thus we leverage the prior dialog-transition information (as graph edges) to narrow down candidate response content for more effective policy decisions, instead of using the whole set of keywords as candidate actions.",
"Moreover, to facilitate the modeling of long-term influence of policy decisions in an ongoing dialog, we first present novel CG based rewards to better measure the long-term influence of selected actions.",
"We then employ a graph attention mechanism and graph embedding to encode global structure information of CG into dialog state representations, enabling global information aware decisions.",
"This paper makes the following contributions: This work is the first attempt that represents dialog transitions as a graph, and conducts graph grounded policy learning with RL.",
"Supported by CG and this policy learning framework, CG-Policy can respond better in terms of local appropriateness and global coherence.",
"Our study shows that: (1) one-hop what-vertex neighbors of hit what-vertices provide locally-appropriate and diverse response content; (2) the CG based rewards can supervise the policy model to promote a globally-coherent dialog; (3) the use of how-vertices in CG can improve response diversity; (4) the CG can help our system succeed in the task of target-guided conversation, indicating that it gives us more control over the dialog policy.",
"Policy learning for chitchat generation To address the degeneration issue of word-level policy models (Li et al., 2016b; Zhang et al., 2018b), previous works decouple policy learning from response generation, and then use utterance-level latent variables (Zhao et al., 2019) or keywords (Yao et al., 2018) as RL actions to guide response generation.",
"In this work, we investigate how to use prior dialog-transition information to facilitate dialog policy learning.",
"Knowledge aware conversation generation There are growing interests in leveraging knowledge bases for generation of more informative responses (Dinan et al., 2019; Ghazvininejad et al., 2018; Moghe et al., 2018; Zhou et al., 2018; Liu et al., 2019; Bao et al., 2019; Xu et al., 2020).",
"In this work, we employ a dialog-modeling oriented graph built from dialog corpora, instead of an external knowledge base, in order to facilitate multi-turn policy learning, rather than to improve dialog informativeness.",
"Specifically, we are motivated by (Xu et al., 2020).",
"The method of Xu et al. (2020) suffers from a cross-domain transfer issue, since it relies on labor-intensive, knowledge-graph-grounded multi-turn dialog datasets for model training.",
"Compared with them, our conversational graph is automatically built from dialog datasets, which introduces very low cost for training data construction.",
"Furthermore, we decouple conversation modeling into two parts: what-to-say modeling and how-to-say modeling. [Figure 2: The architecture of CG-Policy, which consists of NLU, state/action, policy, and NLG modules. We first construct the conversational graph from the dialog corpus, then train CG-Policy with RL. The upper-right part shows the details of the input/output of each module.]",
"The overview of CG-Policy is presented in Figure 2.",
"Given a user message, to obtain candidate actions, the NLU module attempts to retrieve contextually relevant subgraphs from the CG.",
"The state/action module maintains the candidate actions, the history keywords (selected by the policy at previous turns or mentioned by the user), and the message.",
"The policy module learns to select a response keyword and a responding mechanism from the above subgraphs.",
"The NLG module first encodes the message into a representation using a message encoder and the selected mechanism, and then employs a Seq2BF model (Mou et al., 2016) to produce a response; Seq2BF decodes a response starting from the input keyword and generates the remaining preceding and following words subsequently.",
"In this way, the keyword will appear in the response.",
"To address the one-to-many semantic mapping problem for conversation generation, Chen et al. (2019) proposed an end-to-end multi-mapping model",
"in which each responding mechanism (an MLP network) models how to express response content (e.g., responding with a specific sentence function).",
"At test time, they randomly select a mechanism for response generation.",
"As shown in Figure 3, the generator consists of an RNN based message encoder, a set of responding mechanisms, and a decoder.",
"First, given a dialog message, the message-encoder represents it as a vector x .",
"Second, the generator uses a responding mechanism (selected by policy) to convert x into a response representation r .",
"Finally, r and a keyword (selected by policy) are fed into the decoder for response generation.",
"To ensure that the given keyword will appear in generated responses, we introduce another Seq2BF based decoder (Mou et al., 2016) to replace the original RNN decoder.",
"Moreover, this generator is trained on a dataset of [message, keyword extracted from the response]-response pairs.",
"Given a dialog corpus D, we construct the CG in three steps: what-vertex construction, how-vertex construction, and edge construction.",
"If multiple keywords are extracted from the response, we randomly choose one; and if no keyword exists in the response, we randomly sample a word from the response to serve as the keyword.",
"What-vertex construction: To extract content words from D as what-vertices, we use a rule-based keyword extractor to obtain salient keywords from utterances in D.",
"After removing stop words, we obtain all the keywords as what-vertices.",
"How-vertex construction: We obtain a set of $N_r$ responding mechanisms from the generator described in Section 3.1.",
"Then they are used as how-vertices.",
"Notice that all the how-vertices in CG share the same set of responding mechanisms.",
"Edge construction: There are two types of edges in the CG.",
"One is to join two what-vertices and the other is to join a what-vertex and a how-vertex.",
"To build the first type of edges, we first construct another dataset that consists of keyword pairs, where each pair consists of any two keywords extracted from the message and the response respectively in D .",
"To capture natural transitions between keywords, we train another multi-mapping based model on this new dataset.",
"For each what-vertex $v_w$, we find appropriate keywords as its responses by selecting the top five keywords decoded (decoding length is 1) by each responding mechanism, and then connect $v_w$ to the vertices of these keywords.",
"To build the second type of edges, for the [message-keyword]-response pair in D (described in Section 3.1), we use the ground-truth response to select the most suitable mechanism for each keyword.",
"Then, given a what-vertex $v_w$, we select the top five mechanisms that are most frequently selected for $v_w$'s keyword.",
"Then we build edges to connect v w to each of the top ranked how-vertices.",
"These edges lead to responding mechanisms that are suitable to generate v w .",
"To obtain subgraphs to provide high-quality candidate actions, we first extract keywords in the last utterance of the context (message) using the same tool in CG construction, and then link each keyword to the CG through exact string matching, to obtain multiple hit what-vertices.",
"Then we retrieve a subgraph for each keyword, and use the vertices (excluding hit what-vertices) in these subgraphs as candidate actions.",
"Each subgraph consists of three parts: the hit what-vertex, its one-hop neighboring what-vertices, and the how-vertices connected to those neighbors.",
"(The keyword extraction tool is available at github.com/squareRoot3/Target-Guided-Conversation.)",
"We also tried other methods for edge construction, e.g., PMI (Yao et al., 2018); we found that our method provides more diverse response keyword candidates, while PMI tends to provide high-frequency keyword candidates.",
"(Here we use an RNN based decoder to replace the Seq2BF.)",
"If there are no keywords to be extracted from the message, or none can be linked to the CG, we reuse the subgraphs retrieved at the previous turn.",
"Thus we leverage the CG to provide high-quality candidate actions, instead of using the whole set of candidates as done in previous work (Yao et al., 2018).",
"This module maintains the candidate actions, the history keywords (selected by the policy or mentioned by the user), and the message.",
"Moreover, we use the message-encoder from Section 3.1 to represent the message as a vector x , and then we use all the responding mechanisms from Section 3.1 to convert x into N r candidate response representations { r j } N r j =1 , which will be used in the policy.",
"State representation: The state representation $s_t$ at the t-th time step is obtained by concatenating a message representation $s^M_t$ and a history-keywords representation $s^V_t$, each encoded by its own RNN encoder.",
"Formally, $s_t = [s^M_t; s^V_t]$.",
"To enable global information aware policy decisions, we employ a graph attention mechanism and graph embedding to encode global structure information into state representation.",
"Recall that we have a subgraph for each keyword in the message obtained by NLU.",
"Here each subgraph $g_i$ consists of a hit what-vertex (if we encounter this case at the first time step, the hit what-vertices are set to the what-vertices containing the top-5 high-frequency keywords in D),",
"its what-vertex neighbors (here we remove how-vertices), and the edges between them.",
"Formally, $g_i = \{\tau_k\}_{k=1}^{N_{g_i}}$, where each $\tau_k$ is a triple $\tau_k = (head_k, rel_k, tail_k)$, and $N_{g_i}$ is the number of triples in $g_i$.",
"For non-keywords in the message, a NULL subgraph is used.",
"Then we calculate a subgraph vector g i as a weighted sum of head vectors and tail vectors in the triples.",
"$g_i = \sum_{k=1}^{N_{g_i}} \alpha_k [e_{head_k}; e_{tail_k}]$, (1)",
"$\alpha_k = \frac{\exp(\beta_k)}{\sum_{m=1}^{N_{g_i}} \exp(\beta_m)}$, where $\beta_k = e_{rel_k}^{T} \tanh(W_h e_{head_k} + W_t e_{tail_k})$. (2)",
"$s^M_t$ is obtained by recursively feeding the concatenated vector $e_i = [w_{c_i}; g_i]$ into a vanilla RNN unit, where $w_{c_i}$ (a model parameter) is the embedding of the keyword $c_i$.",
"Thus we encode the global graph structure information into RL state representations, enabling a global-information aware policy model.",
"Moreover, we calculate $s^V_t$ in a similar way.",
"Policy decision: Each decision consists of two sequential sub-decisions.",
"First the what-policy selects a what-vertex from candidate what-vertices, and then the how-policy selects a how-vertex from how-vertex neighbors of the selected what-vertex.",
"With $s_t$ as the state representation, the what-policy $\pi_{what}$ is defined by $\pi_{what}(s_t, v^w_j) = \frac{\exp(s_t^T v^w_j)}{\sum_{l=1}^{N^w_{act}} \exp(s_t^T v^w_l)}$, (3) where $v^w_j$ (a model parameter, different from both $w_{c_i}$ and $e$) is the embedding of the j-th candidate what-vertex, and $N^w_{act}$ is the number of candidate what-vertices.",
"The how-policy $\pi_{how}$ is defined by $\pi_{how}(s_t, r_i) = \frac{\gamma_i \exp(s_t^T r_i)}{\sum_{j=1}^{N_r} \gamma_j \exp(s_t^T r_j)}$, (4) where $r_i$ is a candidate response representation from the state module, and $\gamma_i$ is a mechanism mask.",
"$\gamma_i$ is set to 1 if the i-th responding mechanism is one of the neighbors of the selected what-vertex, and 0 otherwise.",
"Rewards: Following previous works, we consider the following utterance-level rewards. Local relevance: We use a state-of-the-art multi-turn response selection model, the DualEncoder of (Lowe et al., 2015), to calculate local relevance.",
"Repetition: The repetition penalty is 1 if the generated response shares more than 60% of its words with any contextual utterance, and 0 otherwise.",
"Target similarity: For target-guided conversation, we calculate the cosine similarity between the chosen keyword and the target word in a pretrained word embedding space as the target similarity.",
"To leverage the global graph structure information of the CG to facilitate policy learning, we propose the following rewards. Global coherence: We calculate the average cosine distance between the chosen what-vertex and the history what-vertices (selected or mentioned previously) in the TransE based embedding space (also used in Equation 2) as the coherence reward.",
"Sustainability: It is reasonable to promote what-vertices with a large number of neighbors, to generate more sustainable, coherent, and diverse dialogs.",
"For this reward, we calculate a PageRank score (calculated on the full CG) for the chosen what-vertex.",
"Shortest path distance to the target: For target-guided conversation, if the chosen what-vertex is closer to the target what-vertex in terms of shortest path distance than the previously chosen what-vertex, this reward is 1; it is 0 if the distance does not change, and -1 otherwise.",
"Moreover, we define the final reward as a weighted sum of the above-mentioned factors, where the weights are set to [0.5, -5, 0, 3, 8000, 0] by default.",
"8 We see that our rewards can fully leverage dialog transition information in training data by using not only utterance based rewards (e.g., local relevance), but also graph based rewards (e.g., coherence, sustainability).",
"(If no keyword is chosen, as in the baseline models, we calculate the target similarity for each word in the response and select the closest one.)",
"(We optimize these weight values on the Weibo dataset by grid search.)",
"(The weights of the third and sixth factors are set to 0 by default because those factors are proposed for target-guided conversation.)",
"During RL training, we only update the policy's parameters; the parameters of the other modules stay intact.",
"As described in Section 3.1, we use the mechanism selected by how-policy to convert x into a response representation r .",
"Then we feed the keyword in the selected what-vertex and r into a Seq2BF decoder (Mou et al., 2016) for response generation.",
"Weibo corpus (Shang et al., 2015).",
"This is a large micro-blogging corpus.",
"After data cleaning, we obtain 2.6 million pairs for training, 10k pairs for validation and 10k pairs for testing.",
"We use publicly-available lexical analysis tools to obtain POS tag features for this dataset, and we further use these features to extract keywords from utterances.",
"We use Tencent AI Lab Embedding for embedding initialization in models.",
"Persona dialog corpus (Zhang et al., 2018a).",
"This is a crowd-sourced dialog corpus where each participant plays the part of an assigned persona.",
"To evaluate policy controllability brought by CG-Policy, we conduct an experiment for target-guided conversation on the Persona dataset as done in (Tang et al., 2019).",
"The training set / validation set / testing set contain 101,935 / 5,602 / 5,371 utterances respectively.",
"Embeddings are initialized with Glove (Pennington et al., 2014).",
"Conversational graph: The constructed CG on the Weibo corpus contains 4,000 what-vertices and 74,362 edges among what-vertices, where 64% of edges are evaluated as suitable for chatting by three human annotators.",
"The constructed CG on the Persona corpus contains 1,500 what-vertices and 21,902 edges among what-vertices, where 67% of edges are evaluated as suitable for chatting by three human annotators.",
"We carefully select three SOTA methods that focus on dialog policy learning as baselines.",
"(Please see the supplemental material for more details.)",
"(Lexical analysis tools: ai.baidu.com/; embeddings: ai.tencent.com/ailab/nlp/embedding.html; for edge evaluation, we randomly sample 500 edges.)",
"LaRL: It is a latent-variable-driven dialog policy model (Zhao et al., 2019).",
"We use their released codes and choose the multivariate categorical latent variables as RL actions since it performs the best.",
"For target-guided conversation, we implement another model LaRL-Target , where we add the target similarity factor into RL rewards, and its weight is set as 4 by grid search.",
"ChatMore: We implement the keyword-driven policy model (Yao et al., 2018) following their original design.",
"For target-guided conversation, we implement ChatMore-Target , where we add the target similarity factor into RL rewards, and its weight is set as 4 by grid search.",
"TGRM: It is a retrieval based model for target-guided conversation, in which the keyword chosen at each turn must move strictly closer (in embedding space) to a given target word (Tang et al., 2019).",
"For target-guided conversation, we use the codes released by the original authors, denoted as TGRM-Target , and we use their kernel version since it performs the best.",
"To suit the task of open-domain conversation on Weibo, we remove the unnecessary constraint on the keyword's similarity with the target word; this variant is denoted TGRM.",
"CG-Policy It is our system presented in Section 3.",
"For target-guided conversation, we implement another system, CG-Policy-Target, where we use an additional feature, the shortest-path distance to the target, to augment the original what-vertex representation v_{w_j} in the what-policy.",
"Formally, v'_{w_j} = W_1 [v_{w_j}; e_{d_j}], where v'_{w_j} is the augmented representation, W_1 is a weighting matrix, e_{d_j} is an embedding of the distance value d_j, and v'_{w_j} has the same size as v_{w_j}.",
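As a concrete illustration, the augmentation v'_{w_j} = W_1 [v_{w_j}; e_{d_j}] can be sketched in plain Python; the dimensions, values, and helper name below are toy assumptions, not the paper's implementation (in which W_1 is trained):

```python
# Toy sketch of the augmented what-vertex representation v'_wj = W1 [v_wj; e_dj].
# Dimensions, values, and the helper name are illustrative assumptions.

def augment_vertex(v_wj, e_dj, W1):
    """Concatenate the vertex vector with the distance embedding, then
    project back to the original vertex size with the matrix W1."""
    concat = v_wj + e_dj  # [v_wj; e_dj]
    # W1 has shape (len(v_wj), len(concat)) so the output matches v_wj's size.
    return [sum(w * x for w, x in zip(row, concat)) for row in W1]

v = [1.0, 2.0]           # original what-vertex representation (toy)
e = [0.5]                # embedding of the shortest-path distance (toy)
W = [[1.0, 0.0, 0.0],    # identity-like projection, for illustration only
     [0.0, 1.0, 0.0]]
aug = augment_vertex(v, e, W)  # same size as v
```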
"We also use this factor in reward estimation, with its weight set to 5 by grid search, and we do not use the target similarity factor.",
"Moreover, we use the same dialog corpora to construct CG, train user simulator, reward functions, and the NLG module for CG-Policy.",
"We use the same user simulator for RL training of LaRL, ChatMore and CG-Policy.",
"The user simulator is the original multi-mapping based generator with an RNN decoder, which is pretrained on the dialog corpus and not updated during policy training.",
"Please refer to Chen et al. (2019) for more details.",
"During testing, all the systems share this simulator.",
"(Footnote 13: github.com/squareRoot3/Target-Guided-Conversation) 4.4 Evaluation Settings. Conversation with user simulator: Following previous work (Li et al., 2016b; Tang et al., 2019), we use a user simulator to play the role of the human and let each of the models converse with it.",
"Given a randomly selected model, we randomly select an utterance from all the utterances (at the starting position of sessions) in test set for the model to start a conversation.",
"Moreover, we set a maximum allowed number of turns, which is 8 in our experiment.",
"Finally, we collect 100 model-simulator dialogs for evaluation.",
"For single-turn level evaluation, we randomly sample 100 message-response pairs from the dialogs for each model.",
"Conversation with human Following previous work (Tang et al., 2019), we also perform human evaluation for a more reliable system comparison.",
"Given a model to be evaluated, we randomly select a dialogue from test set and pick its first utterance for the model to start a conversation with a human.",
"Then the conversation continues until 8 turns are reached.",
"Finally, we obtain 50 dialogs for evaluation.",
"For single-turn level evaluation, we randomly sample 100 message-response pairs from the dialogs for each model.",
"Metrics such as BLEU and perplexity have been widely used for dialog evaluation (Li et al., 2016a; Serban et al., 2016), but it is widely debated how well these automatic metrics are correlated with true response quality (Liu et al., 2016).",
"Since the proposed system does not aim at predicting the highest-probability response at each turn, but rather the long-term success of a dialog (e.g., coherence), we do not employ BLEU or perplexity for evaluation, and we propose the following metrics.",
"Global coherence We define incoherence problems as follows: (1) inconsistent dialogs, where the model contradicts itself, e.g., it says it is a driver and later says it is a doctor; (2) one-sided dialogs, in which the model ignores the user's topics for two or more consecutive turns.",
"A session is rated 0 if it contains more than three incoherence cases, +1 if it contains 2 or 3 cases, and +2 otherwise.",
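The session-level rating rule above can be sketched as a small function (the function name is an illustrative assumption):

```python
def coherence_score(n_incoherence_cases):
    """Global coherence rating for a session: 0 if it contains more than
    three incoherence cases, +1 for 2 or 3 cases, +2 otherwise."""
    if n_incoherence_cases > 3:
        return 0
    if n_incoherence_cases >= 2:
        return 1
    return 2
```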
"Distinct The metric Dist-i calculates the ratio of distinct i-grams in generated responses (Li et al., 2016a).",
"We use Dist-2 to measure the diversity of generated responses.",
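A minimal sketch of the Dist-i computation over tokenized responses (the function name and toy replies are assumptions):

```python
def dist_n(responses, n=2):
    """Dist-n: the number of distinct n-grams divided by the total number
    of n-grams over all generated responses (Li et al., 2016a)."""
    ngrams = []
    for tokens in responses:
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

# Two toy replies: bigrams are (i, like) twice, (like, tea), (like, coffee),
# i.e., 3 distinct bigrams out of 4 in total.
replies = [["i", "like", "tea"], ["i", "like", "coffee"]]
```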
"Dialog-target success rate For target-guided conversation, we measure the success rate of generating the target word within 8 turns.",
"Local appropriateness A response is rated 0 if it is inappropriate as a reply to the given message, otherwise 1.",
"Informativeness A response is rated 0 if it is a safe response, e.g., 'I don't know', otherwise 1.",
"We ask three annotators to judge the quality of each dialog (at multi-turn level) or utterance pair (at single-turn level) for each model.",
"Notice that model identifiers are masked during evaluation.",
"As shown in Table 2, CG-Policy significantly outperforms (sign test, p-value < 0.01 ) baselines in terms of global coherence and local appropriateness.",
"It indicates that the CG can effectively facilitate policy learning (see the ablation study for further analysis).",
"For LaRL, its single-turn response quality is worse than that of the other models.",
"This might be because its latent variables are not fine-grained enough to provide sufficient information to guide response generation.",
"ChatMore tends to select high-frequency or generic keywords, resulting in its worst performance in terms of Dist-2.",
"TGRM performs the best in terms of Dist-2 and informativeness, indicating that retrieval-based models can produce more diverse responses than generation-based models.",
"It is consistent with the conclusions in previous work (Chen et al., 2017; Zhang et al., 2018a).",
"However, TGRM performs the worst in terms of coherence, since it does not use an RL framework.",
"This indicates the importance of an RL framework for multi-turn dialog modeling.",
"Here the Kappa value for inter-annotator agreement is above 0.4, indicating moderate agreement.",
"As shown in Table 3, CG-Policy outperforms baselines in terms of both global coherence and local appropriateness (sign test, p-value < 0.01), which is consistent with the results in Table 2.",
"The Kappa value is above 0.4, indicating moderate agreement.",
"4.6.4 Ablation study We conduct an ablation study for CG-Policy on Weibo corpus to investigate why CG-Policy performs better.",
"First , to evaluate the contribution of CG, we remove the CG from CG-Policy, denoted as CG-Policy-noCG, where we do not use graph structure information for action space pruning and reward design.",
"Moreover, we attempt to use the CG (without how-vertices) to augment the ChatMore model for action space pruning and reward design, denoted as Chatmore-CG.",
"As shown in Table 4, the performance of CG-Policy-noCG drops dramatically in terms of coherence, Dist-2 and appropriateness when compared to the original model.",
"Moreover, CG can boost the performance of ChatMore on most metrics.",
"It indicates that the use of CG is crucial to the superior performance of CG-Policy, and it can also help other models, e.g., ChatMore.",
"Second , to evaluate the contribution of CG for action space pruning or reward design respectively, we implement two system variants: (1) we use all the what-vertices in CG as action candidates at each turn, denoted as CG-Policy-noCGact; (2) we remove all the CG-based factors from RL rewards, denoted as CG-Policy-noCGrwd.",
"As shown in Table 4, the performance of CG-Policy-noCGact drops significantly in terms of Dist-2 as it tends to select high-frequency keywords like ChatMore, indicating the importance of graph paths to provide both locally-appropriate and diverse response keywords.",
"Moreover, the performance of CG-Policy-noCGrwd drops significantly in terms of coherence, indicating that CG based rewards can effectively guide CG-Policy to promote coherent dialogs.",
"Third , we remove how-vertices from CG, denoted as CG-Policy-noCGhow.",
"As shown in Table 4, how-vertex removal hurts its performance.",
"Besides maintaining coherence, CG grounded policy learning enables more control over dialog models, which is important for achieving certain goals for a chatbot, e.g., proactively leading the conversation to certain chatting topics (keywords) or certain products.",
"Following the setting in Tang et al. (2019), we randomly sample a keyword as the target word for each session during testing.",
"Here we use a multi-mapping based user simulator trained on the Persona dataset for evaluation.",
"Table 5 presents the results on 100 dialogs for each model.",
"We see that CG-Policy-Target can significantly outperform baselines in terms of dialog-target success rate (sign test, p-value < 0.01 ).",
"It can be seen that CG-Policy can successfully lead the dialog to a given target word by learning to walk over the CG, indicating that this graph gives us more control over the policy.",
"LaRL-Target and ChatMore-Target perform badly in terms of success rate.",
"This may be because they lack the ability to proactively plan dialog content.",
"Figure 4 provides representative words of each mechanism.",
"For example, the keywords of Mech-1 are mainly subjective words (e.g., think), used for the",
"generation of responses with respect to personal opinion or intention. (Footnote 15: we select words that occur frequently in responses guided by this mechanism but rarely occur with other mechanisms.)",
"For Mech-2, it tends to respond with a specific type of mood.",
"In this paper we present a novel graph-grounded policy learning framework for open-domain multi-turn conversation, which can effectively leverage prior information about dialog transitions to foster more coherent and controllable dialogs.",
"Experimental results demonstrate the effectiveness of this framework in terms of local appropriateness, global coherence and dialog-target success rate.",
"In the future, we will investigate how to extend the CG to support hierarchical topic management in conversational systems.",
"We are grateful for the support from Yan Zeng at the initial stage of this work.",
"We also thank the anonymous reviewers for their helpful comments and suggestions.",
"This work is supported by the National Key Research and Development Project of China (No.2018AAA0101900) and the National Natural Science Foundation of China (NSFC) via grant 61976072."
] | [
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"objective",
"objective",
"result",
"other",
"objective",
"other",
"method",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"method",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"objective",
"objective",
"method",
"method",
"method",
"method",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"We study the problem of textual relation embedding with distant supervision.",
"To combat the wrong labeling problem of distant supervision, we propose to embed textual relations with global statistics of relations, i.e., the co-occurrence statistics of textual and knowledge base relations collected from the entire corpus.",
"This approach turns out to be more robust to the training noise introduced by distant supervision.",
"On a popular relation extraction dataset, we show that the learned textual relation embedding can be used to augment existing relation extraction models and significantly improve their performance.",
"Most remarkably, for the top 1,000 relational facts discovered by the best existing model, the precision can be improved from 83.9% to 89.3%.",
"Relation extraction requires deep understanding of the relation between entities.",
"Early studies mainly use hand-crafted features (Kambhatla, 2004; Zhou et al., 2005), and later kernel methods are introduced to automatically generate features (Zelenko et al., 2003; Culotta and Sorensen, 2004; Bunescu and Mooney, 2005; Zhang et al., 2006).",
"Recently neural network models have been introduced to embed words, relations, and sentences into continuous feature space, and have shown a remarkable success in relation extraction (Socher et al., 2012; Zeng et al., 2014; Xu et al., 2015b; Zeng et al., 2015; Lin et al., 2016).",
"In this work, we study the problem of embedding textual relations , defined as the shortest dependency path 1 between two entities in the dependency graph of a sentence, to improve relation extraction.",
"relation extraction models (Bunescu and Mooney, 2005).",
"A number of recent studies have explored textual relation embedding under the supervised setting (Xu et al., 2015a,b, 2016; Liu et al., 2016), but the reliance on supervised training data limits their scalability.",
"In contrast, we embed textual relations with distant supervision (Mintz et al., 2009), which provides much larger-scale training data without the need of manual annotation.",
"However, the assumption of distant supervision, that any sentence containing a pair of entities participating in a knowledge base (KB) relation is likely to express the relation, is violated more often than not, resulting in many wrongly labeled training examples.",
"A representative example is shown in Figure 1.",
"Embedding quality is thus compromised by the noise in training data.",
"Our main contribution is a novel way to combat the wrong labeling problem of distant supervision.",
"Traditional embedding methods (Xu et al., 2015a,b, 2016; Liu et al., 2016) are based on local statistics , i.e., individual textual-KB relation pairs like in Figure 1 (Left).",
"Our key hypothesis is that global statistics is more robust to noise than local statistics .",
"For individual examples, the relation label from distant supervision may be wrong from time to time.",
"But when we zoom out to consider the entire corpus, and collect the global co-occurrence statistics of textual and KB relations, we will have a more comprehensive view of relation semantics: The semantics of a textual relation can then be represented by its co-occurrence distribution of KB relations.",
"For example, the distribution in Figure 1 (Right) indicates that the textual relation SUBJECT nsubjpass born nmod:in OBJECT mostly means place of birth , and is also a good indicator of nationality , but not place of death .",
"Although it is still wrongly labeled with place of death a number of times, the negative impact becomes negligible.",
"[Figure 1. Left: relational facts from individual sentences, e.g., 'Michael_Jackson was born in the US' (nsubjpass, nmod:in) and 'Michael_Jackson died in the US' (nsubj, nmod:in), paired with the KB relations place_of_birth and place_of_death. Right: global co-occurrence counts of the two textual relations with KB relations: place of birth 1868 / 14, nationality 389 / 20, place of death 37 / 352.]",
"Similarly, we can confidently believe that SUBJECT nsubj died nmod:in OBJECT means place of death in spite of the noise.",
"Textual relation embedding learned on such global statistics is thus more robust to the noise introduced by the wrong labeling problem.",
"We augment existing relation extraction models using the learned textual relation embedding.",
"On a popular dataset introduced by Riedel et al. (2010), we show that a number of recent relation extraction models, which are based on local statistics, can be greatly improved using our textual relation embedding.",
"Most remarkably, a new best performance is achieved when augmenting the previous best model with our relation embedding: The precision of the top 1,000 relational facts discovered by the model is improved from 83.9% to 89.3%, a 33.5% decrease in error rate.",
"The results suggest that relation embedding with global statistics can capture complementary information to existing local statistics based models.",
"The rest of the paper is organized as follows.",
"In Section 2 we discuss related work.",
"For the modeling part, we first describe how to collect global co-occurrence statistics of relations in Section 3, then introduce a neural network based embedding model in Section 4, and finally discuss how to combine the learned textual relation embedding with existing relation extraction models in Section 5.",
"We empirically evaluate the proposed method in Section 6, and conclude in Section 7.",
"Relation extraction is an important task in information extraction.",
"Early relation extraction methods are mainly feature-based (Kambhatla, 2004; Zhou et al., 2005), where features at various levels, including POS tags and syntactic and dependency parses, are integrated in a maximum entropy model.",
"With the popularity of kernel methods, a large number of kernel-based relation extraction methods have been proposed (Zelenko et al., 2003; Culotta and Sorensen, 2004; Bunescu and Mooney, 2005; Zhang et al., 2006).",
"The most related work to ours is by Bunescu and Mooney (2005), where the importance of the shortest dependency path for relation extraction is first validated.",
"More recently, relation extraction research has been revolving around neural network models, which can alleviate the problem of exact feature matching of previous methods and have shown a remarkable success (e.g., (Socher et al., 2012; Zeng et al., 2014)).",
"Among those, the most related are the ones embedding shortest dependency paths with neural networks (Xu et al., 2015a,b, 2016; Liu et al., 2016).",
"For example, Xu et al. (2015b) use an RNN with LSTM units to embed shortest dependency paths without typed dependency relations, while a convolutional neural network is used in Xu et al. (2015a).",
"However, they are all based on the supervised setting with a limited scale.",
"In contrast, we embed textual relations with distant supervision (Mintz et al., 2009), which provides much larger-scale training data at a low cost.",
"Various efforts have been made to combat the long-criticized wrong labeling problem of distant supervision.",
"Riedel et al. (2010), Hoffmann et al. (2011), and Surdeanu et al. (2012) have attempted a multi-instance learning (Dietterich et al., 1997) framework to soften the assumption of distant supervision, but their models are still feature-based.",
"Zeng et al. (2015) combine multi-instance learning with neural networks, with the assumption that at least one of the contextual sentences of an entity pair is expressing the target relation, but this will lose useful information in the neglected sentences.",
"Instead, Lin et al. (2016) use all the contextual sentences, and introduce an attention mechanism to weight the contextual sentences.",
"Li et al. (2017) also use an attention mechanism to weight contextual sentences, and incorporate additional entity description information from knowledge bases.",
"Luo et al. (2017) manage to alleviate the negative impact of noise by modeling and learning noise transition patterns from data.",
"Liu et al. (2017) propose to infer the true label of a context sentence using a truth discovery approach (Li et al., 2016).",
"Wu et al. (2017) incorporate adversarial training, i.e., injecting random perturbations in training, to improve the robustness of relation extraction.",
"Using PCNN+ATT (Lin et al., 2016) as base model, they show that adversarial training can improve its performance by a good margin.",
"However, the base model implementation they used performed worse than the one in the original paper and in ours, and therefore the results are not directly comparable.",
"No prior study has exploited global statistics to combat the wrong labeling problem of distant supervision.",
"Another unique aspect of this work is that we focus on compact textual relations, while previous studies along this line have focused on whole sentences.",
"In universal schema (Riedel et al., 2013) for KB completion and relation extraction, as well as its extensions (Toutanova et al., 2015; Verga et al., 2016), a binary matrix is constructed from the entire corpus, with entity pairs as rows and textual/KB relations as columns.",
"A matrix entry is 1 if the relational fact is observed in training, and 0 otherwise.",
"Embeddings of entity pairs and relations, either directly or via neural networks, are then learned on the matrix entries, which are still individual relational facts, and the wrong labeling problem remains.",
"Global co-occurrence frequencies (see Figure 1 (Right)) are not taken into account, which is the focus of this study.",
"Another distinction is that our method directly models the association between textual and KB relations, while universal schema learns embeddings for shared entity pairs and uses them as a bridge between the two types of relations.",
"It is an interesting avenue for future research to comprehensively compare these two modeling approaches.",
"When using a corpus to train statistical models, there are two levels of statistics to exploit: local and global .",
"Take word embedding as an example.",
"The skip-gram model (Mikolov et al., 2013) is based on local statistics: during training, we sweep through the corpus and slightly tune the embedding model in each local window (e.g., 10 consecutive words).",
"[Figure 2: Relation graph. A bipartite graph connecting textual relations, e.g., SUBJECT nsubjpass born nmod:in OBJECT and SUBJECT nsubj died nmod:in OBJECT, to KB relations place_of_birth and place_of_death, with normalized edge weights such as 0.73 and 0.89.]",
"In contrast, in global statistics based methods, exemplified by latent semantic analysis (Deerwester et al., 1990) and GloVe (Pennington et al., 2014), we process the entire corpus to collect global statistics like word-word co-occurrence counts, normalize the raw statistics, and train an embedding model directly on the normalized global statistics.",
"Most existing studies on relation extraction are based on local statistics of relations, i.e., models are trained on individual relation examples.",
"In this section, we describe how we collect global co-occurrence statistics of textual and KB relations, and how to normalize the raw statistics.",
"By the end of this section a bipartite relation graph like Figure 2 will be constructed, with one node set being textual relations T , and the other being KB relations R .",
"The edges are weighted by the normalized co-occurrence statistics of relations.",
"Given a corpus and a KB, we first do entity linking on each sentence, and do dependency parsing if at least two entities are identified.",
"For each entity pair (e, e') in the sentence, we extract the fully lexicalized shortest dependency path as a textual relation t, forming a relational fact (e, t, e').",
"There are two outcomes from this step: a set of textual relations T = {t_i}, and the support S(t_i) for each t_i.",
"The support of a textual relation is a multiset containing the entity pairs of the textual relation.",
"The multiplicity of an entity pair, m_{S(t_i)}(e, e'), is the number of occurrences of the corresponding relational fact (e, t_i, e') in the corpus.",
"(Footnote 2: In the experiments entity linking is assumed given, and dependency parsing is done using the Stanford Parser (Chen and Manning, 2014) with universal dependencies.)",
"For example, if the support of t_i is S(t_i) = {(e_1, e'_1), (e_1, e'_1), (e_2, e'_2), ...}, entity pair (e_1, e'_1) has a multiplicity of 2 because the relational fact (e_1, t_i, e'_1) occurs in two sentences.",
"We also get a set of KB relations R = {r_j}, and the support S(r_j) of a KB relation r_j is the set of entity pairs having this relation in the KB, i.e., there is a relational fact (e, r_j, e') in the KB.",
"The number of co-occurrences of a textual relation t_i and a KB relation r_j is n_{ij} = \sum_{(e, e') \in S(r_j)} m_{S(t_i)}(e, e') (Eq. 1), i.e., every occurrence of a relational fact (e, t_i, e') is counted as a co-occurrence of t_i and r_j if (e, e') \in S(r_j).",
"A bipartite relation graph can then be constructed, with T and R as the node sets, and the edge between t_i and r_j has weight n_{ij} (no edge if n_{ij} = 0), which will be normalized later.",
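The counting step of Eq. (1) can be sketched with toy supports; all relation names, entity pairs, and the path serialization below are illustrative assumptions:

```python
from collections import Counter

# S_text maps each textual relation t_i to a multiset (Counter) of entity
# pairs with multiplicities; S_kb maps each KB relation r_j to the set of
# entity pairs holding that relation in the KB. All values are toy data.
S_text = {
    "SUBJ <-nsubjpass born nmod:in-> OBJ": Counter(
        {("Michael_Jackson", "US"): 2, ("Obama", "US"): 1}),
}
S_kb = {
    "place_of_birth": {("Michael_Jackson", "US"), ("Obama", "US")},
    "place_of_death": {("Michael_Jackson", "US")},
}

def cooccurrence_counts(S_text, S_kb):
    """Eq. (1): n_ij sums the multiplicities m_{S(t_i)}(e, e') over all
    entity pairs (e, e') in S(r_j); each nonzero n_ij becomes an edge of
    the bipartite relation graph."""
    n = {}
    for t, support in S_text.items():
        for r, pairs in S_kb.items():
            c = sum(m for pair, m in support.items() if pair in pairs)
            if c > 0:
                n[(t, r)] = c
    return n
```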
"The raw co-occurrence counts have a heavily skewed distribution that spans several orders of magnitude: A small portion of relation pairs co-occur highly frequently, while most relation pairs co-occur only a few times.",
"For example, a textual relation, SUBJECT nsubjpass born nmod:in OBJECT , may co-occur with the KB relation place of birth thousands of times (e.g., Michelle Obama was born in Chicago ), while a synonymous but slightly more compositional textual relation, SUBJECT nsubjpass born nmod:in city nmod:of OBJECT , may only co-occur with the same KB relation a few times in the entire corpus (e.g., Michelle Obama was born in the city of Chicago ).",
"Learning directly on the raw co-occurrence counts, an embedding model may put a disproportionate amount of weight on the most frequent relations, and may not learn well on the majority of rarer relations.",
"Proper normalization is therefore necessary, which will encourage the embedding model to learn good embedding not only for the most frequent relations, but also for the rarer relations.",
"A number of normalization strategies have been proposed in the context of word embedding, including correlation- and entropy-based normalization (Rohde et al., 2005), positive pointwise mutual information (PPMI) (Bullinaria and Levy, 2007), and some square root type transformations (Lebret and Collobert, 2014).",
"A shared goal is to reduce the impact of the most frequent words, e.g., the and is, which tend to be less informative for the purpose of embedding.",
"We have experimented with a number of normalization strategies and found that the following strategy works best for textual relation embedding: For each textual relation, we normalize its co-occurrence counts to form a probability distribution over KB relations.",
"The new edge weights of the relation graph thus become w_{ij} = p(r_j | t_i) = n_{ij} / \sum_{j'} n_{ij'}.",
"Every textual relation is now associated with a set of edges whose weights sum up to 1.",
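This normalization step can be sketched as follows, using the co-occurrence counts from Figure 1 (Right) for one textual relation; the key "t1" is an illustrative placeholder:

```python
from collections import defaultdict

def normalize(counts):
    """w_ij = p(r_j | t_i) = n_ij / sum_j' n_ij': each textual relation's
    co-occurrence counts become a probability distribution over KB relations."""
    totals = defaultdict(float)
    for (t, r), c in counts.items():
        totals[t] += c
    return {(t, r): c / totals[t] for (t, r), c in counts.items()}

# Counts for the born-path column of Figure 1 (Right); "t1" is a placeholder.
counts = {("t1", "place_of_birth"): 1868,
          ("t1", "nationality"): 389,
          ("t1", "place_of_death"): 37}
w = normalize(counts)  # weights for t1 now sum to 1
```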
"We also experimented with PPMI and smoothed PPMI with \alpha = 0.75,",
"which are commonly used in word embedding (Levy et al., 2015).",
"However, the learned textual relation embedding turned out to be not very helpful for relation extraction.",
"One possible reason is that PPMI (even the smoothed version) gives inappropriately large weights to rare relations (Levy et al., 2015).",
"There are many textual relations that correspond to none of the target KB relations but are falsely labeled with some KB relations a few times by distant supervision.",
"PPMI gives large weights to such falsely labeled cases because it thinks these events have a chance significantly higher than random.",
"Next we discuss how to learn embedding of textual relations based on the constructed relation graph.",
"We call our approach Global Relation Embedding (GloRE) in light of the global statistics of relations it builds on.",
"Given the relation graph, a straightforward way of relation embedding is matrix factorization, similar to latent semantic analysis (Deerwester et al., 1990) for word embedding.",
"However, textual relations are different from words in that they are sequences composed of words and typed dependency relations.",
"Therefore, we use recurrent neural networks (RNNs) for embedding, which respect the compositionality of textual relations and can learn the shared sub-structures of different textual relations (Toutanova et al., 2015).",
"For the examples in Figure 1, an RNN can learn, from both textual relations, that the shared dependency relation nmod:in is indicative of location modifiers.",
"It is worth noting that other models like convolutional neural networks can also be used, but it is not the focus of this paper to compare all the alternative embedding models; rather, we aim to show the effectiveness of global statistics with a reasonable embedding model.",
"[Figure 3: Embedding model. A GRU encoder reads the token sequence of the textual relation (nsubjpass, born, nmod:in), and a GRU decoder started with <GO> outputs a distribution over KB relations, e.g., place_of_birth: 0.73.]",
"For a textual relation, we first decompose it into a sequence of tokens {x_1, ..., x_m}, which includes lexical words and directional dependency relations.",
"For example, the textual relation SUBJECT nsubjpass born nmod:in OBJECT is decomposed into a sequence of three tokens {←nsubjpass, born, nmod:in→}, where ← represents a left arrow.",
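A minimal sketch of this decomposition step; the '<-'/'->' arrow serialization and the helper name are assumptions, not the paper's exact format:

```python
def tokenize_textual_relation(path):
    """Decompose a serialized shortest dependency path into lexical words
    and directional dependency relations, dropping the entity placeholders.
    The '<-' / '->' arrow markers are an assumed serialization."""
    return [tok for tok in path.split() if tok not in ("SUBJECT", "OBJECT")]

toks = tokenize_textual_relation("SUBJECT <-nsubjpass born nmod:in-> OBJECT")
# toks == ['<-nsubjpass', 'born', 'nmod:in->']
```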
"Note that we include directional dependency relations, because both the relation type and the direction are critical in determining the meaning of a textual relation.",
"For example, the dependency relation nmod:in often indicates a location modifier and is thus strongly associated with location-related KB relations like place of birth .",
"The direction also plays an important role.",
"Without knowing the direction of the dependency relations, it is impossible to distinguish child of and parent of .",
"An RNN with gated recurrent units (GRUs) (Cho et al., 2014) is then applied to consecutively process the sequence as shown in Figure 3.",
"We have also explored more advanced constructs like attention, but the results are similar, so we opt for a vanilla RNN in consideration of model simplicity.",
"Let \phi denote the function that maps a token x_l to a fixed-dimensional vector; the hidden state vectors of the RNN are calculated recursively: h_l = GRU(\phi(x_l), h_{l-1}). (2)",
"We use global statistics in the relation graph to train the embedding model.",
"Specifically, we model the semantics of a textual relation as its co-occurrence distribution of KB relations, and learn textual relation embedding to reconstruct the corresponding co-occurrence distributions.",
"We use a separate GRU cell followed by softmax to map a textual relation embedding to a distribution over KB relations; the full model thus resembles the sequence-to-sequence architecture (Sutskever et al., 2014).",
"Given a textual relation t_i and its embedding h_m, the predicted conditional probability of a KB relation r_j is thus: \tilde{p}(r_j | t_i) = softmax(GRU(\phi(<GO>), h_m))_j, (3) where (\cdot)_j denotes the j-th element of a vector, and <GO> is a special token indicating the start of decoding.",
"The training objective is to minimize \ell = (1 / |E|) \sum_{i,j : p(r_j | t_i) > 0} (\log \tilde{p}(r_j | t_i) - \log p(r_j | t_i))^2, (4) where E is the edge set of the relation graph.",
"It is modeled as a regression problem, similar to GloVe (Pennington et al., 2014).",
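The regression objective of Eq. (4), restricted to a single textual relation, can be sketched with plain dictionaries; in the paper the predicted distribution comes from the GRU decoder, so the helper below is only a toy stand-in:

```python
import math

def glore_loss(p_pred, p_target):
    """Eq. (4) for one textual relation: mean squared difference of log
    probabilities over the KB relations with p(r_j | t_i) > 0. In the paper
    p_pred is produced by the GRU decoder; here it is a plain dictionary."""
    edges = [(r, p) for r, p in p_target.items() if p > 0]
    return sum((math.log(p_pred[r]) - math.log(p)) ** 2
               for r, p in edges) / len(edges)

target = {"place_of_birth": 0.8, "nationality": 0.2}  # toy distribution
perfect = glore_loss(target, target)  # exact match gives zero loss
off = glore_loss({"place_of_birth": 0.4, "nationality": 0.6}, target)
```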
"Baseline.",
"We also define a baseline approach where the unnormalized co-occurrence counts are directly used.",
"The objective is to maximize: \ell' = (1 / \sum_{i,j} n_{ij}) \sum_{i,j : n_{ij} > 0} n_{ij} \log \tilde{p}(r_j | t_i). (5)",
"It also corresponds to local statistics based embedding, i.e., the case when the embedding model is trained on individual occurrences of relational facts with distant supervision.",
"Therefore, we call it Local Relation Embedding (LoRE).",
"Learned from global co-occurrence statistics of relations, our approach provides semantic matching information of textual and KB relations, which is often complementary to the information captured by existing relation extraction models.",
"In this section we discuss how to combine them together to achieve better relation extraction performance.",
"We follow the setting of distantly supervised relation extraction.",
"Given a text corpus and a KB with relation set R , the goal is to find new relational facts from the text corpus that are not already contained in the KB.",
"More formally, for each entity pair ( e, e 0 ) and a set of contextual sentences C containing this entity pair, a relation extraction model assigns a score E ( z | C ) to each candidate relational fact z = ( e, r, e 0 ) , r R .",
"On the 824 other hand, our textual relation embedding model works on the sentence level.",
"It assign a score G ( z | s ) to each contextual sentence s in C as for how well the textual relation t between the entity pair in the sentence matches the KB relation r , i.e., G ( z | s ) = p ( r | t ) .",
"It poses a challenge to aggregate the sentence-level scores to get a set-level score G ( z | C ) , which can be used to combine with the original score E ( z | C ) to get a better evaluation of the candidate relational fact.",
"One straightforward aggregation is max pooling, i.e., only using the largest score max s CG ( z | s ) , similar to the at-least-one strategy used by Zeng et al. (2015).",
"But it will lose the useful signals from those neglected sentences (Lin et al., 2016).",
"Because of the wrong labeling problem, mean pooling is problematic as well.",
"The wrongly labeled contextual sentences tend to make the aggregate scores more evenly distributed and therefore become less informative.",
"The number of contextual sentences positively supporting a relational fact is also an important signal, but is lost in mean pooling.",
"Instead, we use summation with a trainable cap : G ( z | C ) = min ( cap, X s CG ( z | s )) , (6) In other words, we additively aggregate the signals from all the contextual sentences, but only to a bounded degree.",
"We simply use a weighted sum to combine E ( z | C ) and G ( z | C ) , where the trainable weights will also handle the possibly different scale of scores generated by different models: E ( z | C ) = w 1 E ( z | C ) + w 2 G ( z | C ) .",
"The original score E ( z | C ) is then replaced by the new score E ( z | C ) .",
"To find the optimal values for w 1 , w 2 and cap , we define a hinge loss: Merge = 1 KKX k =1 max (cid:8) 0 , 1 + E ( z k ) E ( z + k ) (cid:9) , (8) where { z + k } K k =1 are the true relational facts from the KB, and { z k } Kk =1 are false relational facts generated by replacing the KB relation in true relational facts with incorrect KB relations.",
"In this experimental study, we show that GloRE can greatly improve the performance of several recent relation extraction models, including the previous best model on a standard dataset.",
"Dataset.",
"Following the literature (Hoffmann et al., 2011; Surdeanu et al., 2012; Zeng et al., 2015; Lin et al., 2016), we use the relation extraction dataset introduced in (Riedel et al., 2010), which was generated by aligning New York Times (NYT) articles with Freebase (Bollacker et al., 2008).",
"Articles from year 2005-2006 are used as training, and articles from 2007 are used as testing.",
"Some statistics are listed in Table 1.",
"There are 53 target KB relations, including a special relation NA indicating that there is no target relation between entities.",
"We follow the approach described in Section 3 to construct the relation graph from the NYT training data.",
"The constructed relation graph contains 321,447 edges with non-zero weight.",
"We further obtain a training set and a validation set from the edges of the relation graph.",
"We have observed that using a validation set totally disjoint from the training set leads to unstable validation loss, so we randomly sample 300K edges as the training set, and another 60K as the validation set.",
"The two sets can have some overlap.",
"For the merging model (Eq. 8), 10% of the edges are reserved as the validation set.",
"Relation extraction models.",
"We evaluate with four recent relation extraction models whose source code is publicly available 3 .",
"We use the optimized parameters provided by the authors.",
"CNN+ONE and PCNN+ONE (Zeng et al., 2015): A convolutional neural network (CNN) is used to embed contextual sentences for relation classification.",
"Multi-instance learning with at-least-one (ONE) assumption is used to combat the wrong labeling problem.",
"In PCNN, piecewise max pooling is 3 https://github.com/thunlp/NRE 825 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 Recall 0.4 0.5 0.6 0.7 0.8 0.9 1.0 P r e c i s i o n CNN+ATT CNN+ATT+GloRE 0.00 0.05 0.10 0.15 0.20 0.25 0.30 Recall 0.4 0.5 0.6 0.7 0.8 0.9 1.0 P r e c i s i o n CNN+ONE CNN+ONE+GloRE 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 Recall 0.4 0.5 0.6 0.7 0.8 0.9 1.0 P r e c i s i o n PCNN+ONE PCNN+ONE+GloRE Figure 4: Held-out evaluation: other base relation extraction models and the improved versions when augmented with GloRE.",
"CNN+ATT and PCNN+ATT (Lin et al., 2016): Different from the at-least-one assumption which loses information in the neglected sentences, these models learn soft attention weights (ATT) over contextual sentences and thus can use the information of all the contextual sentences.",
"PCNN+ATT is the best-performing model on the NYT dataset .",
"Evaluation settings and metrics.",
"Similar to previous work (Riedel et al., 2010; Zeng et al., 2015), we use two settings for evaluation: (1) Held-out evaluation, where a subset of relational facts in KB is held out from training (Table 1), and is later used to compare against newly discovered rela-0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 Recall 0.4 0.5 0.6 0.7 0.8 0.9 1.0 P r e c i s i o n BASE BASE+(CNN+ONE) BASE+(CNN+ATT) BASE+(PCNN+ONE) BASE+GloRE Figure 6: Held-out evaluation: GloRE brings the largest improvement to BASE (PCNN+ATT), which further shows that GloRE captures useful information for relation extraction that is complementary to existing models.",
"tional facts.",
"This setting avoids human labor but can introduce some false negatives because of the incompleteness of the KB.",
"(2) Manual evaluation, where the discovered relational facts are manually judged by human experts.",
"For held-out evaluation, we report the precision-recall curve.",
"For manual evaluation, we report P recision @ N , i.e., the precision of the top N discovered relational facts.",
"Implementation.",
"Hyper-parameters of our model are selected based on the validation set.",
"For the embedding model, the mini-batch size is set to 128, and the state size of the GRU cells is 300.",
"For the merging model, the mini-batch size is set to 1024.",
"We use Adam with parameters recommended by the authors for optimization.",
"Word embeddings are initialized with the 300-dimensional word2vec vectors pre-trained on the Google News corpus 4 .",
"Early stopping based on the validation set is employed.",
"Our model is implemented using Tensorflow (Abadi et al., 2016), and the source code is available at https://github.com/ ppuliu/GloRE .",
"Existing Models + GloRE.",
"We first show that our approach, GloRE, can improve the performance of the previous best-performing model, PCNN+ATT, leading to a new state of the art on the NYT dataset.",
"As shown in Figure 5, when PCNN+ATT is augmented with GloRE, a consistent improvement along the precision-recall curve is observed.",
"It is worth noting that although PCNN+ATT+GloRE seems to be inferior to PCNN+ATT when recall < 0 .",
"05 , as we will show via manual evaluation, it is actually due to false negatives.",
"We also show in Figure 4 that the improvement brought by GloRE is general and not specific to PCNN+ATT; the other models also get a consistent improvement when augmented with GloRE.",
"To investigate whether the improvement brought by GloRE is simply from ensemble, we also augment PCNN+ATT with the other three base models in the same way as described in Section 5.",
"The results in Figure 6 show that pairwise ensemble of existing relation extraction models does not yield much improvement, and GloRE brings much larger improvement than the other models.",
"In summary, the held-out evaluation results suggest that GloRE captures useful information for relation extraction that is not captured by these local statistics based models.",
"LoRE v.s. GloRE.",
"We compare GloRE with the baseline approach LoRE (Section",
"4) to show the advantage of normalization on global statistics.",
"We use PCNN+ATT as the base relation extraction model.",
"As shown in Figure 7, GloRE consistently outperforms LoRE.",
"It is worth noting that LoRE can still improve the base relation extraction model when recall > 0 .",
"15 , further confirming Precision@ N 100 300 500 700 900 1000 PCNN+ATT 97.0 93.7 92.8 89.1 85.2 83.9 PCNN+ATT+LoRE 97.0 95.0 94.2 91.6 89.6 87.0 PCNN+ATT+GloRE 97.0 97.3 94.6 93.3 90.1 89.3 Table 2: Manual evaluation: false negatives from held-out evaluation are manually corrected by human experts.",
"the usefulness of directly embedding textual relations in addition to sentences.",
"Due to the incompleteness of the knowledge base, held-out evaluation introduces some false negatives.",
"The precision from held-out evaluation is therefore a lower bound of the true precision.",
"To get a more accurate evaluation of model performance, we have human experts to manually check the false relational facts judged by held-out evaluation in the top 1,000 predictions of three models, PCNN+ATT, PCNN+ATT+LoRE and PCNN+ATT+GloRE, and report the corrected results in Table 2.",
"Each prediction is examined by two human experts who reach agreement with discussion.",
"To ensure fair comparison, the experts are not aware of the provenance of the predictions.",
"Under manual evaluation, PCNN+ATT+GloRE achieves the best performance in the full range of N .",
"In particular, for the top 1,000 predictions, GloRE improves the precision of the previous best model PCNN+ATT from 83.9% to 89.3%.",
"The manual evaluation results reinforce the previous observations from held-out evaluation.",
"Table 3 shows two examples.",
"For better illustration, we choose entity pairs that have only one contextual sentence.",
"For the first example, PCNN+ATT predicts that most likely there is no KB relation between the entity pair, while both LoRE and GloRE identify the correct relation with high confidence.",
"The textual relation clearly indicates that the head entity is ( appos ) a criminologist at ( nmod:at ) the tail entity.",
"For the second example, there is no KB relation between the entity pair, and PCNN+ATT is indeed able to rank NA at the top.",
"However, it is still quite confused by nationality , probably because it has learned that sentences about a person and a country with many words about profession (poet, playwright, and novelist) 827 Contextual Sentence Textual Relation PCNN+ATT Predictions LoRE Predictions GloRE Predictions [ Alfred Blumstein ] head , a criminologist at [ Carnegie Mellon University ] tail , called ... appos criminologist nmod:at NA (0.63) employee of (1.00) employee of (0.96) employee of (0.36) NA (0.00) NA (0.02) founder of (0.00) founder of (0.00) founder of (0.02) [ Langston Hughes ] head , the American poet, playwright and novelist, came to [ Spain ] tail to ... -nsubj came to NA (0.58) place of death (0.35) NA (0.73) nationality (0.38) NA (0.33) contain location (0.07) place lived (0.01) nationality (0.21) employee of (0.06) Table 3: Case studies.",
"likely express the person's nationality.",
"As a result, its prediction on NA is not very confident.",
"On the other hand, GloRE learns that if a person came to a place, likely it is not his/her birthplace.",
"In the training data, due to the wrong labeling problem of distant supervision, the textual relation is wrongly labeled with place of death and nationality a couple of times, and both PCNN+ATT and LoRE suffer from the training noise.",
"Taking advantage of global statistics, GloRE is more robust to such noise introduced by the wrong labeling problem.",
"Our results show that textual relation embedding trained on global co-occurrence statistics captures useful relational information that is often complementary to existing methods.",
"As a result, it can greatly improve existing relation extraction models.",
"Large-scale training data of embedding can be easily solicited from distant supervision, and the global statistics of relations provide a natural way to combat the wrong labeling problem of distant supervision.",
"The idea of relation embedding based on global statistics can be further expanded along several directions.",
"In this work we have focused on embedding textual relations, but it is in principle bene-ficial to jointly embed knowledge base relations and optionally entities.",
"Recently a joint embedding approach has been attempted in the context of knowledge base completion (Toutanova et al., 2015), but it is still based on local statistics, i.e., individual relational facts.",
"Joint embedding with global statistics remains an open problem.",
"Compared with the size of the training corpora for word embedding (up to hundred of billions of tokens), the NYT dataset is quite small in scale.",
"Another interesting venue for future research is to construct much larger-scale distant supervision datasets to train general-purpose textual relation embedding that can help a wide range of downstream relational tasks such as question answering and textual entailment.",
"The authors would like to thank the anonymous reviewers for their thoughtful comments.",
"This research was sponsored in part by the Army Research Laboratory under cooperative agreements W911NF09-2-0053 and NSF IIS 1528175.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein."
] | [
"method",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Information Extraction (IE) from scientific texts can be used to guide readers to the central information in scientific documents.",
"But narrow IE systems extract only a fraction of the information captured, and Open IE systems do not perform well on the long and complex sentences encountered in scientific texts.",
"In this work we combine the output of both types of systems to achieve Semi-Open Relation Extraction, a new task that we explore in the Biology domain.",
"First, we present the Focused Open Biological Information Extraction (FO-BIE) dataset and use FOBIE to train a state-of-the-art narrow scientific IE system to extract trade-off relations and arguments that are central to biology texts.",
"We then run both the narrow IE system and a state-of-the-art Open IE system on a corpus of 10k open-access scientific biological texts.",
"We show that a significant amount (65%) of erroneous and uninformative Open IE extractions can be filtered using narrow IE extractions.",
"Furthermore, we show that the retained extractions are significantly more often informative to a reader.",
"1 1 Introduction Identifying the central theme and concepts in scientific texts is a time-consuming task for experts and a hard task for laymen (Alper et al., 2004; El-Arini and Guestrin, 2011; Pain, 2016).",
"This problem is even more pronounced in inter-disciplinary fields of study, where experts in a target domain often lack the deeper knowledge of a source domain (Carr et al., 2018).",
"A specific example is biomimetics, an engineering problem-solving process in which one draws on analogous biological solutions (Kruiper et al., 2016).",
"A major issue is that engineers (target domain) know little biology (source domain) or characteristics of plants or 1 We release FOBIE and code at https://github.",
"animals (Vattam and Goel, 2013).",
"This domain-mismatch complicates searching for and reasoning over relevant scientific information, rendering biomimetics adventitious and solutions serendipitous (Kruiper et al., 2018).",
"Recently, TRADE-OFF relations have become of interest to biomimetics (Adriaens, 2019) because a trade-off defined in technology can be directly used to search for relevant texts in biology (Vincent, 2016).",
"TRADE-OFF relations express a problem space in terms of mutual exclusivity constraints between competing demands.",
"Therefore, tradeoffs play a prominent role in evolutionary thinking (Agrawal et al., 2010) and are the principal relation under investigation in a significant portion of biology research papers (Garland, 2014).",
"The functional demands that are traded off are usually abstract and domain-independent terms, such as safety ' and efficiency ' in Figure 1. A gap remains in quickly comprehending the central information in a text, e.g., the biological mechanisms that are used to manipulate a trade-off.",
"Information Extraction (IE), and specifically Relation Extraction (RE), can improve the access to central information for downstream tasks (Santos et al., 2015; Zeng et al., 2014; Jiang et al., 2016; Miwa and Bansal, 2016; Luan et al., 2018a).",
"However, the focus of current RE systems and datasets is either too narrow , i.e., a handful of semantic relations, such as U SED-FOR ' and S YNONYMY ', or too broad , i.e., an unbounded number of generic relations extracted from large, heterogeneous corpora (Niklaus et al., 2018), referred to as Open IE (OIE) (Etzioni et al., 2005; Banko et al., 2007).",
"Narrow approaches to IE from scientific text (Augenstein et al., 2017; Gabor et al., 2018; Luan et al., 2018a) cover only a fraction of the information captured in a paper usually what is within an abstract.",
"It has been shown that scientific texts contain many unique relation types and, therefore, it is not feasible to create separate narrow IE classifiers for these (Groth et al., 2018).",
"On the other hand, OIE systems are primarily developed for the Web and news-wire domain and have been shown to perform poorly on scientific texts.",
"What laymen really need is a bit of both: the accuracy of narrow RE systems to extract central relations from scientific texts and the flexibility of an OIE system to capture a much larger fraction of the possible relations expressed in scientific texts.",
"This work aims to enable rapid comprehension of a large scientific document by identifying",
"a) the central concepts in a text and",
"b) the most significant relations that govern these central concepts.",
"To this end, we introduce the task of Semi-Open Relation Extraction (SORE); Figure 1 illustrates the SORE process.",
"First, we find the central concepts safety ' and efficiency ' involved in a TRADE-OFF relation.",
"Then, by using the argument concepts of the relation as anchor points, we can explore further concepts and relations, e.g., xylem ' in Figure 1. Uncovering these relations can elucidate the meaning of unfamiliar concepts to a layperson (Mausam, 2016).",
"The SORE approach is hypothesized to reduce the number of uninformative extractions without limiting RE to a finite set of relations, which could generally benefit IE from scientific articles, e.g., materials discovery (Kononova et al., 2019) and drug-gene-mutation interactions (Jia et al., 2019).",
"To address SORE we create the Focused Open Biological Information Extraction (FOBIE) dataset.",
"FOBIE includes manually-annotated sentences that express explicit trade-offs, or syntactically similar relations, that capture the central concepts in full-text biology papers.",
"We train a span-based RE model used in a strong scientific IE system (Luan et al., 2018a) to jointly extract these relation structures.",
"We explore SORE and use the output of our model to filter the output of an OIE system (Saha and Mausam; Saha et al., 2017; Pal and Mausam, 2016; Christensen et al., 2011) on a corpus of biology papers.",
"Qualitative analyses show that the output of a narrow RE model can speed up expert analysis of trade-offs in biological texts, and be used to filter out both erroneous and uninformative OIE extractions.",
"OIE systems use a set of handcrafted or learned extraction rules and rely on dependency features to extract open-domain relational tuples from text (Yu et al., 2017; Niklaus et al., 2018).",
"As OIE systems rely on syntactic features they require little fine-tuning when applied to different domains and the extraction rules work for a variety of relation types (Mausam, 2016).",
"These properties can be especially useful on scientific texts where additional knowledge on unknown concepts can ease the textual comprehension for non-experts.",
"Consider the example OIE extractions for xylem ' in the top part of Figure 1. Existing OIE systems have been shown to perform significantly worse on the longer and more complex sentences found in scientific texts than on Wikipedia texts (Groth et al., 2018).",
"Common issues of OIE systems on Web, News, and Wikipedia texts include the correct identification of the boundaries of an argument, handling latent n -ary relations, difficulty handling negations, and generating uninformative extractions (Schneider et al., 2017).",
"Groth et al. (2018) evaluate the output of two state-of-the-art OIE systems based on correctness, rather than, e.g., the number of missed extractions.",
"They note that the crux of the IE challenge is that extractions reflect the consequence of the sentence.",
"As an example of an uninformative extraction Fader et al. (2011) note how (Faust, made, a deal) ' captures the consequence, but not the critical information of whom Faust made a deal with in the sentence Faust made a deal with the devil. .",
"In this work, we explore filtering both incorrect and uninformative OIE extractions from scientific texts using the central concepts that we extract through narrow IE (cf. Section 5.3).",
"Narrow RE entails identifying two or more related entities in a text and classifying the relation that holds between them.",
"Early works on the combined task of Named Entity Recognition and labeling of relations between extracted entities used precomputed dependency features (Liu et al., 2013; Chen et al., 2015; Lin et al., 2016), word position embeddings (Zeng et al., 2014), or considered only the Shortest Dependency Path between two entities as input (Bunescu and Mooney, 2005; Santos et al., 2015; Zeng et al., 2015).",
"Later work aimed to reduce errors propagated by pre-computed dependency features (Nguyen and Grishman, 2015), or by joint modeling of entities and relations (Miwa and Bansal, 2016).",
"Poor performance of these RE systems on scientific texts has led to the development of domain-specific datasets 2 .",
"The SCIENCEIE dataset focuses on the extraction of 3 types of key-phrases, rather than Named Entities, and hyponymy and synonymy relations between these (Augenstein et al., 2017).",
"The SemEval 2018 task 7 dataset focuses on 6 narrow relations between 7 entity types (Gabor et al., 2018).",
"And the SCIERC dataset focuses on 7 relation types, including co-reference, between 6 types of entities (Luan et al., 2018a).",
"Top systems developed for both SemEval tasks adapt the LSTM-based approach of Miwa & Bansal (2016), combined with semi-supervised learning and ensem-bling (Ammar et al., 2018), as well as pre-trained concept embeddings (Luan et al., 2018b).",
"In the past, several BioNLP and BioCreAtIvE shared tasks were organized that aimed at identifying relations in the biology domain (Hirschman",
"2 SCIENCEIE SemEval 2017: 500 paragraphs from full-text Computer Science, Material Science, and Physics journal articles, SemEval 2018: 500 abstracts within the domain of Computational Linguistics.",
"SCIERC: 500 abstracts from Artificial Intelligence conference and workshop proceedings.",
"et al., 2005; Kim et al., 2009; Nedellec et al., 2013; Zhou et al., 2014).",
"Many datasets focus primarily on a predefined set of biomedical relations, such as interactions between known proteins, genes, diseases, drugs, and chemicals (Kim et al., 2003; Krallinger et al., 2017; Cohen et al., 2017; Islamaj Dogan et al., 2019).",
"Examples of more biology-oriented corpora include the BB corpus (Deleger et al., 2016) and the SEEDEV corpus (Chaix et al., 2016).",
"The BB corpus includes 4 entity types and 2 relation types that revolve around microorganisms of food interest.",
"Besides abstracts and titles, it contains paragraphs and sentences from 20 full-text documents (Bossy et al., 2019).",
"Similarly, SEEDEV consists of 86 paragraphs from 20 full-text articles about seed development in a specific plant, the Arabidopsis thaliana .",
"Considering the small size of the dataset, a relatively large number of many entity and relation types are used; 16 types of Named Entities and 21 types of relations.",
"This results in an imbalanced dataset with 7 relations making up less than 1% of all relations.",
"Furthermore, there is some overlap in source documents for the train/dev/test split (Chaix et al., 2016).",
"In contrast to the previously described datasets, FOBIE does not classify arguments of relations into specific entity-types.",
"FOBIE contains annotations of key-phrases found in full-text scientific papers, similar to SCIENCEIE.",
"The key-phrases and relations are annotated in 1,548 relatively long and complex sentences, which were sourced from 1,215 full-text scientific biological texts using a Rule-Base System.",
"Table 1 provides an overview of the size of FOBIE in comparison to SCIENCEIE, the SemEval 2018 task 7 dataset and SCIERC.",
"Both the BB and SEEDEV corpus contain approximately 3,500 relations within a small sub-domain of biology, while FOBIE focuses more generally on the domain of biology.",
"Section 3 describes the collection of FOBIE and dataset statistics in detail.",
"A variety of words are able to indicate a trade-off, e.g., compromise, optimization, balance, interplay and conflict (Kruiper et al., 2018).",
"We adapt these terms as trigger words in a Rule-Based System (RBS) and run it on 10k open-access papers that were collected from the Journal of Experimental Biology (JEB) and BioMed Central (BMC) journals on Biology ', Evolutionary Biology , and Systems Biology '.",
"The selection of journals was made only to the extent that the articles focus on the biological domain.",
"We retained the abstract, introduction, results, discussion and conclusion sections.",
"We used spaCy 3 to split the texts into sentences and identify POS tags and dependency structure.",
"The FOBIE dataset contains only sentences that the RBS identified as expressing a TRADE-OFF relation.",
"The initial annotations extracted by the RBS were manually corrected and extended by a biology expert using the BRAT interface (Stenetorp et al., 2012).",
"We define three relation types: TRADE-OFF , ARGUMENT-MODIFIER and NOT-A-TRADE-OFF .",
"The latter denotes phrases that are related to a trigger word, but not by a TRADE-OFF relation.",
"These syntactically similar relations provide useful training signal as negative samples.",
"Negative samples are important because possible trigger words can be contiguous, e.g., the phrase negative correlation ' denotes a TRADE-OFF relation, whereas correlation ' by itself does not.",
"As a result, the annotation of training examples is harder, and lexical and syntactic patterns that correctly signify the relation are sparse (Peng et al., 2017).",
"For simplicity's sake, with some abuse of terminology, we refer to all such relations collectively as trade-offs .",
"We found a substantial amount of arguments to be nested or in a non-projective relationship.",
"In Figure 2 the prepositional phrase in jumping ', conceptually refers to both central concept arguments of the relation, i.e., the need for energy storage ' and the presence of resilin '.",
"We adopt the following annotation heuristic: prepositional phrases are treated as modifying phrases when they apply to multiple arguments (as is the case in Figure",
"2) or can be distinctly separated from the argument, e.g., by punctuation.",
"We randomly selected 250 sentences (16.1%) for re-annotation and quality control by a second domain expert.",
"The inter-annotator agreement Cohen k is found to be 0.93.",
"Table 2 summarizes statistics on FOBIE.",
"The final dataset consists of 1,548 single sentences from 1,292 unique documents, split into 1,248/150/150 train/dev/test.",
"The split is controlled for source document overlap to avoid having identical arguments of relations appearing both during training and testing.",
"FOBIE contains relatively long key-phrases with an average of 3.44 tokens and only 12% of them consist of a single token.",
"In comparison SCIENCEIE and SCIERC both contain 31% singleton key-phrases, and the average entity length in SCIERC is 2.36.",
"Furthermore, sentences taken from full-text documents are longer than those found in abstracts.",
"The average sentence length in SCIERC is 24.31 tokens, while 79.26% of the sentences in FOBIE are longer than 25 tokens.",
"Following Peng et al. (2017) we extract n -ary relations by (1) identifying the trigger and (2) extracting the binary relations between this trigger and the arguments inspired by Davidsonian semantics.",
"We define key-phrases as spans of consecutive words s S , with S all possible spans in a sentence, and relation-types as r R d , with d the total number of unique relations.",
"Then a binary relation is a triple <governor, relation, dependent> with governor and dependent elements of S .",
"The union of the following binary relations found in a sentence may constitute a non-projective graph: Figure 2: Example of an annotation in BRAT showing a trigger word, correlation ', that is related to two arguments, which in turn are related to a single modifier.",
"Def.",
"1. An explicit trade-off is an instance of a directed relation t T o , indicated by trigger word p P u with u the set of unique trigger words and P S .",
"A trade-off is a binary relation, t | = o , with governor P and dependent S .",
"A single trigger word p can be in n multiple relations.",
"Def.",
"2. An argument-modifier is a directed binary relation a A m , where we omit the classification of a into a set of possible modification types m .",
"An instance of a is then a tuple <governor, relation, dependent> where one of the arguments is related to a trigger word p , and both arguments S .",
"We adapt a span-based approach that has been used previously for the tasks of co-reference resolution (Lee et al., 2017), Semantic Role Labeling (He et al., 2018), and scientific IE (Luan et al., 2018a).",
"The use of span representations as classifier features enables end-to-end learning by propagating information between multiple tasks without increasing the complexity of inference.",
"We train the SCIIE system (Luan et al., 2018a) on FOBIE to extract spans that constitute trigger words and key-phrases, as well as the binary relations between these spans.",
"Figure 3 illustrates the input that we provide to SCIIE.",
"All tokens are embedded using GloVe (Pennington et al., 2014) and ELMo embeddings (original) (Peters et al., 2018).",
"For a single sentence D = { w 1 , ..., w n } all possible spans S = { s 1 , ..., s N } are computed, which are within-sentence word sequences.",
"The model deals with O ( n 4 ) possible combinations of spans, where n is the number of words in a sentence.",
"Therefore, pruning is required to make the classification of span-pairs into relation labels tractable at both training and test time (Lee et al., 2017; He et al., 2018).",
"First, a score mr of how likely a span is mentioned in a relation is computed.",
"These mention scores enable beam pruning the number of spans considered for relation classification with a variable beam of size n , where n is the number of tokens in the input sentence (Luan et al., 2018a).",
"Second, the maximum width W of spans is limited to reduce the total number of spans.",
"We set to .",
"8 and W to 14 tokens, the maximum span length in FOBIE.",
"After pruning, a label e i LE is predicted for the remaining spans s i .",
"Here LE is the set of possible span labels, including a non-span class (cid:15) .",
"For pairs of spans ( s i , s j ) the model predicts which relation r ij LR holds between them.",
"The set of possible relation types is LR , which includes a nonrelation class (cid:15) .",
"The output consists of labeled spans and relation labels for pairs of spans.",
"For a detailed description of the SCIIE system we refer to Luan et al. (2018a).",
"We evaluate SCIIE on two sub-tasks: (1) Argument Recognition, and (2) Relation Extraction.",
"Table 3 summarizes the results on the sub-tasks of Argument Recognition and RE.",
"With regards to the first sub-task, we train two SCIIE models.",
"One model only predicts whether a span is a valid span or not, while a second model predicts whether the span is a trigger word or a key-phrase.",
"For the first sub-task we also report the results of the RBS described in Section 3.1.",
"The RBS performs significantly worse; it identifies trigger words exceptionally well (F1=95.89 on test set) but does not correctly recognize many of the remaining key-phrases (F1=22.36 on test set), resulting in a low overall performance.",
"Figure 4 shows example outputs of the narrow RE model.",
"The predicted relation (NOT-A-TRADEOFF ) and its accompanying structure for the first example are completely correct.",
"Note how the argument modifiers result in a non-projective structure.",
"The second example is more challenging, with a longer range dependency between the tradeoff span and the second dependent argument.",
"Our model predicts the correct relation, TRADE-OFF , but only extracts partial argument spans and essentially fragments them into several modifying argument relations.",
"The third example exhibits a relatively long argument which is common in scientific literature where only a small part of the span is predicted.",
"A qualitative analysis confirms the ability of the trained narrow IE system to support a domain expert during trade-off annotation.",
"We predict tradeoffs for 523 unlabeled, scientific papers that have been annotated with a trade-off in an ontology of biomimetics (Vincent, 2014, 2016).",
"A domain expert compares the trade-offs found in the ontology of biomimetics against the output of the SCIIE system, see Table 4.",
"Narrow IE is found to locate the central TRADE-OFF relations and arguments for 41.68% of the total 523 papers.",
"Explicit tradeoffs were found in 243 documents.",
"At least one of the extracted TRADE-OFF relations for each document is identical to the expert annotation in 77.37% of these documents.",
"For 89.71% of the 243 documents a trade-off was found to be correct after some interpretation by the expert.",
"Two main types of uninformative trade-offs were found: trade-offs from a cited source and trade-offs between generic terms, e.g., a trade-off between cost and benefit without defining what the cost and benefit are.",
"Documents with identified trade-offs 243 Exact match 77.37% Match after interpretation 89.71% Sentences with identified trade-off 998 Exact match 68.04% Match after interpretation 84.47% Table 4: Manual analysis of extractions from 523 scientific documents that were used in the creation of an ontology of biomimetics (Vincent, 2014, 2016).",
"We define the aim of SORE as extracting the relations and concepts in a text that capture the most central information.",
"The application of SORE is especially of interest to scientific IE where OIE systems perform poorly and narrow IE systems are unable to cover the wealth of different relations types.",
"One possible approach is to automatically filter out uninformative and incorrect extractions generated by OIE systems.",
"In this approach, SORE relies on the output of both types of systems, providing a middle ground between precise, narrow IE and unbounded, but unreliable, OIE.",
"The resulting extractions are expected to be useful for human readers, but can also be used to collect data for annotation and training of scientific IE systems.",
"We explore SORE on scientific biology texts using the output of the SCIIE system trained on FOBIE, predicting trade-offs for the unlabeled 10k open access biology papers (see section 3.1).",
"The narrow IE output consists of 2,216 trade-offs found in 1,279 documents.",
"We pre-process arguments by appending their modifier, removing stop words, and embedding the remaining sequences using ELMo (PubMed) 4 .",
"We use the K-means algorithm to compute clusters on the IDF-weighted average of the resulting argument representations.",
"A domain expert inspected the centroids qualitatively.",
"Table 5 provides insight into some of the resulting argument clusters and their interrelations.",
"The exact number of clusters does not seem to greatly affect SORE.",
"For the given narrow IE output 50 clusters seems to provide a good balance between generic and more fine-grained topics.",
"The IDF weights are computed over the subword units found in the dataset; we use SentencePiece 5 with a vocabulary of 16K.",
"We then run OpenIE 5, a state-of-the-art OIE system (Saha and Mausam; Saha et al., 2017; Pal and Mausam, 2016; Christensen et al., 2011), on the same 1,279 documents that were found to contain one or more TRADE-OFF relations.",
"We retain only OIE extractions that contain one or more arguments that are classified into the same cluster as the TRADE-OFF arguments found in that text.",
"Furthermore, we omit OIE arguments that belong to noisy clusters containing mostly math symbols or long nested phrases.",
"We compute a simple IDF-weighted cosine similarity (Galarraga et al., 2014) between the vector representations of the remaining OIE and trade-off arguments.",
"We notice a striking drop in the number of irrelevant and noisy OIE arguments that remain after applying SORE.",
"The total amount of OIE extractions reduces from 401k before filtering to 140k (34.95%) after filtering.",
"As a result, the number of OIE extractions per document reduces from 314 to 110.",
"The unfiltered OIE extractions are found in 170k sentences, of which 67k (39.55%) are retained after applying SORE.",
"To test our hypothesis that SORE can reduce the number of uninformative extractions, without limiting RE to a narrow set of relations, we randomly select representative samples of unfiltered and filtered OIE extractions (400 each).",
"A domain expert manually annotated whether each extraction or sentence was thought to be informative, e.g., provides relevant information to understanding a biological text.",
"As an example, consider the sentence We have used this approach in a previous study to investigate the molecular factors governing the altered liver regeneration dynamics caused 4 https://allennlp.org/elmo 5 https://github.com/google/sentencepiece Cluster name Immunity Size Locomotion Top-5 arguments immunity size swimming immune function number sprinting the immune system volume running incompetence age locomotion immune response time diving Top-3 related clusters Mating Temperature Attribute of Animal Reproduction Sperm Length Verbs Life History Traits Offspring Number Capacity/Endurance Table 5: Examples of clusters found using the K-means algorithm on trade-off arguments from 1279 documents. For the related clusters only TRADE-OFF relations are taken into account. by ablation of the gene adiponectin (Adn)",
"(Cook et al., 2015).",
"OIE extractions such as",
"(We, have used, this [...] study)",
"' are considered uninformative, in contrast to",
"(the molecular [...] dynamics, caused by, ablation [...] adiponectin)",
"'.",
"Many OIE extractions are found to be poorly structured.",
"Like Groth et al.",
"(2018)",
"we relax the requirement of extractions being well-formed, e.g., we consider extractions that incorrectly identify the boundaries of one or more arguments as possibly capturing relevant information.",
"Different from their evaluation on correctness, we evaluate whether an extraction captures information that is relevant to understanding a text.",
"As a result, we consider poorly structured OIE extractions that contain relevant information to be informative, e.g.:",
"('the resumption of respiration', ' can lead to an increase of superoxide anions in the cytosol perhaps driving', ' increased elevation of Cu-ZnSOD')",
".",
"",
"('transcriptional coregulation amongst many genes', ' will give', ' rise to indirect interaction effects in mRNA expression data')",
".",
"The annotation relies on the correctness of the information captured by OIE extractions and whether this information is useful to a reader.",
"However, this does not imply informative extractions are relevant to the central theme of the text captured in a tradeoff.",
"We consider OIE extractions uninformative if the extraction: contains an uninformative argument class, e.g.,",
"('Miller et al . , 2012', ' to minimize', ' their swimming effort')",
".",
"contains incomplete arguments, e.g.,",
"('the RDME requirement', ' reactions', ' only fire')",
".",
"is non-sensible, e.g.,",
"('P. magellanicus', ' would have resulted', ' in a 1.6-fold higher Vmax for the scallop muscle')",
".",
"is unlikely to help understand a text, e.g.,",
"('DeepBind', ' was trained', ' on data from RNAcompete , CLIP RIP seq [ 10')",
"and",
"('microlepidopteran superfamilies', ' are heavily entombed', ' L:in amber')",
".",
"We also randomly select representative samples from the 170k unfiltered and 67k filtered sentences from which the OIE extractions are sourced.",
"The reason is that erroneous OIE extractions, e.g., not well-formed tuples, can guide a reader to informative passages in a text.",
"We see similar errors as described by Schneider et al.",
"(2017)",
"and Groth et al.",
"(2018), e.g., long sentences lead to incorrect extractions and errors in argument boundaries.",
"To illustrate the complexity of sentences that an OIE system encounters in scientific texts, consider the following examples: the arity of relations can be high, e.g.,",
"(49 tokens)",
"A large genome size tends to correlate with delayed mitotic and meiotic divi-sion [68] decreased plant invasiveness of disturbed sites [9] lower maximum photosynthetic rates in plants [2] and lower metabolic rates in mammals [10] and birds [11, 12].",
"(Warringer and Blomberg, 2006).",
"many phrases are nested and express nonverbal relations, e.g.,",
"(45 tokens)",
"However, for arboreal animals that regularly jump between branches (often when elevated quite high above the ground), jumping accurately ( which we define as the ability to land close to the intended target ) may also be important to fitness. (Kuo et al., 2011).",
"Table 6 provides an overview of the annotation results.",
"Filtering is found to increase the informativeness of both OIE extractions ( 2 =6.39, p < .025) and sentences ( 2 =11.75, p < .01).",
"The percentage of informative OIE extractions increases by 5.75% and of the percentage of informative sentences by Filtering # Nr.",
"8.25%.",
"A second domain expert annotated 25% of each set (400 total), the inter-annotator agreement Cohen k was found to be 0.84.",
"Manual inspection of the retained OIE extractions shows that many relevant extractions are retained, e.g., see Table 7. These extractions are useful to a reader in determining whether a document is worth reading in full, and can be used to identify informative sections in a text.",
"The presented approach to SORE shows promising results w.r.t. automatically filtering out a large proportion of irrelevant, incorrect, or uninformative OIE extractions.",
"Considering the poor quality of OIE extractions, however, we propose presenting a reader with the sentences that entail the filtered OIE extractions.",
"Furthermore, SORE provides a method to collect data for annotation and training of scientific OIE systems.",
"We introduce the task of Semi-Open Relation Extraction (SORE) on scientific texts and the Focused Open Biological Information Extraction (FOBIE) dataset.",
"We adapt off-the-shelf IE systems to show that SORE is feasible, and that our approach is worth improving upon both in terms of performance, as well as reducing the system's complexity.",
"A strong scientific IE system is used as a baseline, and its output is used to filter the relations found by a state-of-the-art OIE system.",
"OIE from scientific text is a hard task.",
"The large number of errors that we find in OIE extractions from scientific texts render them near-useless to downstream computing tasks.",
"A human reader may, nevertheless, find many incorrect extractions informative.",
"An issue for humans is the sheer amount of OIE extractions and the high proportion of uninformative extractions.",
"We show that our approach TRADE-OFF relations Trade-off arguments Argument modifiers sleepcognitive abilities energy conservation memory retention (the keeping of memory over prolonged periods of time) memory consolidation (in bats) (without a food reward) (shift from shortto longterm memory) (using torpor) Examples filtered OIE extractions (A memory; is normally formed; after repeated learning events; sleep enhances this process) (learning; is associated; with a food reward) (Sleep deprivation; has; negative effects on both memory consolidation) (torpor; has; a negative influence on memory consoli-dation)(digestion; prevents; the bats; from falling into torpor quickly)(torpor; indeed affects ; learning abilities) Table 7: SORE extractions from a scientific biology text (Ruczy ski et al., 2014).",
"to SORE reduces the number of OIE extractions by 65%, while increasing the relative amount of informative extractions by 5.75%.",
"As a result, SORE improves the ability for a reader to quickly skim through the remaining extractions, or sentences that they are sourced from, and analyze how central concepts are related in a scientific text.",
"The presented approach is currently limited to the domain of biology and the use of trade-off relations, but we expect that central relations can be identified for other scientific domains that enable SORE.",
"We show that creating a dataset for narrow RE can be done relatively cheaply by re-annotating the output of a simple RBS.",
"Similarly, SORE may aid the collection of a dataset for scientific OIE.",
"The authors would like to gratefully acknowledge the financial support of the Engineering and Physical Sciences Research Council (EPSRC) Centre for Doctoral Training in Embedded Intelligence under grant reference EP/L014998/1 and the EPSRC Innovation Placement fund.",
"We also thank Ben Trevett for proof-reading the document."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"result",
"abstain",
"other",
"other"
] |
[
"Although current state-of-the-art Transformer-based solutions succeeded in a wide range for single-document NLP tasks, they still struggle to address multi-input tasks such as multi-document summarization.",
"Many solutions truncate the inputs, thus ignoring potential summary-relevant contents, which is unacceptable in the medical domain where each information can be vital.",
"Others leverage linear model approximations to apply multi-input concatenation, worsening the results because all information is considered, even if it is conflicting or noisy with respect to a shared background.",
"Despite the importance and social impact of medicine, there are no ad-hoc solutions for multi-document summarization.",
"For this reason, we propose a novel discriminative marginalized probabilistic method (DAMEN) trained to discriminate critical information from a cluster of topic-related medical documents and generate a multi-document summary via token probability marginalization.",
"Results prove we outperform the previous state-of-the-art on a biomedical dataset for multi-document summarization of systematic literature reviews.",
"Moreover, we perform extensive ablation studies to motivate the design choices and prove the importance of each module of our method.",
"1 1 Introduction The task of multi-document summarization aims to generate a compact and informative summary from a cluster of topic-related documents, which represents a very challenging natural language processing (NLP) application due to the presence of redundant and sometimes conflicting information among documents (Radev, 2000).",
"In the medical domain, in which machine learning plays an increasingly significant role (Domeniconi et al., 2014a; di Lena et al., 2015), multi-document summarization finds application in the generation of 1 The solution of this paper is available at https:// disi-unibo-nlp.github.io/projects/damen systematic literature reviews, a biomedical paper that summarizes results across many studies (Khan et al., 2003).",
"DeYoung et al. (2021) are the first that address this task, showing the related issues.",
"State-of-the-art approaches leverage two leading solutions: hierarchical networks that capture cross-document relations via graph encodings (Wan and Yang, 2006; Liao et al., 2018; Li et al., 2020; Pa-sunuru et al., 2021) or hidden states aggregation (Fabbri et al., 2019; Liu and Lapata, 2019a; Jin et al., 2020), and long-range neural models that apply multi-input concatenation (Xiao et al., 2021).",
"While effective, these solutions struggle to process clusters of many topic-related documents in low computational resource scenarios (Moro and Ragazzi, 2022) because they need to truncate the inputs.",
"Moreover, pre-trained state-of-the-art Transformers are not leveraged despite showing strong performance when fine-tuned in downstream tasks such as single-document summarization (Liu and Lapata, 2019b; Lewis et al., 2020a; Raffel et al., 2020; Beltagy et al., 2020; Zaheer et al., 2020).",
"Multi-document summarization requires models to have more robust capabilities for analyzing the cluster to discriminate the correct information from noise and merge it consistently.",
"In this work, we propose a discriminative marginalized probabilistic neural method (DAMEN) that selects worthy documents in the cluster with respect to a shared background and generates the summary via token probability marginalization.",
"The marginalization of the probability has been successfully applied in past NLP models such as pLSA (Hofmann, 1999) to learn the word probability distribution in documents by maximizing the likelihood.",
"Recently, new deep neural models that use probability marginalization approaches have been proposed for the question-answering task, where, however, each input/output sentence is generally several orders of magnitude shorter than sets of documents in multi-document summarization 180 (Guu et al., 2020; Lewis et al., 2020b).",
"To the best of our knowledge, we are the first that propose such a method for multi-document summarization.",
"To this aim, we conduct experiments on the only medical dataset for multi-document summarization of systematic literature reviews (MS2).",
"Besides, we perform extensive ablation studies to motivate the design choices and prove the importance of each component of our method.",
"To sum up, our contributions are as follows: We propose a novel probabilistic neural method for multi-document summarization (DAMEN) that discriminates the summary-relevant information from a cluster of topic-related documents and generates a final summary via token probability marginalization.",
"We advance the research in the medical domain, experimenting with a biomedical multi-document summarization dataset about the generation of systematic literature reviews.",
"We show that our solution outperforms previous state-of-the-art solutions, achieving better ROUGE scores.",
"Furthermore, we extensively prove the contribution of each module of our method with ablation studies.",
"We describe related works on multi-document summarization categorized on model architectures.",
"Flat solutions.",
"Flat concatenation is a simple yet powerful solution because the generation of the multi-document summary is treated as a single-document summarization task, thus it can leverage state-of-the-art pre-trained summarization models.",
"Consequently, processing all documents as a flat input requires models capable of handling long sequences.",
"As previously experimented by DeYoung et al. (2021), Xiao et al. (2021) proposed to leverage the Longformer-Encoder-Decoder model (Beltagy et al., 2020) pre-trained with a novel multi-document summarization specific task.",
"They proved that a long-range Transformer that encodes all documents is a straightforward yet effective solution, and they achieved new state-of-the-art results in several multi-document summarization datasets.",
"However, such models may struggle to handle a massive cluster of topic-related documents since they need to truncate them because of architectural limits.",
"Further, processing all documents in a cluster could be noisy if some of them are not relevant or factual with respect to the summary.",
"Hierarchical solutions.",
"To better preserve cross-document relations and obtain semantic-rich representations, hierarchical concatenation solutions leverage graph-based techniques to work from word and sentence-level (Wan and Yang, 2006; Liao et al., 2018; Nayeem et al., 2018; Antognini and Faltings, 2019; Li et al., 2020) to document-level (Amplayo and Lapata, 2021).",
"Other hierarchical approaches include multi-head pooling and inter-paragraph attention architectures (Liu and Lapata, 2019a), attention models with maximal marginal relevance (Fabbri et al., 2019), and attention across different granularity representations (Jin et al., 2020).",
"Such models are often dataset-specific because of the custom architecture, so they struggle to adapt to other datasets and effectively leverage pre-trained state-of-the-art Transformers.",
"Our solution.",
"In this work, we show how the summary-relevant information can be discriminated from a cluster of medical documents by a probabilistic neural method trained end-to-end.",
"In detail, our solution fully leverages pre-trained state-of-the-art Transformers without applying input truncation that causes performance drop and discards important contents, unacceptable for a high-social impact domain such as the medical one.",
"We introduce DAMEN, a discriminative marginalized probabilistic neural method for the multi-document summarization of medical literature based on three components:",
"Indexer : it is a neural language model based on BERT architecture (Cohan et al., 2020) that creates a dense representation of documents in the cluster, according to the best practices for information retrieval systems.",
"Discriminator : it leverages a BERT model to create the background embedding, which is used to compute a distance score between the embedding of each document in the cluster in order to select the top K ones.",
"Generator : it uses a BART model (Lewis et al., 2020a) to produce the final summary via token probability marginalization from the top K documents combined with the background.",
"While the Indexer is a frozen pre-trained model based on BERT, the Discriminator and Generator are trained end-to-end during the learning phase (Fig. 1).",
"The overall task can be mathematically formalized as follows.",
"The training tuple is composed of three elements ( y i , x i , C i ) , where y i is the ground-truth target summary, C i is the cluster of documents used to generate the multi-document summary, and x i is the background, which is a textual context shared by all c j C i used as input of the method, similar to the query in the query-focused multi-document summarization (Su et al., 2020).",
"The whole pipeline is trained end-to-end to maximize the conditional probability of generating y i from x i and C i through gradient descend: p ( y i | x i , C i ) (1) 3.1 Indexer In this phase, we index each document in the cluster with an embedding generated by a BERT-based model.",
"Such a pre-trained language model is the state-of-the-art in semantic modeling from textual data thanks to the vast knowledge learned during pre-training (Chen et al., 2019), achieving ground-breaking results across an extensive range of NLP downstream tasks even without fine-tuning.",
"For this reason, we use it to create a dense latent representation of each document, called document embedding, which is a vector of continuous numbers that indicates a point in a latent semantic space.",
"The technique we use is known as dense passage retriever (DPR) (Karpukhin et al., 2020), and it is widely adopted in the information retrieval domain (e.g., Lin et al., 2021; Moro and Valgimigli, 2021).",
"We choose the DPR method because it does not interrupt the backpropagation, differently from other solutions, e.g., BM25, TF-IDF (Domeniconi et al., 2014b) or LSA (Domeniconi et al., 2016a).",
"We formalize this step as B ( C i ) = E , where B is a BERT-based model, represents its parameters, and E is a matrix of shape ( len ( C i ) , 768) , where each row j of the matrix is the latent representation of the document c j .",
"The main idea of the Discriminator is to discriminate the critical information from noise in a cluster of topic-related documents with respect to a shared background without breaking the backpropagation chain.",
"For this reason, we use a probabilistic deep neural model to draw a probability distribution over documents in the cluster < c 0 , c 1 , ..., c n > C i , with the following formula: p ( C i | x i ) (2) where represents the parameters of the neural network.",
"a BERT-based pre-trained language model as the one used for indexing, but this is trained during the learning process while the first is frozen.",
"In detail, the Discriminator creates a latent projection for each background, which is used to fetch the more related documents in the cluster.",
"More precisely, it applies the inner product to create a score for each document and selects the top K ones.",
"We use the pre-trained encoder-decoder generative Transformer BART (Lewis et al., 2020a) to summarize the C i weighted by the Discriminator .",
"This component is trained to predict the next output token, creating a probability distribution over the dictionary for each c j C i before marginalizing.",
"The process is then repeated for all the target tokens.",
"Before giving the documents to the model, we concatenate them with the background x i , creating c ij = [ x i , tok, c ij ] , where tok is a special text separator token ( <doc> ) we add between x i and c ij to make BART aware of the background text boundary.",
"The behavior of the Generator can be formally defined as follows: p ( y i | c ij ) = N (cid:89) z p ( y iz | c ij , y i, 1: z 1 ) (3) where are the Generator parameters, N = | y i | is the target length, and y i, 1: z are the tokens from position 1 to z of the target y i .",
"The entire model aims to draw the probability distribution over the dictionary to generate the output tokens y i conditioned by x i and C i that we formally define as:",
"the following loss:",
"This section starts with describing the dataset in 4.1 and training details in 4.2.",
"We then analyze model performance in 4.3 and finally conduct ablation studies in 4.4.",
"We tested and evaluated our proposed method on the only medical dataset for multi-document summarization, as far as we know, about the generation of systematic literature reviews: the MS2 dataset.",
"The dataset is provided in DeYoung et al. (2021), and it is freely distributed.",
"It contains over 470K document abstracts and 20K summaries derived from the scientific literature.",
"Each sample of the dataset is composed of three elements:",
"i) the background statement, which is a short text that describes the research question or topic shared by all documents in the cluster,",
"ii) the target statement, which is the multi-document summary to generate, and",
"iii) the studies , also defined as cluster for consistency with our notation, which is a set of abstracts of medical studies related to the topic covered in the background statement.",
"The problem can be formalized as follows: we have a target statement to generate about the background source, containing the topic specifications, and a cluster of related document abstracts from which to fetch and discriminate helpful knowledge with respect to the background.",
"From here on, we use the terms document and abstract interchangeably since the elements in the cluster are just the abstracts of medical documents.",
"We report the dataset statistics in Table 1.",
"We trained our solution for 3 epochs using a batch size of 1 and a learning rate of 1 × 10^-5 with a linear schedule.",
"We set K to 6 because it gave the best results, and used 1024 tokens as the maximum input size for the Generator.",
"During the evaluation, we adopted a beam size of 4 with a min and a max length set to 32 and 256, respectively.",
"We implemented the code using PyTorch for tensor computations and Hugging Face for language model checkpoints.",
"We performed the experiments on a workstation with an Nvidia RTX 3090 GPU with 24GB of memory, 64GB of RAM, and an Intel Core i9-10900X CPU @ 3.70GHz.",
"Table 2 shows the results on multi-document summarization of systematic literature reviews, comparing our method with two solutions proposed in DeYoung et al. (2021).",
"The BARTHIERARCHICAL solution is trained to encode each document independently and then concatenate the representation of hidden states before decoding, whereas LEDFLAT takes as input all documents concatenated as a single document.",
"Experimental results show we outperform the state of the art in all the ROUGE metrics, proving a better capability to discriminate relevant information across many related documents and merge it consistently (Fig. 2).",
"We conducted ablation studies on the MS2 dataset to prove the importance of each module of our method.",
"In detail, for all experiments we trained our solution for 1 epoch with the same training details reported in 4.2, and we performed the evaluation on the first 400 instances of the test set.",
"The importance of a highly abstractive large-sized Generator.",
"We report in Table 3 the performance using several pre-trained checkpoints of the Generator that differ in size and training.",
"In detail, we tested two BART-BASE checkpoints and three BART-LARGE checkpoints: facebook/bart-base: the original BART model pre-trained with a denoising masked language modeling objective.",
"gayanin/bart-mlm-pubmed : the BART model pre-trained exclusively on scientific corpora.",
"facebook/bart-large : the same BART model as the base version with a large architecture.",
"facebook/bart-large-cnn : the large BART fine-tuned on single-document summarization on the CNN/DailyMail dataset (Nallapati et al., 2016).",
"facebook/bart-large-xsum : the large BART fine-tuned on single-document summarization on the XSum dataset (Narayan et al., 2018).",
"Results prove that a large-sized BART model already fine-tuned on a summarization task achieves better performance.",
"More precisely, the checkpoint fine-tuned on the XSum dataset obtains better results thanks to the higher abstractiveness and the shortness of the target summaries, which are made up of just 1-2 sentences, similar to the MS2 dataset.",
"The importance of a full-sized chunked representation of documents in the cluster.",
"Table 4 reports experiments with three cluster configurations, where each document is treated with a different text representation, described as follows: Document-level: the simplest configuration, which considers the entire abstracts in the cluster.",
"We truncated documents, taking only the first 512 tokens, before encoding with the Indexer.",
"Sentence-level : we considered the sentences of each document obtained using the state-of-the-art tokenizer PySBD (Sadvilkar and Neumann, 2020).",
"The sentences are encoded up to 128 tokens in length and they are then treated as individual textual units.",
"Chunk-level: our configuration, where each document is split into chunks of exactly 512 tokens to consider all text information without input truncation.",
"This configuration is similar to the sentence-level one but with the difference that each textual unit is 512 tokens in length and not 128.",
"The results confirm the better performance obtained on a cluster with chunked documents.",
"By considering 512 tokens for each document, we fully leverage the capability of BERT language modeling without truncating any information.",
"The input truncation required by the document-level configuration plays an important role in final accuracy because it discards potentially summary-relevant information, leading to a performance drop.",
"The sentence-level setting lets us increase the top K sentences to retrieve, but it worsens the final summary because single sentences are too fine-grained.",
"The importance of a background-first concatenation with special token.",
"Table 5 reports the experiments with different configurations of the concatenated inputs given to the Generator.",
"We experimented with four types of concatenation: [Document + Background], [Background + Document], [Document + <doc> + Background], and [Background + <doc> + Document]. Results prove the importance of a background-first concatenation with the special token separator to make BART aware of the textual difference between the background and the documents.",
"The importance of ad-hoc model checkpoints for the Indexer and Discriminator.",
"First, we leveraged the checkpoint sentence-transformers/allenai-specter (Cohan et al., 2020), which is a scientific BERT-based model trained to create document embeddings by using paper citations.",
"Thus, we used this pre-trained model for both the Indexer and Discriminator .",
"Second, we used two different checkpoints with a specific DPR training, such as facebook/dpr-question_encoder-single-nq-base for encoding the background and facebook/dpr-ctx_encoder-single-nq-base for encoding each document in the cluster.",
"Results prove the importance of the DPR checkpoints for both the Indexer and Discriminator .",
"We proposed a novel probabilistic method based on the combination of three language models to tackle multi-document summarization in the medical domain.",
"This task is characterized by redundant information, noise, and the possible presence of vital information in each sentence that makes arbitrary input truncation unacceptable.",
"For this reason, we proposed a multi-document summarization method able to discriminate salient content from irrelevant content before summarizing.",
"In detail, the solution first leverages a BERT-based model ( Indexer ) for creating dense indices for each chunk of each document in the cluster.",
"Then, a second BERT-based model ( Discriminator ) is used to process the shared background and select only the most relevant chunks.",
"The final BART model is trained to perform a probability marginalization over each token prediction for each selected chunk.",
"In this way, our solution reads all document information and selects just the most relevant chunks, discarding noise before feeding the Generator .",
"The Discriminator and Generator are trained end-to-end, backpropagating the probability distribution as explained in Section 3.",
"The Indexer is frozen; training it would lead to some problems, such as the time to learn improved embeddings at each iteration and the larger memory occupation to save the gradient for each document.",
"Table 6 reports the ablations validating the contribution of the Indexer and Discriminator model checkpoints on MS2: the DPR-based encoders (w/ ad-hoc encoders) reach ROUGE-1/2/L scores of 28.35/8.96/21.62, versus 27.79/8.60/21.23 for the SPECTER-based ones (w/o ad-hoc encoders).",
"We tested our method on MS2, the only dataset on systematic literature reviews, and compared it with state-of-the-art models, finding that our novel approach outperforms competitors on the ROUGE evaluation metrics.",
"Further, we performed extensive ablation studies to highlight the contribution of each component and motivate the design choices.",
"To the best of our knowledge, this is the first work that applies a probability marginalization method to multi-document summarization.",
"We believe this work can inspire novel research towards end-to-end multi-model collaboration instead of solutions with a single large model addressing the entire task.",
"Following the divide-and-conquer pattern, each model learns a specific sub-task, creating a more efficient and transparent cooperating solution.",
"Tasks such as related work generation or text generation from multi-sourced inputs can get the most from our method, improving pre-existing solutions to discriminate helpful knowledge from noise.",
"Further possible directions to deal with multi-inputs are the following:",
"i) extracting relevant snippets from documents with term weighting techniques (Domeniconi et al., 2015) or semantic relations with unsupervised methods (Domeniconi et al., 2016b, 2017) to better model interpretable representations based on knowledge graph learning techniques (Frisoni and Moro, 2020; Chen et al., 2021a,b) or event extraction methods (Frisoni et al., 2021);",
"ii) training models to write and read cross-document information with self-supervised representation learning methods (Domeniconi et al., 2014c) and memory-based neural layers (Moro et al., 2018; Cui and Hu, 2021).",
"The advancement of deep neural network architectures and the availability of large pre-trained language models have led to significant improvements on the multi-document summarization task, which has applications in high-impact domains, particularly the medical one.",
"Here, systematic literature reviews play an essential role for the medical and scientific community, and for that reason, they require strong guarantees about the factuality of the output summary.",
"Current state-of-the-art NLP solutions cannot establish such assurance, so we do not believe our solution, like previous ones, is ready to be deployed.",
"To make this happen, research should explore more effective evaluation measures for text summarization, and large-scale accuracy verification by medical experts is still needed.",
"Finally, if the method is applied to sensitive data such as medical patient records, it should also include privacy-preserving policies (da Silva et al., 2006).",
"We thank the Maggioli Group for granting the Ph.D. scholarship to L. Ragazzi and L. Valgimigli.",
"The solution presented in this work has been designed by G. Moro."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"result",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other"
] |
[
"We present a targeted, scaled-up comparison of incremental processing in humans and neural language models by collecting by-word reaction time data for sixteen different syntactic test suites across a range of structural phenomena.",
"Human reaction time data comes from a novel online experimental paradigm called the Interpolated Maze task.",
"We compare human reaction times to by-word probabilities for four contemporary language models, with different architectures and trained on a range of data set sizes.",
"We find that across many phenomena, both humans and language models show increased processing difficulty in ungrammatical sentence regions, with human and model 'accuracy' scores (à la Marvin and Linzen (2018)) about equal.",
"However, although language model outputs match humans in direction, we show that models systematically under-predict the difference in magnitude of incremental processing difficulty between grammatical and ungrammatical sentences.",
"Specifically, when models encounter syntactic violations they fail to accurately predict the longer reaction times observed in the human data.",
"These results call into question whether contemporary language models are approaching human-like performance for sensitivity to syntactic violations.",
"A substantial body of work has investigated contemporary language models (LMs) by assessing whether their behavior is consistent with the rules of syntax (Hu et al., 2020; Marvin and Linzen, 2018; Warstadt et al., 2020).",
"Among other structures, these studies have investigated agreement (Linzen et al., 2016; Gulordava et al., 2018), long-distance dependencies (Wilcox et al., 2018), pronominal and particle licensing (Jumelet and Hupkes, 2018; Futrell et al., 2019), and expectations for phrase-level constituents (Futrell et al., 2018). Data and code for this paper can be found online at https://github.com/wilcoxeg/targeted-assessment-imaze.",
"Many of the studies which report aggregate behavior across a broad number of phenomena focus on accuracy scores, or the proportion of time LMs or human subjects in an online experiment prefer a grammatical variant in matching grammatical / ungrammatical sentence pairs.",
"While these investigations provide much insight, they collapse a crucial dimension of comparison, namely the difference in magnitude between the grammatical and ungrammatical conditions.",
"As long as the direction of its predictions is the same, an LM which finds ungrammatical conditions only marginally worse than their corresponding grammatical counterparts will receive the same score as a model that displays large differences between the two conditions.",
"At the same time, a related line of work has investigated the quantitative relationship between incremental predictions of language models and human reaction times (Hale, 2001; Levy, 2008).",
"Smith and Levy (2013) found that this relationship is log-linear across multiple orders of magnitude for 3-gram models, and recent investigations have shown that this holds for contemporary neural network models as well (Wilcox et al., 2020; Goodkind and Bicknell, 2018).",
"So far, this work has largely focused on the aggregate relationship, instead of isolating individual phenomena in targeted testing environments.",
"We combine these two approaches with a targeted assessment of incremental processing in neural language models and humans.",
"We collect incremental processing data on a series of sixteen test suites, adapted from Hu et al. (2020), each of which targets a different syntactic phenomenon.",
"For LM incremental processing data, we collect by-word probabilities for four contemporary neural network architectures.",
"(Table 1: test suite names, tags, and examples, e.g., Wh-Cleft Structures [Cleft]: What she did/spied was see the giraffe/the giraffe; Filler-Gap Dependency, Subject Gap [FGD-subj]: I know who/that my mother sent the present to Taylor.)",
"For human incremental processing data, we use by-word reaction times (RTs).",
"We collect these by deploying a novel online measurement paradigm called the Interpolated Maze , which is based on the Maze task (Forster et al., 2009).",
"In the Maze task, participants must read a sentence incrementally by selecting the correct word from two possible continuations, one of which is ungrammatical.",
"The time it takes participants to select the correct choice has been shown to effectively capture incremental processing cost and can be deployed at scale (Boyce et al., 2020).",
"We deploy three analysis techniques to investigate how well models capture the human incremental processing data.",
"First, we compute accuracy metrics (for LMs) and consistency scores (for humans) for each of our test suites, which correspond to the proportion of the time behavior is consistent with the relevant grammatical rules.",
"We find that, for this analysis, humans and machine performance is about equal.",
"Next, we compare the observed reaction-time slowdown between grammati-cal/ungrammatical conditions within a test suite to the slowdown predicted by each of our models.",
"For this analysis we use the methodology developed by Van Schijndel and Linzen (2018), who use a ms/bit (milliseconds of reaction time per bit of surprisal) conversion metric derived from a fitted regression model to convert between the outputs of LMs and slowdowns in human reaction times.",
"We find that models systematically under-predict the observed human data.",
"In our third analysis, we train linear regression models to predict reaction times from probabilities in non-critical sentence regions, and show that these models are relatively poor at predicting reaction times in critical sentence regions.",
"That is, in areas of the sentence where human reaction time is influenced by grammatical violations, LM probabilities routinely under-predict human processing difficulty as measured by reaction time.",
"Taken together, these results indicate that contemporary neural network language models are systematically less sensitive to grammatical violations than humans.",
"We collect incremental processing data on a series of test suites, each of which targets an individual syntactic phenomenon.",
"Composition of the test suites is described in Section 2.1.",
"Methods used to collect human reaction-time data are outlined in Section 2.2.",
"Section 2.3 describes the models tested.",
"Linear regression models used to predict reaction times from model outputs will be referred to as 'Linear Fits' to avoid confusion with Language Models.",
"We use sixteen test suites for syntactic generalization, adapted from Hu et al. (2020).",
"Test suites consist of 20-25 items.",
"Each item appears in four conditions, two grammatical and two ungrammatical.",
"Table 1 gives the name of each test suite, an example, as well as a tag, which we will use to refer to that suite in figures.",
"When test suites have modifiers, they always include distractors of the opposite grammatical category.",
"For example, singular reflexive anaphora sentences with subject relative clause modifiers would have a plural noun in the relative clause (e.g., The bishop who likes the kings saw *themselves/himself in the mirror). Following the logic from Hu et al. (2020), each test suite comes with two or more criteria, each of which specifies an inequality that should hold in a particular critical region if model behavior follows the rules of the relevant grammatical construction.",
"Accuracy scores for each test suite are generated by computing the proportion of the time the inequality holds within the critical region, across items in a test suite.",
"In Hu et al., test suites include criteria that correspond to 2-way contrasts between gram-matical/ungrammatical conditions as well as 2x2 interactions between four conditions.",
"We only look at the 2-way contrasts here.",
"The incremental processing measure we derive from a language model to determine its accuracy according to a suite's inequality predictions is surprisal .",
"Surprisal is the negative log probability of a word given its context, S(x_i) = -log2 p(x_i | x_1 ... x_{i-1}), measured in bits.",
"In this paper, we novelly extend the usage of these inequalities to determine a human consistency score for each test suite, by checking the mean reaction times for the various conditions of each item in the suite against the suite's criteria.",
"For naturalistic corpus materials, the effect of surprisal on human reaction times has been shown to be linear (Smith and Levy, 2013; Goodkind and Bicknell, 2018; Wilcox et al., 2020), motivating this usage of syntactic generalization criteria on human reading patterns.",
"We use the same criteria as described in Appendix B of Hu et al. (2020).",
"For the MVRR test suites, the 'ungrammatical' conditions are plausibly licensed by the grammar, but are unlikely.",
"Following convention in linguistics, ungrammatical sentences will be marked with a *.",
"Example (1) below gives all four conditions of the Main Verb / Reduced Relative Clause suite, with critical regions underlined.",
"(1)",
"a. The artist drawn a portrait was impressed with the work. [REDUCED, UNAMBIGUOUS]",
"b. The artist that was drawn a portrait was impressed with the work. [UNREDUCED, UNAMBIGUOUS]",
"c. The artist painted a portrait was impressed with the work. [REDUCED, AMBIGUOUS]",
"d. The artist that was painted a portrait was impressed with the work. [UNREDUCED, AMBIGUOUS]",
"The logic of the test suite relies on the fact that strings like painted are ambiguous between active past-tense main verbs and passive participles that introduce a reduced relative clause.",
"On the other hand, verbs like drawn unambiguously introduce a reduced relative clause.",
"If subjects believe that the ambiguous form of the verb introduces a main verb, they should find the critical-region verb was impressed surprising.",
"That is, relative to the [ REDUCED , AMBIGUOUS ] conditions, not reducing the verb or not using an ambiguous verb should make the critical region less surprising (1 and 2 below).",
"Furthermore, the effect of not reducing the relative clause should be smaller for unambiguous verbs than for ambiguous ones (3).",
"If we denote for convenience S_x(w_i) as the surprisal of word w_i in the context of version x of a test suite item, then the following list outlines these three predictions as inequalities, which we used to determine accuracy scores on our test suites.",
"1. S_d(was impressed) < S_c(was impressed); 2. S_a(was impressed) < S_c(was impressed); 3. (S_d(was impressed) - S_c(was impressed)) < (S_b(was impressed) - S_a(was impressed)). To foreshadow our results, the MVRR panels of Figure 3 and of Appendix A show that all three of these criteria are met for most items, both by all models and by human average reaction times.",
"Unlike our other test suites, these predictions do not correspond to contrasts between sentences that vary based on their grammaticality, but rather on predictive processing that prefers the main-verb analysis for locally ambiguous strings.",
"Human reaction time data was collected via a novel implementation of the Maze Task (Forster et al., 2009) which we call the Interpolated Maze .",
"In a maze task participants read through a sentence; at each index they are presented with two possible continuations, one word is a plausible next-word The x-x-x beside beaver slapped pretty its of ago tail Time The x-x-x bli or beaver slapped sulped its eet twul tail Time The x-x-x bli or beaver slapped pretty its of twul tail Time Grammatical Maze (G-Maze) Lexical Maze (L-Maze) Interpolated Maze (I-Maze) Figure 1: The Maze Task: Participants read the sentence word-by-word.",
"in the sentence and the other word is a distractor.",
"Participants must select the correct continuation by pressing a key on their keyboard.",
"Figure 1 shows a cartoon of this process for three variants of the Maze Task.",
"In the G(rammatical)-Maze version, the distractor word is a word of English, only it does not constitute a grammatical continuation.",
"In the L(exical)-Maze variant, the word is a non-English nonce word.",
"If participants select the wrong continuation, the trial ends and they begin reading the next sentence.",
"The time it takes participants to select the correct word by pressing a key has been shown to be a robust measure of incremental processing difficulty, with slowdowns occurring on target words instead of in subsequent spillover regions as is the case with other online processing measures such as self-paced reading (Boyce et al., 2020).",
"Of these two variants, G-Maze has been shown to produce higher-sensitivity results than L-Maze (Boyce et al., 2020); however, because each index must present one possible continuation, it cannot be used for items that have ungrammatical conditions.",
"At the critical choice point, both the distractor and the continuation would be ungrammatical and participants would not know which continuation to select.",
"To solve this problem we deploy a novel variant of the maze task called Interpolated Maze , or I-Maze.",
"In I-Maze, we interweave G-Maze and L-Maze choices, with L-Maze distractors in critical regions where one of the conditions is ungrammatical.",
"Participants are instructed to choose English words over nonce-words, thus making the 'right' choice in these regions unambiguous.",
"In order not to clump L-Maze distractors only in critical regions, we randomly sample 25% of all other words and render them as L-Maze choices.",
"For a full comparison of I-Maze, G-Maze and L-Maze see Vani et al. (2021).",
"G-Maze distractors were generated with the scripts provided in Boyce et al. (2020), which uses a neural-network based language model to automatically generate high surprisal distractor words.",
"Nonce words were generated with Wuggy (Keuleers and Brysbaert, 2010).",
"Experiments were hosted on Ibex Farm (Drummond, 2013), with participants recruited on Amazon M-Turk.",
"Reaction time data for each item was collected from thirty separate participants.",
"JRNN is the 'BIG LSTM+CNN Inputs' model from Jozefowicz et al. (2016).",
"It was trained on the One Billion Word Benchmark (Chelba et al., 2013) with two hidden layers of 8196 units each and CNN character embeddings as input.",
"GRNN is the best-performing model described in the supplementary materials of Gulordava et al. (2018).",
"It was trained on 90 million tokens of English Wikipedia with two hidden layers of 650 hidden units.",
"GPT-2 is the model presented in Radford et al. (2019), and was trained on 40GB of internet text.",
"We use the version of GPT-2 available through the Language Modeling Zoo distribution.",
"RNNG (Dyer et al., 2016) jointly models a sentence as well as its syntactic parse.",
"The model explicitly represents parse trees and composes partially built phrase structures.",
"Models are supervised with Penn-Treebank style parses during training.",
"We use the average of the three RNNG-BLLIP-LG models from Hu et al. (2020).",
"Thus, any potential badness of our linear fits in critical regions is an epiphenomenon of the fact that they were trained in regions where the linearity holds and tested in regions where it does not.",
"While there is some evidence that the linear relationship between surprisal and reading time may flatten off in high-surprisal regions for self-paced reading (see, e.g., Figure 1 in Wilcox et al. (2020)), data collected for the Maze task for both GRNN and a large Transformer model shows that the linear relationship holds even in very high-surprisal regions, exceeding 20 bits (Boyce and Levy, 2020) (see, especially, Figure 3).",
"The second confound has to do with the Interpolated Maze task.",
"It may be the case that switching between tasks incurs a cognitive load, thus ungrammatical sentence regions might be read more slowly, but only because they are always associated with a switch from grammatical to lexical distractors.",
"This could be worrisome; however, we find that reaction times in non-critical regions for L-Maze decisions are actually slightly faster than G-Maze decisions (p < 0.001 by a t-test).",
"Furthermore, all of our reported contrasts are between L-Maze items, so this is controlled for in our analyses.",
"In this section we discuss test suite accuracy scores, which are computed using the predictions associated with each test suite.",
"For models, success on a prediction means that the model found material in a specified critical region more probable in the grammatical condition than the ungrammatical condition.",
"For humans, a corresponding metric, consistency scores , report the proportion of times the critical region material was read more quickly in the grammatical condition than in the ungrammatical condition.",
"Scores are calculated across the total number of items in a test suite.",
"Because multiple subjects provided reaction time data for each item, we first average item-level data across all participants before calculating consistency scores.",
"The accuracy/consistency scores for each of our test suites can be seen in Figure 2. In this figure, each facet represents the results from a single test suite, which aggregates across two or more predictions.",
"(Figure 3: Comparison between human and predicted model reaction-time slowdowns between grammatical and ungrammatical conditions.)",
"A full breakdown of test suite by prediction can be seen in Appendix B. Chance, which is 50% accuracy, is marked with a dashed blue line.",
"Humans perform above chance on 13/16 test suites.",
"Human RTs are at or below chance for 3/4 of the Reflexive Anaphora agreement tests and the Subject-Verb Number Agreement with an Object Relative Clause modifier.",
"For the Reflexive Anaphora tests, the low scores are driven by poor performance when the noun that must be matched is singular, such as in The lawyer who the judges fear hurt herself/*themselves .",
"Notably, human reaction times for negative polarity items and for number agreement on verbs and reflexive pronouns are known to be susceptible to facilitatory interference effects from intervening attractors of the sort that are used in our test suites (Vasishth et al., 2008; Jager et al., 2020).",
"In general, human consistency scores in this study are below that reported in Marvin and Linzen (2018), who use an offline forced-choice paradigm, in which participants must judge which of two sentences sounds more natural.",
"Nevertheless, for the vast majority of test suites, humans show robust sensitivity to the grammatical effects being tested, and failure is due to specific biases, such as the singular reflexive behavior discussed above, not general insensitivity to the manipulations.",
"Table 2 shows the cross-suite correlations between human consistency scores and model accuracy scores (e.g., GRNN: correlation 0.45).",
"The relatively strong correlation scores indicate that the strength of signal for a syntactic generalization in model surprisal differentials is predictive of the signal-to-noise ratio for the generalization in human reaction times.",
"In this section we turn to the size of the contrast between grammatical and ungrammatical conditions.",
"For humans, this contrast indicates a slowdown, where critical regions of ungrammatical sentences are read more slowly than their corresponding grammatical variants.",
"For LMs, this contrast indicates a surprisal difference, where ungrammatical conditions are more surprising than their grammatical counterparts.",
"Do differences in surprisal accurately predict the slowdowns observed in human reaction time data?",
"To derive a predicted reaction-time slowdown from the model surprisals, we followed the methodology outlined in Van Schijndel and Linzen (2018).",
"Figure 4: Residuals for reaction times in critical regions from a linear fit trained to predict reaction times from surprisal values in non-critical sentence regions.",
"This approach draws on the fact that the relationship between surprisal and human reaction time is linear across multiple orders of magnitude (Smith and Levy, 2013; Wilcox et al., 2020), including for Maze data (Boyce and Levy, 2020).",
"For each LM, we trained a linear fit that predicts reaction time from surprisal value at the word-level.",
"The model is fit on RTs from all L-Maze distractor trials, critical and non-critical region alike, and includes word frequency and word length as additional predictors, with random slopes for each item and each participant.",
"The linear model's surprisal estimate, therefore, is the slowdown in processing time predicted for each bit of surprisal.",
"We treat this number as a scalar and multiply it by the difference in surprisal between conditions to derive the total predicted slowdown due to syntactic violation from the language models.",
"For all of our fits, we found a significant effect for all of our predictors.",
"The estimates for each model's surprisal term are given in Table 3. The results from this analysis can be seen in Figure 3, with the various test suites on the x-axis and observed or predicted slowdowns on the y-axis.",
"As with accuracy scores, we average across predictions within each test suite.",
"Humans demonstrate positive slowdowns in 11/16 test suites, with reflexive anaphora again proving the exception to the general trend.",
"As is evident from the height of the bars, models systematically under-predict the slowdown observed in the human data.",
"Models' predictions are outside of the 95% confidence intervals for the human slowdowns in 7/16 test suites for GPT2, 8/16 for RNNG, 9/16 for GRNN, and 12/16 for JRNN.",
"The mean predicted difference between models and humans across all test suites is 95 ms (GPT2), 107 ms (RNNG), 117 ms (GRNN) and 126 ms (JRNN).",
"These data indicate that models are less sensitive to the contrast between grammatical and ungrammatical conditions than are humans, at least in this controlled testing environment.",
"In this section, we discuss a follow-up analysis conducted to validate the conclusion that models are under-predicting reaction times in critical regions.",
"To do this, we train linear fits on data from the non-critical regions, and compute their residuals on data from these regions as well as from the critical regions.",
"The linear fits are exactly the same as the ones described in the previous section, except instead of being trained on both critical and non-critical L-Maze trials, they are trained on non-critical L-Maze trials alone.",
"If the conclusion from the last section is correct, then we should see larger residuals for the critical-region data than for the non-critical region data.",
"The left facet of Figure 4 shows the mean absolute value of the residuals for each of our LMs, both for the critical and non-critical regions.",
"The center facet shows a histogram of the same data.",
"From both plots it is clear that the critical region residuals are greater than the residuals computed for words in other regions of the sentence.",
"From the histograms, we can see that the critical region residuals are systematically higher on average than the non-critical region residuals.",
"This indicates that the models under-predict the RT values in the critical regions.",
"The difference between residuals provides additional evidence that models under-predict reaction times in critical regions compared to words in other parts of the sentence.",
"However, it does not show that models under-predict reaction times specifically for ungrammatical sentences.",
"To investigate this, we break down average residual by condition, within each of our sixteen test suites.",
"The full results for this breakdown can be seen in Appendix B, with the results for the FillerGap Dependency tests for the GRNN model shown in the right facet of Figure 4. Across all tests, we find that ungrammatical conditions show much higher residual error.",
"The mean absolute value of the residual error is 163 ms in grammatical conditions, but in ungrammatical conditions it is 244 ms .",
"The values of the two conditions are significantly different (p < 0.001 by a t-test).",
"Generally, residuals are largest for Cleft, FillerGap Dependency and MVRR suites, and smaller for suites that involve NPI Licensing, Anaphora agreement and Subject-Verb Number agreement.",
"Human reaction times are known to be susceptible to interference effects from distractors for these syntactic phenomena (Jäger et al., 2020), which may explain why residuals are smaller for these suites.",
"Taken together, this analysis demonstrates that model surprisal values specifically under-predict human reaction times in ungrammatical critical regions, suggesting that the models are less sensitive to syntactic violations than humans are.",
"Our experiments have tackled the question of whether syntactic difficulty can be reduced to by-word probabilities by providing a comparison of Language Model and human behavior that is both incremental and targeted.",
"(Footnote 4: With the MVRR test suite, no conditions are technically ungrammatical; however, we treat the reduced ambiguous condition as ungrammatical for the purposes of this analysis.)",
"Our methods build on those presented in Van Schijndel and Linzen (2018) and van Schijndel and Linzen (2020), but differ from theirs in a number of key respects, which we review briefly below to highlight the novel aspects of our own investigation.",
"First, all of our test suites target grammatical/ungrammatical contrasts (except for the MVRR garden-path test), whereas van Schijndel and Linzen test locally ambiguous sentence regions that (may) require re-analysis for proper processing.",
"Second, we assess a broad range of grammatical violations across sixteen test suites that target seven distinct structures.",
"Third, we deploy a novel measurement of processing time ( Interpolated Maze ), instead of self-paced reading.",
"We fit our own linear models from the I-Maze data, and use a ms/bit scalar term derived from lexical distractor items.",
"Finally, we provide a novel analysis that compares the residuals of linear fits between critical and non-critical regions, and we break down these residuals based on the grammaticality of the condition.",
"While none of our models is able to capture humanlike sensitivity in ungrammatical critical regions, we do see some variation between them, with RNNG and GPT-2 in particular showing the most humanlike results.",
"To compare model performance for accuracy scores (i.e., the results presented in Section 3.1), we fit pairwise logistic regression models, with the model class as the sole predictor, and random slopes for nested item/test suite combinations and for predictions (this is because predictions are shared across test suites of the same type).",
"We find that GPT-2 performs significantly better than both JRNN and GRNN (p < 0.01), and the contrast between RNNG and GRNN approaches significance (p = 0.07). None of the other pairwise comparisons are significant.",
"To compare model performance at predicting human slowdown in critical regions, we look at the difference in residual errors between the models from Section 3.3 in the critical regions.",
"We fit linear regression models with the residual as the predicted variable, and nested item/test suite combinations and condition as random slopes.",
"We find a significant contrast between GPT-2 and JRNN (p < 0.05), with GPT-2 performing better, and a near-significant contrast between RNNG and JRNN (p = 0.053).",
"Overall, these results support the conclusion that GPT-2 and RNNG have a mild advantage over the other models.",
"Figure 5: The effect of an additional ms/bit scalar term on model performance from tests in Section 3.2.",
"This is especially interesting for the RNNG model, given that it was trained on orders of magnitude less data than GPT-2.",
"For the last decade, a single-stage theory of incremental processing (Levy, 2008), in which word surprisal in a left-to-right language model (with a large or unlimited beam for models that explicitly represent multiple incremental parses) is the sole determinant of the processing difficulty that arises due to the relationship between a word and the context it appears in, has been a prominent candidate theory for both experimental (Staub, 2011) and computational (Frank and Bod, 2011) psycholinguistic investigations.",
"Although such a single-stage model can capture the qualitative difficulty patterns induced by garden-pathing and other grammar-based expectation violations (Hale, 2001; Levy, 2013), we now see that it quantitatively under-predicts the difficulty induced when grammatical expectation violations are involved, as measured by self-paced reading (van Schijndel and Linzen, 2020) and response times in the Maze task (here).",
"But just how bleak is the outlook for single-stage models?",
"To investigate this, we re-analyze the results from Section 3.2 with theoretical model performance that includes an additional scalar term that corresponds with an increase in the slope for surprisal relative to that obtained from the fit to reaction times.",
"The results are shown in Figure 5.",
"Here, the y-axis shows the proportion of tests for which the models are within the confidence intervals of human results, and the x-axis shows this scalar term.",
"We find that models achieve 90% accuracy levels when the scalar term is 4 for GPT2, 11 for RNNG and 23 for GRNN.",
"What this means is that if either the ms/bit scalar term, or the surprisal in ungrammatical conditions were (slightly under) an order of magnitude greater, then the models' performance would match humans.",
"While we agree with the assessment from van Schijndel and Linzen (2020) that these results pose a challenge for contemporary implemented models, we do not necessarily believe that they cannot be overcome within the framework of single-stage models, especially ones that are mediated by symbolic representations like the RNNG.",
"Multiple options exist that could magnify surprisal values in locally ambiguous or ungrammatical regions, such as a reduced beam size (Roark, 2001) or particle filters (Levy et al., 2009).",
"Taken together, these recent results highlight a key question for future research: what additional modeling mechanisms will be needed to accurately predict not only qualitative but also quantitative patterns of human difficulty in language processing.",
"RPL gratefully acknowledges NSF grant BCS-1551866, a Google Faculty Research Award, and funding from the MIT-IBM AI Lab.",
"Data were collected under an Institutional Review Board (IRB) approved protocol for online human subject experimentation.",
"Participants were compensated $2.00 for their participation in I-Maze experiments.",
"Experiments took 15 minutes, which meant participants were being compensated $8.00/hour.",
"We chose this rate because it is slightly above federal minimum wage, which we take to be a fair baseline for compensation.",
"All information associated with experimental participants was anonymized prior to analysis."
] | [
"method",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"Tables store rich numerical data, but numerical reasoning over tables is still a challenge.",
"In this paper, we find that the spreadsheet formula, a commonly used language to perform computations on numerical values in spreadsheets, is valuable supervision for numerical reasoning in tables.",
"Considering large amounts of spreadsheets available on the web, we propose FORTAP , the first exploration to leverage spreadsheet formulas for table pretraining.",
"Two novel self-supervised pretraining objectives are derived from formulas, numerical reference prediction (NRP) and numerical calculation prediction (NCP).",
"While our proposed objectives are generic for encoders, to better capture spreadsheet table layouts and structures, we build FORTAP upon TUTA, the first transformer-based method for spreadsheet and web table pretraining with tree attention.",
"FORTAP outperforms state-of-the-art methods by large margins on three representative datasets of formula prediction, question answering, and cell type classification, showing the great potential of leveraging formulas for table pretraining.",
"The code will be released at https://github.com/microsoft/TUTA_ table_understanding .",
"Tables store rich numerical data, so a wide range of tasks require numerical reasoning over (semi-)structured tabular context, such as question answering over tables (Chen et al. , 2021b; Zhu et al. , 2021; Cheng et al. , 2021), table-to-text (Suadaa et al. , 2021; Moosavi et al. , 2021; Cheng et al. , 2021), spreadsheet formula prediction (Chen et al. , 2021a), and table structure understanding (Koci et al. , 2019).",
"(The first two authors contribute equally.) Take Table#2 in Figure 1 as an example: both suggesting the formula (C4-B4)/B4 for cell D4 and answering 0.61% to the question require",
"numerical reasoning capabilities of (1) understanding the contextual meaning of individual numerical cells, e.g., 11.49 at B4 and 11.56 at C4 are populations of Belgium in 2019 and 2020; (2) inferring calculational relationships of numerical cells, e.g., percentage change from 11.49 to 11.56.",
"As Figure 1 shows, same capabilities also benefit table structure recognition and table-to-text.",
"So it's a fundamental need to empower table modeling with stronger numerical reasoning capabilities.",
"However, it is challenging to endow a tabular model with robust numerical reasoning capabilities.",
"First, understanding a local numerical cell needs dimension inference (Chambers and Erwig, 2008), unit inference (Shbita et al., 2019), and index inference (Dong et al., 2019a), e.g., population (dimension), million (unit), 2020 (index), and Belgium (index) jointly describe 11.56 in Figure 1.",
"It is non-trivial concerning the great flexibility of table semantic structures (Wang et al., 2021b).",
"Second, calculational relationships among two or more numerical cells are various and often compositional, e.g., F1 Score = 2 (Recall Precision) / (Recall + Precision) in machine learning papers and Profit Margin = Net Income / Sales in financial reports.",
"To make matters more challenging, human labeling for numerical reasoning in relevant tasks (Chen et al. , 2020; Suadaa et al. , 2021; Koci et al. , 2019) is labor-intensive and error-prone, largely restricting the generalization ability of large models that are rather data-hungry.",
"Recently, table pretraining on large amount of unlabeled tables shows promising results on table understanding and reasoning.",
"Self-supervised objectives are derived from tables and text such as Masked Language Models (MLM) (Herzig et al. , 2020), masked column prediction (Yin et al. , 2020), masked entity recovery (Deng et al. , 2020b), cell cloze and corrupt detection (Wang et al. , 2021b; Tang et al. , 2020; Iida et al. , 2021), table-text matching and alignment (Wang et al. , 2021a,b; Deng et al. , 2020a).",
"However, numerical and calculational relationships of cells lack sufficient attention.",
"Then (Yoran et al. , 2021) and (Liu et al. , 2021; Yu et al. , 2020) synthesize questions and SQL queries, respectively, as training corpus for reasoning purpose, but SQL is only applicable to database-like relational tables, and importantly, it's challenging to ensure synthesized questions and SQLs be realistic, meaningful, and diverse.",
"Fortunately, tens of millions of real spreadsheet formulas are publicly available on the web and can serve as valuable supervision for numerical reasoning in tables.",
"The spreadsheet formula is an expressive yet simple language consisting of operators (e.g., +, /, %), functions (e.g., SUM, MAX, COUNT), referenced cells (e.g., B4), and constant values (e.g., 100) (Aivaloglou et al., 2015).",
"Since writing the formula does not require formal programming education, it's widely used by non-programmers such as business professionals or other kinds of domain specialists whose jobs involve computational tasks.",
"So spreadsheet formulas cover real numerical calculations in a great variety of domains.",
"To this end, we propose FORmula-driven TAble Pretraining (FORTAP ) for numerical reasoning.",
"One should master two basic concepts to use the formula language: cells as variables and operators/functions as relationships between variables.",
"So we explicitly decompose information in formulas into numerical reference and numerical calculation and devise two complementary tasks.",
"Given a table as well as a formula cell in it, we mask the formula and then (1) the model classifies whether header A references header B (we consider that header A references header B if the formula cell belonging to header A references a numerical cell belonging to header B , as illustrated in Figure 2); (2) the model predicts the operator/function of two or more referenced numerical cells.",
"Furthermore, to better encode and represent formulas, we also apply MLM to the token sequence of formulas.",
"Considering the flexibility of table structures in spreadsheets, we base FORTAP on TUTA (Wang et al. , 2021b), the first transformer-based method for spreadsheet tables with carefully-designed textual, numerical, positional, and formatting embedding layers.",
"Importantly, its tree-based position encoding and attention are highly effective in representing generally structured tables.",
"TUTA is pretrained with MLM, cell cloze, and table-text matching.",
"Experiment results on three tasks demonstrate the significance of leveraging formulas for table pretraining.",
"For formula prediction, FORTAP achieves 55.8% top-1 accuracy, significantly surpassing TUTA (48.5%), TaPEx (43.2%), and SpreadsheetCoder (40.4%) on Enron.",
"For table question answering, TUTA achieves comparable accuracy with the best system on HiTab.",
"After pretraining with formulas, FORTAP delivers a huge improvement of +6.3% over the previous SOTA, comparable to TaPEx.",
"For cell type classification on the DeEx dataset, FORTAP largely improves over TUTA by +6.6% on the derived type and +3.2% on overall Macro-F1.",
"TUTA (Wang et al. , 2021b) is the first pretraining architecture for spreadsheet tables.",
"It is effective in capturing table semantic structures, achieving SOTA results on cell type and table type classification.",
"As mentioned in Section 1, understanding table semantic structures is critical to numerical reasoning, so we choose TUTA to be the encoder of FORTAP .",
"Since our pretraining tasks are generic for encoders of tables, future works can also explore other encoders such as (Herzig et al. , 2020).",
"Header Recognition.",
"Headers usually provide short yet informative descriptions of table contents in Natural Language (NL), so TUTA leverages the detected header regions and hierarchies, as presented in Section 2.2.",
"(Chen et al. , 2021a) also shows that using headers (even without considering hierarchies) greatly helps formula prediction.",
"FORTAP follows to place detected headers in inputs.",
"Architecture.",
"TUTA is based on BERT (Devlin et al., 2019) with several enhancements: (1) a positional encoding layer based on a unified bi-dimensional coordinate tree to describe both the spatial and hierarchical information of cells; (2) a number encoding layer to encode magnitude, precision, the first digit, and the last digit; (3) a tree-based attention mechanism that enables local cells to aggregate their structurally neighbouring contexts within a tree-based distance threshold.",
"Model Input/Output.",
"The input consists of a table T and optional NL texts C.",
"By traversing the cell matrix of a table from left to right and from top to bottom, the input is linearized to [CLS], C_0, ..., C_{K-1}, [SEP], T(0,0), [SEP], T(0,1), ..., [SEP], T(M-1,N-1), where K is the token length of the NL texts, and M and N are the numbers of rows and columns of the table, respectively.",
"Note that T(i,j) refers to the token sequence of the cell string in the (i+1)-th row and (j+1)-th column, and each token has token, number, position, and format input embeddings.",
"The output of the encoder contains token-level, cell-level, and table-level embeddings.",
"FORTAP follows these input/output settings except when inputting formula token sequence.",
"Spreadsheet Source and Preprocessing.",
"We use the same spreadsheet table corpus as TUTA: (1) 13.5 million public spreadsheet files are crawled from 1.75 million websites; (2) table ranges and headers are detected using TableSense (Dong et al., 2019b,a); (3) header hierarchies are extracted with effective heuristics; (4) extreme-size tables are filtered out; (5) duplicated tables are discarded.",
"In the end, 4.5 million spreadsheet tables are left.",
"Formula Preprocessing.",
"Spreadsheet Formula is a widely-used end-user language for table organization and calculation.",
"A formula consists of four types of formula tokens: operators (e.g., +, /, %), functions (e.g., SUM), referenced cells (e.g., B4), and constant values (e.g., 100), which we denote as OP, FUNC, CELL, and CONST in the rest of the paper.",
"We use XLParser (Aivaloglou et al., 2015), a highly compatible formula parser with a compact grammar, to analyze formulas.",
"In this way, we derive the AST of each formula (an example AST in Figure 2) and the type of each formula token.",
"Since we focus on single table setting, we discard the cross-table, cross-sheet, and cross-file formulas.",
"Formulas with Array or User-Defined-Function are also discarded.",
"The absolute reference sign $ is deleted from formula strings, without changing their meanings.",
"We only keep the first five occurrences of formulas in the same row/column because some spreadsheets contain hundreds of duplicated or dragged formulas in one row/column, which are inefficient for training.",
"Formulas are linearized as formula token sequences in prefix representation of AST following SpreadsheetCoder (Chen et al. , 2021a).",
"Finally, 10.8 million formulas are derived.",
"As mentioned in Section 1, empowering table modeling with stronger numerical reasoning capabilities is a fundamental need.",
"Spreadsheet formulas naturally contain information of numerical references ( CELL ) and calculations ( OP / FUNC ), motivating us to devise effective tasks to leverage them for numerical-reasoning-aware pretraining.",
"Based on information parsed from the formula expression, we carefully devise two complementary objectives, Numerical Reference Prediction (NRP) and Numerical Calculation Prediction (NCP), to exploit the reasoning process behind referencing local cells (as operands) and applying calculations (on operands), respectively.",
"Meanwhile, to get better representations of the spreadsheet formula, which could be further used in downstream applications like formula error detection (Cheung et al., 2016), we extend MLM (Devlin et al., 2019) from NL contexts to formulas.",
"Figure 2 gives an illustration of these tasks.",
"Numerical Reference Prediction (NRP). We consider that header A references header B in a table if, in a formula, the formula cell (a cell with a formula) belonging to header A references a cell belonging to header B.",
"Take the table in Figure 2 as an example: the header %Increase references headers 2016 and 2021, since E3 in column %Increase references C3 and D3 in columns 2016 and 2021.",
"We let the model learn header reference relationships, since a cell belonging to a referenced header is more likely to be involved in the calculation.",
"This relationship is important but usually unknown a priori, especially when tables are from diverse or unfamiliar domains.",
"Note that we use header cells instead of data cells in this task since headers provide high-level descriptions of the data (Chen et al., 2021a) and thus header reference relationships have more generic semantics across tables.",
"Figure 2: Illustration of the Numerical Reference Prediction, Numerical Calculation Prediction, and Formula MLM tasks on an example table.",
"Given the extracted header regions and hierarchies from corpus preprocessing, we first formulate NRP as a binary classification task over header pairs: given a formula cell t_f and its referenced cells {t_p^(i)}, we first find their non-shared headers h_f (for t_f) and {h_p^(i)} (for {t_p^(i)}), then we group them as positive pairs {(h_f, h_p^(i))}.",
"Usually a formula cell shares a header with referenced cells in the same row/column (e.g., in Figure 2, Onion is the shared header for E3 , C3 , D3 ).",
"As it does not reflect header reference relationships, we exclude the shared header in this task.",
"The negative pairs {(h_f, h_n^(i))} are sampled among the unreferenced headers in the same direction (either top or left headers) as h_f.",
"The number of negative samples is at most 3:1 relative to positive ones to balance the samples.",
"The binary classification probability of the i-th pair is p^(i) = f(h_f, h_{p/n}^(i)), where h is the header cell embedding derived by the encoder and f(·) is a two-layer binary classification module.",
"To inject table-text joint reasoning skills into FORTAP , which TUTA does not excel at, we further extend NRP task to table-text setting.",
"Given a table with a formula cell, we first construct a formula-based prompt as context by picking 1 to 10 tokens randomly from the vocabulary as a noisy sentence and then inserting the row and column header of formula cell into it at random positions.",
"Next, we jointly input the formula-based prompt and the table, and the task is to classify (1) formula header cell, (2) formula cell, (3) reference header cell, (4) other cells from the table.",
"To precisely classify these cells, model needs to first align formula header cells in table with prompt (alignment skill), then infer the intersection cell of formula header cells as formula cell (spatial reasoning).",
"Finally, it has to identify referenced cells (numerical reasoning) by the formula headers.",
"The NRP loss L_nr is calculated as the sum of the binary cross-entropy loss and the multi-class cross-entropy loss under the table-only and table-text settings.",
"Numerical Calculation Prediction (NCP). Given data cells as operands, a model then needs to find out which operators/functions should be applied.",
"For example, in Figure 2, subtraction and division are applied on C3 and D3 in the formula.",
"We hope the model can infer the target operator/function based on the semantics, numeracy, and positions of the given operands (data cells).",
"Thus, we design the task to predict the operator/function for a group of data cells with their contextual cell embeddings produced by the encoder.",
"We formulate it as a multi-class classification task: given a formula and its AST parsed in preprocessing, we select the operators/functions {o^(i)} such that all direct children nodes {d^(j)}^(i) of o^(i) on the formula AST are of CELL type with integer or float data.",
"The probability of predicting the operator/function of these data cells is p^(i) = f(POOL({d^(j)}^(i))), where d is the output cell embedding from the encoder, f(·) is a two-layer classification module, and POOL is a mean-pooling layer.",
"Note that we only include an operator/function o whose direct children nodes are all of CELL type in this task, because otherwise some descendant data cells would first be calculated via other operators/functions and thus have indirect connections with o (e.g., in Figure 2, / is not a target operator since its left child is an operator).",
"We include 17 common calculation operators/functions (see Appendix A) covered in spreadsheet formulas in this task.",
"The NCP objective L nc is the multi-class cross entropy loss.",
"Formula MLM To encode formulas, we expand 41 tokens in the vocabulary for all four formula token types, covering 99 .",
"1% formulas in corpus.",
"Added tokens are listed in Appendix A. Note that a special case is the CELL type, like D4 , because it references another cell.",
"Since referenced cells can be anywhere in a large table, it is infeasible to explicitly insert all cell positions into the vocabulary.",
"Thus, for CELL type token in formula, we use a [RANGE] tag as input token and copy all cell-level embeddings (position, format, numeric, ...) from the referenced cell to this CELL type token.",
"We then apply MLM to formula tokens.",
"Masking and recovering operators/functions is straightforward.",
"When masking or recovering a referenced cell in a formula, we need to avoid label leakage from embeddings of the referenced cell.",
"Thus, to mask a referenced cell, besides using the [MASK] token embedding, the number embedding is set to default to mask the number, and the position and format embeddings are set to the same as the formula cell.",
"To recover a masked referenced cell t r , the cell t ( i ) in input sequence with the highest probability p ( i ) = Softmax ( f ( t r , t ( i ) )) is selected as the predicted cell, where t is output cell embedding of the encoder and f ( ) is a two-layer classification module.",
"The objective L fmlm is calculated as the sum of cross entropy loss over operator/function recovery and referenced cell recovery.",
"Finally, the total pretraining objective is L = L nr + L nc + L fmlm (1) 4 Experiments In this section, we describe the pretraining details and validate the effectiveness of FORTAP on three downstream tasks: formula prediction, question answering, and cell type classification.",
"The statistics of datasets we use are listed in Table 1.",
"We initialize FORTAP with parameters of the pretrained TUTA.",
"The input is linearized following TUTA by concatenating the text (the prompt built in NRP pretraining task) and the flattened table traversed in row order.",
"Due to memory limit, we only Dataset Enron HiTab DeEx # samples (train/dev/test) 125 k 10 .",
"place (1) header cells, (2) data cells on the same row/column of the formula cell, into the input sequence and skip the other cells.",
"Our input pattern is reasonable as a tradeoff between performance and memory since we find that more than 89% formulas only reference cells on the same row/column.",
"To match different downstream tasks, for the cell with formula, we input its formula token sequence (e.g. (C4-B4)/B4 ) with 40% probability, formula tag [FORMULA] with 30% (the number embedding is set to default) and cell literal value with 30% (e.g. number 42 . 1 ).",
"In experiments, we find it is more effective in Formula MLM to mask either all operators/functions or all referenced cells, so we implement it this way.",
"We first pretrain 400 K steps on sequence length 256 with batch size 32 , and 250 K steps on sequence length 512 with batch size 8 .",
"The whole pretraining phase takes about 4 days on 4 Tesla V100 GPUs.",
"Formula prediction (Chen et al. , 2021a) facilitates spreadsheet end-users by recommending formulas since writing formulas could be time-consuming and error-prone.",
"Given a table and a target cell in table, the task is to predict a formula for the target cell.",
"Formula prediction requires complex in-table numerical reasoning capabilities to predict both referenced cells and involved calculations.",
"Datasets.",
"Enron (Hermans and Murphy-Hill) is a massive database of public Excel Spreadsheet, containing over 17K spreadsheets with rich table structures and formula types.",
"We exclude Enron from our pretraining corpus to prevent data leakage.",
"Tables and formulas are preprocessed in the same way as the pretraining corpus.",
"We divide Enron by sheet and the final dataset contains 100 .",
"3 K / 12 .",
"3 K / 12 .",
"9 K table-formula pairs for train/dev/test.",
"The formula cell in table is regarded as the target cell and the formula is seen as the ground truth in formula prediction task.",
"We follow the evaluation metrics in SpreadsheetCoder (Chen et al. , 2021a): (1) Formula Accu-1154 racy, (2) Sketch Accuracy, (3) Range Accuracy measuring the percentage of correctly predicted formulas, formula sketches (formula using placeholder [RANGE] as referenced cells), and formula ranges (only the referenced cells of formula).",
"Previous to our work, SpreadsheetCoder evaluates formula prediction on collected Google Sheets and Enron.",
"However, we do not directly use its datasets for three reasons: (1) The Google Sheet corpus is not released, and for Enron, SpreadsheetCoder only adopts formulas referencing cells within a limited rectangular neighborhood region ( 21 20 ) of the formula cell, while we argue in real tables the referenced cells can be easily beyond this region.",
"(2) A large proportion of table headers are not properly detected (mentioned in its paper), while we adopt ranges and headers detected by TableSense (Dong et al. , 2019b) and extract table header hierarchies.",
"(3) Despite the inconsistencies above, we try to backtrack the original file to align with SpreadsheetCoder and apply our preprocessing.",
"However, the document IDs of tables in SpreadsheetCoder are mostly empty.",
"Thus, we build our dataset based on Enron and evaluate SpreadsheetCoder on it for a fair comparison.",
"Baselines.",
"We adopt SpreadsheetCoder (Chen et al. , 2021a), TaPEx (Liu et al. , 2021), and TUTA as our baselines.",
"SpreadsheetCoder is a BERT-based model for formula prediction, incorporating headers and contextual information of neighbouring cells of the target cell.",
"TaPEx is a BART-based (Lewis et al. ) table pretraining model, which implicitly learns a SQL executor.",
"Fine-tune.",
"FORTAP consumes all header cells in the table and data cells lying on the same row/column of the target cell just like the manner in pretraining, with a max sequence length, 512 .",
"The [FORMULA] tag is placed at the target cell position in input, whose number embedding is set to default.",
"A two-stage LSTM formula decoder (Dong and Lapata, 2018; Chen et al. , 2021a) accepts the formula cell embedding as input, and generates the formula by first generating formula sketches and then selecting referenced cells.",
"All models in experiments are fine-tuned 800 K steps on Enron.",
"The beam size is 5 for generating formula.",
"Since SpreadsheetCoder only published part of its code, we re-implement it in PyTorch (Paszke et al. , 2019) based on its paper.",
"Appendix B presents details about SpreadsheetCoder.",
"TaPEx is built on BART model and thus naturally supports generation task.",
"We follow the TaPEx table linearization strategy, assign the formula position in the source, and modify the target vocabulary as SpreadsheetCoder (Chen et al. , 2021a) to support generating referenced cells.",
"We use the TaPEx-base model.",
"It is fine-tuned for 30 K steps (converge at about 25 K ) and evaluated on the checkpoint with the best dev performance.",
"Results.",
"Table 2 summarizes the results of formula prediction on the test set.",
"As shown, FORTAP delivers a big improvement over SpreadsheetCoder by +15 .",
"4% and TaPEx by +12 .",
"6% on formula accuracy.",
"We deduce that TaPEx falls behind TUTA and FORTAP because (1) the learnt executor may not be suitable for formula prediction, (2) it doesn't leverage hierarchical table structures.",
"FORTAP also outperforms TUTA by +7 .",
"3% , showing formula pretraining effectively assists formula prediction.",
"We also experiment under a low-resource setting ( 20% training data), and the improvements of FORTAP are more significant, surpassing TUTA by +10 .",
"2% .",
"Since Enron is not included in our pretraining corpus, this result well indicates formula pretraining can largely benefit formula prediction after seeing large numbers of real formulas.",
"Moreover, we conjecture that formula pretraining potentially improves numerical reasoning capabilities of the model, because the two-stage prediction of formula sketches and ranges relies on numerical calculation and reference capabilities, respectively.",
"Table QA (Pasupat and Liang, 2015; Cheng et al. , 2021) contains a table and an NL question over the table as the model input.",
"Its output can be cell value(s) or number(s) calculated over numerical cell value(s).",
"Table QA calls for both in-table numerical reasoning and table-text joint reasoning.",
"Datasets.",
"There are several datasets (Pasupat and Liang, 2015; Cheng et al. , 2021; Zhu et al. , 2021; Chen et al. , 2021b) focusing on Table QA 1155 or Table-text hybrid QA.",
"We choose to evaluate on HiTab (Cheng et al. , 2021), a hierarchical web table dataset for question answering and data-to-text.",
"First, tables in HiTab contain rich table structures ( 98 . 1% tables are hierarchical) from 29 domains, posing a challenge to numerical reasoning.",
"Second, a large proportion of questions ( 40% ) from Statistical Reports demands complex numerical inference over table and text.",
"Moreover, questions in HiTab are revised from sentences written by professional analysts to ensure naturalness and meaningfulness.",
"The QA evaluation metric is Execution Accuracy measuring the percentage of correctly predicted answers.",
"Baselines.",
"We employ TaPas (Herzig et al. , 2020), HiTab model (Cheng et al. , 2021), TaPEx (Liu et al. , 2021), and TUTA as our baselines.",
"TaPas is an end-to-end table parsing model without generating logical forms, which enjoys pretraining on the large-scale table-text corpus from Wikipedia.",
"HiTab devises a hierarchy-aware logical form for hierarchical tables, and predicts the answer using a weakly supervised semantic parser MAPO (Liang et al. , 2018), which is a reinforcement learning framework to systematically explore and generate programs.",
"The question and table are encoded by BERT and the logical forms are generated by an LSTM decoder.",
"TaPEx is introduced in Section 4.2.",
"Fine-tune.",
"We replace the BERT encoder of HiTab model with TUTA and FORTAP , and follow the fine-tuning settings of HiTab.",
"We find that NRP pretrain task under table-text setting mentioned in Section 3 is quite essential for QA performance and thus pretrain 80 , 000 steps more with it on FORTAP in QA before fine-tuning.",
"For TaPEx, we adopt the same table QA strategy in its paper by inputting the table and text as source, and generating the answer as target.",
"The TaPEx-base model is trained for 20 , 000 steps on HiTab.",
"Results.",
"Table 3 summarizes QA results on HiTab.",
"FORTAP achieves SOTA ( 47 . 0% ) using MAPO as the semantic parser, surpassing the best system in HiTab paper with +6 .",
"3% .",
"Meanwhile, replacing BERT with TUTA does not see a significant performance gain.",
"We conjecture one of the reasons is that TUTA may be not skilled at table-text joint reasoning, and FORTAP enhances this skill by the table-text setting of the NRP task.",
"Finally, FORTAP performs comparatively with TaPEx, a recent pretraining tabular model as a powerful neural SQL executor targeting table reasoning.",
"Note that this (%) Development Test TaPas 39 .",
"result is inspiring since FORTAP is pretrained on spreadsheet tables and can generalize to web table domain (HiTab) with SOTA performance, indicating that the numerical reasoning skills learnt by FORTAP are robust to distinct scenarios.",
"Cell type classification (CTC) (Koci et al. , 2019; Gol et al. , 2019; Gonsior et al. , 2020) aims to interpret tabular data layouts automatically via classifying table cells by their roles in data layouts (e.g., top attribute, data, derived).",
"It requires understanding of table semantics, structures, and numerical relationships considering diverse table layouts.",
"Datasets.",
"DeEx (Koci et al. , 2019) is a widely-studied CTC dataset with tables of various structures and semantics.",
"DeEx includes tables from various domains by mixing three public corpora: Enron (Hermans and Murphy-Hill), Euses (Fisher and Rothermel, 2005), and Fuse(Barik et al. , 2015).",
"Cells in DeEx are categorized into six fine-grained types: metadata, notes, data, left attribute, top attribute , and derived .",
"The evaluation metric is the Macro-F1 score over all cell types.",
"Baselines.",
"We compare FORTAP with two learning-based methods CNNBERT (Dong et al. , 2019a) and Bi-LSTM (Gol et al. , 2019), and three table-pretraining methods TaBERT (Yin et al. , 2020), TaPas (Herzig et al. , 2020), and TUTA.",
"split tables into chunks with a max input sequence length ( 512 ) and distribute headers to each chunk.",
"For cells with formulas, [FORMULA] tags are used as input tokens.",
"We fine-tune 100 epochs on five folds and report the average scores.",
"All these settings are the same as TUTA.",
"Table 4 lists the CTC results on DeEx.",
"FORTAP achieves a SOTA Macro-F1 of 79 .",
"6% .",
"Specifically, FORTAP largely improves the performance on type derived and notes , surpassing TUTA by 6 .",
"6% and 7 .",
"5% .",
"The improvement on derived indicates formula pretraining helps identifying cells derived by calculations over some other cells.",
"Note that derived in DeEx not only includes cells with explicit formulas, but also those cells with hidden (missing) formulas (Koci et al. , 2019), which poses a great challenge to existing methods since it requires discovery of numerical relationships between cells.",
"Thus, this is a strong signal that formula pretraining endows the model with better numerical reasoning capabilities.",
"We think that the improvement on notes mainly benefits from the NRP pretraining task with formula-based prompts as the context, enhancing FORTAP 's capability on table-text joint modeling.",
"In this section, we analyze our method in terms of (1) the effects of different pretraining tasks, (2) whether and to what extent our model learns numerical reasoning skills.",
"Effects of pretraining tasks.",
"We conduct ablation studies on different pretraining tasks on the formula prediction task.",
"Here we pretrain TUTA with each pretraining task and fine-tune on Enron dataset, as summarized in Table",
"5. We can see that combining all pretraining tasks brings the most gain on formula accuracy.",
"NRP and NCP improve more on range accuracy and sketch accuracy, respectively.",
"This aligns with our design motivation that NRP targets on how to reference and NCP learns how to calculate.",
"To our surprise, Formula MLM alone also largely benefits formula prediction.",
"We deduce the reason is that both MLM and formula prediction requires encoding and recovering/generating capabilities of the formula token sequence.",
"Numerical reasoning skills.",
"We have shown our model learns numerical reasoning skills by two facts: (1) NRP and NCP improve more on the range and sketch accuracy on the formula prediction task, respectively; (2) our model boosts the (%) Formula Sketch Range TUTA 48 .",
"performance of derived cell type on cell type classification.",
"Here we further decompose QA accuracy of different operations on HiTab.",
"The comparison between previous SOTA system BERT(MAPO) and our FORTAP (MAPO) is shown in Table",
"6. As shown, our model improves most on complex cell selection (cell indexed by 3 headers) and arithmetic (e.g., difference , sum ) problems.",
"Note that complex cell selection not only requires table-text alignment, but also the references between headers considering that mentions of headers in question could be implicit or missing.",
"Meanwhile, our model also handles superlative (e.g., argmax ) and comparative (e.g., less than ) problems better than BERT, despite these types are relatively infrequent in our formula pretraining corpus.",
"To summarize, our model mainly improves numerical skills regarding cell reference and arithmetic, as well as other aspects like comparing and ranking.",
"Table Pretraining.",
"Table pretraining has been widely studied in recent years.",
"Some works mine large-scale table-text pairs as pretraining corpus (Deng et al. , 2020b; Yin et al. , 2020; Herzig et al. , 2020; Wang et al. , 2021b), some leverage annotated table-text datasets (Deng et al. , 2021; Yu et al. , 2020), and some synthesize a table-text corpus by templates (Yu et al. , 2020; Eisensch-los et al. , 2020).",
"Regarding pretraining tasks, they either train the model to recover masked to-kens/column/cell/entity (Yin et al. , 2020; Herzig et al. , 2020; Wang et al. , 2021b; Deng et al. , 2020b), or explicitly learn table-text alignments (Deng et al. , 2021; Yu et al. , 2020).",
"Recently, TaPEx (Liu et al. , 2021) adopts BART (Lewis et al. ) as a neural executor for synthesized SQLs to improve table reasoning.",
"Whereas, our method explores to use real 1157 spreadsheet formulas to guide table pretraining.",
"Numerical reasoning over Natural Language.",
"Numerical reasoning is important in NL domain (Dua et al. , 2019).",
"Numbers even account for 6 .",
"15% of all unique tokens in English Wikipedia (Thawani et al. , 2021).",
"Various works target improving numerical reasoning skills on NL (Andor et al. , 2019; Geva et al. , 2020; Jin et al. , 2021).",
"Except using pure NL, MathBERT (Peng et al. , 2021) pretrains NL documents with mathematical formulas.",
"In this paper, we target numerical reasoning over (semi-) structured tables.",
"In this paper, we present FORTAP , a numerical-reasoning-aware table pretraining model that learns numerical reasoning capabilities from spreadsheet formulas.",
"Specifically, we design two pretraining tasks to capture numerical reasoning capabilities by explicitly predicting cell reference and calculation relations.",
"Experiments show that FORTAP achieves new SOTA on formula prediction, question answering, and cell type classification.",
"Further analyses indicate that formula pretraining indeed improves numerical reasoning skills of the model.",
"One limitation of FORTAP is that we haven't fully exploit spreadsheet formulas beyond numerical reasoning.",
"For example, logic functions like VLOOKUP and text functions like LEN can be leveraged to guide complex logic and text reasoning, which will be a promising direction in the future.",
"Dataset.",
"Our pretraing corpus is built upon public English spreadsheet files crawled from webs via the search engine (Wang et al. , 2021b), covers various domains, and has been checked by a compliance team in a company to ensure that does not contain sensitive names or uniquely identifies individual people or offensive content.",
"All datasets used for evaluation are licensed public datasets, e.g., for formula prediction, Enron (Hermans and Murphy-Hill) is a public spreadsheet dataset consisting of over 17 K spreadsheet files, and we re-purpose it for formula prediction following (Chen et al. , 2021a).",
"Application.",
"Our model shows its effectiveness in three representative table-related tasks.",
"Formula prediction helps spreadsheet end-users to write formulas which could be tedious and error-prone.",
"Table QA enables users to query on the table without the need of domain background knowledge.",
"Cell type classification assists interpreting fine-grained table semantic structures, which help users to better understand table structures and contents.",
"There may be risks that crooks use tabular models to automatically parse tables/forms to obtain private personal or company data in bulk, which should be prevented."
] | [
"abstain",
"result",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Public debate forums provide a common platform for exchanging opinions on a topic of interest.",
"While recent studies in natural language processing (NLP) have provided empirical evidence that the language of the debaters and their patterns of interaction play a key role in changing the mind of a reader, research in psychology has shown that prior beliefs can affect our interpretation of an argument and could therefore constitute a competing alternative explanation for resistance to changing one's stance.",
"To study the actual effect of language use vs. prior beliefs on persuasion, we provide a new dataset and propose a controlled setting that takes into consideration two reader-level factors: political and religious ideology.",
"We find that prior beliefs affected by these reader-level factors play a more important role than language use effects and argue that it is important to account for them in NLP studies of persuasion.",
"Public debate forums provide to participants a common platform for expressing their point of view on a topic; they also present to participants the different sides of an argument.",
"The latter can be particularly important: awareness of divergent points of view allows one, in theory, to make a fair and informed decision about an issue; and exposure to new points of view can furthermore possibly persuade a reader to change his overall stance on a topic.",
"Research in natural language processing (NLP) has begun to study persuasive writing and the role of language in persuasion.",
"Tan et al. (2016) and Zhang et al. (2016), for example, have shown that the language of opinion holders or debaters and their patterns of interaction play a key role in changing the mind of a reader.",
"At the same time, research in psychology has shown that prior beliefs can affect our interpretation of an argument even when the argument consists of numbers and empirical studies that would seemingly belie misinterpretation (Lord et al., 1979; Vallone et al., 1985; Chambliss and Garner, 1996).",
"We hypothesize that studying the actual effect of language on persuasion will require a more controlled experimental setting one that takes into account any potentially confounding user-level (i.e., reader-level) factors 1 that could cause a person to change, or keep a person from changing, his opinion.",
"In this paper we study one such type of factor: the prior beliefs of the reader as impacted by their political or religious ideology.",
"We adopt this focus since it has been shown that ideologies play an important role for an individual when they form beliefs about controversial topics, and potentially affect how open the individual is to being persuaded (Stout and Buddenbaum, 1996; Goren, 2005; Croucher and Harris, 2012).",
"We first present a dataset of online debates that enables us to construct the setting described above in which we can study the effect of language on persuasion while taking into account selected user-level factors.",
"In addition to the text of the debates, the dataset contains a multitude of background information on the users of the debate platform.",
"To the best of our knowledge, it is the first publicly available dataset of debates that simultaneously provides such comprehensive information about the debates, the debaters and those voting on the debates.",
"With the dataset in hand, we then propose the novel task of studying persuasion (1) at the level of individual users, and (2) in a setting that can control for selected user-level factors, in our case, the prior beliefs associated with the political or 1 Variables that affect both the dependent and independent variables causing misleading associations.",
"religious ideology of the debaters and voters.",
"In particular, previous studies focus on predicting the winner of a debate based on the cumulative change in pre-debate vs. post-debate votes for the opposing sides (Zhang et al., 2016; Potash and Rumshisky, 2017).",
"In contrast, we aim to predict which debater an individual user (i.e., reader of the debate) perceives as more successful, given their stated political and religious ideology.",
"Finally, we identify which features appear to be most important for persuasion, considering the selected user-level factors as well as the more traditional linguistic features associated with the language of the debate itself.",
"We hypothesize that the effect of political and religious ideology will be stronger when the debate topic is Politics and Religion , respectively.",
"To test this hypothesis, we experiment with debates on only Politics or only Religion vs. debates from all topics including Music , Health , Arts , etc.",
"Our main finding is that prior beliefs associated with the selected user-level factors play a larger role than linguistic features when predicting the successful debater in a debate.",
"In addition, the effect of these factors varies according to the topic of the debate topic.",
"The best performance, however, is achieved when we rely on features extracted from user-level factors in conjunction with linguistic features derived from the debate text.",
"Finally, we find that the set of linguistic features that emerges as the most predictive changes when we control for user-level factors (political and religious ideology) vs. when we do not, showing the importance of accounting for these factors when studying the effect of language on persuasion.",
"In the remainder of the paper, we describe the debate dataset (Section 2) and the prediction task (Section 3) followed by the experimental results and analysis (Section 4), related work (Section 5) and conclusions (Section 6).",
"For this study, we collected 67 , 315 debates from debate.org 2 from 23 different topic categories including Politics , Religion , Health , Science and Music .",
"3 In addition to text of the debates, we collected 198 , 759 votes from the readers of these debates.",
"Votes evaluate different dimensions of the 2 www.debate.org 3 The dataset will be made publicly available at http://www.cs.cornell.edu/ esindurmus/.",
"To study the effect of user characteristics, we collected user information for 36 , 294 different users.",
"Aspects of the dataset most relevant to our task are explained in the following section in more detail.",
"Debate rounds.",
"Each debate consists of a sequence of ROUNDS in which two debaters from opposing sides (one is supportive of the claim (i.e., PRO ) and the other is against the claim (i.e., CON )) provide their arguments.",
"Each debater has a single chance in a ROUND to make his points.",
"Figure 1 shows an example ROUND 1 for the debate claim P RESCHOOLISA WASTEOFTIME .",
"The number of ROUNDS in debates ranges from 1 to 5 and the majority of debates ( 61 , 474 out of 67 , 315 ) contain 3 or more ROUNDS .",
"Votes.",
"All users in the debate.org community can vote on debates.",
"As shown in Figure 2, voters share their stances on the debate topic before and after the debate and evaluate the debaters' conduct, their spelling and grammar, the convincingness of their arguments and the reliability of the sources they refer to.",
"For each such dimension, voters have the option to choose one of the debaters as better or indicate a tie.",
"This fine-grained voting system gives a glimpse into the reasoning behind the voters' decisions.",
"There are two alternate criteria for determining the successful debater in a debate.",
"Our experiments consider both.",
"Criterion 1: Argument quality.",
"As shown in Figure 2, debaters get points for each dimension of the debate.",
"The most important dimension in 1036 Figure 2: An example post-debate vote.",
"that it contributes most to the point total is making convincing arguments.",
"debate.org uses Criterion 1 to determine the winner of a debate.",
"Criterion 2: Convinced voters.",
"Since voters share their stances before and after the debate, the debater who convinces more voters to change their stance is declared as the winner.",
"On debate.org , each user has the option to share demographic and private state information such as their age, gender, ethnicity, political ideology, religious ideology, income level, education level, the president and the political party they support.",
"Beyond that, we have access to information about their activities on the website such as their overall success rate of winning debates, the debates they participated in as a debater or voter, and their votes.",
"An example of a user profile is shown in Figure 3.",
"Opinions on the big issues .",
"debate.org maintains a list of the most controversial debate topics as determined by the editors of the website.",
"These are referred to as big issues .",
"4 Each user shares his stance on each big issue on his profile (see Figure 3): either PRO (in favor), CON (against), N / O (no opinion), N / S (not saying) or UND (undecided).",
"In this section, we first analyze which dimensions of argument quality are the most important for determining the successful debater.",
"Then, we analyze whether there is any connection between selected user-level factors and users' opinions on the 4 http://www.debate.org/big-issues/ Figure 3: An example of a (partial) user profile.",
"big issues to see if we can infer their opinions from these factors.",
"Finally, using our findings from these analyses, we perform the task of predicting which debater will be perceived as more successful by an individual voter.",
"Figure 4 shows the correlation between pairs of voting dimensions (in the first 8 rows and columns) and the correlation of each dimension with (1) getting more points (row or column 9) and (2) convincing more people as a debater (final row or column).",
"Abbreviations stand for (on the CON side): has better conduct ( CBC ), makes more convincing arguments ( CCA ), uses more reliable sources ( CRS ), has better spelling and grammar ( CBSG ), gets more total points ( CMTP ) and convinces more voters ( CCMV ).",
"For the PRO side we 1037 Figure 5: The representation of the BIGISSUES vector derived by this user's decisions on big issues .",
"From Figure 4, we can see that making more convincing arguments ( CCA ) correlates the most with total points ( CMTP ) and convincing more voters ( CCMV ).",
"This analysis motivates us to identify the linguistic features that are indicators of more convincing arguments.",
"We disentangle different aspects of a person's prior beliefs to understand how well each correlates with their opinions on the big issues .",
"As noted earlier, we focus here only on prior beliefs in the form of self-identified political and religious ideology.",
"Representing the big issues .",
"To represent the opinions of a user on a big issue , we use a four-dimensional one-hot encoding where the indices of the vector correspond to PRO , CON , N / O (no opinion), and UND (undecided), consecutively (1 if the user chooses that value for the issue, 0 oth-erwise).",
"Note that we do not have a representation for N / S since we eliminate users having N / S for at least one big issue for this study.",
"We then concatenate the vector for each big issue to get a representation for a user's stance on all the big issues as shown in Figure 5.",
"We denote this vector by BIGISSUES .",
"We test the correlation between the individual's opinions on big issues and the selected user-level factors in this study using two different approaches: clustering and classification.",
"Clustering the users' decisions on big issues .",
"We apply PCA on the BIGISSUES vectors of users who identified themselves as CONSERVATIVE vs. LIBERAL ( 740 users).",
"We do the same for the users who identified themselves as ATHEIST vs. CHRISTIAN ( 1501 users).",
"In Figure 6, we see that there are distinctive clusters of CONSERVATIVE vs. LIBERAL users in the two-dimensional representation .",
"while for ATHEIST vs. CHRISTIAN , the separation is not as distinct.",
"This suggests that people's opinions on the big issues identified by debate.org correlate more with their political ideology than their religious ideology.",
"Classification approach.",
"We also treat this as a classification task 5 using the BIGISSUES vectors for each user as features and the user's religious and political ideology as the labels to be predicted.",
"So the classification task is: Given the user's BIGISSUES vector, predict his political and religious ideology.",
"Table 1 shows the accuracy for each case.",
"We see that using the BIGISSUES vectors as features performs significantly better 6 than majority baseline 7 .",
"This analysis shows that there is a clear relationship between people's opinions on the big issues and the selected user-level factors.",
"It raises the question of whether it is even possible to persuade someone with prior beliefs relevant to a debate claim to change their stance on the issue.",
"It may be the case that people prefer to agree with the individuals having the same (or similar) beliefs regardless of the quality of the arguments and the 5 For all the classification tasks described in this paper, we experiment with logistic regression, optimizing the regularizer ( 1 or 2) and the regularization parameter C (between 10 5 and 10 5 ).",
"6 We performed the McNemar significance test.",
"7 The majority class baseline predicts CONSERVATIVE for political and CHRISTIAN for religious ideology for each example, respectively.",
"Some of the previous work in NLP on persuasion focuses on predicting the winner of a debate as determined by the change in the number of people supporting each stance before and after the debate (Zhang et al., 2016; Potash and Rumshisky, 2017).",
"However, we believe that studies of the effect of language on persuasion should take into account other, extra-linguistic, factors that can affect opinion change: in particular, we propose an experimental framework for studying the effect of language on persuasion that aims to control for the prior beliefs of the reader as denoted through their self-identified political and religious ideologies.",
"As a result, we study a more fine-grained prediction task: for an individual voter, predict which side/debater/argument the voter will declare as the winner.",
"Task 1 : Controlling for religious ideology.",
"In the first task, we control for religious ideology by selecting debates for which each of the two debaters is from a different religious ideology (e.g., debater 1 is ATHEIST , debater 2 is CHRISTIAN ).",
"In addition, we consider only voters that",
"(a) self-identify with one of these religious ideologies (e.g., the voter is either ATHEIST or CHRISTIAN ) and",
"(b) changed their stance on the debate claim post-debate vs. pre-debate.",
"For each such voter, we want to predict which of the PRO -side debater or the CON -side debater did the convincing.",
"Thus, in this task, we use Criterion 2 to determine the winner of the debate from the point of view of the voter.",
"Our hypothesis is that the voter will be convinced by the debater that espouses the religious ideology of the voter.",
"In this setting, we can study the factors that are important for a particular voter to be convinced by a debater.",
"This setting also provides an opportunity to understand how the voters who change their minds perceive arguments from a debater who is expressing the same vs. the opposing prior belief.",
"To study the effect of the debate topic, we perform this study for two cases debates belonging to the Religion category and then all the categories.",
"The Religion category contains debates like I S THEBIBLE AGAINST WOMEN ' S RIGHTS ? and R ELIGIOUS THEORIES SHOULD NOT BE TAUGHT IN SCHOOL .",
"We want to see how strongly a user's religious ideology affects the persuasive effect of language in such a topic as compared to the all topics.",
"We expect to see stronger effects of prior beliefs for debates on Religion .",
"Task 2: Controlling for political ideology.",
"Similar to the setting described above, Task 2 controls for political ideology.",
"In particular, we only use debates where the two debaters are from different political ideologies ( CONSERVATIVE vs. LIBERAL ).",
"In contrast to Task 1, we consider all voters that self-identify with one of the two debater ideologies (regardless of whether the voter's stance changed post-debate vs. pre-debate).",
"This time, we predict whether the voter gives more total points to the PRO side or the CON side argument.",
"Thus, Task 2 uses Criterion 1 to determine the winner of the debate from the point of view of the voter.",
"Our hypothesis is that the voter will assign more points to the debater that has the same political ideology as the voter.",
"For this task too, we perform the study for two cases debates from the Politics category only and debates from all categories.",
"And we expect to see stronger effects of prior beliefs for debates on Politics .",
"The features we use in our model are shown in Table 2.",
"They can be divided into two groups features that describe the prior beliefs of the users and linguistic features of the arguments themselves.",
"We use the cosine similarities between the voter and each of the debaters' big issue vectors.",
"These features give a good approximation of the overall similarity of two user's opinions.",
"Second, we use indicator features to encode whether the religious and political beliefs of the voter match those of each of the debaters.",
"We extract linguistic features separately for both the PRO and CON side of the debate (combining all the utterances of PRO across different turns and doing the same for CON ).",
"Table 2 contains a list of these features.",
"It includes features that carry information about the style of the language (e.g., usage of modal verbs, length, punctuation), represent different semantic aspects of the argu-1039 User-based features Description Opinion similarity.",
"Argument lexicon features include the counts for the phrases that match with the regular expressions of argumentation styles such as assessment, authority, conditioning, contrasting, emphasizing, generalizing, empathy, inconsistency, necessity, possibility, priority, rhetorical questions, desire, and difficulty.",
"We then concatenate these features to get a single feature representation for the entire debate.",
"For each of the tasks, prediction accuracy is evaluated using 5-fold cross validation.",
"We pick the model parameters for each split with 3-fold cross validation on the training set.",
"We do ablation for each of user-based and linguistic features.",
"We report the results for the feature sets that perform better than the baseline.",
"We perform analysis by training logistic regression models using only user-based features, only linguistic features and finally combining user-based and linguistic features for both the tasks.",
"Task 1 for debates in category Religion .",
"As shown in Table 3, the majority baseline (predict-ing the winner side of the majority of training examples out of PRO or CON ) gets 56 .",
"10 % accuracy.",
"User features alone perform significantly better than the majority baseline.",
"The most important user-based feature is matching religious ideology .",
"This means it is very likely that people change their views in favor of a debater with the same religious ideology.",
"In a linguistic-only features analysis, combination of the personal pronouns and connotation features emerge as most important and also perform significantly better than the majority baseline at 65 .",
"37 % accuracy.",
"When we use both user-based and linguistic features to predict, the accuracy improves to 66 .",
"42 % with connotation features.",
"An interesting observation is that including the user-based features along with the linguistic features changes the set of important linguistic features for persuasion removing the personal pronouns from the important linguistic features set.",
"This shows the importance of studying potentially confounding user-level factors.",
"Task 1 for debates in all categories.",
"As shown in Table 4, for the experiments with user-based features only, matching religious ideology and opinion similarity features are the most important.",
"For this task, length is the most predictive linguistic feature and can achieve significant improve-Accuracy Baseline Majority 57 .",
"ment over the baseline ( 61 . 01 %).",
"When we combine the language features with user-based features, we see that with exclamation mark the accuracy improves to ( 65 . 74 %).",
"Task 2 for debates in category Politics .",
"As shown in Table 5, using user-based features only, the matching political ideology feature performs the best ( 80 . 40 %).",
"Linguistic features (refer to Table 5 for the full list) alone, however, can still obtain significantly better accuracy than the baseline ( 59 . 60 %).",
"The most important linguistic features include approval , politeness , modal verbs , punctuation and argument lexicon features such as rhetorical questions and emphasizing .",
"When combining this linguistic feature set with the matching political ideology feature, we see that with the accuracy improves to ( 81 . 81 %).",
"Length feature does not give any improvement when it is combined with the user features.",
"Task 2 for debates in all categories.",
"As shown in Table 6, when we include all categories, we see that the best performing user-based feature is the opinion similarity feature ( 73 . 96 %).",
"When using language features only, length feature ( 56 . 88 %) is the most important.",
"For this setting, the best accuracy is achieved when we combine user features with length and Tf-idf features.",
"We see that the set of language features that improve the performance of user-based features do not include some of that perform significantly better than the baseline when used alone ( modal verbs and politeness features).",
"Below we provide an overview of related work from the multiple disciplines that study persuasion.",
"Argumentation mining.",
"Although most recent work on argumentation has focused on identifying the structure of arguments and extracting argument components (Persing and Ng, 2015; Palau and Moens, 2009; Biran and Rambow, 2011; Mochales and Moens, 2011; Feng and Hirst, 2011; Stab and Gurevych, 2014; Lippi and Torroni, 2015; Park and Cardie, 2014; Nguyen and Litman, 2015; Peldszus and Stede, 2015; Niculae et al., 2017; Rosenthal and McKeown, 2015), more relevant is research on identifying the characteristics of persuasive text, e.g., what distinguishes persuasive from non-persuasive text (Tan et al., 2016; Zhang et al., 2016; ? ; Habernal and Gurevych, 2016a,b; Fang et al., 2016; Hidey et al., 2017).",
"Similar to these, our work aims to understand the characteristics of persuasive text but also considers the effect of people's prior beliefs.",
"Persuasion.",
"There has been a tremendous amount of research effort in the social sciences (including computational social science) to understand the characteristics of persuasive text (Kel-man, 1961; Burgoon et al., 1975; Chaiken, 1987; Tykocinskl et al., 1994; Chambliss and Garner, 1996; Dillard and Pfau, 2002; Cialdini, 2007; Durik et al., 2008; Tan et al., 2014; Marquart and Naderer, 2016).",
"Most relevant among these Accuracy Baseline Majority 51 .",
"is the research of Tan et al. (2016), Habernal and Gurevych (2016a) and Hidey et al. (2017).",
"Tan et al. (2016) focused on the effect of user interaction dynamics and language features looking at the ChangeMyView 9 (an internet forum) community on Reddit and found that user interaction patterns as well as linguistic features are connected to the success of persuasion.",
"In contrast, Habernal and Gurevych (2016a) created a crowd-sourced corpus consisting of argument pairs and, given a pair of arguments, asked annotators which is more convincing.",
"This allowed them to experiment with different features and machine learning techniques for persuasion prediction.",
"Taking motivation from Aristotle's definition for modes of persuasion, Hidey et al. (2017) annotated claims and premises extracted from the ChangeMyView community with their semantic types to study if certain semantic types or different combinations of semantic types appear in persuasive but not in non-persuasive essays.",
"In contrast to the above, our work focuses on persuasion in debates than monologues and forum datasets and accounts for the user-based features.",
"Persuasion in debates.",
"Debates are another resource for studying the different aspects of persuasive arguments.",
"Different from monologues where the audience is exposed to only one side of the opinions about an issue, debates allow the audience to see both sides of a particular issue via a 9 https://www.reddit.com/r/changemyview/ 1042 controlled discussion.",
"There has been some work on argumentation and persuasion on online debates.",
"Sridhar et al. (2015), Somasundaran and Wiebe (2010) and Hasan and Ng (2014), for example, studied detecting and modeling stance on online debates.",
"Zhang et al. (2016) found that the side that can adapt to their opponents' discussion points over the course of the debate is more likely to be the winner.",
"None of these studies investigated the role of prior beliefs in stance detection or persuasion.",
"User effects in persuasion.",
"Persuasion is not independent from the characteristics of the people to be persuaded.",
"Research in psychology has shown that people have biases in the ways they interpret the arguments they are exposed to because of their prior beliefs (Lord et al., 1979; Vallone et al., 1985; Chambliss and Garner, 1996).",
"Understanding the effect of persuasion strategies on people, the biases people have and the effect of prior beliefs of people on their opinion change has been an active area of research interest (Correll et al., 2004; Hullett, 2005; Petty et al., 1981).",
"Eagly and Chaiken (1975), for instance, found that the attractiveness of the communicator plays an important role in persuasion.",
"Work in this area could be relevant for the future work on modeling shared characteristics between the user and the debaters.",
"To the best of our knowledge, Lukin et al. (2017) is the most relevant work to ours since they consider features of the audience on persuasion.",
"In particular, they studied the effect of an individual's personality features (open, agreeable, extrovert, neurotic, etc.) on the type of argument (factual vs. emotional) they find more persuasive.",
"Our work differs from this work since we study debates and in our setting the voters can see the debaters' pro-files as well as all the interactions between the two sides of the debate rather than only being exposed to a monologue.",
"Finally, we look at different types of user profile information such as a user's religious and ideological beliefs and their opinions on various topics.",
"In this work we provide a new dataset of debates and a more controlled setting to study the effects of prior belief on persuasion.",
"The dataset we provide and the framework we propose open several avenues for future research.",
"One could explore the effect different aspects of people's background (e.g., gender, education level, ethnicity) on persuasion.",
"Furthermore, it would be interesting to study how people's prior beliefs affect their other activities on the website and the language they use while interacting with people with the same and different prior beliefs.",
"Finally, one could also try to understand in what aspects and how the language people with different prior beliefs/backgrounds use is different.",
"These different directions would help people better understand characteristics of persuasive arguments and the effects of prior beliefs in language.",
"This work was supported in part by NSF grant SES-1741441 and DARPA DEFT Grant FA8750-13-2-0015.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF, DARPA or the U.S. Government.",
"We thank Yoav Artzi, Faisal Ladhak, Amr Sharaf, Tianze Shi, Ashudeep Singh and the anonymous reviewers for their helpful feedback.",
"We also thank the Cornell NLP group for their insightful comments."
] | [
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"result",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"method",
"abstain",
"result",
"result",
"result",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"We present a simple but effective method for aspect identification in sentiment analysis.",
"Our unsupervised method only requires word embeddings and a POS tagger, and is therefore straightforward to apply to new domains and languages.",
"We introduce Contrastive Attention (CAt ), a novel single-head attention mechanism based on an RBF kernel, which gives a considerable boost in performance and makes the model interpretable.",
"Previous work relied on syntactic features and complex neural models.",
"We show that given the simplicity of current benchmark datasets for aspect extraction, such complex models are not needed.",
"The code to reproduce the experiments reported in this paper is available at https://github.com/clips/cat .",
"We consider the task of unsupervised aspect extraction from text.",
"In sentiment analysis, an aspect can intuitively be defined as a dimension on which an entity is evaluated (see Figure 1).",
"While aspects can be concrete (e.g., a laptop battery), they can also be subjective (e.g., the loudness of a motorcycle).",
"Aspect extraction is an important subtask of aspect-based sentiment analysis.",
"However, most existing systems are supervised (for an overview, cf. Zhang et al., 2018).",
"As aspects are domain-specific, supervised systems that rely on strictly lexical cues to differentiate between aspects are unlikely to transfer well between different domains (Rietzler et al., 2019).",
"Another reason to consider the unsupervised extraction of aspect terms is the scarcity of training data for many domains (e.g., books), and, more importantly, the complete lack of training data for many languages.",
"Unsupervised aspect extraction has previously been attempted with topic models (Mukherjee and Liu, 2012), topic model hybrids (Garca-Pablos et al., 2018), and reThe two things that really drew me to vinyl were the expense and the inconvenience .",
"stricted Boltzmann machines (Wang et al., 2015), among others.",
"Recently, autoencoders using attention mechanisms (He et al., 2017; Luo et al., 2019) have also been proposed as a method for aspect extraction, and have reached state of the art performance on a variety of datasets.",
"These models are unsupervised in the sense that they do not require labeled data, although they do rely on unlabeled data to learn relevant patterns.",
"In addition, these are complex neural models with a large number of parameters.",
"We show that a much simpler model suffices for this task.",
"We present a simple unsupervised method for aspect extraction which only requires a POS tagger and in-domain word embeddings, trained on a small set of documents.",
"We introduce a novel single-head attention mechanism, Contrastive At-the bread is top notch as well .",
"tention (CAt ), based on Radial Basis Function (RBF) kernels.",
"Compared to conventional attention mechanisms (Weston et al., 2014; Sukhbaatar et al., 2015), CAt captures more relevant information from a sentence.",
"Our method outperforms more complex methods, e.g., attention-based neural networks (He et al., 2017; Luo et al., 2019).",
"In addition, our method automatically assigns aspect labels, while in previous work, labels are manually assigned to aspect clusters.",
"Finally, we present an analysis of the limitations of our model, and propose some directions for future research.",
"Like previous methods (Hu and Liu, 2004; Xu et al., 2013), our method (see Figure",
"2) consists of two steps: extraction of candidate aspect terms and assigning aspect labels to instances.",
"Both steps assume a set of in-domain word embeddings, which we train using word2vec (Mikolov et al., 2013).",
"We use a small set of in-domain documents, containing about 4 million tokens for the restaurant domain.",
"Step 1: aspect term extraction In previous work (Hu and Liu, 2004; Xu et al., 2013), the main assumption has been that nouns that are frequently modified by sentiment-bearing adjectives (e.g., good, bad, ugly) are likely to be aspect nouns.",
"We experimented with this notion and devised a labeling strategy in which aspects are extracted based on their co-occurrence with seed adjectives.",
"However, during experimentation we found that for the datasets in this paper, the most frequent nouns were already good aspects; any further constraint led to far worse performance on the development set.",
"This means that our method only needs a POS tagger to recognize nouns, not a full-fledged parser.",
"Throughout this paper, we use spaCy (Honni-bal and Montani, 2017) for tokenization and POS tagging.",
"In Section 5, we investigate how these choices impact performance.",
"Step 2: aspect selection using Contrastive Attention We use a simple of form of attention, similar to the attention mechanism used in memory networks (Weston et al., 2014; Sukhbaatar et al., 2015).",
"With an attention mechanism, a sequence of words, e.g., a sentence or a document, is embedded into a matrix S , which is operated on with an aspect a to produce a probability distribution, att .",
"Schematically: att = softmax( aS ) (1) att is then multiplied with S to produce an informative summary with respect to the aspect a : d = (cid:88) i att i S i (2) Where d is the weighted sentence summary.",
"There is no reason to restrict a to be a single vector: when replaced by a matrix of queries, A , the equation above gives a separate attention distribution for each aspect, which can then be used to create different summaries, thereby keeping track of different pieces of information.",
"In our specific case, however, we are interested in tracking which words elicit aspects, regardless of the aspect to which they belong.",
"We address this by introducing Contrastive Attention (CAt ), a way of calculating attention that integrates a set of query vectors into a single attention distribution.",
"It uses an RBF kernel, which is defined as follows: rbf( x, y, ) = exp( || x y || 22 ) (3) where, x and y are vectors, and is a scaling factor, which we treat as a hyperparameter.",
"An important aspect of the RBF kernel is that it turns an arbitrary unbounded distance, the squared eu-clidean distance in this case, into a bounded similarity.",
"For example, regardless of , if x and y have a distance of 0, their RBF response will be 1.",
"As their distance increases, their similarity decreases, and will eventually asymptote towards 0, depending on .",
"Given the RBF kernel, a matrix S , and a set of aspect vectors A , attention is calculated as follows: att = (cid:80) a A rbf( w, a, ) (cid:80) w S (cid:80) a A rbf( w, a, ) (4) The attention for a given word is thus the sum of the RBF responses of all vectors in A , divided by the sum of the RBF responses of the vectors to all vectors in S .",
"This defines a probability distribution over words in the sentence or document, where words that are, on average, more similar to aspects, get assigned a higher score.",
"Step 3: assigning aspect labels After reweighing the word vectors, we label each document based on the cosine similarity between the weighted document vector d and the label vector.",
"Where C is the set of labels, i.e., { FOOD , AMBIENCE , STAFF } .",
"In the current work, we use word embeddings of the labels as the targets.",
"This avoids the inherent subjectivity of manually assigning aspect labels, the strategy employed in previous work (He et al., 2017; Luo et al., 2019).",
"We use several English datasets of restaurant reviews for the aspect extraction task.",
"All datasets have been annotated with one or more sentence-level labels, indicating the aspect expressed in that sentence (e.g., the sentence The sushi was great would be assigned the label FOOD ).",
"We evaluate our approach on the Citysearch dataset (Ganu et al., 2009), which uses the same labels as the SemEval datasets.",
"To avoid optimizing for a single corpus, we use the restaurant subsets of the SemEval 2014 (Pontiki et al., 2014) and SemEval 2015 (Pontiki et al., 2015) datasets as development data.",
"Note that, even though our method is completely unsupervised, we explicitly allocate test data to ensure proper methodological soundness, Method P R F Aspect: FOODSERBM (2015) 89.1 85.4 87.2 ABAE (2017) 95.3 74.1 82.8 W2VLDA (2018) 96.0 69.0 81.0 AE-CSA (2019) 90.3 92.6 91.4 Mean 92.4 73.5 85.6 Attention 86.7 89.5 88.1 CAt 91.8 92.4 92.1 Aspect: STAFFSERBM (2015) 81.9 58.2 68.0 ABAE (2017) 80.2 72.8 75.7 W2VLDA (2018) 61.0 86.0 71.0 AE-CSA (2019) 92.6 75.6 77.3 Mean 55.8 85.7 67.5 Attention 74.4 69.3 71.8 CAt 82.4 75.6 78.8 Aspect: AMBIENCESERBM (2015 80.5 59.2 68.2 ABAE (2017) 81.5 69.8 74.0 W2VLDA (2018) 55.0 75.0 64.0 AE-CSA (2019) 91.4 77.9 77.0 Mean 58.7 56.1 57.4 Attention 67.1 65.7 66.4 CAt 76.6 80.1 76.6 Table 3: Precision, recall, and F-scores on the test set of the Citysearch dataset.",
"and do not optimize any models on the test set.",
"Following previous work (He et al., 2017; Ganu et al., 2009), we restrict ourselves to sentences that only express exactly one aspect; sentences that express more than one aspect, or no aspect at all, are discarded.",
"Additionally, we restrict ourselves to three labels: FOOD , SERVICE , and AMBIENCE .",
"We adopt these restrictions in order to compare to other systems.",
"Additionally, previous work (Brody and Elhadad, 2010) reported that the other labels, ANECDOTES and PRICE , were not reliably annotated.",
"Table 1 shows statistics of the datasets.",
"We optimize all our models on SemEval '14 and '15 training data; the scores on the Citysearch dataset do not reflect any form of optimization with regards to performance.",
"We optimize the hyperpa-rameters of each model separately (i.e., the number of aspect terms and of the RBF kernel), leading to the following hyperparameters: For the regular attention, we select the top 980 nouns as aspect candidates.",
"200 nouns and a of .03.",
"We compare our system to four other systems.",
"W2VLDA (Garca-Pablos et al., 2018) is a topic modeling approach that biases word-aspect associations by computing the similarity from a word to a set of aspect terms.",
"SERBM (Wang et al., 2015) a restricted Boltzmann Machine (RBM) that learns topic distributions, and assigns individual words to these distributions.",
"In doing so, it learns to assign words to aspects.",
"We also compare our system to two attention-based systems.",
"First, ABAE (He et al., 2017), which is an auto-encoder that learns an attention distribution over words in the sentence by simultaneously considering the global context and aspect vectors.",
"In doing so, ABAE learns an attention distribution, as well as appropriate aspect vectors.",
"Second, AE-CSA (Luo et al., 2019), which is a hierarchical model which is similar to ABAE.",
"In addition to word vectors and aspect vectors, this model also considers sense and sememe (Bloom-field, 1926) vectors in computing the attention distribution.",
"Note that all these systems, although being unsupervised, do require training data, and need to be fit to a specific domain.",
"Hence, all these systems rely on the existence of in-domain training data on which to learn reconstructions and/or topic distributions.",
"Furthermore, much like our approach, ABAE, AE-CSA, and W2VLDA rely on the availability of pre-trained word embeddings.",
"Additionally, AE-CSA needs a dictionary of senses and sememes, which might not be available for all languages or domains.",
"Compared to other systems, our system does require a UD POS tagger to extract frequent nouns.",
"However, this can be an off-the-shelf POS tagger, since it does not need to be trained on domain-specific data.",
"We also compare our system to a baseline based on the mean of word embeddings, a version of our system using regular attention, and a version of our system using Contrastive Attention (CAt ).",
"The results are shown in Table 3.",
"Because of class imbalance (60% of instances are labeled FOOD ), the F-scores in Table 3 do not give a representative picture of model performance.",
"Therefore, we also report weighted macro-averaged scores in Table 2.",
"Our system outperforms ABAE, AE-CSA, and the other systems, both in weighted macro-average F1 score, and on the individual aspects.",
"In addition, Table 2 shows that the difference between ABAE and SERBM is smaller than one would expect based on the F1 scores on the labels, on which ABAE outperforms SERBM on STAFF and AMBIENCE .",
"(Figure 4: A learning curve on the restaurant data, averaged over 5 embedding models.)",
"The Mean model still performs well on this dataset, while it does not use any attention or knowledge of aspects.",
"This implies that aspect knowledge is probably not required to perform well on this dataset; focusing on lexical semantics is enough.",
"We perform an ablation study to see the influence of each component of our system; specifically, we look at the effect of POS tagging, in-domain word embeddings, and the amount of data on performance.",
"Only selecting the most frequent words as aspects, regardless of their POS tag, had a detrimental effect on performance, giving an F-score of 64.5 (-21.9), while selecting nouns based on adjective-noun co-occurrence had a smaller detrimental effect, giving an F-score of 84.4 (-2.2), higher than ABAE and SERBM.",
"Replacing the in-domain word embeddings trained on the training set with pretrained GloVe embeddings (Pennington et al., 2014) 1 had a large detrimental effect on performance, dropping the F-score to 54.4 (-32); this shows that in-domain data is important.",
"To investigate how much in-domain data is required to achieve good performance, we perform a learning curve experiment (Figure 4).",
"We increase the training data in 10% increments, training five word2vec models at each increment.",
"As the figure shows, only a modest amount of data (about 260k sentences) is needed to tackle this specific dataset.",
"(Footnote 1: Specifically, the glove.6B.200D vectors from https://nlp.stanford.edu/projects/glove/ ) (Table 4: A categorization of observed error types. Phenomenon / Example: OOV / 'I like the Somosas'; Data Sparsity / 'great Dhal'; Homonymy / 'Of course'; Verb > Noun / 'Waited for food'; Discourse / 'She didn't offer dessert'; Implicature / 'No free drink'.)",
"To further investigate the limits of our model, we perform a simple error analysis on our best performing model.",
"Table 4 shows a manual categorization of error types.",
"Several of the errors relate to Out-of-Vocabulary (OOV) or low-frequency items, such as the words 'Somosas' (OOV) and 'Dhal' (low frequency).",
"Since our model is purely based on lexical similarity, homonyms and polysemous words can lead to errors.",
"An example of this is the word 'course,' which our model interprets as being about food.",
"As the aspect terms we use are restricted to nouns, the model also misses aspects expressed in verbs, such as 'waited for food.'",
"Finally, discourse context and implicatures often lead to errors.",
"The model does not capture enough context or world knowledge to infer that 'no free drink' does not express an opinion about drinks, but about service.",
"Given these errors, we surmise that our model will perform less well in domains in which aspects are expressed in a less overt way.",
"For example, consider the following sentence from a book review (Kirkus Reviews, 2019): (1) As usual, Beaton conceals any number of surprises behind her trademark wry humor.",
"This sentence touches on a range of aspects, including writing style, plot, and a general opinion on the book that is being reviewed.",
"Such domains might also require the use of more sophisticated aspect term extraction methods.",
"However, it is not the case that our model necessarily overlooks implicit aspects.",
"For example, the word cheap often signals an opinion about the price of something.",
"As the embedding of the word cheap is highly similar to that of price, our model will attend to cheap as long as enough price-related terms are in the set of extracted aspect terms of the model.",
"In the future, we would like to address the limitations of the current method, and apply it to datasets with other domains and languages.",
"Such datasets exist, but we have not yet evaluated our system on them due to the lack of sufficient unannotated in-domain data in addition to annotated data.",
"Given the performance of CAt , especially compared to regular dot-product attention, it would be interesting to see how it performs as a replacement of regular attention in supervised models, e.g., memory networks (Weston et al., 2014; Sukhbaatar et al., 2015).",
"Additionally, it would be interesting to see why the attention model outperforms regular dot product attention.",
"Currently, our understanding is that the dot-product attention places a high emphasis on words with a higher vector norm; words with a higher norm have, on average, a higher inner product with other vectors.",
"As the norm of a word embedding directly relates to the frequency of this word in the training corpus, the regular dot-product attention naturally attends to more frequent words.",
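The norm effect described above can be illustrated with a small, deterministic toy example (all vectors are made-up illustrations, not learned embeddings):

```python
import numpy as np

def attention_mass(word, context):
    """Total unnormalized softmax attention a word collects from a context:
    the sum of exp(dot product) over all context vectors."""
    return np.exp(context @ word).sum()

# A symmetric context (each vector paired with its negation) makes the
# comparison deterministic: exp(a) + exp(-a) = 2*cosh(a) grows with |a|.
v = np.array([1.0, 0.5, -0.2, 0.3])
context = np.stack([v, -v, 2 * v, -2 * v])

base = np.array([0.4, -0.1, 0.2, 0.5])
frequent = 3.0 * base   # high-norm stand-in for a frequent word
rare = 0.3 * base       # low-norm stand-in for a rare word

# The higher-norm vector collects strictly more attention mass here.
more = attention_mass(frequent, context) > attention_mass(rare, context)
```

Because cosh is increasing in the magnitude of its argument, scaling a word vector up always increases its attention mass in this symmetric setting, mirroring the frequency bias described above.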
"In a network with trainable parameters, such as ABAE (He et al., 2017), this effect can be mitigated by finetuning the embeddings or other weighting mechanisms.",
"In our system, no such training is available, which can explain the suitability of CAt as an unsupervised aspect extraction mechanism.",
"We present a simple model of aspect extraction that uses a frequency threshold for candidate selection, a novel attention mechanism based on RBF kernels, and an automated aspect assignment method.",
"We show that for the task of assigning aspects to sentences in the restaurant domain, the RBF kernel attention mechanism outperforms a regular attention mechanism, as well as more complex models based on auto-encoders and topic models.",
"We are grateful to the three reviewers for their feedback.",
"The first author was sponsored by a Fonds Wetenschappelijk Onderzoek (FWO) aspirantschap."
] | [
"method",
"objective",
"objective",
"abstain",
"result",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"other"
] |
[
"Graph Convolutional Networks (GCNs) are a class of spectral clustering techniques that leverage localized convolution filters to perform supervised classification directly on graphical structures.",
"While such methods model nodes' local pairwise importance, they lack the capability to model global importance relative to other nodes of the graph.",
"This causes such models to miss critical information in tasks where global ranking is a key component for the task, such as in keyphrase extraction.",
"We address this shortcoming by allowing the proper incorporation of global information into the GCN family of models through the use of scaled node weights.",
"In the context of keyphrase extraction, incorporating global random walk scores obtained from TextRank boosts performance significantly.",
"With our proposed method, we achieve state-of-the-art results, bettering a strong baseline by an absolute 2% increase in F 1 score.",
"Learning directly on a graphical structure is a crucial requirement in many domains.",
"These graphs represent information in many forms, ranging from interconnected user groups to contextually linked documents to a central document by shared vocabulary.",
"Learning on graphs has been studied extensively in the form of spectral clustering (Ng et al., 2002).",
"The potential of learning directly on graphs has been realized in semi-supervised settings where labels for only a few of the nodes are available.",
"Some prior work formulates such setup as propagating the label information using some form of graph-based regularization (Kipf and Welling, 2016).",
"Recently proposed works have updated such methods to be end-to-end learnable in the deep learning style by employing gradient descent on nodes within a fixed neighborhood, approximating spectral clustering's means of approximating the graph's eigenvectors (Bronstein et al., 2017) by aggregating neighborhood features.",
"Recent advancements in normalizing the gradient range further improve the efficiency of such solutions (Kipf and Welling, 2016).",
"However, these techniques can only exploit local features within the neighborhood of individual nodes.",
"For some tasks, such simplified local feature aggregation may be sufficient, but it is insufficient for tasks that need global relative importance information.",
"One such important graph-based task is keyphrase extraction.",
"In this task, individual words or phrases serve as graph nodes, and edges represent some form of co-occurrence.",
"Keyphrase extraction has been extensively studied, in both supervised (classification) and unsupervised (rank-ing) modes.",
"Depending on the length of the text and the final application of the task, solutions can be sample-based classification, pairwise ranking or sequential labeling.",
"For example, Kim et al. (2010) explore the case of extracting top keyphrases from complete documents for downstream indexing,",
"while Augenstein et al. (2017) connect its usage for knowledge base generation, aiming to extract all plausible keyphrases within a short excerpt.",
"Treating a full-text scenario is arguably more challenging than the treatment of an excerpt scenario, as it requires the understanding of the much larger scale of text and extracting its most salient aspects.",
"Traditional supervised models employ a host of hand-engineered features (tf.idf, candidate length, POS tags, sectional information, and frequency, among others; Kim et al., 2013; Hasan and Ng, 2010), trained with a wide range of classifiers.",
"As they typically model the task as a binary classification task (i.e., keyphrase vs. non-keyphrase), they suffer severely from class imbalance, as keyphrases are the exception among most plausible candidates.",
"Unsupervised methods use co-occurrence as a signal for the labels.",
"Under the hypothesis that keyphrase saliency is strongly correlated with repetition, graphical methods for unsupervised keyphrase extraction employ centrality measures and random walk techniques to rank prospective keyphrases (Mihalcea and Tarau, 2004).",
"This hypothesis is widely exploited, with proposed extensions further enriching the graph by incorporating topic, section and/or position information (Florescu and Caragea, 2017b,a; Jiang et al., 2018), among other forms of side information.",
"With these in mind, we make two important observations about the existing keyphrase extraction techniques: In the supervised setting, word importance is captured in metrics and engineered features, as are local random walk scores.",
"However, the structure of the graph formed by the text is not exploited.",
"In the unsupervised setting, most techniques do not tightly incorporate the rich semantic features common in the supervised setting.",
"Furthermore, random walk scores are used as-is, without the capability of being fine-tuned by downstream supervision.",
"From this dichotomy, we see there is a gap to close in merging the advantages of both.",
"We propose a Glocal (a global-local portmanteau) technique which incorporates both components directly over the wordgraph.",
"Specifically, we contribute a neural model that elegantly incorporates the random walk scores, while incorporating parameters to fit keyphrase labels.",
"To the best of our knowledge, our model is the only supervised full-text keyphrase extraction model that operates directly on the wordgraph.",
"Our work draws motivation from the introduction of random walks to NLP.",
"In this regard, TextRank (Mihalcea and Tarau, 2004), is a central representative work that serves as a basis for many text extraction modeling techniques used in entity extraction and extractive summarization (we use random walk and TextRank interchangeably in this paper).",
"The success of the application of such random walks in text extraction is based on the hypothesis that the important nodes aggregate more mass and are thereby representative of the graph as a whole.",
"Importantly, TextRank can be viewed as a ranking model, as it induces a ranking of graph nodes via its centrality calculation.",
"However, as noted, supervised techniques that properly incorporate this information natively within the model have, in our opinion, yet to be explored.",
"Recently, neural models have been developed to work on graphs.",
"These models port the ideas of spectral clustering into the deep learning modality, allowing direct computation on graphs.",
"Methods such as Graph Convolution Network (or GCN) can then be applied to many different task scenarios which natively feature graphical structures such as citation and community graphs, which are common in the database, information retrieval, and digital library domains (Kipf and Welling, 2016; Hamilton et al., 2017).",
"They enrich the graph by aggregating features with information gathered from the neighborhood of the node to be classified.",
"Enhancements to the model introduce more sophisticated local information aggregation between node pairs, as in Graph Attention Networks (GAT; Velickovic et al., 2018).",
"However, we note that such prior methods fall inherently into the classification paradigm, and hence focus on only local aggregation; i.e., to pull in the most significant feature from its neighbors.",
"In the context of keyphrase extraction, Zhang et al. (2017) is a recent work that learns directly on the graph.",
"Their method, MIKE, determines the weight of edges and nodes in a supervised manner, rather than just utilizing co-occurrence statistics.",
"Their work features 5 orthogonal features, one of which is topic distribution.",
"They consider the prominence of the tokens per topic as the surrogate for ranking, utilized for model training, by minimizing the difference in predicted and gold-standard rank between iterations.",
"MIKE can be employed only when topic information is available, but unfortunately, does not generalize to the more common case where only gold-standard keyphrases are available for training.",
"In the Augenstein et al. (2017) benchmarking, the state-of-the-art sequential labeling models, deep learning with rich semantic embeddings and statistical with handcrafted features, used LSTM (Ammar et al., 2017) and CRF (Prasad and Kan, 2017) models, respectively.",
"Meng et al. (2017) use an encoder-decoder model with a copy mechanism for keyphrase extraction (as a special case of generation).",
"(Figure 1: Graph Convolution Model Architectures.)",
"This state-of-the-art technique exploits a complementary idea of sequential semantic modeling focused on generating keyphrases rather than merely extracting them.",
"However, their model does not address the common scenario of keyphrase extraction from long documents but only for short excerpts (namely, the abstract).",
"This assumption reduces the complexity of the problem for sequential models that can effectively encode short text spans but may be ineffective on full-text.",
"We suspect this is a current limitation of the encoder-decoder based models, which necessarily reduce the entire textual sequence into a single vector during the encoding stage, making them susceptible to vanishing gradients and representation underfitting on large texts.",
"Further advances using the encoder-decoder framework, such as Chen et al. (2018), further explore sequential modeling architectures by improving the attention mechanism with traditional features like title guidance.",
"Note that many forms of such structural information, such as sectional information and citation graph co-occurrence, can enhance basic models; however, without loss of generality, in this work we consider only text-based features for all the models.",
"Our proposed model exploits the strength of both supervised and unsupervised modalities by combining two baseline models.",
"Our Glocal model has two components: For learning directly over the graph (classi-fication), we use the recently proposed GCN as our baseline model.",
"For incorporating global importance (rank-ing), we use TextRank as our baseline model.",
"We will first introduce the preliminaries, the Graph Convolution Network (GCN), followed by the modifications that result in the Graph Attention Network (GAT).",
"We then explain how we modify the local convolution operation, to incorporate global importance scores.",
"Given a graph $G = (V, A)$, where $V$ is a finite set of vertices such that $|V| = n$ and $A \in \mathbb{R}^{n \times n}$ is an undirected weighted adjacency matrix representing all edges, assume $x: V \to \mathbb{R}^n$ maps each node to $x_i$, which is an $n$-dimensional feature vector.",
"Spectral filtering on a signal $x$ is then represented as (Defferrard et al., 2016): $y = g_\theta(L)x = U g_\theta(\Lambda) U^T x$, (1) where $L = I_n - D^{-1/2} A D^{-1/2}$, $I_n$ is the identity matrix, and $D_{ii} = \sum_j A_{ij}$.",
"Further, parameterization and simplification of the filter by (Hammond et al., 2011) results in: $g_\theta(\Lambda) \approx \sum_{k=0}^{K-1} \theta_k T_k(\tilde{\Lambda})$, (2) where the Chebyshev polynomial $T_k(x)$ is computed recursively.",
"Note that Eqn. 2 is $K$-localized; i.e., it depends only on nodes that are at most $K$ hops away from the target node.",
"A linear model can approximate this (Kipf and Welling, 2016), resulting in the simplified form: $y \approx \theta'_0 x + \theta'_1 (L - I_N)x = \theta'_0 x - \theta'_1 D^{-1/2} A D^{-1/2} x$, (3) with two free, tunable parameters $\theta'_0$ and $\theta'_1$.",
"Constraining $\theta = \theta'_0 = -\theta'_1$ further simplifies the approximation to a single-parameter form: $y \approx \theta \left( I_N + D^{-1/2} A D^{-1/2} \right) x$. (4)",
"Note that $I_N + D^{-1/2} A D^{-1/2}$ has eigenvalues in the range $[0, 2]$, so a re-normalization trick was proposed previously, $I_N + D^{-1/2} A D^{-1/2} \rightarrow \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$, with $\tilde{A} = A + I_N$ and $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$, to keep gradients stable.",
"This formulation (GCN) supports the following layer-wise propagation rule: $H^{(l+1)} = \sigma\left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} \right)$. (5)",
"Here, $\tilde{A} = A + I_N$ is the adjacency matrix of the undirected graph $G$ with added self-connections and $W^{(l)}$ is the layer-specific set of weight parameters.",
"$\sigma$ denotes an activation function; in the GCN model, ReLU is preferred.",
"$H^{(l)}$ is the matrix of activations in the $l$-th layer; $H^{(0)} = X$.",
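The propagation rule of Eqn. 5 can be sketched in a few lines of NumPy (a toy illustration of the renormalized update, not the authors' implementation):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step:
    H^{l+1} = ReLU(D~^{-1/2} A~ D~^{-1/2} H^l W^l)."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                    # add self-connections: A~ = A + I
    d = A_tilde.sum(axis=1)                    # D~_ii = sum_j A~_ij
    D_inv_sqrt = np.diag(d ** -0.5)
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # renormalized adjacency
    return np.maximum(0, A_hat @ H @ W)        # ReLU activation

# Toy graph: 3 nodes in a path, 4-dim features, 2 hidden units.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.ones((3, 4))
W = np.full((4, 2), 0.5)
out = gcn_layer(A, H, W)
```

Each output row mixes a node's own features with those of its immediate neighbors, which is exactly the local aggregation the text describes.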
"Unfortunately, the expressive power represented by $\tilde{A}$ is also its biggest limitation: the model only incorporates features from the neighborhood, weighted by the connecting edges normalized to unit sum (cf. Fig. 1, left).",
"To address this, the Graph Attention Network model (Velickovic et al., 2018) incorporates learnable scaling for each edge.",
"This introduces a local function $attn: \mathbb{R}^{F'} \times \mathbb{R}^{F'} \to \mathbb{R}$, $a_{ij} = attn(W\vec{h}_i, W\vec{h}_j)$, (6) that computes a score (attention) per node pair (or edge, inclusive of self-loops).",
"Here, the attn operator is a single feed-forward layer employing Leaky ReLU activation.",
"This attention is normalized in GAT as: $\alpha_{ij} = \frac{\exp(a_{ij})}{\sum_{k \in N_i} \exp(a_{ik})}$, (7) to smooth the gradients.",
"This allows the model to scale each node pair by the gradient (Fig. 1, center).",
"The normalized attention coefficients are used to compute a linear combination of the features in a node neighborhood $N_i$, yielding the equivalent layer-wise propagation rule: $\vec{h}^{(l+1)}_i = \sum_{j \in N_i} \alpha_{ij} W \vec{h}^{(l)}_j$. (8)",
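A minimal single-head sketch of Eqns. 6 to 8 (dense loops for clarity; real implementations vectorize this, and the feed-forward `attn` is reduced here to a single weight vector `a` as in the GAT formulation):

```python
import numpy as np

def gat_layer(H, W, a, adj):
    """Single-head GAT sketch: pairwise scores via a shared weight vector `a`
    over concatenated transformed features, LeakyReLU, per-neighborhood
    softmax, then a weighted combination of transformed neighbor features."""
    Wh = H @ W                                    # transform node features
    n = Wh.shape[0]
    scores = np.full((n, n), -np.inf)             # -inf masks non-edges
    for i in range(n):
        for j in range(n):
            if adj[i, j] or i == j:               # edges plus self-loops
                s = a @ np.concatenate([Wh[i], Wh[j]])
                scores[i, j] = s if s > 0 else 0.2 * s   # LeakyReLU
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = e / e.sum(axis=1, keepdims=True)      # Eqn. 7 softmax per node
    return alpha @ Wh                             # Eqn. 8 aggregation

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
H = np.arange(12.0).reshape(3, 4)
W = np.eye(4, 2)
a = np.ones(4) * 0.1
out = gat_layer(H, W, a, adj)
```

Because each row of `alpha` sums to one, every output is a convex combination of the transformed features in the node's neighborhood.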
"In terms of Eqn. 5, the fixed weights in the adjacency are replaced with learned weights; equivalently, the hard neighborhood is replaced by a soft neighborhood.",
"Since all edge weights are parameterized, a number ($T$) of random initializations learn different representations, resulting in different scalings for multiple linear combinations.",
"A final result is achieved by either concatenating or averaging the multiple representations, as formulated in Eqns. 9 and 10, respectively.",
"In both GCN and GAT, a model with K layers incorporates the feature for a node up to its K hop neighbors.",
"Though GAT improves upon GCN by assigning different importance to nodes via learned weights as compared to the static edge weight in GCN, it is still a local computation.",
"The attention factor, i.e., the scaling coefficients $\alpha_{ij}$, is a function of pairwise feature interactions within the local neighborhood and accounts for neither node centrality nor the global graph structure.",
"We fix this with our Glocal model.",
"Consider the random-walk-based score generated for the graph $G$ such that: $\beta_j = \mathrm{TextRank}(j)$.",
"We introduce this parameter $\beta_j$ to the GAT model.",
"Considering this as the global importance component of the node, we obtain two alternative formulations that encode the node importance in either an additive or a multiplicative form: $\vec{h}^{(l+1)}_i = \sum_{j \in N_i} \alpha_{ij} W \vec{h}^{(l)}_j + \sum_{j \in N_i} \beta_j W' \vec{h}^{(l)}_j$ (12) and $\vec{h}^{(l+1)}_i = \sum_{j \in N_i} \beta_j \alpha_{ij} W \vec{h}^{(l)}_j$. (13)",
"From Eqns. 12 and 13, we see two scaling factors, $\alpha$ and $\beta$, which either reinforce or diminish each other's effect (Fig. 1, right).",
"Unlike $\alpha$, $\beta$ is neither calculated locally nor normalized.",
"Not normalizing $\beta$ is an essential component for having both classification and ranking ability.",
"With normalization, even in a neighborhood with minimal TextRank scores, we would get an aggregation of features with unit-sum weights.",
"Without normalization, almost no features are aggregated from such nodes.",
"In our experiments, we use the formulation of Eqn. 13, as the multiplicative formulation easily enables multi-head attention (achieved by multiplying $A$ with $B$ such that $B_{ii} = \beta_i$).",
"Note that this modeling applies to GCN as well: the second component of Eqn. 13 closely resembles the GCN formulation for our proposed model.",
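The multiplicative scaling by $B$ with $B_{ii} = \beta_i$ can be sketched as follows (toy attention and TextRank values, chosen for illustration only):

```python
import numpy as np

def glocal_scale(alpha, beta):
    """Scale the local attention matrix `alpha` column-wise by the
    unnormalized TextRank scores `beta`, i.e. alpha @ B with B_ii = beta_i."""
    return alpha @ np.diag(beta)

alpha = np.array([[0.5, 0.5, 0.0],
                  [0.3, 0.4, 0.3],
                  [0.0, 0.6, 0.4]])   # per-row softmax-normalized attention
beta = np.array([0.9, 0.05, 0.05])    # TextRank scores, deliberately NOT normalized
scaled = glocal_scale(alpha, beta)
```

Note that the rows of `scaled` no longer sum to one: a neighborhood of low-centrality nodes contributes little feature mass, which is exactly the ranking behavior the text ascribes to the unnormalized $\beta$.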
"We investigate keyphrase extraction, using the most commonly reported full-text datasets, as shown in Table 1.",
"We divide the training data in an 80:20 split for training and validation.",
"Our complete pipeline comprises the following steps:",
"1. Feature Processing.",
"First, we perform TextRank on the complete text of each document, retaining only tokens that are nouns and adjectives and filtering out other words (equivalent to simplex noun phrases).",
"We use the gensim library ( Rehurek and Sojka, 2010) to perform TextRank and compute the scores.",
"This process helps in two ways: first, it gives us the node importance value for each keyphrase, needed by Glocal; second, it helps to manage the graph size and lessen the label skew on the minority positive label by removing extraneous tokens.",
"For documents larger than a maximum size (1,200 tokens), we drop the extra, lowest-scored tokens.",
"We find that tokens with a TextRank score in the bottom 50% contain only 16% of partial or full keyphrases in the validation dataset.",
"Hence dropping them from the processing does not affect the recall much.",
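A simplified stand-in for the TextRank scoring step above (no POS filtering; the co-occurrence window and damping factor are illustrative defaults, not the gensim configuration used by the authors):

```python
import numpy as np

def textrank(tokens, window=2, damping=0.85, iters=50):
    """Build a co-occurrence graph over tokens within a window, then run the
    PageRank power iteration to score each node (word)."""
    vocab = sorted(set(tokens))
    idx = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)
    A = np.zeros((n, n))
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            if tokens[j] != w:                      # undirected co-occurrence
                A[idx[w], idx[tokens[j]]] += 1
                A[idx[tokens[j]], idx[w]] += 1
    row_sums = A.sum(axis=1, keepdims=True)
    M = np.divide(A, row_sums, out=np.zeros_like(A), where=row_sums > 0)
    r = np.full(n, 1.0 / n)                         # uniform initial scores
    for _ in range(iters):                          # power iteration
        r = (1 - damping) / n + damping * (M.T @ r)
    return dict(zip(vocab, r))

scores = textrank("food great food service slow food tasty".split())
```

Repeated, well-connected tokens accumulate mass, which is the centrality signal Glocal consumes as $\beta$.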
"The nodes of the graph are single tokens and not complete phrases; therefore, all tokens of multi-token keyphrases are marked as keyphrase during learning on the graph.",
"For the node/keyphrase representations, we map our vocabulary to GloVe embeddings using the 2.2M-vocabulary, 300-dimension vector variant (Pennington et al., 2014).",
"For reference, we observe that GloVe covers about 90% of the words over all 3 datasets.",
"We then extract various textual features for the candidate keyphrases, including the position of their first occurrence, tf.idf and n-gram count.",
"These features are appended to the word embedding to obtain a final feature vector representing each node.",
"Rather than discard them, we choose to append the n -gram features to retain rich lexical signals obtained from the tokens.",
"2. Learning.",
"The second step is to train the model with the formulated graphs.",
"We use a 2-layer network with 128 units and ReLU activations for the hidden layers, followed by a simple 2-way softmax classification layer (keyphrase vs. non-keyphrase).",
"We further employ 8 attention heads at all layers.",
"We follow Glorot initialization (Glorot and Bengio, 2010) for setting all initial parameter weights, use a dropout of 0.5, and employ an L2 regularization of 0.001.",
"We train with the Adam optimizer on cross-entropy loss and an initial learning rate of 0.01 for 200 epochs, using an early stopping strategy with patience set to 20 epochs.",
"In both evaluation and training, as gold standard keyphrases have multiple tokens, we use each token of the gold keyphrase as the true label for each token.",
"3. Post-processing.",
"This step reconstructs the multi-token keyphrase from the probability scores as generated by the Glocal model.",
"This formation step then requires a re-ranking (calculating $R(p)$) of the resultant phrase as: $R(p) = \mathrm{len}(p) \sum_{w_i \in p} r(w_i)$. (14)",
"(Table 2: Main comparative system evaluations on keyphrase extraction. Columns: F1@5 / F1@10 / F1@15 on Schutz, Krapivin, and SemEval. tf.idf: 11.3/13.7/15.2, 6.9/7.3/9.4, 9.1/12.2/13.5. TextRank (Mihalcea and Tarau, 2004): 10.2/12.4/14.9, 7.6/9.3/9.9, 11.2/14.4/15.2. RNN: 3.2/3.6/4.0, 2.6/2.9/3.6, 3.0/3.2/3.7. GRU: 3.8/3.2/3.9, 3.1/3.4/5.1, 2.6/2.8/3.9. CopyRNN: 5.8/6.2/7.5, 6.6/6.9/7.1, 5.4/5.6/6.1. CopyRNN (Meng et al., 2017): 29.3/30.2/32.2, 26.2/25.3/27.1, 28.7/29.4/31.1. GCN (Kipf and Welling, 2016): 16.7/17.8/19.2, 19.2/19.8/20.9, 18.7/19.5/21.4. GAT (Velickovic et al., 2018): 25.2/28.1/29.3, 21.1/23.1/24.2, 22.5/26.8/25.9. Glocal: 30.7/30.3/33.9, 24.7/25.6/27.1, 28.9/29.8/33.5.)",
"The initial rank of each candidate token is in this case equal to the probability of the keyphrase label, i.e., $\forall w_i \in p,\; r(w_i) = p_{\mathrm{keyphrase}}(w_i)$.",
"We also constrain the process such that the actual word sequence must appear in the original text.",
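The re-ranking of Eqn. 14, together with the verbatim-occurrence constraint, can be sketched as follows (the function name and toy probabilities are hypothetical, for illustration only):

```python
def rerank(candidates, token_probs, text):
    """Score each candidate phrase p as len(p) * sum of its tokens' keyphrase
    probabilities r(w_i), keeping only phrases that occur verbatim in the
    original text, and return candidates sorted by score."""
    scored = []
    for phrase in candidates:
        tokens = phrase.split()
        if phrase in text:  # the word sequence must appear in the text
            score = len(tokens) * sum(token_probs.get(t, 0.0) for t in tokens)
            scored.append((phrase, score))
    return sorted(scored, key=lambda x: x[1], reverse=True)

probs = {"graph": 0.9, "convolution": 0.8, "network": 0.7, "the": 0.1}
text = "the graph convolution network learns on the graph"
ranked = rerank(["graph convolution network", "graph", "dense layer"], probs, text)
```

The length multiplier rewards longer exact matches, so the full phrase outranks its individual tokens while phrases absent from the text are dropped.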
"An important note: we strictly do not normalize the $\beta_i$ scores; the re-ranking process and the preservation of raw scores work in tandem.",
"Topologically, such graphs generated from textual data often have a few dense neighborhoods and many sparse ones, resulting in significant raw score differences that can benefit from scaling down the feature appropriately.",
"If normalization is done in each neighborhood (as done for $\alpha$), it will scale up individual nodes in a sparse neighborhood and suppress nodes in a dense neighborhood, the reverse of the intended operation.",
"We compare our Glocal model's results against other models on the core task of keyphrase extraction.",
"We select baselines which represent the related state-of-the-art under particular learning paradigms: unstructured retrieval-based ( tf.idf ), unsupervised graph-based (TextRank), supervised sequence learning (RNN and derivatives) and supervised structured models (GCN and derivatives).",
"The marked performance differences help us to ablate and understand the gain brought about by the supervision directly on the graph over unsupervised graph-based techniques, as well as that brought by incorporation of the random walk-based score into the supervised model.",
"We also report TextRank and tf.idf baselines to measure ablative effects, as they contribute to Glocal.",
"For the sake of fair comparison, we restrict ourselves to modeling approaches, keeping the feature inventory constant; i.e., textual and statistical features.",
"Closely related work discussed earlier such as MIKE (Zhang et al., 2017) are not directly comparable, since such works may use many other orthogonal features.",
"Similarly, many supervised techniques use additional features.",
"The best reported SemEval-2010 systems show very close performance to our proposed model, but they take advantage of other sources of side information, such as Wikipedia term acquisition, logical section information, bagging, and ensembles; hence, they are not directly comparable (reported in (Kim et al., 2010)).",
"We argue that our main contribution is in the capacity of modeling; other features utilized in prior work can enhance Glocal's performance and suitable incorporation is future work.",
"In a similar vein, PositionRank (Florescu and Caragea, 2017b) enhances the random walk itself, which can again act as a replacement for TextRank in our model, and is thus not strictly a comparable method.",
"We argued earlier that the recently proposed supervised encoder-decoder based CopyRNN (Meng et al., 2017) deep learning models, trained using word embeddings, do not scale well to the full-text setup.",
"In their work, they only report results for extracting (and generating) keyphrases from abstracts.",
"For direct comparison, we have retrained these models in the full-text scenario to validate our claim that these models have difficulty with scaling.",
"Further, we still compare with the abstract-only trained model for CopyRNN to benchmark both approaches.",
"The results are reported as F1@K, where $F_1 = 2 \cdot precision \cdot recall / (precision + recall)$ and $K$ is the number of keyphrases generated by the model.",
"Table 2 shows that our Glocal model outperforms both GCN and GAT; and that no other text-only based model shows comparable performance.",
"In the full-text scenario, sequential neural models fare comparably to TextRank.",
"(Text example from SemEval: Edge Indexing in a Grid for Highly Dynamic Virtual Environments.)",
"Further, notice that TextRank alone performs weakly in comparison to GAT, but does synergize well with the base GAT in our Glocal system, yielding state-of-the-art performance over all three datasets.",
"These results are consistent microscopically as well.",
"Fig. 2 gives two illustrations of keyphrases generated by TextRank, GAT, and Glocal, which show the ability of Glocal to pull out a larger ratio of exact keyphrase matches.",
"We drill down on the SemEval dataset for a closer look, as in Table 3; it has the highest positive label ratio at 14.8 keyphrases per document on average.",
"Keyphrases in SemEval are further classified as either author- or reader-annotated.",
"Despite the task being difficult (15% of the reader- and 19% of the author-assigned keyphrases are not in the text), Glocal's performance edge holds up well, and the results are consistent on both forms of assigned keyphrases.",
"We experiment with adding the neighborhood normalization for j, which results in a significant decrease in the performance of the Glocal model.",
"To further explore whether this normalization, and the thus-formed classification model, has any benefit over GCN and GAT, we experiment with the Cora and Citeseer scientific document classification tasks (as reported in Kipf and Welling (2016) and Velickovic et al. (2018)), which assign a document to one of six or seven topical categories.",
"[Table 3: Summary of the fine-grained F1@15 on SemEval (Author / Reader): KP-Miner 17.1% / 21.5%; Maui 16.2% / 16.1%; TextRank 14.5% / 15.1%; GAT 25.5% / 26.0%; Glocal 32.2% / 34.5%.]",
"We find that the incorporation of global random walk information does not influence topical categorization much (with a minor gain in the case of Cora dataset), and hence Glocal's performance is almost identical to the less expressive GAT model in terms of classification.",
"We now discuss three aspects of the model with respect to the task of keyphrase extraction.",
"1. Feature versus Scaling .",
"TextRank scores are used in supervised classifiers, traditionally incorporated as a feature.",
"How well would just incorporating TextRank as an additional feature to GAT work?",
"While it does help to improve performance, it does not incorporate the ranking element into the model; i.e., even nodes with very low centrality can positively contribute to their neighbors' scores.",
"Glocal's tight integration is beneficial as it allows further gradient optimization, rather than simply adding a new feature dimension.",
"Our experiment with adding TextRank as an additional feature shows a 2% performance gain for GCN and a <0.5% performance gain for GAT on average.",
"In comparison, Glocal shows a gain of 5% on average.",
"The behavior is consistent with that of the classification model as in the absence of j , the GAT model is inherently a classification model.",
"As an example ( cf. Fig. 1, right), consider a neighboring Node h 5 to the prospective keyphrase represented by Node h 1 .",
"Node h_5 has a low similarity with the target (a low attention coefficient for the pair), but, being a prominent node in the graph, it will be accorded a higher global score.",
"In this way, such a node channels more to its neighbor, exerting comparatively steeper gradients to less essential nodes, hence giving a larger chance for Node h 1 to be considered a keyphrase.",
"However, without Glocal's scaling mechanism, such modeling is not captured and essentially unlearnable.",
"We further elaborate on this point next, which shows how our scaling is equivalent to known embedding aggregation techniques (in contrast to adding an extra feature dimension).",
"2. TextRank Averaged Node Embedding .",
"Kipf and Welling argue that GCN is analogous to a node embedding approach based on the Weisfeiler-Lehman algorithm for graph isomorphism.",
"Namely, for every node i in V with feature h_i, the algorithm repeats for k steps or until convergence: h_i ← hash(Σ_{j ∈ N_i} h_j).",
"In the case where h_j is a scalar, this also represents TextRank.",
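Since TextRank underlies both the baseline and Glocal's global score, a bare-bones unweighted version can be sketched as follows (illustrative; the paper's graph construction and edge weighting are not reproduced):

```python
def textrank(neighbors, damping=0.85, iters=50):
    """Unweighted TextRank: iterate s_i = (1 - d) + d * sum_j s_j / deg(j)
    over the word graph for a fixed number of steps (approximate convergence).
    `neighbors` maps each node to the list of nodes it co-occurs with."""
    scores = {n: 1.0 for n in neighbors}
    for _ in range(iters):
        new = {}
        for i in neighbors:
            new[i] = (1 - damping) + damping * sum(
                scores[j] / len(neighbors[j]) for j in neighbors[i])
        scores = new
    return scores

# A 3-node path graph: the middle node gets the highest centrality.
g = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
s = textrank(g)
```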
"Comparing with layer-wise propagation rules for GAT, GCN, and Glocal, we find a common structure among the methods for incorporating features.",
"Eqn. 16b (representative of GAT) converts the constant aggregation of Eqn. 16a (representing GCN) into a locally learnable parametric scaling.",
"Eqn. 16c (Glocal) further factorizes the scaling into one locally learnable parametric term and one global random walk score.",
"In short, our means of generating node representations ( i.e. , the post-training embedding) weights the representations for each node/candidate keyphrase by its random walk score.",
"Such a procedure generates stable representations, similar to tf.idf weighted word embeddings used for sentence representation.",
"Such techniques have shown better performance as compared against complex aggregators (like LSTMs) on tasks with insufficient data to train end-to-end models.",
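The tf.idf-weighted embedding aggregation mentioned above can be sketched as follows; all names and the normalization are illustrative assumptions, not the paper's method:

```python
import math

def tfidf_weighted_embedding(tokens, embeddings, doc_freq, n_docs):
    """Sentence representation as a tf.idf-weighted average of word vectors.
    `embeddings` maps words to equal-length vectors; `doc_freq` holds
    document frequencies from some reference corpus (illustrative)."""
    dim = len(next(iter(embeddings.values())))
    weights = {}
    for w in tokens:
        tf = tokens.count(w) / len(tokens)
        idf = math.log(n_docs / (1 + doc_freq.get(w, 0)))
        weights[w] = tf * idf
    total = sum(weights[w] for w in tokens) or 1.0
    vec = [0.0] * dim
    for w in tokens:
        for k in range(dim):
            vec[k] += weights[w] * embeddings[w][k] / total
    return vec

emb = {"a": [1.0, 0.0], "b": [0.0, 1.0]}
vec = tfidf_weighted_embedding(["a", "b"], emb, {"a": 1, "b": 1}, 10)
# equal weights here, so vec == [0.5, 0.5]
```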
"3. Generating Longer Keyphrases .",
"The re-ranking trick for promoting the generation of longer keyphrases discussed earlier and in the final step of our pipeline is fielded in many systems (Zhang et al., 2017).",
"This is a difficult requirement to incorporate directly into the model, relegating such techniques to post-processing by default.",
"A nice offshoot effect of our model is that it implicitly favors generating longer keyphrases, due to the inherent nature of the graph convolution operator in aggregating neighboring nodes' features.",
"This, compounded with a high random walk score for the prospective keyphrase node, results in a higher fraction of features propagating from such nodes being included to its neighboring nodes.",
"In turn, this favors dense local keyphrase neighborhoods of highly weighted keyphrases, making the re-ranking step easier.",
"We have presented Glocal, a global plus local graph convolution model for incorporating the global importance of the node within the local convolution operation for supervised learning on graphs.",
"We argue theoretically and validate empirically that such a model has benefits in strengthening the graph node ranking component, which is particularly helpful in tasks such as keyphrase extraction.",
"In our detailed experiments on keyphrase extraction on 3 real-world full-text datasets, our model achieves better performance than traditional unsupervised graph-based ranking models and bests sequential supervised classifiers as well.",
"The specific component incorporating global importance further improves performance by up to 8.0% absolute F1 on different evaluation criteria in the full-text setup as compared to GAT, and by up to 2% absolute as compared to CopyRNN.",
"We would like to acknowledge the support of the NExT research grant funds, supported by the National Research Foundation, Prime Minister's Office, Singapore under its IRC@SG Funding Initiative.",
"We would also like to gratefully acknowledge the support of NVIDIA Corporation with the donation of the GeForce GTX Titan X GPU used for this research."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"objective",
"abstain",
"objective",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"other",
"other"
] |
[
"Neural network models have been actively applied to word segmentation, especially Chinese, because of the ability to minimize the effort in feature engineering.",
"Typical segmentation models are categorized as character-based, for conducting exact inference, or word-based, for utilizing word-level information.",
"We propose a character-based model utilizing word information to leverage the advantages of both types of models.",
"Our model learns the importance of multiple candidate words for a character on the basis of an attention mechanism, and makes use of it for segmentation decisions.",
"The experimental results show that our model achieves better performance than the state-of-the-art models on both Japanese and Chinese benchmark datasets.",
"1 Introduction: Word segmentation is the first step of natural language processing (NLP) for most East Asian languages, such as Japanese and Chinese.",
"In recent years, neural network models have been widely applied to word segmentation, especially Chinese, because of their ability to minimize the effort in feature engineering.",
"These models are categorized as character-based or word-based.",
"Word-based models (Zhang et al., 2016; Cai and Zhao, 2016; Cai et al., 2017; Yang et al., 2017) directly segment a character sequence into words and can easily achieve the benefits of word-level information.",
"However, these models usually cannot conduct exact inference because of strategies such as beam-search decoding and maximum-word-length constraints, which are necessary as the number of candidate segmentations increases exponentially with the sentence length.",
"On the other hand, character-based models (Zheng et al., 2013; Mansur et al., 2013; Pei et al., 2014; Chen et al., 2015a) treat word segmentation as sequence labeling.",
"(Footnote 1: We have released our code at https://github.com/shigashiyama/seikanlp.)",
"These models typically predict optimal label sequences while considering adjacent labels.",
"Limited efforts have been devoted to leveraging the advantages of both types of models, such as utilizing word information and conducting exact inference, which are complementary characteristics.",
"In particular, the candidate word information for a character is beneficial to disambiguate word boundaries because a character in the sentence has multiple candidate words that contain the character.",
"For example, there are three or four candidate words for characters x 3 , x 4 and x 5 in a sentence x 1:5 in Figure 1. A feasible solution to develop a model with both characteristics is to incorporate word information into a character-based framework.",
"An example of such work is that of Wang and Xu (2017).",
"They concatenated embeddings of a character and candidate words and used it in their convolutional neural network (CNN)-based model.",
"They treated candidate words equivalently, although the plausibility of a candidate word differs in the context of a target character.",
"In this paper, we propose a character-based word segmentation model that utilizes word information.",
"Our model is based on a BiLSTM-CRF architecture that has been successfully applied to sequence labeling tasks (Huang et al., 2015; Chen et al., 2015b).",
"Differing from the work of Wang and Xu (2017), our model learns and distinguishes the importance of candidate words for a character in a context, by applying an attention mechanism (Bahdanau et al., 2015).",
"Our contributions are as follows: We introduce word information and an attention mechanism into a character-based word segmentation framework, to distinguish and leverage the importance of candidate words.",
"[Figure 1: Sentence x_{1:5} = kare wa nihonjin (He is a Japanese.), and candidate words {w_j} (j = 1, ..., 8) for the characters {x_i} in the sentence.]",
"We empirically reveal that accurate attention to proper candidate words leads to correct segmentations.",
"Our model outperforms the state-of-the-art word segmentation models on both Japanese and Chinese datasets.",
"Word segmentation can be regarded as a character-level sequence labeling task.",
"Given a sentence x = x 1: n := ( x 1 , . . . , x n ) of length n , each character x i will be assigned a segmentation label y i of tag set T , and a label sequence y = y 1: n will be predicted.",
"We employ tag set T = { B , I , E , S } , where B, I and E, respectively, represent the beginning, inside and end of a multi-character word, and S represents a single character word (Xue, 2003).",
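The BIES scheme above can be sketched directly; the helper below is illustrative (not from the paper), and the example assumes the five-character rendering 彼 | は | 日本人 of the Figure 1 sentence:

```python
def words_to_bies(words):
    """Convert a gold word segmentation into per-character B/I/E/S labels:
    B/I/E mark the beginning/inside/end of a multi-character word,
    and S marks a single-character word (Xue, 2003)."""
    labels = []
    for w in words:
        if len(w) == 1:
            labels.append("S")
        else:
            labels.extend(["B"] + ["I"] * (len(w) - 2) + ["E"])
    return labels

# Gold segmentation x1 | x2 | x3 x4 x5 (assumed rendering of "kare wa nihonjin"):
labels = words_to_bies(["彼", "は", "日本人"])
# labels == ["S", "S", "B", "I", "E"]
```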
"We use a BiLSTM-CRF architecture for our baseline model.",
"The model consists of a character embedding layer, recurrent layers based on long short-term memory (LSTM), and a conditional random field (CRF) layer, as in Figure 2. Character Embedding Layer: Let V_c be a character vocabulary.",
"Each character in a given sentence is transformed into a d_c-dimensional character embedding e^c by a lookup operation that retrieves the corresponding column of the embedding matrix E_c ∈ R^{d_c × |V_c|}.",
"Recurrent Layers for Character Representation: A sequence of character embeddings e^c_{1:n} is fed into a recurrent neural network (RNN) to derive contextualized representations h_{1:n},",
"which we call character context vectors. [Figure 2: the architecture, with a character embedding layer, recurrent layers for characters, recurrent layers for word+character representations, and a CRF layer over characters x_1..x_n, candidate words w_1..w_m, and labels y_1..y_n.]",
"We adopt a stacked (multi-layer) and bidirectional variant of an LSTM (Hochreiter and Schmidhuber, 1997) network, which addresses the issue of learning long-term dependencies and the gradient vanishing problem.",
"Hidden vectors h^{(l)}_{1:n} of the l-th bidirectional LSTM (BiLSTM) layer are calculated by a forward LSTM (LSTM_f) and a backward LSTM (LSTM_b): h^{(l)}_i = BiLSTM(h^{(l-1)}_{1:n}, i) := LSTM_f(h^{(l-1)}_{1:n}, i) ⊕ LSTM_b(h^{(l-1)}_{n:1}, n-i+1), where h^{(0)}_i = e^c_i, ⊕ denotes a concatenation operation, and h^{(l-1)}_{n:1} denotes the reversed sequence of the original vectors h^{(l-1)}_{1:n}.",
"More concretely, each forward or backward LSTM calculates hidden vectors h_{1:n} from an input sequence v_{1:n} of d_v-dimensional vectors as follows: h_i = LSTM(v_{1:n}, i) := o_i ⊙ tanh(c_i), c_i = i_i ⊙ t_i + f_i ⊙ c_{i-1}, g_i = σ(W_g v_i + U_g h_{i-1} + b_g), t_i = tanh(W_t v_i + U_t h_{i-1} + b_t), where σ is the sigmoid function, ⊙ denotes element-wise multiplication, g indicates an input gate i, a forget gate f, or an output gate o, W_g ∈ R^{d_r × d_v}, U_g ∈ R^{d_r × d_r}, and b_g ∈ R^{d_r} are trainable parameters for each gate g ∈ {i, f, o}, and d_r is a hyperparameter.",
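The LSTM recurrence above can be written out in plain Python to make the gate equations concrete (toy dimensions; a real implementation uses a tensor library):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def lstm_step(v, h_prev, c_prev, params):
    """One step of the LSTM recurrence from the text:
    g = sigma(W_g v + U_g h_prev + b_g) for g in {i, f, o},
    t = tanh(W_t v + U_t h_prev + b_t),
    c = i * t + f * c_prev, and h = o * tanh(c).
    `params` maps each gate name to its (W, U, b) triple."""
    gates = {}
    for name in ("i", "f", "o", "t"):
        W, U, b = params[name]
        pre = [a + bb + cc for a, bb, cc in
               zip(matvec(W, v), matvec(U, h_prev), b)]
        act = math.tanh if name == "t" else sigmoid
        gates[name] = [act(x) for x in pre]
    c = [i * t + f * cp for i, t, f, cp in
         zip(gates["i"], gates["t"], gates["f"], c_prev)]
    h = [o * math.tanh(cc) for o, cc in zip(gates["o"], c)]
    return h, c

# Toy setup: d_v = 2, d_r = 1, all weights and biases zero.
params = {n: ([[0.0, 0.0]], [[0.0]], [0.0]) for n in ("i", "f", "o", "t")}
h, c = lstm_step([1.0, 2.0], [0.0], [0.0], params)
# With zero parameters all gates are 0.5 and t = 0, so h == [0.0], c == [0.0].
```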
"CRF Layer: A character context vector h_i is mapped into a |T|-dimensional vector representing scores of segmentation labels: s_i = W_s h_i + b_s, where W_s ∈ R^{|T| × 2d_r} and b_s ∈ R^{|T|} are trainable parameters.",
"Following previous sequence labeling work (Collobert et al., 2011), we introduce a CRF (Lafferty et al., 2001) layer, which has a transition matrix A ∈ R^{|T| × |T|} to give transition scores of adjacent labels.",
"Thus, the score of a label sequence y = y_{1:n} for a sentence x = x_{1:n} is calculated as follows: score(x, y; θ) = Σ_{i=1}^{n} (A_{y_{i-1}, y_i} + s_i[y_i]), where θ denotes all the parameters and s[y] indicates the dimension of a vector s corresponding to a label y.",
"We can find the best label sequence y* by maximizing the sentence score: y* = argmax_{y ∈ T^n} score(x, y; θ).",
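The argmax over label sequences is computed exactly by Viterbi decoding; a minimal stdlib sketch (toy scores and the start-tag handling are illustrative):

```python
def viterbi(scores, trans, tags):
    """Find argmax_y sum_i (A[y_{i-1}, y_i] + s_i[y_i]) by dynamic programming.
    scores[i][t] is the emission score s_i[t]; trans[(a, b)] is A_{a,b}.
    Transitions from a virtual start tag are taken as 0 here."""
    n = len(scores)
    best = [{t: scores[0][t] for t in tags}]
    back = [{}]
    for i in range(1, n):
        best.append({})
        back.append({})
        for t in tags:
            prev = max(tags, key=lambda p: best[i - 1][p] + trans[(p, t)])
            best[i][t] = best[i - 1][prev] + trans[(prev, t)] + scores[i][t]
            back[i][t] = prev
    last = max(tags, key=lambda t: best[n - 1][t])
    path = [last]
    for i in range(n - 1, 0, -1):
        path.append(back[i][path[-1]])
    return path[::-1]

# Toy BIES example: a heavy penalty rules out invalid tag bigrams.
tags = ["B", "I", "E", "S"]
valid = {("B", "I"), ("B", "E"), ("I", "I"), ("I", "E"),
         ("E", "B"), ("E", "S"), ("S", "B"), ("S", "S")}
trans = {(a, b): (0.0 if (a, b) in valid else -1e9) for a in tags for b in tags}
scores = [{"B": 1.0, "I": 0.0, "E": 0.0, "S": 0.5},
          {"B": 0.0, "I": 0.0, "E": 1.0, "S": 0.0}]
path = viterbi(scores, trans, tags)
# path == ["B", "E"]: the two characters form one two-character word.
```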
"Training Objective: During training, the parameters θ of the network are learned by minimizing the negative log likelihood over all the sentences in the training data D w.r.t. θ: L(θ) = -Σ_{(x,y) ∈ D} log (exp(score(x, y; θ)) / Σ_{y' ∈ T^n} exp(score(x, y'; θ))).",
"To disambiguate word boundaries more effectively, we integrate word information into the character-based framework.",
"More concretely, we transform embeddings of multiple candidate words for each character into a fixed size word vector, which we call a word summary vector , by a word feature composition function .",
"We show the architecture of our model in Figure 2. In addition to the layers of the baseline model, the model comprises a word embedding layer, a word feature composition function, and additional recurrent layers.",
"Word Embedding Layer: Given a character sequence x = x_{1:n}, we search for all words corresponding to subsequences of the input sequence from a word vocabulary V_w, within a maximum word length.",
"Then, we obtain a unique list W_x of candidate words of size m.",
"For example, for a given sentence x 1:5 in Figure 1, candidate words { w 1 , , w 8 } will be found.",
"Each word w ∈ W_x (⊆ V_w) is transformed into a d_w-dimensional vector e_w by the embedding matrix E_w ∈ R^{d_w × |V_w|}.",
"We can construct a word vocabulary to search candidate words by using external dictionaries or auto-segmented texts processed by any segmenter.",
"Regarding the construction method used in our experiments, refer to Section 5.1.",
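Candidate-word lookup over subsequences can be sketched with a toy vocabulary (names are illustrative; efficient implementations would use a trie):

```python
def candidate_words(sentence, vocab, max_len):
    """Collect the unique list W_x of vocabulary words appearing as
    contiguous subsequences of the sentence, up to the maximum word length."""
    seen = []
    for i in range(len(sentence)):
        for k in range(1, max_len + 1):
            w = sentence[i:i + k]
            if len(w) == k and w in vocab and w not in seen:
                seen.append(w)
    return seen

# Toy vocabulary over the sentence "abcde".
cands = candidate_words("abcde", {"a", "ab", "bc", "cde", "zz"}, 3)
# cands == ["a", "ab", "bc", "cde"]
```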
"Composition Functions of Word Features: For a character x_i, the embeddings of all the candidate words that contain it are aggregated into a word summary vector a_i by a composition function.",
"We introduce two attention-based composition functions, weighted average (WAVG) and weighted concatenation (WCON), which enable a model to pay more or less attention according to the importance of candidate words.",
"(Footnote 2: 'List' indicates a set where each element has a unique number from 1 to the size of the set.)",
"Both functions calculate the importance score u_{ij} from a character x_i to a word w_j in W_x by a bilinear transformation, which indicates the interaction between the character context vector h_i and the word embedding e_{w_j}.",
"Then the weight α_{ij} ∈ [0, 1] is obtained by a softmax operation that normalizes the scores: u_{ij} = h_i^T W_a e_{w_j}, α_{ij} = δ_{ij} exp(u_{ij}) / Σ_{k=1}^{m} δ_{ik} exp(u_{ik}), (4) where W_a ∈ R^{2d_r × d_w} is a trainable parameter.",
"For simplification of the equations, we introduce an indicator variable δ_{ij} ∈ {0, 1} that indicates whether the character x_i is included in the word w_j, as Figure 1 illustrates.",
"Next, WAVG and WCON calculate a word summary vector a_i as the weighted average and the weighted concatenation of word embeddings, respectively: a_i = WAVG(x_i, {w_j}_{j=1}^{m}) = Σ_{j=1}^{m} α_{ij} e_{w_j}, (5) a_i = WCON(x_i, {w_j}_{j=1}^{m}) = ⊕_{l=1}^{L} α_{i,i_l} e_{w_{i_l}}, (6) where {w_j} = W_x and ⊕(·) indicates concatenation of the given arguments.",
"Let K be the maximum word length, L = Σ_{k=1}^{K} k, and let i_l for the character x_i denote the index in W_x of the l-th word w'_l in the list {w'_1, ..., w'_L} = ∪_{k=1}^{K} ∪_{p=-k+1}^{0} {x_{i+p : i+p+k-1}}.",
"If w'_l ∉ V_w, we use a zero vector as the l-th argument in Eq. (6).",
"For example, if K = 3, WCON concatenates the embeddings of words corresponding to x_i (length 1), x_{i-1:i}, x_{i:i+1} (length 2), and x_{i-2:i}, x_{i-1:i+1}, x_{i:i+2} (length 3), in this order, into a single vector for the character x_i.",
"WAVG and WCON finally output a summary vector of size d_w and L·d_w, respectively.",
"Note that we use a zero vector as the summary vector if no candidate words are found for a character.",
"We also use two more variants of composition functions without the attention mechanism, the average function (AVG) and the concatenation function (CON).",
"AVG is a special case of WAVG, where α_{ij} = δ_{ij} / Σ_k δ_{ik} for all (i, j) in Eq. (5).",
"CON is the function equivalent to the word features used in Wang and Xu (2017) and a special case of WCON, where α_{i,i_l} = 1 for all (i, i_l) in Eq. (6).",
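Eqs. (4)-(5) amount to a masked softmax followed by a weighted average; a minimal sketch with precomputed scores standing in for the bilinear term (illustrative, not the paper's implementation):

```python
import math

def wavg(u, delta, word_embs):
    """Weighted average (WAVG) over candidate words: softmax the importance
    scores u[j] over words that contain the character (delta[j] = 1), then
    average the word embeddings with the resulting weights alpha.
    In the full model, u[j] would be the bilinear score h_i^T W_a e_{w_j}.
    Assumes delta selects at least one word (the text uses a zero vector
    when a character has no candidates)."""
    exps = [d * math.exp(x) for x, d in zip(u, delta)]
    z = sum(exps)
    alphas = [e / z for e in exps]
    dim = len(word_embs[0])
    vec = [sum(a * emb[k] for a, emb in zip(alphas, word_embs))
           for k in range(dim)]
    return vec, alphas

# Three candidate words, of which only the first two contain the character.
vec, alphas = wavg([0.0, 0.0, 0.0], [1, 1, 0],
                   [[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
# alphas == [0.5, 0.5, 0.0]; the masked-out word contributes nothing.
```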
"Recurrent Layers for Word-Integrated Character Representation: The summary vector a_i and the context vector h_i for a character are fed together into additional recurrent layers, which are BiLSTM layers, to further contextualize character representations using word information of surrounding characters.",
"Given the input h_i ⊕ a_i, the BiLSTMs calculate hidden vectors, and the hidden vectors z_{1:n} of the last BiLSTM layer are fed into the CRF layer.",
"Datasets: We evaluated our model on three datasets: CTB6 and MSR for Chinese word segmentation, and BCCWJ (short unit word annotation) for Japanese word segmentation.",
"CTB6 is Chinese Penn Treebank 6.0 (Xue et al., 2005).",
"MSR is provided by the second International Chinese Word Segmentation Bakeoff (Emerson, 2005).",
"BCCWJ is Balanced Corpus of Contemporary Written Japanese 1.1 (Maekawa et al., 2014).",
"We followed the same training/development/test split as in previous work (Yang and Xue, 2012; Chen et al., 2015b) for CTB6, the official training/test split for MSR, and the same training/test split as in the Project Next NLP for BCCWJ.",
"We randomly selected 90% of the sentences in the training data as a training set and used the other 10% as a development set, respectively for MSR and BCCWJ.",
"Word Vocabulary Construction: Apart from the given training and development sets for each dataset, we assumed no annotated information, including external dictionaries and third-party segmenters, was available in our experiments.",
"Therefore, we used the training set and large unlabeled texts to obtain a word vocabulary to be used in our proposed model.",
"First, we trained a baseline model from each training set and applied it to large unlabeled texts.",
"Then we collected auto-segmented words appearing in the texts and gold words in the training set, and regarded the union of both kinds of words as a word vocabulary.",
"(Footnote 3: http://www.ar.media.kyoto-u.ac.jp/mori/research/topics/PST/NextNLP.html; Footnote 4: We discarded words occurring less than five times in the auto-segmented texts, since their pre-trained embeddings were not learned by Word2Vec with the default minimum frequency of five, as described later.)",
"We used the non-core section of BCCWJ (BCCWJ-NC) for the Japanese dataset and Chinese Gigaword Fifth Edition for the Chinese datasets as unlabeled texts.",
"Pre-training of Embedding Parameters: Following previous work (Collobert et al., 2011), we pre-trained word embeddings from large texts and used them to initialize the word embedding matrix in our proposed segmenter.",
"To pre-train word embeddings, we used the gensim (Rehurek and Sojka, 2010) implementation of Word2Vec (Mikolov et al., 2013) and applied it to the same texts as those used to construct the word vocabularies, i.e., the auto-segmented BCCWJ-NC or Chinese Gigaword texts processed by the baseline segmenters.",
"We used the toolkit with a skip-gram model, an embedding size of 300, one training iteration, and otherwise default parameters.",
"For words occurring only in a training set, we randomly initialized their embeddings.",
"We fine-tuned all word embeddings during training of the proposed segmenter.",
"In contrast, we randomly initialized all character embeddings, since pre-trained character embeddings did not improve performance in our preliminary experiments.",
"Hyperparameter Setting: Table 1 gives the hyperparameters for the proposed model.",
"The same dropout strategy as in Zaremba et al. (2015) was applied to non-recurrent connections of recurrent layers.",
"We used word vector dropout, which randomly replaces a word embedding e_w with a zero vector when calculating a word summary vector in Eq. (5) or (6).",
"(Footnote 5: We restored the provided auto-segmented texts to the original raw sentences and used them as unlabeled texts.)",
"Mini-batch stochastic gradient descent was used to optimize the parameters, and the learning rate was decayed with a fixed decay ratio every epoch after the first five epochs.",
"We trained models for up to 20 epochs and selected the best model on the development set.",
"We evaluated our baseline and proposed model variants on the test sets of the three benchmark datasets.",
"Table 2 shows the mean of F1 scores of three runs for each dataset and each model.",
"Among the proposed model variants, WCON achieved the best performance in almost all cases.",
"We observed the following three findings from the results.",
"First, all the word-integrated model variants consistently outperformed the pure character-based baseline.",
"We conducted McNemar's tests in a similar manner to Kudo et al. (2004), and the improvement of each variant over the baseline was significant at the 0.001 level.",
"[Table 4 (development sets): NoC = average number of candidate words; Lower/Upper = attention accuracy bounds; then attention Acc, segmentation F1, segmentation Acc, Acc-CA, Acc-IA. BCCWJ (NoC 2.09, Lower 30.09, Upper 94.53): WAVG 79.69 / 99.14 / 99.22 / 99.74 / 97.18; WCON 90.97 / 99.21 / 99.29 / 99.88 / 93.98. CTB6 (NoC 2.39, Lower 18.01, Upper 94.54): WAVG 82.97 / 95.98 / 96.64 / 98.72 / 86.50; WCON 86.65 / 96.35 / 96.99 / 99.24 / 82.37. MSR (NoC 2.24, Lower 21.61, Upper 91.76): WAVG 83.42 / 98.53 / 98.75 / 99.65 / 94.26; WCON 85.02 / 98.51 / 98.72 / 99.63 / 93.53.]",
"Second, the attention-based variants further boosted performance in comparison with their counterparts without attention.",
"The improvements of WCON over CON on BCCWJ and CTB, and that of WAVG over AVG were statistically significant according to the McNemar's tests.",
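McNemar's test used above needs only the two discordant counts; a continuity-corrected stdlib sketch (the papers do not specify the correction, so this variant is an assumption):

```python
import math

def mcnemar_p(b, c):
    """Continuity-corrected McNemar's test on the discordant counts:
    b = items only system 1 got right, c = items only system 2 got right.
    Returns the p-value of the chi-square statistic with 1 degree of freedom."""
    if b + c == 0:
        return 1.0
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square with 1 dof: P(X > x) = erfc(sqrt(x/2)).
    return math.erfc(math.sqrt(chi2 / 2.0))

p = mcnemar_p(200, 100)  # strongly asymmetric disagreements give a tiny p
```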
"We discuss the reason for the slight and insignificant performance difference between CON and WCON on MSR in Section 5.3.",
"Third, the concatenation-based variants performed better than the average-based counterparts in almost all cases.",
"This is probably because CON and WCON keep word length and character position information.",
"For example, ( d w +1 )-th to 2 d w -th dimensions of a summary vector always represent a word whose length is two and which ends with a target character (namely x i 1: i for x i ), while AVG and WAVG lose this kind of information.",
"Table 3 shows the performance of state-of-the-art models without additional annotated data.",
"We listed only the WCON results in Table 3 since it performed the best among all variants on the development sets.",
"In comparison with the best previous models, we obtained better performance on BCCWJ and CTB6, and achieved approximately 31% and 3% error reductions, respectively.",
"On MSR, we obtained a comparable performance with the character-based model with word features in Wang and Xu (2017), which used different unlabeled texts from ours to pre-train word embeddings.",
"To our knowledge, our model is the first neural network-based model that has achieved state-of-the-art results on both Japanese and Chinese word segmentation.",
"To analyze how the attention mechanism affected segmentation performance, we show in Table 4 attention accuracy of the proposed model with the attention-based functions of the development sets.",
"Attention accuracy regards a predicted result as correct if a character x i most strongly attends to the word corresponding to the gold segmentation.",
"The table also shows the segmentation performance, where accuracy indicates character-level accuracy of segmentation label prediction.",
"Note that the attention accuracy of a model falls between lower and upper bounds shown in the table.",
"The upper bound indicates the ratio of characters whose candidate words contain the gold word (then attention can be correctly paid) and the lower bound indicates the ratio of characters whose candidate words consist only of the gold word (then attention is always correctly paid).",
"For example, assuming that the gold segmentation of the sentence in Figure 1 is x 1 | x 2 | x 3 x 4 x 5 (= w 1 | w 2 | w 8 ) , candidate words for all characters contain their gold words, and those for characters x 1 and x 2 consist only of respective gold words w 1 and w 2 .",
"Thus, the upper and lower bounds for the sentence are 5 / 5 and 2 / 5 , respectively.",
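These bounds can be computed mechanically; the candidate sets below are illustrative stand-ins for Figure 1:

```python
def attention_bounds(candidates, gold):
    """Per-character bounds on attention accuracy:
    upper = fraction of characters whose candidate set contains the gold word,
    lower = fraction whose candidate set is exactly {gold word}."""
    n = len(gold)
    upper = sum(g in c for c, g in zip(candidates, gold)) / n
    lower = sum(set(c) == {g} for c, g in zip(candidates, gold)) / n
    return lower, upper

# Figure 1 example with gold x1 | x2 | x3 x4 x5 (= w1 | w2 | w8); candidate
# lists for x3-x5 are illustrative (the figure lists three or four each).
cands = [["w1"], ["w2"], ["w3", "w8"], ["w4", "w8"], ["w5", "w8"]]
lo, up = attention_bounds(cands, ["w1", "w2", "w8", "w8", "w8"])
# lo == 2/5 and up == 5/5, matching the text.
```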
"Both WAVG and WCON achieved approximately 80% attention accuracy over more than two candidate words on average.",
"The Acc-CA (Acc-IA) column denotes the segmentation accuracy in cases where the attention was correctly (incorrectly) paid.",
"We obtained particularly high segmentation accuracy (close to or higher than 99%).",
"However, incorrect attention led to a large drop in segmentation accuracy.",
"Moreover, we can see a clear tendency for WCON to result in poorer segmentation accuracy in cases with incorrect attention, compared with WAVG.",
"This suggests that attention by WCON is more sensitive to segmentation decisions; information on attended words is more directly propagated to succeeding layers.",
"As for the slight performance difference between CON and WCON on MSR in Table 2, a possible explanation is that existence of fewer gold words (observed from the upper bound of accuracy) leads to inaccurate attention and segmentation.",
"To examine the segmentation results of actual sentences by different methods, we picked up sentence segments (a)-(f) from the BCCWJ development set.",
"We show in Figure 3 the results obtained by BASE, WCON, and CON, which are selected as the character-based baseline, the best of our model variants, and its counterpart without attention, respectively.",
"In addition, we also show values of the weight α_{ij} learned by WCON in Figure 4.",
"In examples (a) and (b), BASE resulted in a wrong segmentation.",
"However, both word-integrated methods correctly segmented words with the benefit of word information corresponding to gold segmentations ( adoresu , bar and hitewa ).",
"This suggests that word information enables a model to utilize information on distant characters with target characters directly.",
"From WCON results, we confirmed that all characters strongly attended to correct words, as in Figure 4. This suggests that accurate attention contributed to predicting correct segmentations.",
"In examples (c) and (d), only WCON predicted correct segmentations.",
"The existence of correct words in the vocabulary and correct attention probably resulted in the correct segmentation for (c).",
"As for (d), although some of the characters attended to a wrong word (kankin), correct attention for the surrounding characters (hikikae and kingaku) seems to have led to the correct segmentation.",
"In examples (e) and (f), WCON predicted the wrong results.",
"The wrong results for (e) by CON and WCON are probably due to the non-existence of the gold word ochanomizu, which is a location name, in the vocabulary.",
"As for (f), WCON paid incorrect attention and predicted the wrong segmentation, even though the correct word chuya exists in the vocabulary.",
"The model learned the incorrect weights likely due to the infrequent occurrence of the correct words; the single words hiru and yoru occur in the training set tens or hundreds of times while the compound word chuya occurs only twice.",
"We may reduce these errors, caused by absent or infrequent gold words, by increasing the word vocabulary size, e.g., by using larger texts to pre-train word embeddings.",
"Word Segmentation For both Chinese and Japanese, word segmentation has been traditionally addressed by applying linear statistical algorithms, such as maximum entropy (Xue, 2003), CRF (Peng et al., 2004; Kudo et al., 2004; Zhao and Kit, 2008), and logistic regression (Neubig et al., 2011).",
"Various neural network architectures have been explored for Chinese word segmentation to reduce the burden of manual feature engineering.",
"Specifically, character-based neural models have been developed to model the task as a sequence labeling problem, starting with earlier work by Zheng et al. (2013) and Mansur et al. (2013), which applied feed-forward neural networks.",
"Pei et al. (2014) used a neural tensor network to capture interactions between tags and characters.",
"More sophisticated architectures have also been used as standard components of word segmentation models to derive effective features automatically.",
"Chen et al. (2015a) proposed gated recursive neural networks to model complicated combinations of characters.",
"Chen et al. (2015b) used LSTM to capture long distance dependencies.",
"Xu and Sun (2016) combined LSTM and GRNN to capture long term information better by utilizing chain and tree structures.",
"CNNs have been used to extract complex features such as character n -grams (Chen et al., 2017) and graphical features of Chinese characters (Shao et al., 2017).",
"On the other hand, word-based neural models have also been proposed.",
"Typical word-based models (Zhang et al., 2016; Cai and Zhao, 2016; Cai et al., 2017; Yang et al., 2017) sequentially determine whether or not to segment each character on the basis of word-level features and segmentation history, while keeping multiple segmentation candidates by beam search decoding.",
"Liu et al. (2016) combined neural architectures for segment (i.e., word) representations into a semi-CRF framework, which searches for an optimal segmentation sequence consisting of variable length segments.",
"Sun et al. (2017) proposed a gap-based model to predict whether or not to segment two consecutive characters, using a deep CNN consisting of more than ten layers.",
"Recent work has utilized word information within a character-based framework.",
"Zhou et al. (2017) pre-trained character embeddings using word boundary information from auto-segmented texts.",
"Wang and Xu (2017) explicitly introduced word information into their CNN-based model.",
"They concatenated embeddings of a character and multiple words corresponding to n -grams ( n ranging from 1 to 4) that include the target character.",
"For Japanese, less work has employed neural models for word segmentation than for Chinese.",
"Morita et al. (2015) integrated an RNN language model into a statistical Japanese morphological analysis framework, which simultaneously segments a sentence into words and predicts word features, such as POS and lemma.",
"Kitagawa and Komachi (2018) applied a pure neural model based on LSTM and achieved better performance than a popular statistical Japanese segmenter (Neubig et al., 2011).",
"Around the same time as our work, two other character-based models for word segmentation were proposed.",
"Ma et al. (2018) showed a standard BiLSTM model can achieve state-of-the-art results when combined with deep learning best practices, including dropout to recurrent connections (Gal and Ghahramani, 2016) and pre-trained embeddings of character bigrams.",
"These techniques could also be applied to our model and further boost its performance.",
"Yang et al. (2018) proposed a lattice LSTM-based model with subsequence (word or subword) information.",
"Their model also considers the importance of multiple words by integrating character and word information into an LSTM cell vector using a gate-mechanism.",
"However, their model might not fully exploit word information, since word information is given to only the first and last characters of the word.",
"LSTM-CRF LSTM-CRF is a popular neural architecture, which has been applied to various tagging tasks, including word segmentation (Chen et al., 2015b), POS tagging and NER (Huang et al., 2015; Ma and Hovy, 2016; Rei et al., 2016).",
"Ma and Hovy (2016) and Rei et al. (2016) introduced the internal character information of words for word-level labeling tasks, whereas our work introduces candidate word information of characters in a character-level labeling task.",
"Attention Mechanism An attention mechanism (Bahdanau et al., 2015) was first introduced in machine translation to focus on appropriate parts of a source sentence during decoding.",
"This mechanism has been widely applied to various NLP tasks, including question answering (Sukhbaatar et al., 2015), constituency parsing (Vinyals et al., 2015), relation extraction (Lin et al., 2016) and natural language inference (Parikh et al., 2016).",
"Rei et al. (2016) introduced a gate-like attention mechanism on their word-based sequence labeling model to determine the importance between the word itself and the internal characters for each word.",
"In this paper, we proposed a word segmentation model that integrates word-level information into a character-based framework, aiming to take advantage of both character- and word-based models.",
"The experimental results show that our model with an attention-based composition function outperforms the state-of-the-art models on both Japanese and Chinese benchmark datasets.",
"Our analysis suggests that a word vocabulary with larger coverage can reduce errors deriving from unknown words.",
"In future work, we will explore (1) the relationship between vocabulary coverage and segmentation performance, and (2) the effect of using pre-trained word vectors learned from different domain texts in domain adaptation scenarios.",
"We would like to thank Atsushi Fujita, Rui Wang, and the anonymous reviewers for their helpful feedback on this work."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"result",
"method",
"objective",
"other"
] |
[
"Abstract Temporal orientation refers to an individual's tendency to connect to the psychological concepts of past , present or future , and it affects personality, motivation, emotion, decision making and stress coping processes.",
"Studying social media users' psycho-demographic attributes from the perspective of human temporal orientation is of great interest and importance to business and administrative decision makers, as it provides additional valuable information for making informed decisions.",
"In this paper, we present a first study demonstrating the association between the sentiment view of users' temporal orientation and their different psycho-demographic attributes by analyzing their tweets.",
"We first create a temporal orientation classifier in a minimally supervised way which classifies each tweet of the users in one of the three temporal categories, namely past , present , and future .",
"A deep Bi-directional Long Short Term Memory (BLSTM) is used for the tweet classification task.",
"Our tweet classifier achieves an accuracy of 78.27% when tested on a manually created test set.",
"We then determine the users' overall temporal orientation based on their tweets on the social media.",
"The sentiment is added to the tweets at the fine-grained level, where each temporal tweet is assigned a positive, negative, or neutral sentiment.",
"Our experiment reveals that depending upon the sentiment view of temporal orientation, a user's attributes vary.",
"We finally measure the correlation between the users' sentiment view of temporal orientation and their different psycho-demographic factors using regression.",
"The rapid growth of social media data in recent years has encouraged different studies which previously existed only at the psychological level (theory or pure logic).",
"Various attributes of users can be analyzed from the texts they write on the social media platform.",
"The studies include age and gender prediction (Marquardt et al., 2014; Sap et al., 2014), psychological well-being (Dodds et al., 2011; Choudhury et al., 2013), and a host of other behavioural, psychological and medical phenomena (Kosinski et al., 2013).",
"However, a few works exist which analyze these factors using socio-economic characteristics of the Twitter users.",
"Time is generally defined by a dimension where the events are ordered from the past through the present into the future which includes duration and intervals.",
"Major studies on time have been done for event detection (Ihler et al., 2006; Batal et al., 2012; Sakaki et al., 2013), which mainly concern a subjective sense of time.",
"In contrast, the temporal orientation of a user is defined by his/her tendency to emphasize past , present or future (Zimbardo and Boyd, 2015), which gives a more objective sense of time.",
"The growth of social media content has enabled us to study this objective sense more precisely.",
"Past studies have established a consistent link between the temporal orientation and several user characteristics such as age, gender, education, and psychological traits (Webley and Nyhus, 2006; Adams and Nettle, 2009; Schwartz et al., 2013; Zimbardo and Boyd, 2015).",
"However, the sentiment dimension (positive, negative or neutral) of temporal orientation has scarcely been studied at the empirical level on a large scale.",
"For example, people who are optimistic are future-oriented and positive at the same time.",
"So, only defining the temporal orientation cannot specify the optimistic people correctly.",
"We need the sentiment dimension as well to find the exact correlation.",
"In this paper, we first develop a temporal-orientation classifier to classify tweets into past , present , and future , and then aggregate over the users to create user-level assessments.",
"We use a Bidirectional Long Short Term Memory (Bi-LSTM) network for tweet temporal classification where tweet vectors are fed to generate the classification model.",
"We propose a hash tag-based minimally supervised method with the two-pass filtering to create the past , present and future -oriented tweets for the training of the Bi-LSTM network.",
"We manually examined trending hashtags in Twitter for a specific period of time and selected hashtags which represent past, present/ongoing , or future events.",
"The English tweets containing one of the selected hashtags are crawled using Twitter streaming API.",
"The tweet temporal orientation classifier is validated on a manually annotated test set.",
"Finally, we use this classifier to automatically classify a large dataset consisting of 10 million tweets from 5,191 users mapped to their user-level features.",
"Besides these three temporal categories ( past , present or future ), we have considered the positive, negative and neutral sentiments of the tweets for the fine-grained classification.",
"The user-level tweets with a particular temporal orientation are further subdivided into positive, negative or neutral sentiment.",
"Finally, we evaluated whether the sentiment view of temporal orientation (i.e. past-positive, past-negative, past-neutral, present-positive, present-negative, present-neutral, future-positive, future-negative, and future-neutral) of the users is related to their several psycho-demographic attributes.",
"In this research, we have considered five psycho-demographic attributes, namely age, education, relationship, intelligence, and optimism.",
"Our contributions are summarised below: We introduce the sentiment dimensions in human temporal orientation to infer social media users' psycho-demographic attributes on a large scale.",
"We propose a minimally supervised approach to the temporal orientation classification task that leverages large quantities of unlabeled data and requires no hand-annotated training corpora.",
"The empirical evidence shows that the method performs reasonably well.",
"We define a way to find a novel association between the sentiment view of temporal orientation and the different psycho-demographic factors of the tweet users.",
"The temporal study has recently received an increased attention in several application domains of Natural Language Processing (NLP) and Information Retrieval (IR).",
"The introduction of the TempEval task (Verhagen et al., 2009) and the subsequent challenges, i.e. TempEval-2 and -3 (Verhagen et al., 2010; UzZaman et al., 2013), in the Semantic Evaluation workshop series have clearly established the importance of time in dealing with different NLP tasks.",
"Alonso et al. (2011) reviewed the current research trends and presented a number of interesting applications along with the open problems.",
"The shared task like the NTCIR-11 Temporalia task (Joho et al., 2014) further pushed this idea and proposed to distinguish whether a given query is related to past , recency , future or atemporal .",
"It is the first such challenge, which is organized to provide a common platform for designing and analyzing the time-aware information access systems.",
"In parallel, new trends have emerged in the context of the human temporal orientation (Schwartz et al., 2013; Sap et al., 2014; Park et al., 2015; Schwartz et al., 2015; Park et al., 2017).",
"The underlying idea is to understand how the past, present, and future emphasis in the text may affect people's finances, health, and happiness.",
"For that purpose, the temporal classifiers are built to detect the overall temporal dimension of a given sentence.",
"For instance, the following sentence can't wait to get a pint tonight would be tagged as future .",
"In summary, most of the temporal text processing applications have been mainly relying on the rule-based time taggers, for e.g. HeidelTime (Strotgen and Gertz, 2015) or SUTime (Chang and Manning, 2012) to identify and normalize time mentions in the texts.",
"Although interesting results have been reported (UzZaman et al., 2013), the coverage is limited to the finite number of rules they implement.",
"The time perspective and its importance in various social science and psychological studies is well established in literature.",
"It plays a fundamental role in our interpersonal relation influenced by cognitive process (Zimbardo and Boyd, 2015).",
"This is also useful in forming goals, expectations and imaginations.",
"Time perspective is a fundamental process which, in turn, is influenced by many user attributes such as age, religion, and education.",
"In their research, Zimbardo and Boyd (2015) have shown that the negative view of the past is related to depression, anxiety, unhappiness, and low self-esteem but the positive view of the past is related to self-esteem and happiness.",
"The hedonistic view of the present is related to novelty seeking and sensation seeking whereas the fatalistic view of the present is related to aggression, anxiety and depression.",
"The future is related to conscientiousness but negatively correlated with depression and anxiety.",
"Another research suggests that the satisfaction with life of the older adults depends on their positive views of past (Kazakina, 1999).",
"In their research, Drake et al. (2008) described that the past-positive is positively correlated to happiness.",
"The link between the past-negative orientation and psychological distress such as depression and anxiety is well established in the literature (Cully et al., 2001).",
"A focus on the future is very effective for functioning positively.",
"The future orientation also helps in better health in later life (Kahana et al., 2005).",
"In a research study, George (2009) evaluated that subjective well-being, happiness, psychological well-being, positive affect and morale refer to a positive orientation towards life.",
"Past research has established that the time perspective is an important factor to determine the human emotional intelligence (Stolarski et al., 2011).",
"In our work, we measure the relationship between different levels of intelligence and the sentiment view of temporal orientation using a more objective sense of the time perspective, i.e. temporal orientation derived from tweets on social media.",
"In a social science research, Guthrie et al. (2009) have shown that the future time perspective is associated with the current socioeconomic status, and the past-fatalistic time perspective is associated with the both current and childhood socioeconomic status.",
"Although these kinds of research exist extensively in psychological studies, they are not well explored empirically using a more objective sense of the time perspective, i.e. temporal orientation.",
"To the best of our knowledge, only a very few studies exist that focus on temporal orientation, and these consider only coarse-grained classes (Schwartz et al., 2015; Park et al., 2017).",
"In these studies, many user attributes were correlated with temporal orientation, including conscientiousness, age, gender, openness, extraversion, agreeableness, neuroticism, satisfaction with life, depression, IQ, number of friends, etc.",
"In our work, we incorporated fine-grained temporal orientation and found the correlation with the users' age, education, relationship, intelligence, and optimism.",
"The fine-grained study of temporal orientation has previously existed only at the theoretical level, validated on very limited user datasets.",
"Besides validating these findings empirically for a large number of users, we also discuss some previously unexplored relationships.",
"We first create a deep temporal-orientation classifier to capture the temporal orientation ( past , present and future ) of the users' tweets.",
"Thereafter we further classify the users' tweets at the fine-grained level by associating sentiment, i.e. positive, negative or neutral for each temporal category.",
"We compare our temporal-orientation classifier with an existing state-of-the-art method.",
"The temporal orientation of tweets is defined by classifying each tweet T into one of the temporal categories t, where t ∈ { past, present, future }.",
"Given the following tweet Let me change lanes and turn left legally , the temporal orientation classifier should predict it as an instance of future orientation.",
"At first we create a temporal oriented tweet dataset in a minimally supervised way by exploiting the hashtag information.",
"Deep Bi-LSTM network is then trained on this dataset.",
"We use LSTM networks (Hochreiter and Schmidhuber, 1997) as these are well known for capturing the long-term dependencies within the text.",
"Many times we fail to capture the temporal orientation of a text using just the tense information or the existing temporal keywords.",
"In particular, the tweet Today I have a plan for a meeting at night. is future-oriented.",
"Here, the temporal keyword Today has a time sense of present whereas the tense of the verb is also present .",
"Deep learning networks have been very useful for correctly capturing the temporal dimension of these kinds of tweets.",
"Although the basic Artificial Neural Networks (ANNs) (Schalkoff, 1997) and Convolutional Neural Networks (CNNs) (LeCun et al., 1995) capture the temporal orientation of many tweets correctly, they fail when the validating temporal information in the tweet involves a long-distance dependency.",
"For example, the tweet Working in the same unit today with different staff was much better. has a past temporal orientation.",
"Here, the words which carry a temporal sense (i.e. working, today, was ) are placed at a distance from each other.",
"This motivates us to use the LSTM network.",
"Bidirectional Long Short Term Memory Networks (Bi-LSTM): LSTMs are a special kind of recurrent neural network (RNN) capable of learning long-term dependencies in the text by effectively handling the vanishing or exploding gradient problem.",
"Bidirectional LSTMs (Schuster and Paliwal, 1997) train two LSTMs, instead of one, on the input sequence: the first on the input sequence and the second on a reversed copy of it.",
"It is designed to capture information of the sequential dataset and maintain the contextual features from the past and the future.",
"This provides additional context to the network and can result in faster and even fuller learning on the problem, without keeping redundant context information.",
"The previous study on machine learning-based temporal orientation classification used supervised classification with a manually created training set (Schwartz et al., 2015).",
"The multi-class classification was based on a one vs rest approach.",
"However, adopting multiple binary classifiers is not always the best way to deal with a multi-class classification problem: it requires building three independent classifiers, one for each temporal category, which consumes more time.",
"Unlike this approach, we incorporate a deep learning-based multi-class classification method for the temporal orientation.",
"The training corpus is generated in a minimally supervised way and fitted to the Bi-LSTM network.",
"Our experiment uses Bi-LSTM with 200 neurons at the input layer.",
"The loss function we use is categorical cross-entropy and the optimizer is Root Mean Square Propagation (RMSprop).",
"We repeat the training for 100 epochs with a batch size of 128.",
"We also employ dropout (Srivastava et al., 2014) for regularization with a dropout rate of 0.2 to prevent over-fitting.",
"All of these attributes are finalized by parameter tuning with the performance obtained on 10-fold cross-validation using the grid search method.",
"Tweet vectors are generated using existing 200-dimensional GloVe vectors for tweets (Pennington et al., 2014), which are trained on 27 billion tweets.",
"We also validate our model on the validation set which was 10% of the training set.",
"We use an existing sentiment classifier available with the NLTK toolkit (Bird, 2006) to classify the user-level tweets into positive, negative or neutral.",
"Sentiment is added at the fine-grained level of the temporal orientation.",
"Given the tweets of a user, the sentiment view of temporal orientation of that user is defined by the following equation: orientation_{s,t}(user) = |tweets_{s,t}(user)| / |tweets_t(user)| (1), where t ∈ { past, present, future } and s ∈ { positive, negative, neutral }.",
"Here, we first classify each user's tweet into the past, present or future temporal category.",
"Then for each temporal category, we find the percentage of each sentiment class (i.e positive, negative or neutral) to obtain the sentiment view of temporal orientation.",
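The per-user computation in equation (1) can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the (temporal, sentiment) label pairs are assumed to come from the two classifiers described in the text, and the function name is illustrative, not the authors' code.

```python
from collections import Counter

def sentiment_view_of_orientation(labeled_tweets):
    """Equation (1): the share of a user's tweets in temporal category t
    that carry sentiment s. `labeled_tweets` is a list of
    (temporal, sentiment) pairs, one per tweet."""
    per_t = Counter(t for t, _ in labeled_tweets)   # |tweets_t(user)|
    per_ts = Counter(labeled_tweets)                # |tweets_{s,t}(user)|
    return {(t, s): per_ts[(t, s)] / per_t[t] for (t, s) in per_ts}

# Toy user with four classified tweets.
user = [("past", "positive"), ("past", "negative"),
        ("past", "positive"), ("future", "neutral")]
scores = sentiment_view_of_orientation(user)
```

For this toy user, scores[("past", "positive")] is 2/3 and scores[("future", "neutral")] is 1.0, matching the definition that each score is normalized within its temporal category.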
"We measure the correlation between a user's sentiment view of temporal orientation and their different psycho-demographic attributes.",
"The GloVe vectors are available at https://nlp.stanford.edu/projects/glove/.",
"As we use a sentiment classifier from the well-known NLTK library, we do not validate it on a manually-tagged test set.",
"For experiments we categorize the datasets into three kinds: training, test and user-level.",
"Training set consists of 27k tweets, whereas the test set is manually annotated with 741 tweets.",
"The user-level tweets consist of 10 million tweets from 5,191 users mapped to their user-level features.",
"Training tweets are collected using the Twitter streaming API.",
"The tweets were collected during September and October 2017.",
"We consider day-wise trending topics during this period.",
"We only consider those hashtags which signify a temporal event.",
"Finally, we chose worldwide trending events and collected the tweets based on the hashtags.",
"The collection of the temporal tweets are based on the following three hypotheses:",
"(a) if a trending topic is of a future event then mostly people would write the futuristic tweets;",
"(b) if a trending topic is about a past incident, then the people would write more about the past but they also write about the present effects of that event;",
"(c) the tweets of a trending present event are the most critical to handle because, besides writing about the present incidents, people often link them with past incidents and also give opinions about the future effects.",
"The task was challenging as the tweets contain a lot of noise and people use various ways to refer to the past, the present and the future.",
"To deal with the pitfalls described in the hypotheses, we filter the tweets using a two-pass filtering method.",
"The method is based on two assumptions: (a) every meaningful sentence should contain a verb; (b) most past-oriented tweets have a verb in the past tense.",
"The first assumption is well established in the literature, whereas the second is based on our observation of the tweets and validation against a tense-based classifier.",
"All the developed resources are available at http://www.iitp.ac.in/ai-nlp-ml/resources.html, and the tweets are collected via the Twitter streaming API (https://developer.twitter.com/en/docs).",
"The reason for this selection strategy was the fact that, with the passage of time, future events become present events and present events become past events.",
"In the first pass of the filtering method, we filter out the tweets which do not contain a verb.",
"The verb part-of-speech tag is determined using the CMU tweet-tagger (Gim-pel et al., 2011).",
"In the second pass of the filtering method, we removed the tweets having tense as past from the tweets of the present and future events.",
"The CMU tweet-tagger does not provide verbs in different sub-categories.",
"For this reason, we also retrieve the Part-of-Speech (PoS) tag information from the Stanford PoS-tagger (Manning et al., 2014) for all the tweets to get the subcategories of verb (i.e. VB, VBD, VBG, VBN, VBP, VBZ).",
"We observed that although the Stanford PoS-tagger assigned the required verb subcategories, it also incorrectly tagged some non-verbs as verbs.",
"This is the reason why we considered only those verbs for sub-categorization which were identified (as verbs) by the CMU tweet-tagger.",
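The two-pass filter described above can be sketched as follows. This is an illustrative sketch, not the authors' pipeline: the `pos_tags` stub stands in for the CMU tweet-tagger (verb detection) combined with the Stanford tagger (tense subcategories, VBD = past), and its tiny lexicon exists only so the example runs.

```python
def pos_tags(tweet):
    """Hypothetical stand-in tagger for illustration only: maps each token
    to a Penn-style tag via a tiny hand-written lexicon."""
    lexicon = {"went": "VBD", "go": "VB", "going": "VBG",
               "nice": "JJ", "party": "NN", "sunset": "NN"}
    return [(w, lexicon.get(w, "NN")) for w in tweet.lower().split()]

def two_pass_filter(tweets, event_orientation):
    kept = []
    for tweet in tweets:
        tags = [tag for _, tag in pos_tags(tweet)]
        # Pass 1: drop tweets that contain no verb at all.
        if not any(tag.startswith("V") for tag in tags):
            continue
        # Pass 2: for present/future events, drop past-tense tweets.
        if event_orientation in ("present", "future") and "VBD" in tags:
            continue
        kept.append(tweet)
    return kept
```

For a future event, two_pass_filter(["went party", "going party", "nice sunset"], "future") keeps only "going party": the first tweet is past tense and the third has no verb.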
"We varied the training set starting from 3K (equally distributed) to 30K and observed that the accuracy on the gold standard test set did not improve after 27K training instances.",
"A few example tweets with their trending topics are depicted in Table 1.",
"Test Set We evaluate our temporal-orientation classifier on a manually created test set.",
"To get a proper assessment of the user-level data, we randomly selected 800 tweets from the user-level test tweets.",
"Three annotators (post-graduate level students) were asked to tag the tweets in one of the four available classes, namely past , present , future and other .",
"The annotation guidelines were as follows: Tag a tweet as past if it talks about an event which has both started and ended, or if the underlying temporal connotation of the tweet refers to the past.",
"Tag a tweet as present if it talks about an event which has started but not yet ended, or if the tweet has a present temporal connotation; tag a tweet as future if it talks about an event which is yet to happen.",
"Tag a tweet as other in case they found it difficult to get the exact temporal tag for the tweets.",
"We measured the multi-rater kappa agreement (Fleiss, 1971) among the annotators and it was found to have a substantial kappa value of 0.82.",
"The high kappa value indicates that associating text with the temporal dimensions past, present, future, and other is a relatively straightforward task for humans, who can draw on world knowledge beyond the words themselves (Dias et al., 2014).",
"Moreover, our inter-annotator agreement value is in line with the literature.",
"Finally, we select the temporal class of a tweet based on majority voting among the annotators.",
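The majority-voting step over the three annotators' labels can be sketched as follows. This is a minimal sketch; the tie-handling choice (returning None when no majority exists) is our assumption, since the text does not specify how ties were resolved.

```python
from collections import Counter

def majority_label(annotations):
    """Return the label chosen by the majority of annotators,
    or None if there is no majority (e.g., a three-way tie)."""
    counts = Counter(annotations).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # no majority among annotators
    return counts[0][0]
```

For example, majority_label(["past", "past", "future"]) returns "past", while a three-way split such as ["past", "present", "future"] yields None and would need a fallback policy.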
"The distribution of the annotated tweets is as follows: past 375 tweets, present 164 tweets, future 202 tweets, and other 59 tweets; we removed the tweets tagged as other and used the remaining 741 tweets as the test set.",
"User-level Test Set The user-level dataset developed by Preotiuc-Pietro et al. (2015), consisting of 10 million tweets from 5,191 users mapped to their user-level psycho-demographic features, is used for the current work.",
"In particular, we use five psycho-demographic attributes such as age , education , intelligence , optimism , and relationship for our experiment.",
"The users' psycho-demographic features are automatically deduced based on the users' published texts.",
"Preotiuc-Pietro et al. (2015) used a predictive model to automatically infer user-level features.",
"The method uses various user properties (annotated using crowdsourcing), including age, gender, income, education, relationship status, optimism and life satisfaction, as well as all the tweets published by a user, to infer the user-level features.",
"The inter-annotator agreement value for the same task in Schwartz et al. (2015) is 0.83.",
"We only considered the past , present and future classes for the reason justified in Schwartz et al. (2015).",
"We first evaluate our temporal orientation classifier which measures the orientation of each tweet as either of past , present or future .",
"The classifier was trained on the training set and evaluated on the test set.",
"We obtain the highest accuracy of 78.27% over 741 test samples.",
"For comparative evaluation, we consider a strong baseline system proposed by Schwartz et al. (2015).",
"The baseline system was built following a supervised learning strategy over different features such as ngrams, time expression, PoS tags, tweet length, and temporal class-specific lexicons.",
"The system achieved an accuracy of 71.8% when tested over 500 manually annotated data.",
"The baseline was not reproducible as both the training and test set were manually tagged and the datasets were not available.",
"9 The baseline model was constructed using the manually annotated data, creation of which involved considerable efforts and expenses.",
"In contrast, we follow a minimally supervised method (which does not incur any manual annotation effort) to create our own datasets, which are of acceptable quality.",
"We show the results in Table 2.",
"Table 2: Precision, Recall and F1-measure of our proposed temporal orientation classification model on the manually annotated test data. Past: P 81.75, R 92.0, F1 86.57; Present: P 79.04, R 50.61, F1 61.71; Future: P 71.02, R 75.24, F1 73.07.",
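The per-class precision, recall, and F1 values reported above follow directly from the confusion matrix. A generic sketch of that computation is below; the toy matrix is illustrative, not the paper's actual counts.

```python
def per_class_prf(confusion, labels):
    """Per-class precision/recall/F1 from a confusion matrix indexed as
    confusion[gold][predicted]."""
    out = {}
    for c in labels:
        tp = confusion[c][c]
        fp = sum(confusion[g][c] for g in labels if g != c)
        fn = sum(confusion[c][p] for p in labels if p != c)
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        out[c] = (p, r, f1)
    return out

labels = ["past", "present", "future"]
confusion = {"past":    {"past": 8, "present": 1, "future": 1},
             "present": {"past": 3, "present": 5, "future": 2},
             "future":  {"past": 1, "present": 1, "future": 8}}
metrics = per_class_prf(confusion, labels)
```

With this toy matrix the past class has recall 8/10 = 0.8 and precision 8/12, mirroring how the lower present-class recall in Table 2 arises from present tweets being scattered into the other rows' columns.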
"We approached the authors of Schwartz et al. (2015) for the data.",
"They did not share the data due to copyright issues.",
"This is the reason for generating our own gold-standard test set.",
"Results in Table 2 show that the past class is the most correctly classified followed by the future and the present.",
"We observe low recall for the present class as many present tweets were mis-classified into either the past or the future class.",
"The confusion matrix is shown in Figure 2.",
"Figure 2: Confusion matrix for the temporal orientation classification.",
"The present class is mis-classified into future when the tweet is of the declarative type.",
"For example, the tweet Its not a casserole as theres no binding matrix has present orientation, but our classifier classifies it as future.",
"Another reason could be that the words signaling present temporal orientation are not in their correct form (Its, theres).",
"The present class is mis-classified as past mainly in cases where a verb is in the past tense but the tweet actually has present orientation.",
"For example, the tweet For me gloves and mitts made for Cross Country skiing work well for ventilated warmth is mis-classified as past because of the word made, which is in the past tense.",
"Tweets with future orientation are mostly mis-classified as past.",
"Such mis-classifications occur either because of the presence of the past tense or because the tweet is a compound sentence containing an independent clause with past orientation.",
"For example, the tweet Hoping to have fun among my friends but wishing I were with you instead has future orientation but is mis-classified as past.",
"We measure the potential limitations of the NLTK sentiment classifier on 100 randomly selected tweets from the test set.",
"Manual inspection shows that the classifier generally mis-classifies tweets whose sentiment is not clearly expressed (example: Big Trucks parked all over).",
"In some cases, tweets carrying conflicting sentiment are mis-classified as either positive or negative.",
"For example, the tweet I am very sorry that is a working weekend for me but thanking you very much for the invitation carries conflicting sentiment, but the classifier classified it as negative.",
"We measure the predictive power of the sentiment view of temporal orientation by performing regression on different psycho-demographic factors.",
"The correlation results between the users' sentiment view of temporal orientation and their psycho-demographic factors using linear regression are presented in Table 3 and Table 4.",
"The performance is measured using a standard metric, namely Pearson's correlation coefficient r between the inferred and the target values.",
"All the results in Table 3 and Table 4 are statistically significant when tested against null hypothesis ( p value < 0.05).",
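Pearson's r can be computed directly from the inferred and target value sequences; a generic sketch (not the authors' code; the significance test against the null hypothesis is omitted):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # perfectly linear
```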
"We measure the correlation between the Twitter users' psycho-demographic features and their sentiment view of temporal orientation.",
"In this section, all the discussions and analyses are based on the correlation results over the user test set.",
"We select age', education' and relationship' as demographic features for this study.",
"The correlation coefficients between the users' demographic attributes and their sentiment view of temporal orientation are shown in Table 3.",
"Results in Table 3 demonstrate that any user's past-orientation is significantly correlated (0.4677) with their age.",
"In other words, it suggests that as people age, they think more about the past than about the present and the future.",
"To the best of our knowledge, the psychology literature (Nurmi, 2005; Steinberg et al., 2009) has not established a correlation between past orientation and age.",
"Our finding is consistent with a recent computational study on the human temporal orientation (Schwartz et al., 2015) which shows positive correlation between age and the past orientation.",
"However, we also observed that the users' age has the highest positive correlation (0.5235) with future-positive.",
"Table 3 (correlation between users' sentiment view of temporal orientation and their demographic features using linear regression; columns: Past, Past-Pos, Past-Neg, Past-Neu, Present, Present-Pos, Present-Neg, Present-Neu, Future, Future-Pos, Future-Neg, Future-Neu): Age 0.4677, 0.3736, -0.0639, -0.3086, 0.0802, 0.4392, -0.0538, -0.3635, -0.4547, 0.5235, -0.0186, -0.4590; Education:degree -0.0577, -0.0281, -0.1340, 0.0853, 0.0347, -0.0402, -0.1588, 0.1013, 0.0340, -0.0393, -0.1470, 0.0807; Education:graduate degree -0.2214, -0.1837, -0.2136, 0.2625, -0.0082, -0.2139, -0.2454, 0.2898, 0.2004, -0.2259, -0.2603, 0.2817; Education:high school 0.1137, 0.0780, 0.1748, -0.1488, -0.0264, 0.0970, 0.2048, -0.1702, -0.0878, 0.0997, 0.1994, -0.1507; Relationship:divorced -0.3100, -0.2414, -0.2106, 0.3139, -0.0299, -0.2946, -0.2425, 0.3596, 0.2898, -0.3139, -0.2654, 0.3614; Relationship:in a relationship 0.0306, 0.0240, 0.0742, -0.0560, 0.0169, 0.0208, 0.0596, -0.0431, -0.0355, 0.0326, 0.0664, -0.0496; Relationship:married -0.0859, -0.0385, -0.1593, 0.1075, 0.0173, -0.0605, -0.1800, 0.1279, 0.0677, -0.0546, -0.1812, 0.1049; Relationship:single 0.1280, 0.0822, 0.1613, -0.1479, -0.0107, 0.1082, 0.1936, -0.1755, -0.1082, 0.1069, 0.1866, -0.1531.",
"It indicates that people become positively future-oriented as they age, which, though not surprising, is a somewhat novel finding.",
"The results indicate that considering temporal orientation without the sentiment dimension can be misleading: the overall future orientation has a negative correlation (-0.4547) with age, while future-positive has a positive correlation with age.",
"Figure 3 shows how the trends in the sentiment view of temporal orientation vary from age 10 to 60.",
"We observe that, for all the temporal classes, positive sentiment increases rapidly with age.",
"Most interestingly, for all temporal orientations, people grow more negative up to the age of 28, after which their negative sentiment steadily reduces.",
"We also observe that users' neutral sentiment decreases rapidly up to the age of 27 and then declines more steadily.",
"The second demographic attribute we considered is education'.",
"We measure the correlation between the temporal orientation and three different levels of education: degree , graduate degree , and high school .",
"In the psychological literature (Horstmanshof and Zimitat, 2007; Richardson et al., 2012), students' temporal orientation has been discussed as a new dimension for enhancing student engagement in academics.",
"It was found that first-year university students were more future-oriented than present- or past-oriented.",
"From our results in Table 3, we found that users with a degree-level education are present-oriented.",
"But interestingly, they neither think positively nor negatively: they express more neutral sentiment.",
"(Figure 3: Standardized sentiment view of temporal orientation of the users over their age.",
"Smoothing was done using loess smoothing estimates.",
"Here, pos = positive, neg = negative, neu = neutral.)",
"Users with a graduate-degree education are found to be future-oriented.",
"Here, the fine-grained classification suggests that they also express the neutral sentiment.",
"Interestingly, we found that users with a high-school education had a positive correlation with past orientation.",
"However, when we considered the sentiment dimension, we found that it was actually correlated with present orientation with negative sentiment .",
"Our third and final demographic feature relationship' is categorized into four types in our current study: divorced , in a relationship , married , and single .",
"From the results in Table 3, we observe that users who are divorced are found to be more future-oriented, and they seem to express neutral sentiment.",
"Users who are in a relationship seem to be more past-oriented.",
"They are also found to be negatively minded.",
"Married people are found to be present-oriented while expressing neutral sentiment.",
"Users who are single are generally future-oriented, but they are negative about it.",
"We chose two psychological factors, intelligence and optimism.",
"The correlation coefficient results are shown in Table 4.",
"The intelligence level of the users was measured in three sub-categories: intelligence:below average, intelligence:average, and intelligence:much above.",
"Some novel findings have been observed through our results.",
"We found a modest yet significant positive correlation between below-average intelligence and a negative view of the present orientation.",
"It suggests that users with below-average intelligence are present-oriented but seem to view it negatively.",
"Surprisingly, we found that users of average intelligence are past-oriented, but considering the sentiment dimension they seem to be more future-positive.",
"However, this should be validated with further investigation.",
"We found that users with much-above-average intelligence are more future-oriented.",
"Interestingly, we found a negative correlation with future-positive.",
"However, we found a positive correlation (0.3614) with future-neutral, which suggests that users with much-above-average intelligence are futuristic and express a neutral view.",
"We chose three categories of optimism for our observation: optimistic, pessimistic, and neither.",
"The result shown in Table 4 suggests that the optimistic people are future oriented.",
"They also seem to have positive sentiment.",
"Although the link between future orientation and optimism is well established in the literature (Lennings, 2000; Busseri et al., 2013), there has been no empirical study over a large number of users.",
"We find a relatively high positive correlation between pessimism and present-negative, which suggests that pessimistic people are negatively minded and focus more on the present.",
"People who are neither optimistic nor pessimistic are found to be future-oriented with neutral sentiment, which is also a novel finding.",
"This paper presents the first large-scale study associating the psycho-demographic profiles of Twitter users with their sentiment view of temporal orientation, based on the language they use on Twitter.",
"We first detect the temporal orientation of tweets using a Bi-LSTM-based temporal orientation classifier.",
"We generated the temporal categories of our training set in a minimally supervised way.",
"We created a benchmark dataset for the evaluation of our temporal orientation classifier.",
"The temporal orientation classifier achieved an accuracy of 78.27% when run on the manually tagged test data.",
"We added the sentiment dimension at the fine-grained level of the temporal orientation.",
"The associations between users' sentiment view of temporal orientation and their various psycho-demographic attributes (age, education, intelligence, optimism, and relationship) are somewhat novel in the context of computational social science.",
"Whereas previous studies of temporal orientation concentrated on a coarse-grained level, we focused on a fine-grained level of temporal orientation, which opens up aspects of social, economic, and psychological research that were not previously possible at large scale.",
"Acknowledging the possible limitations of this study, including the quality of the sentiment classifier and the low recall of the present temporal orientation, in future work we will consider a more sophisticated sentiment classifier for better performance and incorporate further linguistic insights to improve the temporal orientation classifier.",
"We would also like to extend our work with links to further behavioral studies and analyses.",
"Asif Ekbal acknowledges Young Faculty Research Fellowship (YFRF), supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia).",
"Mohammed Hasanuzzaman and Andy Way would like to acknowledge the ADAPT Centre for Digital Content Technology, funded under the SFI Research Centres Programme (Grant 13/RC/2106) and co-funded under the European Regional Development Fund."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"objective",
"objective",
"other",
"other"
] |
[
"Emotion recognition in conversations is crucial for the development of empathetic machines.",
"Present methods mostly ignore the role of inter-speaker dependency relations while classifying emotions in conversations.",
"In this paper, we address recognizing utterance-level emotions in dyadic conversational videos.",
"We propose a deep neural framework, termed conversational memory network, which leverages contextual information from the conversation history.",
"The framework takes a multimodal approach comprising audio, visual and textual features with gated recurrent units to model past utterances of each speaker into memories.",
"Such memories are then merged using attention-based hops to capture inter-speaker dependencies.",
"Experiments show an accuracy improvement of 3-4% over the state of the art.",
"Development of machines with emotional intelligence has been a long-standing goal of AI.",
"With the increasing infusion of interactive systems in our lives, the need for empathetic machines with emotional understanding is paramount.",
"Previous research in affective computing has looked at dialogues as an essential basis to learn emotional dynamics (Sidnell and Stivers, 2012; Poria et al., 2017a; Zhou et al., 2017).",
"Since the advent of Web 2.0, dialogue videos have proliferated across the internet through platforms like movies, webinars, and video chats.",
"Emotion detection from such resources can benefit numerous fields like counseling (De Choudhury et al., 2013), public opinion mining (Cambria et al., 2017), financial forecasting (Xing et al., 2018), and intelligent systems such as smart homes and chat-bots (Young et al., 2018).",
"In this paper, we analyze emotion detection in videos of dyadic conversations.",
"A dyadic conversation is a form of a dialogue between two entities.",
"We propose a conversational memory network (CMN), which uses a multimodal approach for emotion detection in utterances (a unit of speech bound by breathes or pauses) of such conversational videos.",
"Emotional dynamics in a conversation are known to be driven by two prime factors: self- and inter-speaker emotional influence (Morris and Keltner, 2000; Liu and Maitlis, 2014).",
"Self-influence relates to the concept of emotional inertia , i.e., the degree to which a person's feelings carry over from one moment to another (Koval and Kuppens, 2012).",
"Inter-speaker emotional influence is another trait where the other person acts as an influencer in the speaker's emotional state.",
"Conversely, speakers also tend to mirror emotions of their counterparts (Navarretta et al., 2016).",
"Figure 1 provides an example from the dataset showing the presence of these two traits in a dialogue.",
"Existing works in the literature do not capitalize on these two factors.",
"Context-free systems infer emotions based only on the current utterance in the conversation (Bertero et al., 2016).",
"In contrast, state-of-the-art context-based networks such as that of Poria et al. (2017b) use long short-term memory (LSTM) networks to model speaker-based context; these suffer from an inability to summarize long-range dependencies and from unweighted influence of the context, leading to model bias.",
"Our proposed CMN incorporates these factors by using emotional context information present in the conversation history.",
"It improves speaker-based emotion modeling by using memory networks, which are efficient in capturing long-term dependencies and summarizing task-specific details using attention models (Weston et al., 2014; Graves et al., 2014; Young et al., 2017).",
"Specifically, the memory cells of CMN are continuous vectors that store the context information found in the utterance histories.",
"CMN also models interplay of these memories to capture interspeaker dependencies.",
"CMN first extracts multimodal features (audio, visual, and text) for all utterances in a video.",
"In order to detect the emotion of a particular utterance, say u i , it gathers its histories by collecting previous utterances within a context window.",
"Separate histories are created for both speakers.",
"These histories are then modeled into memory cells using gated recurrent units (GRUs).",
"After that, CMN reads both the speaker's memories and employs attention mechanism on them, in order to find the most useful historical utterances to classify u i .",
"The memories are then merged with u i using an addition operation weighted by the attention scores.",
"This is done to model inter-speaker influences and dynamics.",
"The whole cycle is repeated for multiple hops and finally, this merged representation of utterance u i is used to classify its emotion category.",
"The contributions of this paper can be summarized as follows: 1. We propose an architecture, termed CMN, for emotion detection in dyadic conversations that considers the utterance histories of both speakers to model emotional dynamics.",
"The architecture is extensible to multi-speaker conversations in formats such as textual dialogues or conversational videos.",
"2. When applied to videos, we adopt a multimodal approach to extract diverse features from utterances.",
"It also makes our model robust to missing information.",
"3. CMN provides a significant increase in accuracy of 3-4% over previous state-of-the-art networks.",
"A variant, CMN self, which does not consider the inter-speaker relation in emotion detection, also outperforms the state of the art by a significant margin.",
"The remainder of the paper is organized as follows: Section 2 provides a brief literature review; Section 3 formalizes the problem statement; Section 4 describes the proposed method in detail.",
"Over the years, emotion recognition as an area of research has seen contributions from researchers across varied fields like signal processing, machine learning, cognitive and social psychology, natural language processing, etc. (Picard, 2010).",
"Ekman (1993) provided initial findings relating facial expressions to universal indicators of emotion.",
"Datcu and Rothkrantz (2008, 2011) showed the importance of acoustic cues in affect modeling.",
"A large section of researchers approaches emotion recognition from a multimodal learning perspective.",
"Hence, many works used visual and audio features together for detecting affect (Busso et al., 2004; Castellano et al., 2008; Ranganathan et al., 2016).",
"An in-depth review of the literature on these systems is provided by D'Mello and Kory (2015).",
"Our work, which performs context-sensitive recognition (Wollmer et al., 2010), uses three modalities: audio, visual, and text.",
"Recently, this combination of modalities has provided the best performance in affect recognition systems (Poria et al., 2017b; Wang et al., 2017; Tzirakis et al., 2017), thus motivating the use of a multimodal approach.",
"Previous works have focused on conversations as a resourceful event for emotion analysis.",
"Ruusuvuori (2013) provides an in-depth analysis of how emotions affect social interactions and conversations.",
"In fact, significant works have characterized emotional dynamics as an interactive phenomenon, rather than a within-person, one-directional one (Richards et al., 2003; Hareli and Rafaeli, 2008).",
"Such emotional dynamics are modeled by observing transition properties.",
"Yang et al. (2011) study patterns of emotion transitions and show evidence of emotional inertia.",
"Xiaolan et al. (2013) use finite state machines to model transitions using stimuli and personality characteristics.",
"Our work also tries to model emotional transitions using multimodal features.",
"Unlike these works, however, we use memory networks to achieve the same.",
"The use of memory networks has been instrumental in the progress of multiple research problems, e.g., question-answering (Weston et al., 2014; Sukhbaatar et al., 2015; Kumar et al., 2016), machine translation (Bahdanau et al., 2014), speech recognition (Graves et al., 2014), and commonsense reasoning (Cambria et al., 2018).",
"The repeated read and write to their memory cells is often coupled with attention modules, thus allowing it to filter only relevant memories.",
"Our model is loosely inspired by Sukhbaatar et al. (2015).",
"Unlike their model, which directly encodes sentences into memories, we perform temporal sequence processing on our utterance histories using GRUs.",
"We also extend their architecture to handle two speakers while keeping the possibility to add more.",
"Finally, our model is different in the fact that we use multimodal features for input and processing.",
"Our goal is to infer the emotion of utterances present in a dyadic conversation.",
"Let us define a dyadic conversation to be an asynchronous exchange of utterances between two persons P a and P b .",
"Both speakers produce sequences of utterances U_a and U_b, respectively.",
"Here, U_λ = (s_1, s_2, ..., s_{l_λ}) is ordered temporally, where s_i is the i-th utterance by P_λ and l_λ is the total number of utterances spoken by person P_λ, λ ∈ {a, b}.",
"Overall, the utterances of both speakers can be linearly ordered by temporal occurrence as (u_1, u_2, ..., u_{l_a+l_b}), where u_j ∈ U_a or u_j ∈ U_b.",
"Our model takes as input an utterance u i whose emotion category (Section 5.1) needs to be classified.",
"To get its history, the preceding K utterances of each person are separately collected as hist_a and hist_b.",
"Here, K serves as the length of the context window for the history of u_i.",
"hist_λ = { u_j ∈ U_λ : j < i }; hist_λ is also ordered temporally.",
"At the beginning of the conversation, histories contain fewer than K utterances, i.e., |hist_λ| < K.",
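The history-collection step above can be sketched as follows (a minimal illustration; the function name and the (speaker, utterance) pair representation are assumptions, not the paper's interface):

```python
def histories(utterances, i, K):
    """Collect up to K preceding utterances per speaker for target index i.

    `utterances` is a temporally ordered list of (speaker, utterance)
    pairs with speakers 'a' and 'b'; returned histories are oldest-first.
    """
    hist = {"a": [], "b": []}
    for speaker, utt in utterances[:i]:
        hist[speaker].append(utt)
    return hist["a"][-K:], hist["b"][-K:]

convo = [("a", "u1"), ("b", "u2"), ("a", "u3"), ("b", "u4"), ("a", "u5")]
hist_a, hist_b = histories(convo, 4, K=2)  # histories for u5
```

Early in a conversation the returned histories are simply shorter than K, matching the |hist_λ| < K case above.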
"In the remaining sections, for brevity, we explain the processes using a subscript λ which can instantiate to either a or b, i.e., λ ∈ {a, b}.",
"We start by detailing the multimodal feature extraction scheme for all utterances followed by the mechanism to model emotional context using memory networks.",
"The first phase of CMN is to extract multimodal features of all utterances in the conversations.",
"The dyadic conversations are present in the form of videos.",
"Each utterance of a particular conversation is thus a small segment of the full video.",
"For each utterance, we extract features for the modes: audio, visual and text.",
"The process of feature extraction for each mode is described below.",
"We extract features from the transcript of an utterance video using convolutional neural networks (CNNs).",
"CNNs are effective in learning high level abstract representations of sentences from constituting words or n-grams (Kalchbrenner et al., 2014).",
"To get our sentence representation, we use a simple CNN with one convolutional layer followed by max-pooling (Kim, 2014; Poria et al., 2016).",
"Specifically, the convolution layer consists of filters of sizes 3, 4, and 5, with 50 feature maps each.",
"Max-pooling is employed on these feature maps with a pooling window of size 2 .",
"Finally, a fully connected layer is used with 100 neurons.",
"The activations of this layer form our sentence representation t u .",
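The output shapes of this text-CNN pipeline can be checked with simple arithmetic; the sketch below assumes valid (no-padding) convolution and non-overlapping pooling, which the paper does not state explicitly:

```python
def text_cnn_dims(sent_len, filter_sizes=(3, 4, 5), n_maps=50, pool=2):
    """Per-filter feature-map shapes for the conv + max-pool text pipeline.

    Assumes valid convolution (output length sent_len - k + 1) and
    non-overlapping max-pooling with window `pool`.
    """
    return {k: (n_maps, (sent_len - k + 1) // pool) for k in filter_sizes}

dims = text_cnn_dims(20)  # sentence length 20 is illustrative
```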
"To extract audio features we use openSMILE (Eyben et al., 2010).",
"It is an open-source software which provides high dimensional audio vectors.",
"These vectors comprise features such as loudness, Mel-spectra, MFCCs, and pitch.",
"Audio features play a significant role in providing information on the emotional state of a speaker (Song et al., 2004).",
"In fact, the literature shows that there exists a high correlation between many statistical measures of speech and speaker emotion.",
"For example, high pitch and a fast speaking rate often denote anger, while sadness is associated with a low standard deviation of pitch and a slow speech rate (Dellaert et al., 1996; Amir, 1998).",
"In this work, we use the IS13 ComParE 1 config file which extracts a total of 6373 features for each utterance video.",
"Z-standardization is performed for voice normalization, and the dimension of the audio vector is reduced to 100 using a fully-connected neural layer.",
"This provides the final audio feature vector a u .",
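The z-standardization step can be sketched as follows (a generic per-vector illustration using the population standard deviation; the fully connected dimensionality reduction to 100 is omitted):

```python
import math

def z_standardize(vec):
    """Z-standardize a feature vector: zero mean, unit standard deviation."""
    n = len(vec)
    mean = sum(vec) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in vec) / n)
    return [(x - mean) / std for x in vec]

z = z_standardize([3.0, 5.0, 7.0])
```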
"Facial expressions and visual surrounding provide rich emotional indicators.",
"We use a 3D-CNN to capture these details from the utterance video.",
"Apart from the benefits of extracting relevant features from each image frame, 3D-CNN also extracts spatiotemporal features across frames (Tran et al., 2015).",
"This leads to the identification of emotional expressions like a smile or frown.",
"The working of a 3D-CNN is identical to its 2D counterpart, with the input being a video v of dimension (3, f, h, w).",
"Here, 3 represents the RGB channels, and f, h, w are the number of frames, the height, and the width of each frame, respectively.",
"For the convolution operation, a 3D filter f_l of dimension (f_m, 3, f_d, f_h, f_w) is used, where f_m, f_d, f_h, and f_w represent the number of feature maps and the depth, height, and width of the filter, respectively.",
"Max-pooling is applied to the output of this convolution across a 3D sliding window of dimension (m_p, m_p, m_p).",
"In our model, we use 128 feature maps for 3D filters of size 5.",
"For pooling, we set m_p to 3, whose output is fed to a fully connected layer with 100 neurons.",
"All the values are decided using hyperparameter tuning (see Section 5).",
"For the input utterance, the activations of this layer form the video representation v u .",
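The 3D convolution and pooling output sizes can likewise be derived arithmetically; this sketch assumes valid convolution and non-overlapping pooling, and the input size (32 frames of 64x64) is illustrative:

```python
def conv3d_out(f, h, w, f_d, f_h, f_w):
    """Valid 3D-convolution output size for a (3, f, h, w) video input."""
    return f - f_d + 1, h - f_h + 1, w - f_w + 1

def pool3d_out(f, h, w, m_p):
    """Non-overlapping max-pooling over an (m_p, m_p, m_p) window."""
    return f // m_p, h // m_p, w // m_p

# Filter size 5 and pooling window m_p = 3, as in the text.
cf, ch, cw = conv3d_out(32, 64, 64, 5, 5, 5)
pf, ph, pw = pool3d_out(cf, ch, cw, 3)
```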
"Fusion: We perform feature level fusion to map the individual modalities to a joint space.",
"This is done through a simple feature concatenation.",
"Thus, the extracted features t_u, a_u, and v_u are joined to form the utterance representation u = [t_u; a_u; v_u] of dimension d_in = 300.",
"This multimodal representation is generated for all utterances in a conversation.",
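The concatenation fusion is straightforward; a minimal sketch (each modality vector is 100-dimensional, giving d_in = 300):

```python
def fuse(t_u, a_u, v_u):
    """Feature-level fusion by concatenation: u = [t_u; a_u; v_u]."""
    return list(t_u) + list(a_u) + list(v_u)

# Each modality vector is 100-dimensional, so d_in = 300.
u = fuse([0.0] * 100, [0.0] * 100, [0.0] * 100)
```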
"The literature contains numerous fusion techniques for multimodal data (Atrey et al., 2010; Zadeh et al., 2017; Poria et al., 2017c).",
"Exploring these on CMN, however, is beyond the scope of this paper and left as a future work.",
"For classifying the emotion of an utterance u i , its corresponding histories ( hist a and hist b ) are taken.",
"Each history hist_λ contains the preceding K utterances by person P_λ (see Section 3).",
"Here, both u_i and the utterances in the histories are represented by their multimodal feature vectors in ℝ^{d_in} (Figure 2).",
"The histories are first modeled into memory cells using GRUs.",
"This provides the memories with context information summarized by the GRU.",
"We call this step memory representation.",
"Following cognitive evidence of self-emotional dynamics, we model separate memory cells for each person.",
"Thus, identical but separate computations are performed on both histories.",
"From these memories, content relevant to utterance u i is then filtered out using attention mechanism over multiple input/output hops.",
"At each hop, both memories are accumulated and merged with u_i to model inter-speaker emotional dynamics.",
"First, we describe our model as a single layer memory network which runs one hop operation on the memories.",
"Here, we explain the representation scheme of the memories for both histories and the input/output operations on them along with attention mechanism.",
"The memory representation for each history is generated using a GRU for modeling emotion transitions.",
"First, we define the GRU cell.",
"Gated Recurrent Unit: GRUs are a gating mechanism in recurrent neural networks introduced by Cho et al. (2014).",
"Similar to an LSTM (Hochreiter and Schmidhuber, 1997), a GRU provides simpler computation with similar performance.",
"At any timestep t, it utilizes two gates, r_t (reset gate) and z_t (update gate), to control how the current input utterance u_t is combined with the previous hidden state s_{t-1}.",
"The new state s_t is computed as: z_t = σ(V_z · u_t + W_z · s_{t-1} + b_z) (2); r_t = σ(V_r · u_t + W_r · s_{t-1} + b_r) (3); h_t = tanh(V_h · u_t + W_h · (s_{t-1} ⊙ r_t) + b_h) (4); s_t = (1 - z_t) ⊙ h_t + z_t ⊙ s_{t-1} (5).",
"Here, the V and W are parameter matrices, the b are parameter vectors, σ is the sigmoid function, and ⊙ represents element-wise multiplication.",
"The above equations can be summarized as s_t = GRU(s_{t-1}, u_t).",
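Equations (2)-(5) can be sketched in pure Python; this is a minimal illustration with toy 2-dimensional identity weights, not the authors' implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def gru_step(s_prev, u_t, params):
    """One GRU step following equations (2)-(5); params holds the
    (V, W, b) triples for the z, r and h transforms."""
    Vz, Wz, bz, Vr, Wr, br, Vh, Wh, bh = params
    z = [sigmoid(a + b + c) for a, b, c in
         zip(matvec(Vz, u_t), matvec(Wz, s_prev), bz)]
    r = [sigmoid(a + b + c) for a, b, c in
         zip(matvec(Vr, u_t), matvec(Wr, s_prev), br)]
    sr = [s_i * r_i for s_i, r_i in zip(s_prev, r)]  # s_{t-1} (.) r_t
    h = [math.tanh(a + b + c) for a, b, c in
         zip(matvec(Vh, u_t), matvec(Wh, sr), bh)]
    return [(1 - z_i) * h_i + z_i * s_i
            for z_i, h_i, s_i in zip(z, h, s_prev)]

# Toy 2-dimensional state and input with identity weights, zero biases.
I = [[1.0, 0.0], [0.0, 1.0]]
zeros = [0.0, 0.0]
params = (I, I, zeros, I, I, zeros, I, I, zeros)
s1 = gru_step([0.0, 0.0], [1.0, -1.0], params)
```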
"Memory Representation: For each λ ∈ {a, b}, a memory representation M_λ = [m_1, ..., m_K] for hist_λ is generated using a GRU.",
"To grasp the temporal context, the K utterances in hist_λ are framed as a sequence (starting from the oldest one) and fed to GRU_λ.",
"At each timestep t ∈ [1, K], the internal state s_t of GRU_λ (equation 5) forms the t-th memory cell m_t of the memory representation M_λ.",
"Memory Input: This step takes the memory representation M_λ and performs an attention mechanism on it, resulting in an attention vector p ∈ ℝ^K.",
"First, the current utterance u_i is embedded into a vector q_i ∈ ℝ^d using a projection matrix B ∈ ℝ^{d × d_in}.",
"To find the relevance of each memory cell m_t's context to q_i, a match between the two is computed.",
"We do this by taking an inner product as follows: q_i = B · u_i (6); p_t = softmax(q_i^T · m_t) (7). Here, softmax(x_i) = e^{x_i} / Σ_j e^{x_j}, and the attention vector p = {p_t} is a probability distribution over the input memories M_λ = {m_t} for t ∈ [1, K].",
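Equation (7) can be sketched as a generic dot-product attention over memory cells; the max-subtraction inside softmax is a standard numerical-stability step, not from the paper:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, memories):
    """Attention over memory cells: p_t = softmax(q^T . m_t), eq. (7)."""
    return softmax([dot(q, m) for m in memories])

p = attention([1.0, 0.0], [[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
```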
"Memory Output: First, a new set of memories is created using another GRU to get the new memory representation M′ = {m′_t}.",
"An output representation o ∈ R^d is then generated as the weighted sum of the attention vector p and the new memories M′: o = Σ_t p_t m′_t (8).",
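The memory input and output steps (equations 6-8) can be sketched as follows; the function and argument names are ours, and single-example (unbatched) shapes are assumed:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(u_i, M, M_new, B):
    """Attention read over one speaker's memories, equations (6)-(8).

    u_i: current utterance features, shape (d_in,)
    M: input memories [m_1 .. m_K], shape (K, d)
    M_new: new (output) memories [m'_1 .. m'_K], shape (K, d)
    B: projection matrix, shape (d, d_in)
    """
    q = B @ u_i         # q_i = B u_i                    (6)
    p = softmax(M @ q)  # p_t = softmax(q_i^T m_t)       (7)
    o = p @ M_new       # o = sum_t p_t m'_t             (8)
    return o, p
```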
"Final Prediction: To generate the predictions for the current utterance u i , we combine the output representations of both persons: o a and o b with u i 's representation q i and perform an affine transformation using matrix W o .",
"Softmax is applied to this final vector to get the emotion predictions: ŷ = softmax(W_o (q_i + o_a + o_b)) (9).",
"Categorical cross-entropy is used as the loss: Loss = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{C} y_{i,j} log2(ŷ_{i,j}) (10), where N denotes the total number of utterances across all videos and C is the number of emotion categories.",
"y_i is the one-hot ground-truth vector of the i-th utterance from the training set, and ŷ_{i,j} is the predicted probability of that utterance belonging to class j.",
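The loss of equation (10) can be written as a short NumPy function; the epsilon guard is our addition to avoid taking the log of zero:

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred):
    """Mean categorical cross-entropy with the base-2 log used in the text.

    y_true: one-hot ground-truth labels, shape (N, C)
    y_pred: predicted class probabilities, shape (N, C)
    """
    eps = 1e-12  # guard against log2(0)
    return -np.mean(np.sum(y_true * np.log2(y_pred + eps), axis=1))
```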
"Many recent works on memory networks adopt a multiple hop scheme in their network.",
"This repeated input and output cycle on the memories along with a soft attention module, leads to a refined representation of the memories (Sukhbaatar et al., 2015; Kumar et al., 2016).",
"Motivated by these works, we extend our model to perform R hops on the memories.",
"This is done by stacking the single hop layers (Section 4.2.1) as follows: At a particular hop r , the output memory of the previous hop M ( r 1 ) is used as the input memory of the current hop M ( r ) .",
"Output memory of current r th hop is generated using a new GRU ( r ) .",
"This constraint of sharing parameters adjacently between layers is added for reduction in total parameters and ease of training.",
"At every hop, the query utterance u_i's representation q_i is updated as: q_i^{(r+1)} = q_i^{(r)} + o_a^{(r)} + o_b^{(r)} (11), where o^{(r)} is calculated as per equation 8 using M^{(r)}.",
"After R hops, the final prediction is made using equation 9 as: ŷ = softmax(W_o q_i^{(R+1)}).",
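The R-hop query update of equation (11) can be sketched as below; the interface is our own, and the list of R+1 memory arrays per speaker encodes the adjacent sharing where the output memory of hop r serves as the input memory of hop r+1:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def hop_read(q, M_in, M_out):
    """One attention read: weights from the input memories, output from the new ones."""
    p = softmax(M_in @ q)  # attention over the K memory cells
    return p @ M_out       # weighted sum, as in equation (8)

def multi_hop(q, mems_a, mems_b, R):
    """q^{(r+1)} = q^{(r)} + o_a^{(r)} + o_b^{(r)} over R hops (equation 11).

    mems_a / mems_b: lists of R+1 memory arrays, each of shape (K, d);
    hop r reads with mems[r] as input memory and mems[r+1] as output memory.
    """
    for r in range(R):
        q = q + hop_read(q, mems_a[r], mems_a[r + 1]) \
              + hop_read(q, mems_b[r], mems_b[r + 1])
    return q
```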
"Algorithm 1 summarizes the overall CMN network.",
"We perform experiments on the IEMOCAP dataset 2 (Busso et al., 2008).",
"It is a multimodal database of 10 speakers (5 male and 5 female) involved in two-way dyadic conversations.",
"A pair of speakers is given multiple conversation scenarios, which are grouped into a single session.",
"(Footnote 2: http://sail.usc.edu/iemocap/; Figure 3 caption: each block represents an utterance, and the blocks are ordered as per temporal occurrence.)",
"All the conversations are segmented into utterances.",
"Each utterance is annotated using the following emotion categories: anger, happiness, sadness, neutral, excitement, frustration, fear, surprise, and other.",
"However, in our experiments, we consider the first four categories.",
"This is done to compare our method with state-of-the-art frameworks (Poria et al., 2017b; Rozgic et al., 2012).",
"The dataset provides rich video and audio samples for all the utterances along with transcriptions.",
"Apart from these emotional states, we also investigate the valence and arousal degrees of each utterance.",
"IEMOCAP provides labels for both these attributes on a 5-point Likert scale.",
"Following Aldeneh et al. (2017), we convert these attributes into 3 categories, namely, low (≤ 2), medium (> 2 and < 4), and high (≥ 4).",
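Assuming the boundary conditions are low ≤ 2 and high ≥ 4 (the comparison symbols appear to have been lost in extraction), the binning can be sketched as:

```python
def va_bucket(score):
    """Map a 5-point Likert valence/arousal rating to one of three categories:
    low (<= 2), medium (> 2 and < 4), high (>= 4)."""
    if score <= 2:
        return "low"
    if score < 4:
        return "medium"
    return "high"
```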
"The dataset configuration for the experiments is obtained from Poria et al. (2017b).",
"The first 8 speakers (Sessions 1–4) compose the training fold, while the last session is used as the testing fold.",
"Overall, the training and testing sets comprise 4290 utterances (120 conversational videos) and 1208 utterances (31 conversational videos), respectively.",
"There is no speaker overlap in the training and testing set to make the model person-independent.",
"In this section, we perform dataset exploration to check the existence of emotional influences.",
"Figure 3a presents the emotion sequences of two videos sampled from the dataset.",
"Both videos show the presence of self and inter-speaker emotional influences.",
"Visual exploration of videos from the dataset reveals a significant number of such instances in the conversations.",
"To provide quantitative evidence of the emotional influence patterns, we curate a non-exhaustive list of possible cases of influence.",
"For all utterances in the dataset, we sample their histories by setting K = 5 , i.e., five previous utterances (as per availability) from both speakers.",
"Cases 1 and 2 (Figure 3) represent scenarios where the emotion of the current utterance is influenced by the speaker themselves or by the other person, respectively.",
"In case 3, the utterance has relevant content in the histories that does not immediately precede it.",
"An effective attention mechanism provides the capability to capture this pattern.",
"Finally case 4 presents the situation when the utterance is independent of the history.",
"Such situations are indicated by the content of the utterance, which often deviates from the previous topic of discussion or introduces new information.",
"Table 1 presents a statistical summary of these cases present in the dataset.",
"From the table, it can be seen that a large section of the dataset demonstrates these influence patterns.",
"This provides motivation to explicitly model these patterns.",
"We thus hypothesize that models that are able to capture these cases would have superior emotion inference capabilities.",
"This passive exploration is a label-based analysis which is performed as a sanity check.",
"Needless to say, the existence of some false-positive patterns at the label level is inevitable.",
"On the other hand, our model CMN is content-based which enables it to mine intricate patterns from the utterance histories.",
"We use 10% of the training set as a held-out validation set for hyperparameter tuning.",
"To optimize the parameters, we use the Stochastic Gradient Descent (SGD) optimizer, starting with an initial learning rate (lr) of 0.01.",
"Table 1 (percentage of occurrence of the different cases mentioned in Section 5.2): Case 1: 63.77%, Case 2: 40.44%, Case 3: 30.97%, Case 4: 16.24%.",
"An annealing approach halves the lr every 20 epochs and termination is decided using an early-stop measure with a patience of 12 by monitoring the validation loss.",
"Gradient clipping is used for regularization with a norm set to 40 .",
"Hyperparameters are decided using a Random Search (Bergstra and Bengio, 2012).",
"Based on validation performance, the context window length K is set to 40 and the number of hops R is fixed at 3.",
"If K previous utterances are unavailable, then null utterances are added at the beginning of the history sequence.",
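The padding step can be sketched as a small helper (the names are ours):

```python
def pad_history(history, K, null_utterance=None):
    """Keep the K most recent utterances, left-padding with null utterances
    at the beginning of the sequence when fewer than K are available."""
    recent = list(history[-K:])
    return [null_utterance] * (K - len(recent)) + recent
```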
"The dimension size of the memory cells d is set as 50 .",
"SVM-ensemble: A strong context-free benchmark model which uses a similar multimodal approach on an ensemble of trees.",
"Each node represents a binary support vector machine (SVM) (Rozgic et al., 2012).",
"bc-LSTM: A bi-directional LSTM equipped with hierarchical fusion, proposed by Poria et al. (2017b).",
"It is the present state-of-the-art method.",
"The model uses context features from unimodal LSTMs, and their concatenation is fed to a final LSTM for classification.",
"For fair comparison in an end-to-end learning paradigm, we remove the penultimate SVM of this model.",
"The model does not accommodate inter-speaker dependencies.",
"Memn2n: The original memory network as proposed by Sukhbaatar et al. (2015).",
"In contrast to CMN, this model generates the memory representation for each historical utterance using the embedding matrix B from equation 6, without sequential modeling.",
"Thus, for utterance u_i, both memories are created as M_x = {m_t = B u_t : u_t ∈ hist_x, t ∈ [1, K]} for x ∈ {a, b}.",
"CMN Self : In this baseline, we use only self history for classifying emotion of utterance u i .",
"Thus, if u i is spoken by person P a , then only hist a is considered.",
"Clearly, this variant is also incapable of modeling inter-speaker dependencies.",
"CMN_NA: A single-layer variant of CMN with no attention module.",
"Thus, its output o (equation 8) is generated using a uniform probability distribution p, i.e., p_t = 1/K for all t ∈ [1, K].",
"Table 2 presents the performances of CMN and its variants along with the state-of-the-art models.",
"CMN outperforms both the neural (Poria et al., 2017b) and SVM-based (Rozgic et al., 2012) methods, by 3.3% and 8.12%, respectively.",
"Improvement in performance is seen for all emotions over the ensemble-SVM based method.",
"A similar trend is seen with bc-LSTM (Poria et al., 2017b), where our model does particularly well on the active emotions happiness and anger.",
"This trend suggests that CMN is capable of capturing inter-speaker emotional influences which are often seen in the presence of such active emotions.",
"The importance of sequential processing of the histories using a recurrent neural network (in our case, a GRU) is evidenced by the poorer performance of Memn2n with respect to CMN.",
"This suggests that gathering contexts temporally through sequential processing is indeed a superior method over non-temporal memory representations.",
"CMN_self, which uses only a single history channel, also performs worse than CMN.",
"This signifies the role of inter-speaker influences that often moderate the emotions of the current utterance.",
"Overall, predictions on valence and arousal levels also show similar results which reinforce our hypothesis of CMN's ability to model emotional dynamics.",
"Hyperparameters: Figure 4 provides a summary of the performance trend of our model for different values of the hyperparameters K (context window length) and Q (number of hops).",
"In the first graph, as K increases, more past-utterances are provided to the model as memories.",
"The performance maintains a positive correlation with K .",
"This trend supplements our intuition that the historical context acts as an essential resource to model emotional dynamics.",
"Given enough history, the performance saturates.",
"The second graph shows that multiple hops on the histories indeed lead to an improvement in performance.",
"The attention-based filtering in each hop provides a refined context representation of the histories.",
"Models with hops in the range of 3–10 outperform the single-layer variant.",
"However, each added hop contributes a new set of parameters for memory representation, leading to an increase in total parameters of the model and making it susceptible to overfitting.",
"This effect is evidenced in the figure where higher hops lead to a dip in performance.",
"Multimodality: Table 3 summarizes the performance of unimodal and multimodal variants of the baselines along with CMN.",
"As seen in the table, the text modality performs best of the three.",
"This is in contrast to Rozgic et al. (2012), where audio provides the best performance.",
"A possible reason for this shift is the improved representational scheme of the textual modality.",
"(Figure 4: Performance trends of our model with different values of K (history length) and Q (number of hops); y-axis: test accuracy %.)",
"Text tends to have fewer noisy signals than audio-visual sources, thus providing better features in the joint representation.",
"Overall, multimodal systems outperform their unimodal variants, justifying the design of CMN as a multimodal system.",
"Table 3 also showcases the superiority of CMN and its variants over bc-LSTM.",
"The proposed model achieves better performance over the state of the art in all the unimodal and multimodal segments.",
"This asserts the importance of the memory-network framework and its ability to effectively store context information.",
"Role of Attention: The attention module plays a vital role in memory refinement.",
"This is also observed in Table 2, where CMN_NA performs worse than CMN.",
"With uniform weights, all the memory cells in both memories M_a and M_b contribute equally to the output representation.",
"This incorporates irrelevant information from the perspective of emotional context.",
"Case Study: We perform qualitative visualization of the attention module by applying it on the testing set.",
"Figure 5a represents a conversation where both the speakers are in an excited and jolly mood.",
"Person A, in particular, drives the dialogue with little influence from Person B. To classify the test utterance of A, the attention module of CMN successfully focuses on utterances 1, 3, and 5, which had triggered the speaker's positive mood in the video.",
"This shows CMN's capacity to model speaker-based emotions.",
"Also, at the textual level, utterances 3 and 6 do not seem to depict a happy mood.",
"However, audio and visual sources provide contrasting evidence which helps CMN to correctly model them as utterances spoken with happiness.",
"This shows the advantage of a multimodal system.",
"In Figure 5b we revisit the dialogue presented in Figure 1. As shown, Person A converses in a sad mood (utterances 1, 3, and 5 in Figure 5b), burdened by the grief of his wife's departure.",
"But when he expresses his inhibitions, his wife B reacts in an angry and sarcastic manner (utterance 7 ).",
"This ignites an emotional shift for A who then replies angrily.",
"In this example, CMN is able to focus on utterance 7 spoken by B to anticipate A's test utterance to be an angry statement, thus showing its ability to model inter-speaker influences.",
"However, there are cases where our model fails, e.g., in the absence of historical utterances, since this forces attention to focus on null memories.",
"In this paper, we presented a deep neural framework that identifies emotions for utterances in dyadic conversational videos.",
"Our results suggest that leveraging context information from utterance histories and representing them as memories indeed helps to better recognize emotions.",
"Performing speaker-specific modeling and considering interspeaker influences also helps in capturing emotional dynamics.",
"This work also showed the importance of the attention mechanism in filtering relevant contextual information from utterance histories and, hence, paved the way for the development of more efficient and human-like dialogue systems.",
"This research was supported in part by the National Natural Science Foundation of China under Grant no. 61472266 and by the National University of Singapore (Suzhou) Research Institute, 377 Lin Quan Street, Suzhou Industrial Park, Jiang Su, Peo-ple's Republic of China, 215123."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"other"
] |
[
"We present a novel method for mapping unrestricted text to knowledge graph entities by framing the task as a sequence-to-sequence problem.",
"Specifically, given the encoded state of an input text, our decoder directly predicts paths in the knowledge graph, starting from the root and ending at the target node following hypernym-hyponym relationships.",
"In this way, and in contrast to other text-to-entity mapping systems, our model outputs hierarchically structured predictions that are fully interpretable in the context of the underlying ontology, in an end-to-end manner.",
"We present a proof-of-concept experiment with encouraging results, comparable to those of state-of-the-art systems.",
"Text-to-entity mapping is the task of associating a text with a concept in a knowledge graph (KG) or an ontology (we use the two terms interchangeably).",
"Recent works (Kartsaklis et al., 2018; Hill et al., 2015) use neural networks to project a text to a vector space where the entities of a KG are represented as continuous vectors.",
"Despite being successful, these models have two main disadvantages.",
"First, they rely on a predefined vector space which is used as a gold standard representation for the entities in a KG.",
"Therefore, the quality of these algorithms depends on how well the vector space is represented.",
"Second, these algorithms are not interpretable; hence, it is impossible to understand why a certain text was linked to a particular entity.",
"To address these issues we propose a novel technique which first represents an ontology concept as the sequence of its ancestors in the ontology (hypernyms) and then maps the corresponding textual description to this unique representation.",
"For example, given the textual description of the concept swift (small bird that resembles a swallow and is noted for its rapid flight), we map it to the hierarchical sequence of entities in a lexical ontology: animal → chordate → vertebrate → bird → apodiform bird.",
"This sequence of nodes constitutes a path.",
"1 Our model is based on a sequence-to-sequence neural network (Sutskever et al., 2014) coupled with an attention mechanism (Bahdanau et al., 2014).",
"Specifically, we use an LSTM (Hochreiter and Schmidhuber, 1997) encoder to project the textual description into a vector space and an LSTM decoder to predict the sequence of entities that are relevant to this definition.",
"With this framework we do not need to rely on the pre-existing vector space of the entities, since the decoder explicitly learns topological dependencies between the entities of the ontology.",
"Furthermore, the proposed model is more interpretable for two reasons.",
"First, instead of the closest points in a vector space, it outputs paths; therefore, we can trace all predictions the model makes.",
"Second, the attention mechanism allows us to visualise which words in a textual description the model selects while predicting a specific concept in the path.",
"In this paper, we consider rooted tree graphs 2 only and leave the extension of the algorithm for more generic graphs to future work.",
"We evaluate the ability of our model in generating graph paths for previously unseen textual definitions on seven ontologies (Section 3).",
"We show that our technique either outperforms or performs on a par with a competitive multi-sense LSTM model (Kartsaklis et al., 2018) by better utilising external information in the form of word embeddings.",
"The code and resources for the paper can be found at https://github.com/VictorProkhorov/Text2Path.",
"(Footnote 1: we only consider hypernymy relations, from the root to the parent node (apodiform bird) of the entity swift; footnote 2: only a single root is allowed; if a tree has more than one root, one can create a dummy root node and connect the roots of the tree to it.)",
"We assume that an ontology is represented as a rooted tree graph G = (V, E, T), where V is a set of entities (e.g., synsets in WordNet), E is a set of hyponymy edges, and T is a set of textual descriptions such that for every v ∈ V there is a t_v ∈ T.",
"We assume that an ontological concept can be defined by either using a textual description from a dictionary or hypernyms of the defining concept in the ontology.",
"For example, to define the noun swift one can use the dictionary definition mentioned previously.",
"Alternatively, the concept of swift can be understood from its hypernyms, e.g. in the trivial case one can say that swift is an animal .",
"This definition is not very useful since animal is a hypernym for many other nouns.",
"To provide a more specific definition, one can use a sequence of hypernyms, e.g., animal → chordate → vertebrate → bird → apodiform bird, starting from the most abstract node (the root of the ontology) and ending at the most specific one (the parent node of the noun).",
"More formally, for each entity v ∈ V, v ≠ v_root, we create a path p_v.",
"Each p v starts from v root and ends with a hypernym of v , i.e., the hierarchical order of entities is preserved.",
"Then the path p v is aligned with t v such that each node is defined by a textual definition and a path.",
"This set of aligned representations is used to train the model.",
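Given a parent-pointer map for the tree (a hypothetical representation; the paper does not prescribe one), the root-to-parent path of an entity can be extracted as:

```python
def root_path(entity, parent):
    """Hypernym path from the ontology root down to the entity's parent node.

    parent: dict mapping each node to its parent (the root has no entry).
    """
    path = []
    node = parent.get(entity)
    while node is not None:
        path.append(node)
        node = parent.get(node)
    return path[::-1]  # root first, entity's parent last
```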
"The path representation of an entity ends with its parent node.",
"Therefore, a leaf node will not be present in any of the paths.",
"This is problematic if a novel definition should be attached to a leaf.",
"To alleviate this issue we employ the dummy source sentences technique from neural machine translation (NMT) (Sennrich et al., 2016).",
"We create an additional set of paths from the root node to each leaf.",
"As for the textual definition we leave it empty.",
"We use a sequence-to-sequence model with an attention mechanism to map a textual description of a node to its path representation.",
"Encoder.",
"To encode a textual definition t_v = (w_i)_{i=1}^{N}, where N is the sentence length, we first map each word w_i to a dense embedding e_{w_i} and then use a bi-directional LSTM to project the sequence into a latent representation.",
"The final encoding state is obtained by concatenating the forward and backward hidden states of the bi-LSTM.",
"Decoder.",
"Decoding the path representation of a node from the latent state of the textual description is done again with an LSTM decoder.",
"Similarly to the encoding stage, we map each symbol in the path p_v = (s_j)_{j=1}^{M} to a dense embedding e_{s_j}, where M is the path length.",
"To calculate the probability of the path symbol s_j at time step j, we first represent the path sequence as h_j = LSTM(e_{s_j}, h_{j-1}).",
"Then, we concatenate h_j with the context vector c_j (defined next) and pass the concatenated representation [h_j; c_j] through the softmax function, i.e., s_j = argmax(softmax(W[h_j; c_j])), where W is a weight parameter.",
"To calculate the context vector c_j we use an attention mechanism over the words in the text description: e_{ji} = v_a^T tanh(W_a h_i + U_a h_j) and c_j = Σ_{i=1}^{N} softmax(e_{ji}) h_i, where v_a, W_a, and U_a are weight parameters.",
"Ontologies.",
"We experimented with seven graphs, four of which are related to the biomedical domain: the Phenotype And Trait Ontology (PATO), the Human Disease Ontology (Schriml et al., 2012, HDO), the Human Phenotype Ontology (Robinson et al., 2008, HPO), and the Gene Ontology (Ashburner et al., 2000, GO).",
"The other three graphs, i.e., WN_animal.n.01, WN_plant.n.02, and WN_entity.n.01, are subgraphs of WordNet 3.0 (Fellbaum, 1998).",
"We present the statistics of the graphs in Table 1.",
"Ontology Preprocessing.",
"All the ontologies we experimented with are represented as directed acyclic graphs (DAGs).",
"This creates an ambiguity for node path definitions since there are multiple pathways from a root concept to other concepts.",
"We have assumed that a single unambiguous pathway will reduce the complexity of the problem and leave the comparison with ambiguous pathways (which would inevitably involve a more complex model) to future work.",
"To convert a DAG to a tree, we constrain each entity to have only one parent node.",
"(Footnote 3: http://www.obofoundry.org; footnote 4: after preprocessing GO, we took its largest connected component; footnote 5: the subscript in WN indicates the name of the root node of the graph.)",
"The edges between the other parent nodes are removed.",
"Path Representations.",
"We also experiment with two path representations.",
"Our first approach, text2nodes , uses the label of an entity (cf. Section 1) to represent a path.",
"This is not efficient since the decoder of the model needs to select between all of the entities in an ontology and also requires more parameters in the model.",
"Our second approach, text2edges , to reduce the number of symbols for the model to choose from, uses edges to represent the path.",
"To do this, we create an artificial vocabulary of size Δ(G), where Δ(G) is the maximum degree of a node in the graph.",
"Each edge in the graph is labeled using the artificial vocabulary.",
"For the example in Section 1, the path would be animal –[a]→ chordate –[b]→ vertebrate –[c]→ bird –[d]→ apodiform bird, where {a, b, c, d} is the artificial vocabulary.",
"In the resulting path we discard labels for the entities; therefore, the path reduces to: [a] [b] [c] [d].",
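One plausible realisation of this labeling (the paper does not specify how edge labels are assigned; indexing each child within its parent's ordered child list is our assumption):

```python
def path_to_edge_labels(node_path, children):
    """Convert a node path to edge labels over an artificial alphabet.

    children: dict mapping each parent node to an ordered list of its children.
    Each edge is labeled by the child's index in that list, so the alphabet
    size needed is bounded by the maximum node degree of the graph.
    """
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    labels = []
    for parent, child in zip(node_path, node_path[1:]):
        labels.append("[" + alphabet[children[parent].index(child)] + "]")
    return labels
```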
"Bag-of-Words Linear Regression (BOW-LR): To represent a textual definition in a vector space we first use a pre-trained set of word embeddings (Speer et al., 2017) to represent words in the definition and then find the mean of the word embeddings.",
"As for the ontology, we use node2vec (Grover and Leskovec, 2016), to represent each entity in a vector space.",
"To align the two vector spaces we use linear regression.",
"(Footnote 6: the choice of an edge is performed on a random basis.)",
"Multi-Sense LSTM (MS-LSTM): Kartsaklis et al. (2018) proposed a model that achieves state-of-the-art results on text-to-entity mapping on the Snomed CT dataset.",
"The approach uses a novel multi-sense LSTM, augmented with an attention mechanism, to project the definition to the ontology vector space.",
"Additionally, for a better alignment between the two vector spaces, the authors augmented the ontology graph with textual features.",
"To perform evaluation of the models described above we used Ancestor-F1 score (Mao et al., 2018).",
"This metric compares the ancestors ( is a model ) of the predicted node with the ancestors ( is a gold ) of the gold node in the taxonomy.",
"P = |is_a_model ∩ is_a_gold| / |is_a_model| and R = |is_a_model ∩ is_a_gold| / |is_a_gold|, where P and R are precision and recall, respectively.",
"The Ancestor-F1 is then defined as: 2PR / (P + R).",
"Intrinsic Evaluation.",
"To verify the reliability of our model on text-to-entity mapping, we ran a set of experiments on the seven graphs (Section 3) in which we map a textual definition of a concept to a path.",
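A direct implementation of the Ancestor-F1 as defined above (our own sketch):

```python
def ancestor_f1(pred_ancestors, gold_ancestors):
    """Ancestor-F1 (Mao et al., 2018): harmonic mean of precision and recall
    over the ancestor sets of the predicted and gold nodes."""
    pred, gold = set(pred_ancestors), set(gold_ancestors)
    inter = len(pred & gold)
    if inter == 0:
        return 0.0
    p = inter / len(pred)
    r = inter / len(gold)
    return 2 * p * r / (p + r)
```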
"To conduct the experiments we randomly sampled 10% of leaves from the graph.",
"From this sample, 90% are used to evaluate the model and 10% are used to tune the model.",
"The remaining nodes in the graph are used for training.",
"We sample leaves for two reasons: (1) to predict a leaf, the model needs to make the maximum number of (correct) predictions and (2) this way we do not change the original topology of the graph.",
"Note that the sampled nodes and their textual definitions are not present in the training data.",
"Both baselines predict a single entity instead of a path.",
"To have the same evaluation framework for all the models, for each node predicted by a baseline we create a path from the root of the graph to the predicted node.",
"(Footnote 7: https://www.snomed.org/snomed-ct; footnote 8: we used NetworkX (https://networkx.github.io) to find a path from the predicted node to the root of a graph.)",
"However, we want to emphasize that this is disadvantageous for our model, since our model predicts every symbol in the path, whereas the baselines predict only a single node.",
"The results are presented in Table 2.",
"The models in the last three rows of Table 2 use pre-trained word embeddings (Speer et al., 2017) in the encoder.",
"MS-LSTM and our models that are above the last three rows use randomly initialised word vectors.",
"We had four observations: (1) without pre-trained word embeddings in the encoder, our model outperforms the best MS-LSTM (0.5) on only two of the seven graphs; (2) the text2edges model outperforms all the other models, including MS-LSTM (0.5); (3) the text2edges model can better exploit pre-trained word embeddings than MS-LSTM; (4) our model performs better when the paths are represented using edges rather than nodes.",
"We also found that there is a strong negative correlation (Spearman: −0.75, Pearson: −0.80) between A.D. (Table 3) and the Ancestor-F1 score for the text2edges model, meaning that as A.D. increases, the Ancestor-F1 score decreases.",
"We carried out an analysis on the outputs of our best-performing model, i.e. text2edges with pre-trained word embeddings.",
"One factor that affects the performance is the number of invalid sequences predicted by the text2nodes and text2edges models.",
"An invalid sequence is the path that does not exist in the original graph.",
"This happens because at each time step the decoder outputs a distribution over all the nodes/edges and not just over possible children nodes.",
"We therefore performed a count of the number of invalid sequences produced by the model.",
"The percentage of invalid sequences is in the range of 1.82% -8.50% (Appendix B), which is relatively low.",
"This analysis was also performed by Kusner et al. (2017).",
"To guarantee that the model always produces valid graphs, they use a context-free grammar.",
"A similar method can be adapted in our work.",
"Another factor that affects the performance is the length of the generated paths which is expected to match the length of the gold path.",
"To test this, we compared the mean length of the generated sequences with the length of the gold path (the graph on the bottom of Figure 1).",
"Also, in the training set, we associate the length of the sequences with their frequencies (the graph on the top of Figure 1).",
"We found that (1) the lengths of the generated paths are biased towards the more frequent path lengths in the training data, and (2) if the length of a path is not frequent in the training data, the model either under-generates or over-generates the length (Appendix D).",
"Text-to-entity mapping is an essential component of many NLP tasks, e.g. fact verification (Thorne et al., 2018) or question answering (Yih et al., 2015).",
"Previous work has approached this problem with pairwise learning-to-rank method (Lea-man et al., 2013) or phrase-based machine translation (Limsopatham and Collier, 2015).",
"However, these methods generally ignore ontology's structure.",
"More recent work has viewed the problem of text-to-entity mapping as a projection of a textual definition to a single point in a KG (Kartsak-lis et al., 2018; Hill et al., 2015).",
"However, despite potential advantages, such as being more interpretable and less brittle (model predicts multiple related entities instead of one), path-based approaches have received relatively little attention.",
"Instead of predicting a single entity, path-based models, such as the one we proposed in this paper, try to map a textual definition to multiple relevant entities in an external resource.",
"We presented a model that maps textual definitions to interpretable ontological pathways.",
"We evaluated the proposed technique on seven semantic graphs, showing that it can perform competitively with respect to existing state-of-the-art text-to-entity systems, while being more interpretable and self-contained.",
"We hope this work will encourage further research on path-based text-to-entity mapping algorithms.",
"A natural next step will be to extend our framework to DAGs.",
"Furthermore, we plan to constrain our model to always predict paths that exist in the graph, as we discussed above.",
"We would like to thank the anonymous reviewers for their comments.",
"Also, we would like to thank Dimitri Kartsaklis and Ehsan Shareghi for helpful discussions and comments.",
"This research was supported by an EPSRC Experienced Researcher Fellowship (N. Collier: EP/M005089/1) and an MRC grant (M.T. Pilehvar: MR/M025160/1).",
"We gratefully acknowledge the donation of a GPU from the NVIDIA Grant Program."
] | [
"objective",
"method",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"result",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"objective",
"method",
"objective",
"method",
"other",
"other",
"other",
"other"
] |
[
"Transformer-based models are the modern work horses for neural machine translation (NMT), reaching state of the art across several benchmarks.",
"Despite their impressive accuracy, we observe a systemic and rudimentary class of errors made by current state-of-the-art NMT models with regards to translating from a language that doesn't mark gender on nouns into others that do.",
"We find that even when the surrounding context provides unambiguous evidence of the appropriate grammatical gender marking, no tested model was able to accurately gender occupation nouns systematically.",
"We release an evaluation scheme and dataset for measuring the ability of NMT models to translate gender morphology correctly in unambiguous contexts across syntactically diverse sentences.",
"Our dataset translates from an English source into 20 languages from several different language families.",
"With the availability of this dataset, our hope is that the NMT community can iterate on solutions for this class of especially egregious errors.",
"Neural machine translation models are trained on vast amounts of data and consistently attain strong performance on standard benchmarks (Barrault et al., 2020).",
"Despite this impressive achievement, state-of-the-art MT models are often largely unable to make basic deductions regarding how to correctly inflect nouns with grammatical gender.",
"Previous work measured gender bias by determining how often models translated pronouns coreferent with stereotypical occupation noun stereotypically (e.g., Stanovsky et al. 2019; Prates et al. 2019).",
"Crucially, in this ambiguous setting, the correct gender was genuinely under-determined given the context, which allowed for investigating the underlying (often stereotypical) assumptions of machine translation models (i.e., that most if not all nurses are women).",
"However, gender mistakes in translation go beyond stereotyping: in some cases, assigning the wrong gender to a noun can result in a genuine mistranslation (i.e., a factual error).",
"In this work, we cast the task of measuring gender bias in machine translation as the task of measuring gender errors in translation (as opposed to the prevalence of stereotyping in translation).",
"We argue that oper-ationalizing the gender-bias measurement problem with an unambiguous task is much clearer than framing it as an ambiguous task, because, in our setup, morphological gender mistakes are not forgivable.",
"We introduce a novel unambiguous benchmark dataset that measures whether an MT model can appropriately inflect occupation nouns for gender when translating from an English source into 20 gender-marking target languages.",
"We craft source sentences by manipulating the context of the occupation noun so that the gender of the person referred to (i.e., their gender identity) is clearly specified.",
"For example: My nurse is a good father the gender identity of the nurse is unambiguous, because nurse is coreferent with father .",
"When translating into a target, the occupation noun ( nurse ) requires masculine gender marking.",
"To also enable stereotype measurement within our unambiguous translation task, we vary the gender stereotypicality of occupations (e.g., nurses are stereotypically likely to be women while janitors are more likely to be men) to determine whether a model's propensity to stereotype contributes to its translation mistakes.",
"Furthermore, we augment our sentences with gender stereotypical adjectives (such as pretty and handsome , the former being used more frequently in practice to modify nouns referring to women and the latter, to men) to additionally study whether there might be possible interactions between contextual cues, as it is well known that translation systems perform better when provided with more context (i.e., longer sentences; Tiedemann and Scherrer 2017; Miculicich et al. 2018).",
"We expect the incidence of correct inflection 3454 to rise in cases when a stereotypical contextual cue is also provided.",
"It is our hope that the benchmark will more clearly surface these kinds of errors to the wider NMT community, encouraging us to devise better, targeted mitigation strategies.",
"Our contributions are as follows: We offer a new unambiguous benchmark to measure MT models' ability to mark gender correctly in 20 target languages (Belarusian, Catalan, Czech, German, Greek, Spanish, French, Hebrew, Croatian, Italian, Latvian, Lithuanian, Polish, Portuguese, Romanian, Russian, Serbian, Ukranian, Urdu) translated from an English source.",
"1 We find that all tested NMT models reach fairly low accuracy across target languagesat best approximately 70 (Portuguese and German) and at worst below 50 (Urdu).",
"The tested models do better when the trigger refers to a man (e.g., father ) than when it refers to a woman (e.g., mother ), and have higher accuracy when the stereotypical gender of the occupation (e.g., nurse ) matches the gender of the unambiguous trigger (e.g., mother ), compared to examples for which they don't match ( nurse and father ).",
"When we see such blatant translation failures for morphological features as frequent as grammatical gender (which has clear social consequences and strong community buy-in), it becomes very clear that more work is needed to teach our models how to correctly translate morphological information.",
"Our method crucially relies upon linguistic theory to engineer the context and arrive at unambiguous examples.",
"In most attempts to measure gender bias in NMT, there has been no ground-truth correct translationmodel preferences (Stanovsky et al., 2019; Prates et al., 2019) are reflected by the percentage of examples for which the MT system chooses the gender-stereotypical pronoun as opposed to the anti-gender-stereotypical one.",
"However, since both translations are practically possible in reality (for example, janitors come in all genders), we feel this setting might be overly optimistic about the capabilities of current models.",
"Our set up has two main components: we have a trigger (i.e., a noun or pronoun in the source sentence that unambiguously refers to a person with a particular known gender), and we have an occupation noun which isn't marked for gender in 1 https://github.com/arendu/ Unambiguous-gender-bias Source/Target Label Src: My sister is a carpenter 4 .",
"the source language and can be marked with various genders in the target language.",
"We call the former class triggers because they are the unambiguous signal which triggers a particular grammatical gender marking on the occupation noun.",
"Triggers comprise all standard American English pronouns that inflect for gender, and explicitly gendered kinship terms, which were chosen because they are very common concepts cross-linguistically and are gender unambiguous.",
"2 Occupation nouns were drawn from the U.S. Bureau of Labor Statistics, 3 following Caliskan et al. (2017); Rudinger et al. (2017); Zhao et al. (2018); Prates et al. (2019).",
"We ensure that there is an equal number of triggers and occupation words, so that our benchmark is gender-balanced for binary gender.",
"For a list, see Table 2 and Table 5 in the Appendix.",
"We measure accuracy based on the inflection of the occupation noun, which depends on the syntactic structure of the sentence.",
"To ensure that we have unambiguous sentences, we constructed a short English phrase structure grammar comprising 82 commands to construct our corpus.",
"Previous datasets for measuring gender failures in translation have had a handful unambiguous examples (Stanovsky et al., 2019), but not enough to derive strong conclusions based on unambigous examples alone.",
"Our dataset is unique in having only unambiguous examples and having them for a large set of target languages (see also Gonzlez et al. 2020).",
"We also make use of Binding Theory (Chomsky, 1980, 1981; Bring, 2 Gender identity is not strictly binary.",
"We adopt a binary conception here, because none of our investigated languages grammatically mark genders other than masculine or feminine on occupation nouns.",
"Our gendered trigger words are largely unambiguous modulo costume party examples (Ackerman, 2019), where people dress up contra their gender identity: if a man dresses up as his own grandmother, he can be referred to with so-called unambiguous triggers such as grandma or she .",
"We have ensured that our dataset is free from such examples.",
"3 http://www.bls.gov/cps/cpsaat11.htm 3455 2005) to ensure that",
"(i) all of our pronoun triggers (both pronominals like she and anaphors like herself ) are strictly coreferring with the occupations and",
"(ii) that no other interpretations are possible.",
"4 Having a grammar is useful, since it allows for an increased diversity of source sentences and better control over the context.",
"We will release three grammars which create datasets of three sizes for convenience: extra small ( 1 , 536 sentences), small ( 59 , 520 sentences), and extra large ( 1 , 800 , 006 sentences).",
"We mainly focus on the extra large dataset (which is a proper superset of the others) for the purposes of the paper.",
"A grammar also allowed us to investigate a couple subsidiary questions about the nature of anaphoric relations: for example, does accuracy depend on whether the occupation precedes or follows the trigger?",
"Moreover, when we include a contextual cue that is predictive of the gender required by the trigger (e.g., handsome for brother ), does accuracy change when we attach it to the occupation (e.g., that handsome nurse is my brother ) instead of to the trigger ( that nurse is my handsome brother )?",
"And finally, to what extent do these different syntactic factors interact with each other or vary across languages?",
"Since we anticipated poor performance on the task, we also devised an easier scenario, where we provide additional contextual cues provided by adjectives about the gender of the relevant entity.",
"Our list of adjectives is the union of single word stereotyped traits drawn from several works in the social psychology literature on gender stereotyping (Bem, 1981; Prentice and Carranza, 2002; Haines et al., 2016; Eagly et al., 2020; Saucier and Iurino, 2020), where they were normed for English.",
"We evaluate gendered translation of three pretrained open-source models,",
"(i) OPUS-MT is a collection of 1000+ bilingual and multilingual (for certain translation directions) models (Tiedemann and Thottingal, 2020).",
"The architecture of each model was based on a standard transformer (Vaswani et al., 2017) setup with 6 self-attentive layers in both the encoder and 4 Consider the sentence Carlotta's dog accompanies her to kindergarten (Bring, 2005, p.5).",
"In this sentence, we can interpret this sentence as meaning that the dog accompanies either Carlotta or another woman or girl to kindergartento strengthen this reading you can append to the front of the sentence the clause something like whenever Mary's parents have to go to work early, Carlotta's dog accompanies her to kindergarten .",
"In this way, her can refer to either Carlotta or to Mary.",
"We have avoided such ambiguity in our dataset.",
"decoder network with 8 attention heads in each layer.",
"(ii) M2M-100 is a large multilingual model which supports many-to-many translation directions (Fan et al., 2020).",
"M2M-100 pretrained models are available in three sizes (418 million parameters, 1.2 billion parameters and 15 billion parameters).",
"We employ the small and medium sized models for our experiments which are based on the transformer architecture with 12 encoder and decoder layers and 16 attention heads.",
"(iii) mBART-50 is another multilingual model (Tang et al., 2020) that is obtained by many-to-many direction fine-tuning of a seed mBART denoising auto-encoder model (Liu et al., 2020).",
"The many to-many fine-tuning process is reported to improve multilingual translation by 1 BLEU point, averaged across all translation directions.",
"The mBART-50 models are also based on transformers with 12 encoder and decoder layers with 16 attention heads.",
"To ascertain whether the translation applied the correct morphological marker on the target-side occupation noun, we design a reference-free evaluation scheme.",
"Following Stanovsky et al. (2019), we extract token-alignments between the source occupation noun token and its translation in the target side.",
"We also extract morphological features for every token in the target sequence, using a morphological tagger.",
"Thus, we can ascertain the gender associated with the translated occupation noun (as judged by the morphological tagger) and measure the NMT models' accuracy concerning gender translation.",
"We use Dou and 3456 Language M2M (1.2B) M2M (418M) mBART-50 OPUS Correct Wrong N/A Correct Wrong N/A Correct Wrong N/A Correct Wrong N/A be 0.47 0.31 0.21 0.39 0.28 0.33 ca 0.57 0.22 0.22 0.43 0.32 0.25 0.43 0.39 0.19 cs 0.67 0.29 0.04 0.56 0.38 0.06 0.68 0.32 0.01 0.63 0.36 0.01 de 0.73 0.26 0.01 0.54 0.45 0.02 0.61 0.37 0.02 0.61 0.38 0.01 el 0.59 0.35 0.06 0.51 0.37 0.12 0.59 0.39 0.02 es 0.63 0.20 0.17 0.44 0.37 0.18 0.53 0.26 0.22 0.52 0.31 0.17 fr 0.61 0.28 0.11 0.47 0.38 0.15 0.60 0.39 0.01 0.57 0.41 0.02 he 0.57 0.31 0.12 0.51 0.37 0.11 0.57 0.31 0.12 0.55 0.34 0.11 hi 0.51 0.37 0.12 0.49 0.40 0.11 0.49 0.39 0.12 hr 0.65 0.29 0.05 0.55 0.39 0.07 0.68 0.29 0.03 it 0.53 0.25 0.22 0.41 0.34 0.24 0.47 0.32 0.21 0.41 0.33 0.26 lt 0.65 0.33 0.02 0.55 0.42 0.03 0.53 0.43 0.04 lv 0.63 0.35 0.02 0.53 0.44 0.03 0.63 0.33 0.04 pl 0.65 0.33 0.03 0.54 0.43 0.03 0.59 0.39 0.02 pt 0.74 0.24 0.02 0.56 0.41 0.03 0.68 0.31 0.02 ro 0.59 0.33 0.08 0.51 0.41 0.07 0.62 0.32 0.06 0.53 0.40 0.07 ru 0.60 0.38 0.02 0.54 0.42 0.04 0.54 0.36 0.09 0.53 0.47 0.01 sr 0.52 0.43 0.05 0.49 0.44 0.07 uk 0.59 0.37 0.04 0.51 0.42 0.07 0.67 0.31 0.03 0.56 0.41 0.03 ur 0.44 0.34 0.22 0.44 0.38 0.18 0.42 0.41 0.17 Table 3: Accuracy for all languages and models.",
"Neubig (2021) for word-alignment and Qi et al. (2020) as our morphological tagger.",
"Note that our evaluation scheme only checks if the appropriate gender marking is applied on the occupation noun and does not check if the occupation noun itself has been translated correctly.",
"Thus, we do not prescribe our evaluation scheme as a replacement for traditional MT evaluation using BLEU or chrF++ scores (Papineni et al., 2002; Popovic, 2015).",
"Under our evaluation scheme, there are three possible evaluation outcomes for each sentence.",
"We deem the output",
"(i) correct if the gender of the target-side occupation noun is the expected gender (based on the source-side trigger gender).",
"(ii) wrong if the gender of the target-side occupation is explicitly the wrong gender, and",
"(iii) inconclusive if we are unable to make a gender-determination of the target-side occupation noun.",
"A translation can be inconclusive if there are errors in the translation, word-alignments, or morphological tags.",
"In most cases with an inconclusive result, translation errors are the root cause (see Table 1).",
"If errors predominate more for one gender, this itself can be taken as evidence of an imbalance that needs rectification.",
"Note that some of the target languages present for M2M models were not present for mBART and OPUS modelswhen those models were not trained to translate into a particular target, cells for those languages are left blank in our results tables.",
"We observe that accuracy doesn't exceed the low 70s for any language or model (see Table 3).",
"This shows that our dataset is appreciably difficult, and can provide good signal about the failures of our current best models.",
"We additionally find, expectedly, that the larger M2M model outperforms its smaller counterpart (for all languages except Urdu, where performance is comparable).",
"Across the board, M2M with 1.2B parameters slightly outperforms mBART-50, and vastly outperforms the small M2M model with 418M parameters and the OPUS models.",
"When there is a mismatch between trigger-gen-der and occupation-gender, accuracy drops.",
"In Table 4, we report M as the difference in accuracy of sentences with (M-Trigger, M-Occupation) and (M-Trigger, F-Occupation) configurations, demonstrating the model's inability to resolve gender mismatches between triggers and occupations (See table 2 for values for the triggers and occupa-tions).",
"We report the same for F where the drop in performance is more pronounced.",
"We take the fact that F > M for all languages to be evidence of a more complex type of stereotyping that negatively affects women, namely androcentrism (Bem, 1993; Hegarty et al., 2013; Bailey et al., 2019).",
"5 5 Androcentrism is a wide reaching cultural phenomenon that treats the male experience...as a neutral standard or 3458 avg e m o ti on a l ( F ) d e li ca t e ( F ) s t r ong ( M ) t ough ( M ) a gg r e ss i v e ( M ) a ff ec ti on a t e ( F ) g r ace f u l ( F ) l og i ca l ( M ) m a s c u li n e ( M ) a rr og a n t ( M ) m u s c u l a r( M ) s e xy ( F ) r e b e lli ou s ( M ) d a i n t y ( F ) f e m i n i n e ( F ) 0 0 .",
"(d) F-trigger, M-occupation Figure 4: Results for M2M model (1.2B) analysing the relative position of the trigger token and occupation-noun and the trigger token.",
"The before category contains source text where the occupation token appears before the trigger token, e.g. That engineer is my sister , the after category contains source sentences of the form He works as an engineer and middle category contains a occupation-noun in between two trigger tokens.",
"In this section, we analyze our results by splitting up languages, occupations, adjective contexts and relative positioning of triggers and occupations using source sentences generated from the small grammar (described in Section 2).",
"man than from when it refers to a woman.",
"As we see in Figure 1, accuracy is lower for the M2M (1.2B) when the trigger requires feminine gender on the occupation, hovering around 40 in most languages.",
"For some languages, such as Urdu, occupation nouns are rarely inflected with the correct gender marking for feminine triggers.",
"The only language for which accuracy on sentences with feminine triggers exceeds 50 is Serbian.",
"In aggregate, these results likely reflect the cultural fact than many languages utilize the masculine form to refer to generic people (Gastil, 1990; Hamilton, 1991).",
"Accuracy is higher when trigger-gender and occupation-gender match.",
".",
". In Figure 1, the M2M model performs better on inflecting occupations nouns correctly when they are statistically more likely to refer to a person whose gender matches the gender required by the trigger: for example, our models are better at correctly marking nanny (stereotypically performed by women) in the context of mother than they are at marking janitor (stereotypically performed by men).",
"This finding replicates previous work (Stanovsky et al., 2019) that showed that six then-state-of-the-art models were very susceptible to statistical gender biases encoded in occupation words.",
"less when the occupation is mismatched with a masculine trigger than when it is mismatched with a feminine one.",
"Although statistical gender biases in how women are presented of the kind presented in Figure 1 are relatively well described in NLP and adjacent fields (Bolukbasi et al., 2016; Hovy and Spruit, 2016; Caliskan et al., 2017; Rudinger et al., 2017; Garg et al., 2018; Garimella et al., 2019; Gonen and Goldberg, 2019; Dinan et al., 2020a,b), we see additional evidence that our NMT systems encode this cultural androcentrism bias in the fact that the drop in accuracy is greater for sentences with feminine triggers ( mother ) and norm for the culture of the species as a whole (Bem, 1993, p. 41)one consequence of this cultural phenomenon is that women are restricted to their stereotypical domains (e.g. home, care) more than men are to theirs (e.g. work, science).",
"man-stereotypic occupations ( janitor ) than for the converse (compare the magnitude of the drop in Figure 1 and Figure 2 between a and c to the drop between b and d, as well as Table 4).",
"Models achieve higher accuracy for man-stereo-typic than woman-stereotypic occupations (although this varies).",
"To understand particular occupations, we plot the M2M (1.2B) accuracy by occupation averaged across all languages (see Table 5 in the Appendix for the full list of adjectives).",
"Recall that all occupations that are frequent, are either statistically biased towards either men or towards women in the source language, and are balanced in the dataset.",
"We observe that in the case of feminine grammatical gender triggers, only a few woman-stereotypic occupations (e.g. housekeeper, nurse, secretary in Figures 2b and 2d) reach the level of accuracy that the model achieves on most man-stereotypic occupations (in Figures 2a and 2c).",
"We also note that variation in accuracy is much higher for woman-stereotypic occupations across both trigger types (compare Figures 2c and 2d), lending support to a cultural androcentrism hypothesis.",
"Models perform better on sentences when there is a stereotypical adjective that matches the gender of the trigger.",
"We observe an effect of including stereotypical adjectives whereby accuracy is higher when the adjective's stereotypical gender matches the gender that was unambiguously triggered.",
"For example, in Figure 3b shows models translate sentences like The nanny is my sexy sister more accurately than The nanny is my logical sister , and in Figure 3c sentences like The sheriff is my logical brother with higher accuracy than The sheriff is my feminine brother .",
"We note that the result holds regardless of whether the adjective precedes the occupation or the trigger (see discussion of Figure 6 and Figure 7 in Appendix A).",
"impact accuracy.",
"We also analyzed if the relative positions of the trigger and occupation tokens (in the source sentence) affect the performance of the model.",
"We split the source sentences into a before group wherein all occupation nouns appear before the trigger token, (e.g. That engineer is my sister ), an after group which contained sentences in which the occupation noun appears after the trigger token (e.g. He works as a engineer ) and a middle group where the occupation noun has trigger tokens before and after it (e.g. He is a nanny who can 3460 inspire himself ).",
"Figure 4 shows these findings.",
"We expected the after and middle category to have better accuracy because the decoding proceeds in a left-to-right manner, which gives allows the model to condition on the target side trigger token when generating the target side occupation token (assum-ing the target language maintains the same ordering of trigger and occupation tokens).",
"Surprisingly, we do not see a noticeable difference in accuracies between the before and after categories.",
"We see a small improvement in the middle group across evidence that the relative position of the triggers affect the quality of gendered noun translation.",
"Note that the middle category has more trigger tokens.",
"Nonce Word Test.",
"Finally, all of our occupation words genuinely occur in the real world.",
"This means that various idiosyncratic factors, such as word frequency in the training corpora, might have an effect on how well they are translated into other languages.",
"We generate wholly novel nonce occupation words (e.g., nurson, plumbervist, farper ) which should have no stereotypical gender associations (Appendix C).",
"Therefore, we expect models to do equally well on each word regardless of whether it is in the presence of a masculine or feminine trigger.",
"While Nonce-occupations expectedly have higher levels of inconclusive translations, we do see in Figure 5 that the models are better at resolving a Male-trigger with a Nonce-occupation than a Female-trigger with a Nonce-occupation.",
"Recently, several works (Stanovsky et al., 2019; Prates et al., 2019; Gonen and Webster, 2020; Gonzlez et al., 2020) investigated gender bias in multiple languages with complex morphology, and showed that state-of-the-art MT systems resolve gender-unbalanced occupation nouns (from the US Bureau of Labor Statistics) more often to masculine than feminine pronouns, despite the fact that people of many genders participate in all listed occupations.",
"Our work improves upon these prior approaches by exploring the effects of gender-indicative contexts (e.g., additionally stereotypically masculine and feminine traits and events) in range of syntactic positions (e.g., preceding or following the clue, directly adjacent to the occupation, etc.).",
"While Prates et al. (2019) did investigate some stereotypical traits in their work, they only investigate a few of them, only in the context of the ambiguous paradigm, and were narrowly focused on measuring the translation abilities of one commercial translation product.",
"Recently, Bentivogli et al. (2020) focused on translation quality of occupation-nouns in speech-translation, where they consider the speaker-voice as well as contextual clues.",
"We, on the other hand, explore not only more diverse example traits as well as additional contextual cues, but we do so in unambiguously gendered sentences with a diverse range of sentence structures that allow us to vary the linear precedence of contextual cues as well as their prevalence.",
"Gonen and Webster (2020) also made use of minimally different sentences via an innovative perturbation method that mines examples from real world data and moves away from static word lists; however, their benchmark is also collected for the ambiguous gender setting.",
"Several works aim to enrich the gender input to an MT system by adding additional gold annotation or context (Stafanovi cs et al., 2020; Saunders et al., 2020; Moryossef et al., 2019).",
"This has the additional benefit of making gender tags learnable, but it does not rely on the linguistic signal alone (as we do through leveraging grammatical rules) and instead relies on additional denser annotation.",
"Only two contributions other than our own is known to us to rely only on the particular linguistic structure of the sentence: the first by Gonzlez et al. (2020) also focused on unforgivable grammatical gender-related errors in translation (as well as on other tasks) that come about as a result of syntactic structure and unambiguous coreference.",
"Their approach is somewhat analogous to some of our examples, except that, instead of relying on language-internal properties, we rely on syntactic context to construct unambiguous examples: e.g., particularly those that make use of own to make obligatory the local coreference (in this case cataphora) as in That her own child cried, surprised the doctor .",
"We take our work to be wholly complementary to theirs; Their approach focuses on more source languages, fewer target languages, and a wider range of tasks, we focus on one source language, more target languages, and sentences from a wider range of (source) syntactic structures.",
"The second work closely related to ours is Renduchintala et al. (2021) which also focuses on unambiguous source sentences.",
"Their work has only a small number of templates for two languages.",
"We propose and create a grammar that encompasses more scenarios where the source sentences contain unambiguous gender indicators for occupation nouns.",
"Our grammar enables us to examine the 3461 effect of adjectives and verbs (which were selected for their association with particular genders) on gendered occupation noun translation accuracy.",
"We also discuss the impact of the relative position of the occupation noun with respect to the gender trigger.",
"Our evaluation scheme allows for more diversity from the NMT model as we do not use a dictionary approach.",
"Our evaluation also focuses on the correctness of morphological markers on the target-side occupation noun and not on the noun itself.",
"Our evaluation scheme also allows us to apply our analysis to more languages.",
"The present work does not aim to ascertain the cause of models' errors.",
"Our main goal here is to present a novel benchmark for surfacing errors and measuring bias.",
"Since it is relatively well known that generation models, including MT models, often output translations that are less lexically diverse than their training data (Vanmassenhove et al., 2019), several recent works have investigated the effects of gender bias as a function of model training data.",
"Stafanovics et al. (2020) argues that gender bias in MT models can be lessened if models are trained on denser annotations for identifying the genders of referents.",
"Concurrently, another approach to pronoun coreference utilized a hand-crafted grammar to generate sentences for measuring fairness (Soremekun et al., 2020), but in the context of NLP tasks other than NMT.",
"Although Soremekun et al. (2020) are interested in measuring performance for unambiguous examples, it does not focus on the NMT use case, and its examples require cross-sentential coreferences, which will likely require a more complex linguistic toolbox than our intrasentential case (Szabolcsi, 2003; Hardmeier and Federico, 2010; Reinhart, 2016).",
"Moreover, the grammar created in that work is much less developed than ours: it does not manipulate the location of the trigger, there is limited syntactic diversity, and there is no incorporation of statistically gender-biased words above and beyond occupation nouns.",
"At a high level, our work resurfaces problems with morphology in machine translation.",
"While neural machine translation is more fluent than phrase-based machine translation, it has long been observed that even high-resource models can struggle to generate faithful translations that are also syntactically correct (Isabelle et al., 2017) and the problem intensifies for longer sentences with long-distance dependencies (Choshen and Abend, 2019).",
"We highlight yet another morphological failure mode in NMT models in this work.",
"There is also a long history of incorporating morphology and syntax explicitly into NMT models in the hope of reducing the prevalence of such errors (Minkov et al., 2007).",
"For example, Eriguchi et al. (2016) model source-side syntax while Aharoni and Goldberg (2017) proposed models that generate linearized constituency trees.",
"Other works also consider modifications to the attention mechanism in order to improve NMT (Kim et al., 2017).",
"Many of our NLP tasks and datasets are rife with statistical gender biases that reflect, in language, the stereotypical associations we have about gender in our cultures.",
"In this work, we present a new evaluation dataset for measuring gender bias in machine translation for gender unambiguous sentences.",
"Our dataset supports translation from an English source into 20 languages, contains three evaluation datasets of different sizes to accommodate all users, and is designed to answer questions not only about particular occupation words and gender triggering words, but also to further explicate the role of context in how MT systems translate gender morphology.",
"We hope that our dataset will encourage the community to improve on this new setting for measuring gender biases in language.",
"Our work has proposed a benchmark for measuring morphological gender errors in translation which require adequate representation of the context and may have social repercussions.",
"Our evaluation benchmark measures translation accuracy on an occupation noun on the target side.",
"In this work, we restrict ourselves to English as a source language.",
"English specifies several kinds of gender, for example, on pronouns, including feminine ( she, her, hers ), masculine ( he, him, his ), nonbinary ( they, them, their, theirs, xe, ze, sie, co, ey . . . ), and underspecified ( they, them, their, theirs ).",
"6 We focused solely on binarily gendered contextual clues, although that provides an incomplete picture, for multiple reasons.",
"First, the translation models we evaluated are not yet able to handle underspecified and nonbinary contextual clues consistently, let 6 Note: Although the sets of morphological forms of underspecified pronouns and nonbinary pronouns overlap, they are not the same phenomena from a linguistic perspective, see Ackerman 2019 i.a.).",
"alone neopronouns.",
"For example, translating my parent is a doctor into German resulted in a translation with a plural verb, and the masculine singular form of the occupation noun (we presume masculine was the majority class in the training data).",
"Second, they are a doctor 7 is translated as honorific in German with the pronoun Sie (Note that the pronouns for she and they are homophonous in that language, and are only distinguished by capitalization), but a masculine gender on the occupation, and a plural verb form.",
"If the original translation models that we are aiming to evaluate with our benchmark are unable to translate nonbinary and underspecified examples with any reasonable accuracy at all, this is a much bigger issue requiring its own nuanced investigation.",
"This issue becomes even more complex when you consider what ought to be the appropriate forms of occupation nouns when they refer to nonbinary or individuals we don't know the gender(s) of.",
"Most of the target languages we use do not have a single, standardized way of generating gender-inclusive occupation nouns, because norms regarding complex social/demographic features are currently in flux.",
"Considerations about what ought to be the ideal translation policy will change over time, and will doubtless vary by language and culture.",
"For example, in American English, some prefer actor to actress as the former is inclusive of all genders.",
"In other languages, specifying more than one gender on the same occupation noun has become preferred (at least in some contexts and among some groups) as another gender-inclusive option.",
"Take, for example, in continental French, the gender on words like student can be duplicated as in tudiante et tudiants , tudiant-e-s , or",
"tudi-ant.e.s , (see Burnett and Pozniak 2021; Pozniak and Burnett 2021; Richy and Burnett 2021 for more information).",
"Even this linguistic innovation however doesn't cover every person's preferences.",
"Some women prefer masculine gender on their occupations because they have the impression that the masculine forms have a more prestigious connotation than the feminine ones (Burnett and Pozniak 2021, p.11; Burnett and Bonami 2019).",
"Acknowledging the range of complexities at play here, for our test benchmark, we fixed the gold translation to obligatorily mark the (binary) gender on the occupation noun in accordance with the explicit gender identity of a person (i.e., it is always 7 This sentence unambiguously refers to a single person identifying as non-binary in the English source. preferred for the translation system to explicitly specify a known, binary gender for each occupation noun).",
"Although our approach runs contrary to some preferred ways of referring to people, it is still useful as a tool for uncovering gender biases in current translation systemsit can determine whether the system prefers to translate into the most frequent gender (usually the masculine) while, worryingly, ignoring relevant contextual cues to the contrary.",
"Future iterations of work like this might survey the appropriate ways of specifying nonbinary gender (or purposefully not specifying any gender) in each target language, and develop specific and more fine-grained schemes for measuring statistical gender biases for these situations (Note: considerations like these should be taken into account at the training phase and not just at the evaluation phase)."
] | [
"abstain",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"method",
"method",
"objective",
"result",
"abstain",
"result",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"In this paper we implement and compare 7 different data augmentation strategies for the task of automatic scoring of children's ability to understand others' thoughts, feelings, and desires (or mindreading).",
"We recruit in-domain experts to re-annotate augmented samples and determine to what extent each strategy preserves the original rating.",
"We also carry out multiple experiments to measure how much each augmentation strategy improves the performance of automatic scoring systems.",
"To determine the capabilities of automatic systems to generalize to unseen data, we create UK-MIND-20 a new corpus of children's performance on tests of mindreading, consisting of 10,320 question-answer pairs.",
"We obtain a new state-of-the-art performance on the MIND-CA corpus, improving macro-F1-score by 6 points.",
"Results indicate that both the number of training examples and the quality of the augmentation strategies affect the performance of the systems.",
"The task-specific augmentations generally outperform task-agnostic augmentations.",
"Automatic augmentations based on vectors (GloVe, FastText) perform the worst.",
"We find that systems trained on MIND-CA generalize well to UK-MIND-20.",
"We demonstrate that data augmentation strategies also improve the performance on unseen data.",
"Many state-of-the-art NLP models are limited by the availability of high quality human-annotated training data.",
"The process of gathering and annotating additional data is often expensive and time consuming.",
"It is especially difficult to gather data for tasks within psychology and psycholinguistics, as test administration typically requires highly trained in-domain experts, controlled environments, and large numbers of human participants.",
"Data augmentation is a popular technique for artificially enlarging datasets.",
"Typically, data augmentation uses one or more predefined strategies to modify existing gold-standard examples while retaining the original label.",
"The objectives of data augmentation are:",
"1) to increase the size of the dataset;",
"2) to introduce more variety;",
"3) to reduce overfitting; and",
"4) to improve generalizability.",
"Data augmentation has been used successfully in computer vision (Shorten and Khoshgoftaar, 2019) and has recently become more popular in the field of NLP (Wei and Zou, 2019; Min et al., 2020; Dai and Adel, 2020; Marivate and Sefara, 2020).",
"We use data augmentation to improve the performance of systems for automatic scoring of children's performance on tests of mindreading (i.e., the ability to reason about others' thoughts, feelings and desires) (Hughes and Devine, 2015).",
"Automatic scoring of mindreading was recently introduced by Kovatchev et al. (2020).",
"Their corpus, MIND-CA contains hand-scored data from more than 1000 children aged 7 to 14.",
"Collecting data on children's mindreading performance is complicated, time-consuming, and expensive.",
"It requires in-person testing sessions led by trained researchers and children's open-ended responses must be rated by trained annotators.",
"Data augmentation could be very beneficial to improve the performance and consistency of the automated scoring systems.",
"In this paper we aim to measure, in a systematic way, the quality and efficiency of the different augmentation strategies.",
"We evaluate and compare the different strategies intrinsically and extrinsically.",
"For the intrinsic evaluation, we recruit in-domain experts to re-annotate augmented examples and determine the extent to which each strategy preserves the original label.",
"For the extrinsic evaluation, we measure the quantitative improvement (macro-F1, F1-per-Question, Standard Deviation) of automatic systems on the MIND-CA corpus.",
"Furthermore, we create a new corpus, UK-MIND-20, containing 10,320 question-answer pairs in English and we use it to evaluate the performance of automated systems on unseen data.",
"We find that the intrinsic quality of the augmentation strategies varies significantly, according to human raters.",
"However, the extrinsic evaluation demonstrates that all strategies improve the performance of the automated systems.",
"We systematically measure the importance of three factors in data augmentation: corpus size, sampling strategy, and augmentation strategy.",
"We find corpus size to be the most important factor.",
"However, the choice of sampling and augmentation strategies also sig-nificantly affects the performance of the automated systems.",
"We report a correlation between the quality of the augmentation and the performance.",
"With the best configuration we obtain a new state-of-the-art on MIND-CA, improving Macro-F1 score by 6 points and F1-per-Question by 10.3 points.",
"We demonstrate that the automated scoring systems can generalize well between MIND-CA and UK-MIND-20.",
"These findings indicate that the methodology for administering and scoring mindreading is consistent and the automatic solutions can be adopted in practice.",
"The rest of this article is organized as follows.",
"Section 2 discusses the related work.",
"Section 3 presents the methodologies for data augmentation.",
"Section 4 compares the quality of the augmentation strategies.",
"Section 5 describes the machine learning experimental setup and evaluation criteria.",
"Section 6 analyzes the effect of data augmentation on automated systems.",
"Section 7 presents some follow-up experiments and discusses the implications of the findings.",
"Section 8 concludes the article and proposes directions for future work.",
"Mindreading (also known as theory of mind) is the ability to understand others' thoughts, feelings, and desires (Hughes and Devine, 2015).",
"For example, in the final scene of Romeo and Juliet, Romeo holds a mistaken belief that Juliet is dead.",
"Being able to understand the state of the world (Juliet is alive) and the mistaken belief (Juliet is dead) is important to understand the situation and the motivation of the characters.",
"Individual differences in children's mindreading are linked with both social and academic outcomes and children's wellbeing (Banerjee et al., 2011; Fink et al., 2015; Devine et al., 2016).",
"Furthermore, difficulties with mindreading are linked with a range of mental health problems and neurodevel-opmental conditions (Cotter et al., 2018).",
"The task of automatic scoring of mindreading was first proposed by Kovatchev et al. (2020).",
"They gathered the responses of 1066 children aged 7-14 on two standardized tests of mindreading: the Strange Story Task (Happe, 1994) and the Silent Film Task (Devine and Hughes, 2013).",
"After digitalizing and manually scoring the responses, they created MIND-CA, a corpus of 11,311 question-answer pairs.",
"They trained and evaluated several automated systems (i.e., SVM, BILSTM, Transformer) and obtained promising initial results.",
"Data augmentation is a technique for artificially increasing the size of the dataset.",
"It can also be seen as a type of regularization at the level of the data.",
"Data augmentation can be used to increase the number of instances of specific answer types.",
"It can also introduce more variety, and can reduce the imbalance between classes.",
"Data augmentation is used to improve the performance of automated systems, to reduce the risk of overfitting, and to enhance the ability of automated systems to generalize to unseen data.",
"It is widely used in computer vision (Shorten and Khoshgoftaar, 2019).",
"The specifics of natural languages make it more difficult to incorporate data augmentation in NLP.",
"A subtle change to the text can often lead to a substantial difference in meaning and a change of the label.",
"The last two years have seen an increase in the popularity of data augmentation in NLP.",
"Wei and Zou (2019) present a Python library that uses simple augmentation methods for improving text classification.",
"Marivate and Sefara (2020) compare different strategies for augmentation in the context of short-text classification.",
"Dai and Adel (2020) compare different data augmentation strategies for the task of Named Entity Recognition.",
"Several researchers propose more complex augmentation strategies for NLP.",
"Hou et al. (2018) propose a sequence-to-sequence model for data augmentation.",
"Kobayashi (2018) and Gao et al. (2019) use language models in what they call contextual augmentation.",
"Min et al. (2020) use syntactic augmentation to improve the performance and generalizability on Natural Language Inference.",
"In this paper, we take a different approach towards data augmentation.",
"We implement and compare seven different augmentation strategies.",
"Two of the strategies were designed specifically for the task of automatic scoring of mindreading, while the remaining five are task agnostic.",
"We put the emphasis on a systematic evaluation of the augmentation strategies and some key parameters of the augmentation process.",
"We recruit and train in-domain experts to provide intrinsic human evaluation of the data augmentation.",
"We also annotate a new corpus that can measure the performance and improvement on unseen data.",
"We used 7 different strategies for automatic data augmentation.",
"Dictionary and phrase strategies make use of task-specific resources, created by in-domain experts.",
"The other 5 strategies (order, wordnet, ppdb, glove, fasttext) make use of publicly available task-agnostic resources.",
"For a source of the augmentation, we used the MIND-CA corpus (Kovatchev et al., 2020).",
"It contains 11,311 question-answer pairs.",
"There are 11 different questions, and an average of 1,028 responses per question.",
"There are three possible labels reflecting the degree to which the response shows context-appropriate mindreading: 0 (fail), 1 (partial score), and 2 (pass).",
"The label distribution for the full corpus is balanced, however the label distribution for the individual questions vary 1 .",
"We sought to use data augmentation to create a well-balanced dataset in terms of questions and labels.",
"To achieve this, we created a policy for sampling examples that we used in the augmentation.",
"We split the MIND-CA corpus per question and per label, resulting in 33 question-label sub corpora.",
"The average size of each sub-corpora is 343, and the smallest number of instances in a sub corpora is 160.",
"We sampled 125 examples from each question-label sub-corpus, 375 from each question, for a total 4,125 examples.",
"Our sampling strategy ensures that each question-label combination is well represented in the augmentation process.",
"In the original MIND-CA corpus, nine question-label pairs had less than 125 instances.",
"As a preliminary step in the data augmentation process, our in-domain experts rewrote existing responses to improve the balance of the corpus.",
"We used strategy similar to the one used in Hossain et al. (2020).",
"We ran statistical and machine learning experiments to ensure that the additional examples do not introduce biases.",
"1 For more details, please refer to Kovatchev et al. (2020) For our experiments we initially chose a con-servative number of examples (each augmentation increases the original corpus size by 36 %), to avoid overfitting on the underrepresented question-label pairs.",
"We used a different random state for each augmentation strategy and we ensured that each sample is representative in terms of demographic distribution (age and gender of the participants).",
"In a complementary set of experiments, we applied data augmentation directly without the custom sampling strategy.",
"We also experimented with generating larger number of augmented examples (up to 140% of the original corpus size) via over-sampling (see Section 7).",
"The dictionary augmentation strategy is a task-specific synonym substitution.",
"We automatically extract the 20 most frequent words for each of the 11 questions, a total of 220 words.",
"We then ask trained corpus annotators to propose a list of synonyms for each word.",
"The synonyms have the same meaning in the context of the particular question.",
"The meaning of the contextual synonyms may not be the same outside of the context.",
"For example, in Silent Film Question #1, men can be replaced with burglars.",
"We instruct the experts to create as many synonyms as possible for each word.",
"Some words do not have appropriate contextual synonyms.",
"The final synonym dictionary contains 626 synonyms for 148 words 2 .",
"The dictionary augmentation algorithm replaces up to two words in each response with their contextual synonyms.",
"The words and their synonyms are selected at random from the available options.",
"The task-specific phrase augmentation strategy adds a short phrase at the beginning of the response.",
"The appended phrases should not modify the meaning (or score) of the response.",
"An example for such phrase is I think (that).",
"Our experts create phrases that contain mental state words, such as think, know, and believe, as this category of words is important when scoring children's mindreading ability.",
"Our corpus annotators proposed a 2 The implementation of all augmentation strategies and all resources used (lists of synonyms and introductory phrases) can be found online at https://github.com/ venelink/augment-acl21/ list of 15 such phrases.",
"We further modify the 15 phrases with 3 optional conjunctions, resulting in 60 different combinations.The phrase augmentation appends a random phrase at the beginning of each response, if the response does not already begin with such a phrase.",
"Word replacement augmentation is a strategy that automatically replaces up to two randomly selected words with semantically similar words or phrases.",
"The wordnet and ppdb augmentations replace the selected words with a synonym from WordNet (Fellbaum, 1998) or PPDB (Pavlick et al., 2015) respectively.",
"The glove and fasttext augmentations replace the selected words with the most similar words (or phrases) using pre-trained GloVe (Pennington et al., 2014) or FastText (Joulin et al., 2016) word embeddings.",
"We implement the four word replacement augmentations using the NLP Augmentation python library (Ma, 2019).",
"For this set of experiments we decided not to use BERT-based contextual word embeddings for augmentation, since we are using a DistilBERT classifier.",
"The order augmentation strategy changes the position of two words in the sentence.",
"Previous work on data augmentation for NLP (Wei and Zou, 2019; Ma, 2019) implement the order augmentation by changing the position of the two randomly selected words.",
"We enforce a more stringent rule for our algorithm.",
"Specifically, we select one word at random and change its position with one of its neighbouring words.",
"This change is more conser-vative than picking two words at random.",
"It also reflects the naturally occurring responses from 7-to 14-year-old children in the database.",
"The reorder process is repeated up to two times.",
"We also experimented with applying multiple augmentation strategies together.",
"For example the dic-tionary + phrase augmentation first replaces up to two words with contextual synonyms and then adds a phrase at the beginning of the response.",
"The data obtained by combination augmentations was included in the the all-lq and all-hq corpora.",
"The quality of data augmentation models in NLP research is typically evaluated extrinsically, by measuring the performance of automated systems trained on augmented data.",
"Wei and Zou (2019) propose an intrinsic evaluation inspired by the data augmentation research in computer vision.",
"They compare the latent space representations of the original and the augmented sentences and assume that the proximity in latent space indicates that the original labels are conserved.",
"We argue that a direct comparison of the representation of the texts is not sufficient to determine the quality of the augmentation and the extent to which each strategy preserves the original labels.",
"In natural language, unlike in computer vision, a minor difference in the text and the corresponding representation can cause a significant difference in the meaning of the complex expression and ultimately the label or score assigned to that answer.",
"We propose a manual evaluation of the different strategies.",
"For each augmentation strategy, we selected 5 random examples from each question-label sub-corpus, adding up to 165 examples per strategy (4% of the full sample).",
"Two trained annotators independently rate the augmented pairs for the 7 different augmentation strategies (a total of 1,155 question-answer pairs).",
"To ensure a fair evaluation, the annotators receive a single file with the examples for all augmented strategies shuffled at random.",
"The inter-annotator agreement was 87% with a Cohen's Kappa of .83.",
"Table 1 shows the results of the re-annotation for each augmentation strategy.",
"We define quality as the % of examples where the re-annotated label was the same as the original label.",
"We also measure the % of invalid examples, where both annotators agreed not to assign a label due to a semantically incoherent response.",
"Based on the analysis, we distinguish between high quality augmentation strategies (phrase, order, and dictionary) and low quality augmentations (wordnet, fasttext, ppdb, and glove).",
"The high quality augmentations preserve the label in over 94% of the instances and contain less than 4% invalid responses.",
"The low quality augmentations preserve the label in less than 83% of the instances and contain more than 10% invalid responses.",
"According to our raters, GloVe is the worst of all augmentation strategies with 68% quality and 17% invalid.",
"The expert analysis indicates that, at least in our data, there is a substantial difference in the quality of the different augmentation strategies.",
"The task-specific strategies perform much better than the task-agnostic ones, with the exception of change of order augmentation.",
"In the following sections, we perform a number of machine learning experiments to determine if the quality of the data affects the performance of the automated systems.",
"In our experiments, we used the two best systems reported by Kovatchev et al. (2020) a BiL-STM neural network and a DistilBERT transformer.",
"These systems obtained good results on the original MIND-CA corpus and at the same time were lightweight enough to be implemented in a practical end-to-end application for automatic scoring.",
"We used the same configuration and hyperparam-eters as reported by Kovatchev et al. (2020).",
"We modified the existing classes to incorporate and keep track of data augmentation and to implement additional evaluation on UK-MIND-20.",
"All of our code and data are available online 3 .",
"5.1 Automated Systems.",
"Training setup.",
"We trained each of the automated systems on 13 different training sets, shown in Table 2.",
"Each set includes the original corpus (MIND-CA) and a number of augmented samples.",
"For example, the phrase dataset contained the 11,311 examples 3 https://github.com/venelink/ augment-acl21/ Corpus Size Corpus Contents orig 11,311 The MIND-CA corpus uk-20 10,320 The UK-MIND-20 corpus phrase 15,436 MIND-CA + phrase dict 15,436 MIND-CA + dictionary order 15,436 MIND-CA + order wordnet 15,436 MIND-CA + wordnet fasttext 15,436 MIND-CA + fasttext ppdb 15,436 MIND-CA + ppdb glove 15,436 MIND-CA + glove ab-lq 27,811 MIND-CA + wordnet, fasttext, ppdb, and glove all-lq 44,311 MIND-CA + wordnet, fasttext, ppdb, and glove + all 4 synonym substitutions combined with reorder ab-hq 23,686 MIND-CA + phrase, dictionary and order all-hq 40,186 MIND-CA + phrase, dictionary, and order + all four possible combinations of the three strategies Table 2: All Augmented Training Sets from MIND-CA + 4,125 from the phrase augmentation, for a total of 15,436 examples.",
"In addition to the 7 basic augmented training sets (one for each augmentation strategy), we also created 4 larger training sets, containing augmented samples from multiple different strategies.",
"The All Bassic HQ ( ab-hq ) dataset contains the 11,311 examples from MIND-CA + 4,125 from phrase + 4,125 from dictionary + 4,125 from order for a total of 23,686 examples.",
"Similarly, the All Basic LQ ( ab-lq ) dataset contains 27,811 examples from MIND-CA + wordnet, fasttext, ppdb, and glove.",
"The two largest datasets, the all-lq and the all-hq datasets contain the corresponding all basic datasets and additional examples obtained by consecutively applying more than one augmentation strategy to the same original data (the combined augmentations described in Section 3.5).",
"We kept the low quality and the high quality data separated, so we can measure the correlation between the quality and the performance of the automated systems.",
"One of the objectives behind data augmentation is to improve the capabilities of automated systems to generalize to unseen data.",
"However, finding unseen data for the same task is often non-trivial, so researchers typically use train-test split or 10-fold cross validation to evaluate the models.",
"To provide a fair evaluation benchmark for generalizability, we created a new corpus of children's mindreading ability, the UK-MIND-20 corpus.",
"The data for the corpus is part of our own research on children's mindreading in large-scale study involving 1020 8to 13-year-old children (556 girls, 453 boys, 11 not disclosed) from the United Kingdom.",
"Children completed three mindreading tasks during whole-class testing sessions led by trained research assistants: Strange Stories task (Happe, 1994), Silent Film task (Devine and Hughes, 2013), and Triangles Task (Castelli et al., 2000).",
"Each child answered 14 questions: five from the Strange Story Task, six from the Silent Film Task, and three from the Triangles Task.",
"We do not use the responses for the Triangles task for the evaluation of data augmentation, since that task is not part of the MIND-CA corpus.",
"We obtained a total of 10,320 question-answer pairs for the Strange Stories and the Silent Film portion of the corpus.",
"Similar to MIND-CA, UK-MIND-20 also includes the age and gender of the participants and responses to a standardized verbal ability test (Raven, 2008).",
"The children's responses were scored by two trained research assistants, the same assistants that measured the augmentation quality in Section 4.",
"Each response was scored by one annotator.",
"The inter-annotator agreement was measured on a held-out set of questions.",
"We report an inter-annotator agreement of 94% and a Fleiss Kappa score of .91.",
"When creating UK-MIND-20, we used the same procedures for administering, scoring, and digitalizing the children responses as the ones used by Kovatchev et al. (2020).",
"The data for the UK-MIND-20 corpus is gathered in a different time-frame (Oct 2019 Feb 2020) and from different locations than MIND-CA (2014 2019).",
"The task defined by Kovatchev et al. (2020) consists of scoring the children's mindreading abilities based on the open-text responses to 11 different questions from the Strange Stories Task and the Silent Film Task using three categories (i.e., fail,",
"partial, pass).",
"A single automated system has to score all 11 questions.",
"In this paper we evaluate the system performance in three ways: Overall F1 : The macro-F1 on the full test set, containing all 11 questions, shuffled at random.",
"F1-per-Q : We split the test set on 11 parts, one for each question.",
"We obtain the macro-F1 score on each question and calculate the average.",
"STD-per-Q : Similar to F1-per-Q, we obtain the macro-F1 for each question and then calculate the standard deviation of the performance per question.",
"The Overall F1 measures the performance of the system on the full task.",
"F1-per-Q and STD-per-Q measure the consistency of the system across the different questions.",
"A practical end-to-end system needs to obtain good results in both.",
"The additional data facilitates the statistical analysis of the system performance.",
"This evaluation methodology was proposed by Kovatchev et al. (2019).",
"For each system we performed a 10-fold cross validation using each corpus from Table 2.",
"For each fold, we evaluated on both the corresponding test set and on the full UK-MIND-20 corpus.",
"Our code dynamically removes from the current training set any augmented examples that are based on the current test set to ensure a fair evaluation.",
"All test sets contain only gold-standard human-labeled examples and do not include any augmented data.",
"Table 3 presents the results of the 13 different training configurations with the DistilBERT transformer, using both question and answer as input 4 .",
"The numbers are the average across 10-fold cross validation.",
"For reference, we also include the results obtained by training the system on UK-MIND-20 and testing on MIND-CA.",
"The DistilBERT architecture is the best performing system from Kovatchev et al. (2020).",
"The baseline system, trained on the original data already obtained very good results: .925 F1 and .877 F1-per-Q on the MIND-CA corpus and .889 F1 and .839 F1-per-Q on the UK-MIND-20 corpus.",
"We demonstrate that systems trained on either of the two datasets can generalize well on the other one 4 We carried out 4 different sets of experiments: two classifiers (BILSTM and DistilBERT) and two different input setups (i.e., only the answer or both question and an-swer).",
"Due to space restrictions, we report only the results for the best system, DistilBERT (question + answer).",
"The findings apply to all sets of experiments.",
"The code and results for all experiments are available online at https: //github.com/venelink/augment-acl21/ Training Set Test-F1 Test-F1-Q Test-STD UK20-F1 UK20-F1-Q UK20-STD Orig (baseline) .925 .877 .059 .889 .839 .063 UK-MIND-20 .893 .844 .058 .890 .839 .063 Phrase .946 .930 .031 .893 .854 .024 Dictionary .947 .936 .028 .892 .853 .024 Order .947 .933 .025 .891 .852 .022 FastText .942 .924 .030 .890 .851 .023 GloVe .942 .925 .028 .891 .849 .021 PPDB .946 .929 .030 .893 .851 .022 WordNet .947 .932 .033 .894 .853 .023 AB-LQ .967 .957 .021 .895 .855 .021 AB-HQ .972 .963 .022 .897 .858 .020 All-LQ .978 .973 .015 .895 .957 .021 All-HQ .985 .980 .011 .898 .858 .023 Table 3: Performance of a DistilBERT classifier using different augmented sets for training.",
"It is evident in the table that all of the augmentation strategies successfully improved the performance of the automated systems across all evaluation criteria.",
"For the MIND-CA corpus: F1 improved between 1.7 points (FastText) and 6 points (All-HQ); F1-per-Qiestion improved between 4.7 points (FastText) and 10.3 points (All-HQ); STD-per-Question was reduced by between 1.6 points (WordNet) and 4.8 points (All-HQ).",
"For the UK-MIND-20 corpus: F1 improved between 0.1 point (FastText) and 0.9 point (All-HQ); F1-per-Question improved between 1 point (GloVe) and 1.9 points (All-HQ); STD-per-Question was reduced between 3.9 points (dictionary) and 4.2 points (AB-HQ).",
"Based on these results, we can draw two conclusions.",
"First , data augmentation can successfully be used to improve the performance of the systems on the MIND-CA corpus.",
"Second , data augmentation also improves the performance of the automated systems on the unseen examples from UK-MIND-20.",
"While the improvement is not as substantial as seen on MIND-CA, the improvement on all three criteria on UK-MIND-20 indicates that the systems are not just overfitting to MIND-CA.",
"We use the Autorank Python library (Herbold, 2020) to carry out a statistical analysis on the results and compare the performance gain from each of the augmentation strategies.",
"We use the data from both algorithms and input formats, a total of 480 machine learning models, 40 for each dataset.",
"Based on the provided data, Autorank determines that the most appropriate statistical test is the Friedman-Nemeyni test (Demsar, 2006).",
"The Friedman test reports that there is a statistically significant difference between the median values of the populations.",
"That means that some training sets are consistently performing better (or worse) than others.",
"The post-hoc Nemenyi test can be used to determine and visualise which training sets are better and which are worse.",
"Figure 1 shows the Critical Difference diagram of the post-hoc Nemenyi test for all training sets.",
"Each set is plotted with its average ranking across all systems.",
"The difference between systems connected with a line is not statistically significant.",
"The original corpus is the worst performing of all datasets with an average rank of 9.",
"The 7 basic training sets are grouped in the middle (rank 6.5 to 8).",
"That is, they are all better than the original corpus, but worse than the combined training sets.",
"There is a significant difference between All-HQ, All-LQ, AB-HQ, and AB-LQ.",
"Collectively they are also better than the original training set and the basic training sets.",
"Figure 2 shows the Critical Difference diagram of the post-hoc Nemenyi test applied only to the 7 basic augmentations.",
"After removing the outliers (the original corpus and the collections of multiple augmentation), we can observe a clear, statistically significant distinction between high quality augmentations (dictionary, phrase, and order) and low quality augmentations (glove, fasttext, wordnet, and ppdb).",
"Based on the statistical analysis, we can draw two additional conclusions.",
"Third , we found that the most important factor affecting the system performance is the number of training examples.",
"We obtain the best results by combining the examples from various different augmentation strategies.",
"Fourth , we demonstrated that when the training size is comparable, the high quality augmentations improve the performance more than the low quality ones.",
"The difference is significant and is consistent both in basic datasets and in combined datasets.",
"Vector based augmentations (GloVe and FastText) are performing worse than augmentations based on task-specific or task-agnostic knowledge bases.",
"The intrinsic and extrinsic evaluation presented in Section 4 and Section 6 answered the main research questions posed in this paper.",
"We demonstrated that data augmentation can improve the performance of automated systems including on novel, unseen data.",
"We found that the data augmentation strategies vary in preserving the original label and in how much they improve the machine learning systems trained on them.",
"We also showed that automated scoring systems can generalize well from MIND-CA corpus to UK-MIND-20 and the other way around.",
"All these findings are important for further research on mindreading.",
"At the same time, our data augmentation strategies and evaluation methodology can also be extended to other tasks and domains, contributing to the research of Data Augmentation in NLP in general.",
"In this section we present additional experiments and an analysis of the impact of several different factors in the process of data augmentation 5 .",
"Corpus Size Our experiments indicated that the most important factor for improving the system performance is the corpus size.",
"In Table 3 the systems that perform best are trained on the largest possible amount of data (all-lq/all-hq).",
"To further explore the impact of corpus size, we ran an additional set of experiments.",
"We sampled 500 examples for each question-label subcorpora instead of the original 125, increasing the corpus size four times.",
"For each augmentation strategy this resulted in a corpus approximately the same size as ab-lq.",
"As expected, the performance of each system increased with corpus size.",
"The ranking of the individual systems remained similar to the one reported with 125 base examples.",
"High quality augmentations still performed better than low quality ones.",
"The F1, F1-per-Q, and STD-per-Q for the basic low quality strategies was approximately the same as the performance for ab-lq.",
"The F1, F1-per-Q, and STD-per-Q for the basic high quality strategies was approximately the same as the performance for ab-hq.",
"This new set of experiments confirmed the importance of corpus size.",
"Even strategies that human experts perceive as low quality are improving the performance of the automated systems.",
"And while the ranking consistently favors the high quality augmentations, the absolute difference is relatively small.",
"This is in line with the findings on noisy learning which show that machine learning models can be very noise-tolerant (Natarajan et al., 2013).",
"We performed one final experiment by combining the all-lq and all-hq data together, but found no increase or decrease of performance compared with using only the all-hq data.",
"Sampling Strategy In our experiments, we designed a sampling strategy to ensure that each 5 Due to space restrictions, we only discuss the overall tendencies.",
"The actual results are available online.",
"question-response combination appears in the training data with sufficient frequency.",
"In a complementary set of experiments, we evaluated the importance of the sampling.",
"For each augmentation strategy, we created an augmented dataset with 1500 examples for each question, using a standard sampling that keeps the original ratio of the responses.",
"The size of the dataset is the same as sampling 500 examples for each of the 3 labels.",
"We found that for all strategies, the sampling improves Test-F1-Q between .6 and 1 point and reduces STD-per-Q by 1 point.",
"This finding validates our choice of sampling strategy.",
"Augmentation Strategy In Section 6 we demonstrated that when all parameters (sampling, corpus size) are equal the high-quality strategies rank higher than the low-quality ones.",
"While the absolute difference in F1 and STD is relatively small on our datasets, the consistency of the performance of the high-quality strategies has to be taken into consideration.",
"Furthermore, the quantitative performance is only one factor that has to be considered when choosing a strategy for data augmentation.",
"Reducing the noise in the training data can be a desirable characteristic when interpreting the performance of the neural network models, or when working with sensitive data, such as (e.g.) in the health domain.",
"The task-specific augmentations that we proposed and used may require in-domain experts, however the design is rather simple and the process is not time or labour intensive.",
"After the task-specific resource (dictionary, list of phrases) is created, it can be reused for multiple examples and scales very well with corpus size.",
"We presented a systematic comparison of multiple data augmentation strategies for the task of automatic scoring of children's mindreading ability.",
"We argued that the nature of natural language requires a more in-depth analysis of the quality and performance of the different data augmentation strategies.",
"We recruited in-domain experts and incorporated them in the process of evaluation.",
"We demonstrated that, for some of the augmentation strategies (glove, fasttext, ppdb) there is a substantial portion of the examples (over 20%) where the rating changes or cannot be assigned due to semantically incoherent text.",
"These differences in the datasets cannot be captured trivially via the visualisation techniques that are typically used for intrinsic evaluation.",
"We also found that the difference in augmentation quality corresponds to a difference in the performance of automated systems trained on the data.",
"To the best of our knowledge, this is the first evaluation of data augmentation in NLP that involves both expert evaluation and automatic metrics and the first study that demonstrates the connection between the two.",
"We carried out further experiments measuring the importance of factors such as corpus size and sampling strategy.",
"Our findings on the quality and efficiency of data augmentation strategies and on the use of task-specific resources are relevant for researchers in the area of data augmentation, specifically in domains where the quality of the training gold examples is important or where the amount of data is very limited.",
"For the purpose of evaluation, we also created a new corpus: UK-MIND-20.",
"It is the second corpus for automatic scoring of mind reading in children.",
"We demonstrated that systems trained on MIND-CA generalize well on UK-MIND-20.",
"We also showed that data augmentation improves the performance on unseen data.",
"These findings are promising both for the task of scoring children's mindreading and for the use of data augmentation in NLP.",
"To the best of our knowledge, this is the first work where augmentation is evaluated on novel, unseen data for the same task.",
"This work opens several directions of future work.",
"As a direct continuation of this research, we will incorporate the best performing automated systems and data augmentation techniques in the work of developmental psychologists.",
"This will facilitate a large-scale studies on mindreading in children and adolescents.",
"We are also exploring the possibility of using NLP to address other time and labour intensive problems within psychology.",
"Open-ended short text responses are widely-used within psychological research and the good results obtained in this paper can be replicated in other similar tasks.",
"We would like to thank Imogen Grumley Traynor and Irene Luque Aguilera for the annotation and the creation of the lists of synonyms and phrases.",
"We also want to thank the anonymous reviewers for their feedback and suggestions.",
"This project was funded by a grant from Wellcome to R. T. Devine.",
"The study was approved by the University of Birmingham STEM Research Ethics Committee and complies with the British Psychological Society Code of Human Research Ethics (2014).",
"Parents and caregivers were provided with detailed information about the study at least one week in advance of data collection and given the opportunity to opt out of the study.",
"Children were also permitted to opt out of the study on the day of data collection without consequence.",
"Data were anonymous at source as children did not provide names or contact information to the research team."
] | [
"method",
"objective",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"objective",
"result",
"abstain",
"method",
"result",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"method",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"objective",
"result",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Deep learning has emerged as a compelling solution to many NLP tasks with remarkable performances.",
"However, due to their opacity, such models are hard to interpret and trust.",
"Recent work on explaining deep models has introduced approaches to provide insights toward the model's behaviour and predictions, which are helpful for assessing the reliability of the model's predictions.",
"However, such methods do not improve the model's reliability.",
"In this paper, we aim to teach the model to make the right prediction for the right reason by providing explanation training and ensuring the alignment of the model's explanation with the ground truth explanation.",
"Our experimental results on multiple tasks and datasets demonstrate the effectiveness of the proposed method, which produces more reliable predictions while delivering better results compared to traditionally trained models.",
"It is unfortunate that our data is often plagued by meaningless or even harmful statistical biases.",
"When we train a model on such data, it is possible that the classifier focuses on irrelevant biases to achieve high performance on the biased data.",
"Recent studies demonstrate that deep learning models noticeably suffer from this issue (Agrawal et al., 2016; Wadhwa et al., 2018; Gururangan et al., 2018).",
"Due to the black-box nature of deep models and the high dimensionality of their inherent representations, it is difficult to interpret and trust their behaviour and predictions.",
"Recent work on explanation and interpretation has introduced a few approaches (Simonyan et al., 2013; Ribeiro et al., 2016; Lei et al., 2016; Li et al., 2016, 2017; Ghaeini et al., 2018b; Ribeiro et al., 2018) for explanation.",
"Such methods provide insights toward the model's behaviour, which is helpful for detecting biases in our models.",
"However, they do not correct them.",
"Here, we investigate how to incorporate explanations into the learning process to ensure that our model not only makes correct predictions but also makes them for the right reason.",
"Specifically, we propose to train a deep model using both ground truth labels and additional annotations suggesting the desired explanation.",
"The learning is achieved via a novel method called saliency learning , which regulates the model's behaviour using saliency to ensure that the most critical factors impacting the model's prediction are aligned with the desired explanation.",
"Our work is closely related to Ross et al. (2017), which also uses the gradient/saliency information to regularize model's behaviour.",
"However, we differ in the following points: 1) Ross et al. (2017) is limited to regularizing model with gradient of the model's input.",
"In contrast, we extend this concept to the intermediate layers of deep models, which is demonstrated to be beneficial based on the experimental results; 2) Ross et al. (2017) considers annotation at the dimension level, which is not appropriate for NLP tasks since the individual dimensions of the word embeddings are not interpretable; 3) most importantly, Ross et al. (2017) learns from annotations of irrelevant parts of the data, whereas we focus on positive annotations identifying parts of the data that contributes positive evidence toward a specific class.",
"In textual data, it is often unrealistic to annotate a word (even a stop word) to be completely irrelevant.",
"On the other hand, it can be reasonably easy to identify group of words that are positively linked to a class.",
"We make the following contributions: 1) we propose a new method for teaching the model where to pay attention; 2) we evaluate our method on multiple tasks and datasets and demonstrate that our method achieves more reliable predictions while delivering better results than traditionally trained models; 3) we verify the sensitivity of our saliency-trained model to perturbations introduced on part of the data that contributes to the explanation.",
"Our goal is to teach the model where to pay attention in order to avoid focusing on meaningless statistical biases in the data.",
"In this work, we focus on positive explanations.",
"In other words, we expect the explanation to highlight information that contributes positively towards the label.",
"For example, if a piece of text contains the mention of a particular event, then the explanation will highlight parts of the text indicating the event, not non-existence of some other events.",
"This choice is because positive evidence is more natural for human to specify.",
"Formally, each training example is a tuple ( X, y, Z ) , where X = [ X 1 , X 2 , . . . , X n ] is the input text (length n ), y is the ground-truth label, and Z { 0 , 1 } n is the ground-truth explanation as a binary mask indicating whether each word contributes positive evidence toward the label y .",
"Recent studies have shown that the model's predictions can be explained by examining the saliency of the inputs (Simonyan et al., 2013; Hechtlinger, 2016; Ross et al., 2017; Li et al., 2016) as well as other internal elements of the model (Ghaeini et al., 2018b).",
"Given an example, for which the model makes a prediction, the saliency of a particular element is computed as the derivative of the model's prediction with respect to that element.",
"Saliency provides clues as to where the model is drawing strong evidence to support its prediction.",
"As such, if we constrain the saliency to be aligned with the desired explanation during learning, our model will be coerced to pay attention to the right evidence.",
"In computing saliency, we are dealing with high-dimensional data.",
"For example, each word is represented by an embedding of d dimensions.",
"To aggregate the contribution of all dimensions, we consider sum of the gradients of all dimensions as the overall vector/embedding contribution.",
"For the i -th word, if Z [ i ] = 1 , then its vector should have a positive gradient/contribution, otherwise the model would be penalized.",
"To accomplish this, we incorporate a saliency regularization term to the model cost function using hinge loss.",
"Equation 1 describes our cost function evaluated on a single example ( X, y, Z ) .",
"C ( , X, y, Z ) = L ( , X, y ) + n (cid:88) i =1 max 0 , Z i d (cid:88) j =1 f ( X, y ) X i,j (1) where L is a traditional model cost function (e.g. cross-entropy), is a hyper parameter, f specifies the model with parameter , and fX i,j represents the saliency of the j -th dimension of word X i .",
"The new term in the C penalizes negative gradient for the marked words in Z (contributory words).",
"Since C is differentiable respect to , it can be optimized using existing gradient-based optimization methods.",
"It is important to note that while Equation 1 only regularizes the saliency of the input layer, the same principle can be applied to the intermediate layers of the model (Ghaeini et al., 2018b) by considering the intermediate layer as the input for the later layers.",
"Note that if Z = 0 then C = L .",
"So, in case of lacking proper annotations for a specific sample or sequence, we can simply use 0 as its annotation.",
"This property enables our method to be easily used in semi-supervised or active learning settings.",
"To teach the model where to pay attention, we need ground-truth explanation annotation Z , which is difficult to come by.",
"As a proof of concept, we modify two well known real tasks (Event Extraction and Cloze-Style Question Answering) to simulate approximate annotations for explanation.",
"Details of the main tasks and datasets could be found in section B of the Appendix.",
"We describe the modified tasks as follows: 1) Event Extraction: Given a sentence, the goal is to determine whether the sentence contains an event.",
"Note that event extraction benchmarks contain the annotation of event triggers, which we use to build the annotation Z .",
"In particular, the Z value of every word is annotated to be zero unless it belongs to an event trigger.",
"For this task, we consider two well known event extraction datasets, namely ACE 2005 and Rich ERE 2015.",
"2) Cloze-Style Question Answering: Given a sentence and a query with a blank, the goal is to determine whether the sentence contains the correct replacement for the blank.",
"Here, annotation of each word is zero unless it belongs to the gold Sentence Conv-W3 Conv-W5 Max-Pooling Dim & Seq Max-Pooling Sentence Conv-W3 Conv-W5 Max-Pooling Dim & Seq Max-Pooling Query Conv-W3 Conv-W5 Max-Pooling Max-Pooling",
"replacement.",
"For this task, we use two well known cloze-style question answering datasets: Children Book Test Named Entity (CBT-NE) and Common Noun (CBT-CN) (Hill et al., 2015).",
"Here, we only consider the simple binary tasks as a first attempt to examine the effectiveness of our method.",
"However, our method is not restricted to binary tasks.",
"In multi-class problems, each class can be treated as the positive class of the binary classification.",
"In such a setting, each class would have its own explanation and annotation Z .",
"Note that for both tasks if an example is negative, its explanation annotation will be all zero.",
"In other words, for negative examples we have C = L .",
"We use simple CNN based models to avoid complexity.",
"Figure 1 illustrates the models used in this paper.",
"Both models have a similar structure.",
"The main difference is that QA has two inputs (sen-tence and query).",
"We first describe the event extraction model followed by the QA model.",
"Figure 1",
"(a) shows the event extraction model.",
"Given a sentence W = [ w 1 , . . . , w n ] where w i R d , we first pass the embeddings to two CNNs with feature size of d and window size of 3 and 5 .",
"Next we apply max-pooling to both CNN outputs.",
"It will give us the representation I R n d , which we refer to as the intermediate representation .",
"Then, we apply sequence-wise and dimension-wise max-poolings to I to capture D seq R d and D dim R n respectively.",
"D dim will be referred as decision representation .",
"Finally we pass the concatenation of D seq and D dim to a feed-forward layer for prediction.",
"Figure 1",
"(b) depicts the QA model.",
"The main difference is having query as an extra input.",
"To process the query, we use a similar structure to the main model.",
"After CNNs and max-pooling we end Dataset S. a P. b R. c F1 Acc.",
"up with Q R m d where m is the length of query.",
"To obtain a sequence independent vector, we apply another max-pooling to Q resulting in a query representation q R d .",
"We follow a similar approach to in event extraction for the given sentence.",
"The only difference is that we apply a dot product between the intermediate representations and query representation ( I i = I i (cid:12) q ).",
"As mentioned previously, we can apply saliency regularization to different levels of the model.",
"In this paper, we apply saliency regularization on the following three levels: 1) Word embeddings ( W ).",
"2) Intermediate representation ( I ).",
"3) Decision representation ( D dim ).",
"Note that the aforementioned levels share the same annotation for training.",
"For training details please refer to Section C of the Appendix.",
"Table 1 shows the performance of the trained models on ACE, ERE, CBT-NE, and CBT-CN datasets using the aforementioned models with and without saliency learning.",
"The results indicate that using saliency learning yields better accuracy and F1 measure on all four datasets.",
"It is interesting to note that saliency learning consistently helps the models to achieve noticeably higher precision without hurting the F1 measure and accuracy.",
"This observation suggests that saliency learning is effective in providing proper guidance for more accurate predictions Note that here we only have guidance for positive prediction.",
"To verify the statistical significance of the observed performance improvement over traditionally trained Dataset S. W. a I. b D. c ACE No 61.60 66.05 63.27 Yes 99.26 77.92 65.49 ERE No 51.62 56.71 44.37 Yes 99.77 77.45 51.78 CBT-NE No 52.32 65.38 68.81 Yes 98.17 98.34 95.56 CBT-CN No 47.78 53.68 45.15 Yes 99.13 98.94 97.06 a Word Level Saliency Accuracy.",
"models without saliency learning, we conducted the one-sided McNemar's test.",
"The obtained p-values are 0 .",
"03 , 0 .",
"03 , 0 .",
"0001 , and 0 .",
"04 for ACE, ERE, CBT-NE, and CBT-CN respectively, indicating that the performance gain by saliency learning is statistically significant.",
"In this section, we examine how well does the saliency of the trained model aligns with the annotation.",
"To this end, we define a metric called saliency accuracy ( s acc ), which measures what percentage of all positive positions of annotation Z indeed obtain a positive gradient.",
"Formally, s acc = 100 (cid:80) i ( Z i G i > 0) (cid:80) i Z i where G i is the gradient of element i and is the indicator function.",
"Table 2 shows the saliency accuracy at different layers of the trained model with and without saliency learning.",
"According to Table 2, our method achieves a much higher saliency accuracy for all datasets indicating that the learning was indeed effective in aligning the model saliency with the annotation.",
"In other words, important words will have positive contributions in the saliency-trained model, and as such, it learns to focus on the right part(s) of the data.",
"This claim can also be verified by visualizing the saliency, which is provided in the next section.",
"Here, we visualize the saliency of three positive samples from the ACE dataset for both the traditionally trained (Baseline Model) and the saliency-trained model (saliency-trained Model).",
"Table 3 shows the top 6 salient words (words with highest saliency/gradient) of three positive samples along with their contributory words (annotation Z ), the baseline model prediction ( PB ), and the saliency-trained model prediction ( PS ).",
"Darker red color indicates more salient words.",
"According to Table 3, both models correctly predict 1 and the saliency-trained model successfully pays attention to the expected meaningful words while the baseline model pays attention to mostly irrelevant ones.",
"More analyses are provided in section D of the Appendix.",
"Up to this point, we show that using saliency learning yields noticeably better precision, F1 measure, accuracy, and saliency accuracy.",
"Here, we aim to verify our claim that saliency learning coerces the model to pay more attention to the critical parts.",
"The annotation Z describes the influential words toward the positive labels.",
"Our hypothesis is that removing such words would cause more impact on the saliency-trained models since by training, they should be more sensitive to these words.",
"We measure the impact as the percentage change of the model's true positive rate.",
"This measure is cho-sen because negative examples do not have any annotated contributory words, and hence we are particularly interested in how removing contributory words of positive examples would impact the model's true positive rate (TPR).",
"Table 4 shows the outcome of the aforementioned experiment, where the last column lists the TPR reduction rates.",
"From the table, we see a consistently higher rate of TPR reduction for saliency-trained models compared to traditionally trained models, suggesting that the saliency-trained models are more sensitive to the perturbation of the contributory word(s) and confirming our hypothesis.",
"It is worth noting that we observe less substantial change to the true positive rate for the event task.",
"This is likely due to the fact that we are using trigger words as simulated explanations.",
"While trigger words are clearly related to events, there are often other words in the sentence relating to events but not annotated as trigger words.",
"In this paper, we proposed saliency learning , novel approach for teaching a model where to pay attention.",
"We demonstrated the effectiveness id Baseline Model Saliency-trained Model Z PBPS 1 The judge at Hassan's The judge at Hassan 's extradition 1 1 extradition hearing said extradition hearing said hearing that he found the French that he found the French said handwriting report very handwriting report very problematic, very confusing, problematic, very confusing, and with suspect conclusions.",
"of our method on multiple tasks and datasets using simulated explanations.",
"The results show that saliency learning enables us to obtain better precision, F1 measure and accuracy on these tasks and datasets.",
"Further, it produces models whose saliency is more properly aligned with the desired explanation.",
"In other words, saliency learning gives us more reliable predictions while delivering better performance than traditionally trained models.",
"Finally, our verification experiments illustrate that the saliency-trained models show higher sensitivity to the removal of contributory words in a positive example.",
"For future work, we will extend our study to examine saliency learning on NLP tasks in an active learning setting where real explanations are requested and provided by a human.",
"This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract N66001-17-2-4030."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"abstain",
"method",
"result",
"objective",
"other"
] |
[
"Natural language generation lies at the core of generative dialogue systems and conversational agents.",
"We describe an ensemble neural language generator, and present several novel methods for data representation and augmentation that yield improved results in our model.",
"We test the model on three datasets in the restaurant, TV and laptop domains, and report both objective and subjective evaluations of our best model.",
"Using a range of automatic metrics, as well as human evaluators, we show that our approach achieves better results than state-of-the-art models on the same datasets.",
"There has recently been a substantial amount of research in natural language processing (NLP) in the context of personal assistants, such as Cortana or Alexa.",
"The capabilities of these conversational agents are still fairly limited and lacking in various aspects, one of the most challenging of which is the ability to produce utterances with humanlike coherence and naturalness for many different kinds of content.",
"This is the responsibility of the natural language generation (NLG) component.",
"Our work focuses on language generators whose inputs are structured meaning representations (MRs).",
"An MR describes a single dialogue act with a list of key concepts which need to be conveyed to the human user during the dialogue.",
"Each piece of information is represented by a slot-value pair, where the slot identifies the type of information and the value is the corresponding content.",
"Dialogue act (DA) types vary depending on the dialogue manager, ranging from simple ones, such as a goodbye DA with no slots at all, to complex ones, such as an inform DA containing multiple slots with various types of values (see example in Table 1).",
"A natural language generator must produce a syntactically and semantically correct utterance from a given MR. The utterance should express all the information contained in the MR, in a natural and conversational way.",
"In traditional language generator architectures, the assembling of an utterance from an MR is performed in two stages: sentence planning , which enforces semantic correctness and determines the structure of the utterance, and surface realization , which enforces syntactic correctness and produces the final utterance form.",
"Earlier work on statistical NLG approaches were typically hybrids of a handcrafted component and a statistical training method (Langkilde and Knight, 1998; Stent et al., 2004; Rieser and Lemon, 2010).",
"The handcrafted aspects, however, lead to decreased portability and potentially limit the variability of the outputs.",
"New corpus-based approaches emerged that used semantically aligned data to train language models that output utterances directly from their MRs (Mairesse et al., 2010; Mairesse and Young, 2014).",
"The alignment provides valuable information during training, but the semantic annotation is costly.",
"The most recent methods do not require aligned data and use an end-to-end approach to training, performing sentence planning and surface realization simultaneously (Konstas and Lapata, 2013).",
"The most successful systems trained on unaligned data use recurrent neural networks (RNNs) paired with an encoder-decoder system design (Mei et al., 152 2016; Dusek and Jurccek, 2016), but also other concepts, such as imitation learning (Lampouras and Vlachos, 2016).",
"These NLG models, however, typically require greater amount of data for training due to the lack of semantic alignment, and they still have problems producing syntactically and semantically correct output, as well as being limited in naturalness (Nayak et al., 2017).",
"Here we present a neural ensemble natural language generator, which we train and test on three large unaligned datasets in the restaurant, television, and laptop domains.",
"We explore novel ways to represent the MR inputs, including novel methods for delexicalizing slots and their values, automatic slot alignment, as well as the use of a semantic reranker.",
"We use automatic evaluation metrics to show that these methods appreciably improve the performance of our model.",
"On the largest of the datasets, the E2E dataset (Novikova et al., 2017b) with nearly 50K samples, we also demonstrate that our model significantly outperforms the baseline E2E NLG Challenge 1 system in human evaluation.",
"Finally, after augmenting our model with stylistic data selection, subjective evaluations reveal that it can still produce overall better results despite a significantly reduced training set.",
"NLG is closely related to machine translation and has similarly benefited from recent rapid development of deep learning methods.",
"State-of-the-art NLG systems build thus on deep neural sequence-to-sequence models (Sutskever et al., 2014) with an encoder-decoder architecture (Cho et al., 2014) equipped with an attention mechanism (Bahdanau et al., 2015).",
"They typically also rely on slot delexicalization (Mairesse et al., 2010; Henderson et al., 2014), which allows the model to better generalize to unseen inputs, as exemplified by TGen (Dusek and Jurccek, 2016).",
"However, Nayak et al. (2017) point out that there are frequent scenarios where delexicalization behaves inadequately (see Section 5.1 for more details), and Agarwal and Dymetman (2017) show that a character-level approach to NLG may avoid the need for delexicalization, at the potential cost of making more semantic omission errors.",
"1 http://www.macs.hw.ac.uk/InteractionLab/E2E/",
"utterances with fewer missing or redundant slots.",
"Cuayahuitl et al. (2014) perform automatic slot labeling using a Bayesian network trained on a labeled dataset, and show that a method using spectral clustering can be extended to unlabeled data with high accuracy.",
"In one of the first successful neural approaches to language generation, Wen et al. (2015a) augment the generator's inputs with a control vector indicating which slots still need to be realized at each step.",
"Wen et al. (2015b) take the idea further by embedding a new sigmoid gate into their LSTM cells, which directly conditions the generator on the DA.",
"More recently, Dusek and Jurccek (2016) supplement their encoder-decoder model with a trainable classifier which they use to rerank the beam search candidates based on missing and redundant slot mentions.",
"Our work builds upon the successful attentional encoder-decoder framework for sequence-to-sequence learning and expands it through ensembling.",
"We explore the feasibility of a domain-independent slot aligner that could be applied to any dataset, regardless of its size, and beyond the reranking task.",
"We also tackle some challenges caused by delexicalization in order to improve the quality of surface realizations, while retaining the ability of the neural model to generalize.",
"We evaluated the models on three datasets from different domains.",
"The primary one is the recently released E2E restaurant dataset (Novikova et al., 2017b) with 48K samples.",
"For benchmarking we use the TV dataset and the Laptop dataset (Wen et al., 2016) with 7K and 13K samples, respectively.",
"Table 2 summarizes the proportions of the training, validation, and test sets for each dataset.",
"The E2E dataset is by far the largest one available for task-oriented language generation in the restaurant domain.",
"The human references were 153 Figure 1: Proportion of unique MRs in the datasets.",
"collected using pictures as the source of information, which was shown to inspire more informative and natural utterances (Novikova et al., 2016).",
"With nearly 50K samples, it offers almost 10 times more data than the San Francisco restaurant dataset introduced in Wen et al. (2015b), which has frequently been used for benchmarks.",
"The reference utterances in the E2E dataset exhibit su-perior lexical richness and syntactic variation, including more complex discourse phenomena.",
"It aims to provide higher-quality training data for end-to-end NLG systems to learn to produce more naturally sounding utterances.",
"The dataset was released as a part of the E2E NLG Challenge.",
"Although the E2E dataset contains a large number of samples, each MR is associated on average with 8 .",
"65 different reference utterances, effectively offering less than 5K unique MRs in the training set (Fig. 1).",
"Explicitly providing the model with multiple ground truths, it offers multiple alternative utterance structures the model can learn to apply for the same type of MR. The delexicalization, as detailed later in Section 5.1, improves the ability of the model to share the concepts across different MRs. The dataset contains only 8 different slot types, which are fairly equally distributed.",
"The number of slots in each MR ranges between 3 and 8, but the majority of MRs consist of 5 or 6 slots.",
"Even though most of the MRs contain many slots, the majority of the corresponding human utterances, however, consist of one or two sentences only (Ta-ble 3), suggesting a reasonably high level of sentence complexity in the references.",
"The reference utterances in the TV and the Laptop datasets were collected using Amazon Mechani-slots",
"cal Turk (AMT), one utterance per MR. These two datasets are similar in structure, both using the same 14 DA types.",
"2 The Laptop dataset, however, is almost twice as large and contains 25% more slot types.",
"Although both of these datasets contain more than a dozen different DA types, the vast majority (68% and 80% respectively) of the MRs describe a DA of either type inform or recommend (Fig. 2), which in most cases have very similarly structured realizations, comparable to those in the E2E dataset.",
"DAs such as suggest ,",
"?request , or goodbye are represented by less than a dozen samples, but are significantly easier to learn to generate an utterance from because the corresponding MRs contain three slots at the most.",
"Our model uses the standard encoder-decoder architecture with attention, as defined in Bahdanau et al. (2015).",
"Encoding the input into a sequence of context vectors instead of a single vector enables the decoder to learn what specific parts of the 2 We noticed the MRs with the",
"?request DA type in the TV dataset have no slots provided, as opposed to the Laptop dataset, so we imputed these in order to obtain valid MRs. 154 Decoder Encoder LSTM LSTM LSTM w 1 w 2 w LLSTM LSTM LSTM u l u 2 u 1 1 , 2 2 , 2 L, 2 Figure 3: Standard architecture of a single-layer encoder-decoder LSTM model with attention.",
"input sequence to pay attention to, given the output generated so far.",
"In this attentional encoder-decoder architecture, the probability of the output at each time step t of the decoder depends on a distinct context vector q t in the following way: P ( u t | u 1 , . . . , u t 1 , w ) = g ( u t 1 , s t , q t ) , where in the place of function g we use the softmax function over the size of the vocabulary, and s t is a hidden state of the decoder RNN at time step t , calculated as: s t = f ( s t 1 , u t 1 , q t ) .",
"The context vector q t is obtained as a weighted sum of all the hidden states h 1 , . . . , h L of the encoder: q t = LX i =1 t,i h i , where t,i corresponds to the attention score the t -th word in the target sentence assigns to the i -th item in the input MR. We compute the attention score t,i using a multi-layer perceptron (MLP) jointly trained with the entire system (Bahdanau et al., 2015).",
"The en-coder's and decoder's hidden states at time i and t , respectively, are concatenated and used as the input to the MLP, namely: t,i = softmax (cid:0) w T tanh ( W [ h i ; s t ]) (cid:1) , where W and w are the weight matrix and the vector of the first and the second layer of the MLP, respectively.",
"The learned weights indicate the level of influence of the individual words in the input sequence on the prediction of the word at time step t of the decoder.",
"The model thus learns a soft alignment between the source and the target sequence.",
"In order to enhance the quality of the predicted utterances, we create three neural models with different encoders.",
"Two of the models use a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) encoder, whereas the third model has a CNN (Le-Cun et al., 1998) encoder.",
"We train these models individually for a different number of epochs and then combine their predictions.",
"Initially, we attempted to combine the predictions of the models by averaging the log-probability at each time step and then selecting the word with the maximum log-probability.",
"We noticed that the quality, as well as the BLEU score of our utterances, decreased significantly.",
"We believe that this is due to the fact that different models learn different sentence structures and, hence, combining predictions at the probability level results in incoherent utterances.",
"Therefore, instead of combining the models at the log-probability level, we accumulate the top 10 predicted utterances from each model type using beam search and allow the reranker (see Section 4.4) to rank all candidate utterances taking the proportion of slots they successfully realized into consideration.",
"Finally, our system predicts the utterance that received the highest score.",
"Our training data is inherently unaligned, meaning our model is not certain which sentence in a multi-sentence utterance contains a given slot, which limits the model's robustness.",
"To accommodate this, we create a heuristic-based slot aligner which automatically preprocesses the data.",
"Its primary goal is to align chunks of text from the reference utterances with an expected value from the MR. Applications of our slot aligner are described in subsequent sections and in Table 4.",
"In our task, we have a finite set of slot mentions which must be detected in the corresponding utterance.",
"Moreover, from our training data we can see that most slots are realized by inserting a specific set of phrases into an utterance.",
"Using this insight, we construct a gazetteer, which primarily searches for overlapping content between the MR and each 155 sentence in an utterance, by associating all possible slot realizations with their appropriate slot type.",
"We additionally augment the gazetteer using a small set of handcrafted rules which capture cases not easily encapsulated by the above process, for example, associating the priceRange slot with a chunk of text using currency symbols or relevant lexemes, such as cheap or high-end.",
"While handcrafted, these rules are transferable across domains, as they target the slots, not the domains, and mostly serve to counteract the noise in the E2E dataset.",
"Finally, we use WordNet (Fellbaum, 1998) to further augment the size of our gazetteer by accounting for synonyms and other semantic relationships, such as associating pasta with the food[Italian] slot.",
"As discussed in Section 4.2, our model uses beam search to produce a pool of the most likely utterances for a given MR. While these results have a probability score provided by the model, we found that relying entirely on this score often results in the system picking a candidate which is objectively worse than a lower scoring utterance (i.e. one missing more slots and/or realizing slots in-correctly).",
"We therefore augment that score by multiplying it by the following score which takes the slot alignment into consideration: s align = N ( N u + 1) ( N o + 1) , where N is the number of all slots in the given MR, and N u and N o represent the number of unaligned slots (those not observed by our slot aligner) and over-generated slots (those which have been realized but were not present in the original MR), respectively.",
"We enhance the ability of our model to generalize the learned concepts to unseen MRs by delexicalizing the training data.",
"Moreover, it reduces the amount of data required to train the model.",
"We identify the categorical slots whose values always propagate verbatim to the utterance, and replace the corresponding values in the utterance with placeholder tokens.",
"The placeholders are eventually replaced in the output utterance in postprocessing by copying the values from the input MR. Examples of such slots would be name or near in the E2E dataset, and screensize or processor in the TV and the Laptop dataset.",
"Previous work identifies categorical slots as good delexicalization candidates that improve the performance of the model (Wen et al., 2015b; Nayak et al., 2017).",
"However, we chose not to delexicalize those categorical slots whose values can be expressed in alternative ways, such as less than $20 and cheap, or on the riverside and by the river.",
"Excluding these from delexicalization may lead to an increased number of incorrect realizations, but it encourages diversity of the model's outputs by giving it a freedom to choose among alternative ways of expressing a slot-value in different contexts.",
"This, however, assumes that the training set contains a sufficient number of samples displaying this type of alternation so that the model can learn that certain phrases are synonymous.",
"With its multiple human references for each MR, the E2E dataset has this property.",
"As Nayak et al. (2017) point out, delexicalization affects the sentence planning and the lexical choice around the delexicalized slot value.",
"For example, the realization of the slot food[ Italian ] in the phrase serves Italian food is valid, while the realization of food[ fast food ] in serves fast food food is clearly undesired.",
"Similarly, a naive delexicalization can result in a Italian restaurant, whereas the article should be an.",
"Another problem with articles is singular versus plural nouns in the slot value.",
"For example, the slot accessories in the TV dataset, can take on values such as remote control, as well as 3D glasses, where only the former requires an article before the value.",
"We tackle this issue by defining different placeholder tokens for values requiring different treatment in the realization.",
"For instance, the value Italian of the food slot is replaced by slot vow cuisine food , indicating that the value starts with a vowel and represents a cuisine, while fast food is replaced by slot con food , indicating that the value starts with a consonant and cannot be used as a term for cuisine.",
"The model thus learns to generate a before slot con food and an before slot vow cuisine food when appropriate, as well as to avoid generating the word food after food -slot placeholders that do not contain the word cuisine.",
"All these rules are general and 156 can automatically be applied across different slots and domains.",
"In our initial experiments, we tried expanding the training set by permuting the slot ordering in the MRs as suggested in Nayak et al. (2017).",
"From different slot orderings of every MR we sampled five random permutations (in addition to the original MR), and created new pseudo-samples with the same reference utterance.",
"The training set thus increased six times in size.",
"Using such an augmented training set might add to the model's robustness, nevertheless it did not prove to be helpful with the E2E dataset.",
"In this dataset, we observed the slot order to be fixed across all the MRs, both in the training and the test set.",
"As a result, for the majority of the time, the model was training on MRs with slot orders it would never encounter in the test set, which ultimately led to a decreased performance in prediction on the test set.",
"Taking a more utterance-oriented approach, we augment the training set with single-sentence utterances paired with their corresponding MRs. These new pseudo-samples are generated by splitting the existing reference utterances into single sentences and using the slot aligner introduced in Section 4.3 to identify the slots that correspond to each sentence.",
"The MRs of the new samples are created as the corresponding subsets of slots and, whenever the sentence contains the name (of the restaurant/TV/etc.) or a pronoun referring to it (such as it or its), the name slot is included too.",
"Finally, a new position slot is appended to every new MR, indicating whether it represents the first sentence or a subsequent sentence in the original utterance.",
"An example of this splitting technique can be seen in Table 4.",
"The training set almost doubled in size through this process.",
"Since the slot aligner works heuristically, not all utterances are successfully aligned with the MR. The vast majority of such cases, however, is caused by reference utterances in the datasets having incorrect or entirely missing slot mentions.",
"There is a noticeable proportion of those, so we leave them in the training set with the unaligned slots removed from the MR so as to avoid confusing the model when learning from such samples.",
"The quality of the training data inherently imposes an upper bound on the quality of the predictions of our model.",
"Therefore, in order to bring our model to produce more sophisticated utterances, we experimented with filtering the training data to contain only the most natural sounding and structurally complex utterances for each MR. For instance, we prefer having an elegant, single-sentence utterance with an apposition as the reference for an MR, rather than an utterance composed of three simple sentences, two of which begin with it (see the examples in Table 5).",
"We assess the complexity and naturalness of each utterance by the use of discourse phenomena, such as contrastive cues, subordinate clauses, or aggregation.",
"We identify these in the utterance's parse-tree produced by the Stanford CoreNLP toolkit (Manning et al., 2014) by defining a set of rules for extracting the discourse phenomena.",
"Furthermore, we consider the number of sentences used to convey all the information in the corresponding MR, as longer sentences tend to exhibit more advanced discourse phenomena.",
"Penalizing utterances for too many sentences contributes to reducing the proportion of generic reference utter-157 ances, such as the simple example in the above table, in the filtered training set.",
"Researchers in NLG have generally used both automatic and human evaluation.",
"Our results report the standard automatic evaluation metrics: BLEU (Papineni et al., 2002), NIST (Przybocki et al., 2009), METEOR (Lavie and Agarwal, 2007), and ROUGE-L (Lin, 2004).",
"For the E2E dataset experiments, we additionally report the results of the human evaluation carried out on the CrowdFlower platform as a part of the E2E NLG Challenge.",
"We built our ensemble model using the seq2seq framework (Britz et al., 2017) for TensorFlow.",
"Our individual LSTM models use a bidirectional LSTM encoder with 512 cells per layer, and the CNN models use a pooling encoder as in Gehring et al. (2017).",
"The decoder in all models was a 4-layer RNN decoder with 512 LSTM cells per layer and with attention.",
"The hyperparameters were determined empirically.",
"After experimenting with different beam search parameters, we settled on the beam width of 10.",
"Moreover, we employed the length normalization of the beams as defined in Wu et al. (2016), in order to encourage the decoder to favor longer sequences.",
"The length penalty providing the best results on the E2E dataset was 0.6, whereas for the TV and Laptop datasets it was 0.9 and 1.0, respectively.",
"We start by evaluating our system on the E2E dataset.",
"Since the reference utterances in the test set were kept secret for the E2E NLG Challenge, we carried out the metric evaluation using the validation set.",
"This was necessary to narrow down the models that perform well compared to the baseline.",
"The final model selection was done based on a human evaluation of the models' outputs on the test set.",
"In the first experiment, we assess what effect the augmenting of the training set via utterance splitting has on the performance of different models.",
"The results in Table 6 show that both the LSTM and the CNN models clearly benefit from additional pseudo-samples in the training set.",
"This can likely be attributed to the model having access to BLEU NIST METEOR ROUGE LSTM s 0.6664 8.0150 0.4420 0.7062 s 0.6930 8.4198 0.4379 0.7099 CNN s 0.6599 7.8520 0.4333 0.7018 s 0.6760 8.0440 0.4448 0.7055 Table 6: Automatic metric scores of different models tested on the E2E dataset, both unmodified ( s ) and augmented ( s ) through the utterance splitting.",
"more granular information about which parts of the utterance correspond to which slots in the MR. This may assist the model in sentence planning and building a stronger association between parts of the utterance and certain slots, such as that it is a substitute for the name.",
"Testing our ensembling approach reveals that reranking predictions pooled from different models produces an ensemble model that is overall more robust than the individual submodels.",
"The submodels fail to perform well in all four metrics at once, whereas the ensembling creates a new model that is more consistent across the different metric types (Table 7).",
"3 While the ensemble model decreases the proportion of incorrectly realized slots compared to its individual submodels on the validation set, on the test set it only outperforms two of the submodels in this aspect (Ta-ble 8).",
"Analyzing the outputs, we also observed that the CNN model surpassed the two LSTM models in the ability to realize the fast food and pub values reliably, both of which were hardly present in the validation set but very frequent in the test set.",
"On the official E2E test set, our ensemble model performs comparably to the baseline model, TGen (Dusek and Jurccek, 2016), in terms of automatic metrics (Table 9).",
"It is known that automatic metrics function only as a general and vague indication of the quality of an utterance in a dialogue (Liu et al., 2016; Novikova et al., 2017a).",
"Systems which score similarly according to these metrics could produce utterances that are significantly different because automatic 3 The scores here correspond to the model submitted to the E2E NLG Challenge.",
"Subsequently, we found better performing models according to some metrics: see Table 6.",
"metrics fail to capture many of the characteristics of natural sounding utterances.",
"Therefore, to better assess the structural complexity of the predictions of our model, we present the results of a human evaluation of the models' outputs in terms of both naturalness and quality, carried out by the E2E NLG Challenge organizers.",
"Quality examines the grammatical correctness and adequacy of an utterance given an MR, whereas naturalness assesses whether a predicted utterance could have been produced by a native speaker, irrespective of the MR. To obtain these scores, crowd workers ranked the outputs of 5 randomly selected systems from worst to best.",
"The final scores were produced using the TrueSkill algorithm (Sakaguchi et al., 2014) through pairwise comparisons of the human evaluation scores among the 20 competing systems.",
"Our system, trained on the E2E dataset without stylistic selection (Section 5.3), achieved the highest quality score in the E2E NLG Challenge, and was ranked second in naturalness.",
"4 The system's performance in quality (the primary metric) was significantly better than the competition according to the TrueSkill evaluation, which used bootstrap resampling with a p -level of p 0 .",
"05 .",
"Comparing these results with the scores achieved by the baseline model in quality and naturalness (5th and 6th 4 The system that surpassed ours in naturalness was ranked the last according to the quality metric.",
"place, respectively) reinforces our belief that models that perform similarly on the automatic metrics (Table 9) can exhibit vast differences in the structural complexity of their generated utterances.",
"After filtering the E2E training set as described in Section 5.3, the new training set consisted of approximately 20K pairs of MRs and utterances.",
"Interestingly, despite this drastic reduction in training samples, the model was able to learn more complex utterances that contained the natural variations of the human language.",
"The generated utterances exhibited discourse phenomena such as contrastive cues (see Example #1 in Table 10), as well as a more conversational style (Example #2).",
"Nevertheless, the model also failed to realize slots more frequently.",
"In order to observe the effect of stylistic data selection, we conducted a human evaluation where we assessed the utterances based on error rate and naturalness .",
"The error rate is calculated as the percentage of slots the model failed to realize divided by the total number of slots present among all samples.",
"The annotators ranked samples of utterance triples corresponding to three different ensemble models by naturalness from 1 to 3 (3 being the most natural, with possible ties).",
"The conservative model combines three submodels all trained on the full training set, the progressive one combines submodels solely trained on the filtered dataset, and finally, the hybrid is an ensemble of three models only one of which is trained on the full training set, so as to serve as a fallback.",
"training samples becomes evident by looking at the score of the progressive model (Table 11), where this model trained solely on the reduced dataset had the highest error rate.",
"We observe, however, that a hybrid ensemble model manages to perform the best in terms of the error rate, as well as the naturalness.",
"These results suggest that filtering the dataset through careful data selection can help to achieve better and more natural sounding utterances.",
"It significantly improves the model's ability to produce more elegant utterances beyond the [name] is... It is/has... format, which is only too common in neural language generators in this domain.",
"In order to provide a better frame of reference for the performance of our proposed model, we utilize the RNNLG benchmark toolkit 5 to evaluate our system on two additional, widely used datasets in NLG, and compare our results with those of a state-of-the-art model, SCLSTM (Wen et al., 2015b).",
"As Table 12 shows, our ensemble model performs competitively with the baseline on the TV dataset, and it outperforms it on the Laptop dataset by a wide margin.",
"We believe the higher error rate of our model can be explained by the significantly less aggressive slot delexicalization than the one used in SCLSTM.",
"That, however, gives our model a greater lexical freedom and, with it, the ability to produce more natural utterances.",
"The model trained on the Laptop dataset is also a prime example of how an ensemble model is capable of extracting the best learned concepts from each individual submodel.",
"By combining their knowledge and compensating thus for each other's weaknesses, the ensemble model can achieve a lower error rate, as well as a better overall quality, than any of the submodels individually.",
"In this paper we presented our ensemble attentional encoder-decoder model for generating natural utterances from MRs. Moreover, we presented novel methods of representing the MRs to improve performance.",
"Our results indicate that the proposed utterance splitting applied to the training set greatly improves the neural model's accuracy and ability to generalize.",
"The ensembling method paired with the reranking based on slot alignment also contributed to the increase in quality of the generated utterances, while minimizing the number of slots that are not realized during the generation.",
"This also enables the use of a less aggressive delexicalization, which in turn stimulates diversity in the produced utterances.",
"We showed that automatic slot alignment can be utilized for expanding the training data, as well as for utterance reranking.",
"Our alignment currently relies in part on empirically observed heuristics, and a more robust aligner would allow for more flexible expansion into new domains.",
"Since the stylistic data selection noticeably improved the diversity of our system's outputs, we believe this is a method with future potential, which we intend to further explore.",
"Finally, it is clear that current automatic evaluation metrics in NLG are only sufficient for providing a vague idea as to the system's performance; we postulate that leveraging the reference data to train a classifier will result in a more conclusive automatic evaluation metric.",
"This research was partially supported by NSF Robust Intelligence #IIS-1302668-002."
] | [
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"objective",
"objective",
"result",
"other"
] |
[
"Deep attention models have advanced the modelling of sequential data across many domains.",
"For language modelling in particular, the Transformer-XL a Transformer augmented with a long-range memory of past activations has been shown to be state-of-the-art across a variety of well-studied benchmarks.",
"The Transformer-XL incorporates a long-range memory at every layer of the network, which renders its state to be thousands of times larger than RNN predecessors.",
"However it is unclear whether this is necessary.",
"We perform a set of interventions to show that comparable performance can be obtained with 6X fewer long range memories and better performance can be obtained by limiting the range of attention in lower layers of the network.",
"When we read a book, we maintain representations of the characters and events in the text that help us understand the story.",
"We do this with a selective memorisation process; most of the finer details of the text are quickly forgotten and we retain a relatively compact representation of the book's details.",
"Early models of natural language used recurrent neural networks (RNNs) such as the Long Short-Term Memory (Hochreiter and Schmidhuber, 1997) which emulated this selective memory approach by modelling the past in a compact state vector.",
"The model learns to store relevant information within its state implicitly in order to optimise the task loss.",
"The LSTM has reigned as a state-of-the-art language model for over two decades since its inception in the '90s (Melis et al., 2017) and is arguably the most ubiquitous neural sequence model.",
"Unlike human memory systems, however, the LSTM struggles to reason over long-range contexts when reading text.",
"This has been observed in multiple contexts.",
"In the carefully curated LAMBADA benchmark (Paperno et al., 2016) which tests language model predictions on sections of book text that have long term structure as decided by human raters, LSTMs completely fail.",
"Namely LSTMs guess the correct word 0% of the time, where humans are considered to be above 70% accuracy.",
"For regular language modelling, Daniluk et al. (2017) observed that an LSTM augmented with attention would rarely attend beyond seven preceding words of context.",
"Samples from LSTMs language models quickly devolve into generic text devoid of an overall theme.",
"This has lead many to wonder whether there is any non-negligible long-range signal in the task of language modelling.",
"Recently we have seen that deep attention models can draw long-range signal from text, even when the objective is as simple as next-word prediction.",
"With the advent of the Transformer (Vaswani et al., 2017), significant gains in language modelling performance can be obtained by extending the models' attention to thousands of words.",
"The Transformer-XL (Dai et al., 2019), a Transformer variant specialised for long-range sequence modelling via the introduction of a cache of past activations, obtained state-of-the-art results in the four major LM benchmarks PTB (Mikolov et al., 2010), LM1B (Chelba et al., 2013), Enwik8 (Hut-ter, 2012), and WikiText (Merity et al., 2016).",
"In the case of the latter two, Dai et al. (2019) showed the model effectively used over one thousand words of context, and the resulting samples reflect a thematic consistency spanning paragraphs.",
"When Transformers are paired with long contexts and a large amount of data, e.g. GPT-2 (Radford et al., 2019) and Megatron (Shoeybi et al., 2019), the resulting samples are remarkable in their long-range consistency and stylistic realism.",
"However Transformers abandon the compact and selective representation of the past.",
"They store a hidden activation at every time-step (up to a given attention range) and every layer within the network.",
"This can consume orders of magnitude more space than prior RNN hidden states, or the original text.",
"E.g. a typical state-of-the-art LSTM language model state size may range from 4KB (Rae et al., 2018) to model Wikipedia articles to 64KB (Jozefowicz et al., 2016) to model news and is never greater than 1MB.",
"Whereas a current state-of-the-art 18-layer Transformer-XL state size for Wikipedia articles is 112MB.",
"The state is so large because a separate memory (e.g. 1600 vectors of size d=1024) is maintained per layer.",
"If this were found to be unnecessary then we can reduce the state's memory considerably.",
"In this paper we investigate a simple question: can we use short-range attention for the majority of layers in the Transformer and recover the same performance?",
"The hypothesis is that this should be possible, because many steps of reasoning will only involve short-range correlations, i.e. to piece characters together to form words or phrases.",
"We find indeed it is possible.",
"We recover comparable performance for long-range language modelling by using a small fraction (1/6th) of long-range memories to the baseline TransformerXL.",
"Crucially, we find it matters where long-range memories are placed in the network.",
"Placing them in the lower layers of the network is ineffective; placing them in the latter layers or interleaved across the network works much better.",
"We show that such a model trains with 2 X less time and memory, due to the reduction in expensive attention operations.",
"The Transformer is a deep neural network for processing sequences (Vaswani et al., 2017), it processes a window of n consecutive inputs x t n , . . . , x t in parallel.",
"At each layer it reasons over time using multi-head attention which we will briefly describe.",
"For a given layer l , let h t R 1 d be the hidden activation at time t , and h t R t d be the preceding activations in the same window.",
"Let k be the number of attention heads, then Q i , K i , V i R d dk are a set of learnable weight matrices which generate queries , keys , and values per attention head.",
"These are defined to be q i = h t Q i as the query, k i = h t K i to be the keys, and v i = h t V i to be the values for attention head i .",
"The attention head output is defined to be, attn i ( h t , h t ) = ( q i k Ti ) v i Figure 1: Comparison of arrangement patterns for long-range and short-range memories across the layers of a Transformer.",
"where ( ) is defined to be the softmax operator.",
"Attention is the linear combination of each attention head, attn = (cid:80) ki =1 W i attn i with a learnable weight.",
"The attention operation consumes O ( n ) compute per step and thus O ( n 2 ) for the window of inputs at each layer.",
"The Transformer-XL (TXL) proposes concatenating the past activations from the same window h t with a memory of size m n of past activations from the preceding windows of inputs (Dai et al., 2019).",
"This results in an attention cost of O ( n ( n + m )) which can be significantly cheaper than processing all n + m inputs in parallel, which would require O (( n + m ) 2 ) .",
"The TXL's memory can be considered to be a state, alike to an RNN.",
"However it requires a considerable space: l m d .",
"For character-level language modelling Dai et al. (2019) use a 24-layer model on Enwik8, with memory size m = 3800 , and hidden size d = 1024 ; this consumes 356MB at single precision.",
"In contrast, the average article size is 8KB.",
"We investigate whether the Transformer-XL can perform comparably with fewer long-range memory (LRM) layers on the two prominent long-range language modelling benchmarks, Enwik8 and WikiText-103.",
"We perform intervention experiments where we replace the long-range memory, for a given layer, with a short-range memory (SRM) of size m s = 128 for a subset of layers.",
"We choose m s = 128 because the TPUv3 contains a 128x128 matrix multiply unit, and any smaller size (other than zero) is padded up to 128.",
"Thus it is a reasonable small size.",
"We chose m s > 0 such that the oldest activations have some context.",
"Because we only modify the 0.0 0.5 1.0 1.5 Training tokens (B) 1.0 1.05 1.1 1.15 1.2 1.25 1.3 1.35 B i t s p e r c h a r a c t e r ( BPC ) No.",
"memory sizes of the model, which are independent of parameter count, the number of model parameters is always held constant (277M for Enwik8 and 257M for WikiText-103).",
"We consider a model with a varying number of LRMs from l (the number of layers in the network, i.e. the usual case) to a range of fewer values, l 2 , l 6 , 1 , and 0 .",
"We also consider where the LRMs should be arranged within the network; considering",
"(i) interleaved with equal spacing,",
"(ii) the first layer(s) of the network, and",
"(iii) the latter layer(s) of the network.",
"This is displayed visually in Figure 1. 3.2 Model Setup Aside from memory configurations, we use an identical model setup to Dai et al. (2019).",
"During training we periodically evaluate on the validation set to choose an early stopping criterion.",
"In the case of Enwik8 we periodically evaluate on the first 500K characters of the validation set to speed up model evaluation.",
"We train all models with an overall batch size of 32 , using 16 TPUv3 chips running synchronously.",
"We use a window size of n = 384 , a long-range memory (LRM) size of m = 2304 .",
"At test-time we extend the LRM size to m = 6000 , chosen from a sweep over the validation set.",
"We plot the Enwik8 learning curves for a subset of layer variants in Figure 2. The worst-performing, is the variant with a single long-term memory at the lowest layer (black curve).",
"However perhaps more surprisingly, we see a model with 12 LRMs at the lower layers of the network is actually worse than a model with a single LRM on the final layer 0 1 4 12 24 Num.",
"(dark green).",
"We then see that the full TXL with 24 LRMs is seemingly identical to the 12 LRM models, with either LRMs interleaved across the whole model or LRMs placed in the final 12 layers.",
"Note, we were not able to run these models with multiple seeds per hyper-parameter configuration but we do generally find language models optimise consistently (e.g. unlike deep reinforcement learning models).",
"We show the final test performance in bits-per-character (BPC) alongside the corresponding word-level perplexity for models with a varying number of LRMs and LRM arrangements in Figure 3. Position clearly matters, if we place long-range memories in the first layers then performance is significantly worse.",
"We hypothesise that this is because it is better to build up representations with local context before exploiting long-range correlations.",
"For example, we need to piece together characters into an identified named entity (say) before we should query thousands of time-steps back for its prior occurrence.",
"We followed-up by running an additional arrangement of only placing LRMs in the middle layers and found this to be worse than interleaved or final ( 1 . 01 bpc for 4 long-range memories) which shows there is significant benefit to having some long-range memories in the higher layers.",
"Crucially, we are able to match (and slightly exceed) the full model's test performance with 12 LRMs, and even a model with 4 LRMs is very close ( 0 . 9846 w/ 24 vs 0 . 9916 w/ 4 interleaved).",
"It is worth noting that our TXL baseline actually outperforms the published version on Enwik8: 0 .",
"985 BPC (ours) vs 0 .",
"993 (Dai et al., 2019), which provides credence to the quality of the experimental setup.",
"We also inspect word-level language modelling on WikiText-103 , using the same 18 -layer TransformerXL parameters (Dai et al., 2019).",
"We obtain a baseline test perplexity of 18 .",
"3 (matching the published value), and obtain 18.4 and 18.6 for interleaved and last-layer spacing respectively when using l/ 6 (i.e. 3) LRMs.",
"We also try placing 3 LRMs on the first three layers and obtain 20.1 perplexity.",
"We remark that",
"(i) long-range memory is important for a significant improvement in performance,",
"(ii) it is better to not place LRMs in the shallow layers, and",
"(iii) it is not necessary to have as many long-range memories as model-layers for comparable modelling performance.",
"We show the performance of training the Transformer-XL with a varying number of LRMs for the Enwik8 architecture in Table 1. This shows the latency (per input token) and peak activation memory consumption during a training iteration on Enwik8 for a range of long-range memory layers.",
"We see the reduction of long-range memories from 24 layers to 4 layers cuts the activation peak memory by 3X.",
"Thus it can be a worthwhile and simple performance improvement.",
"In the preceding experiments we fix the short-range memory (SRM) length to 128 and vary the frequency and arrangement of long-range memory layers.",
"We now consider varying the length of SRM for an architecture with l 6 long-range memories to determine whether this impacts modelling performance.",
"We train (and evaluate) the model with twenty SRM lengths from 32-2048, and incorporate four interleaved LRM layers (trained at 2304, evaluated at 6000).",
"The results are plotted in Figure 4.",
"Shortening the memory size to less than 128 provides no speedup for our TPU training setup, as matrices are multiplied in 128x128 blocks, however it incurs a drop in modelling performance.",
"Furthermore 32.0 64.0 128.0 256.0 512.0 1024.0 2048.0 Short-Range Memory Size 0.96 0.98 1.00 1.02 1.04 1.06 BPC Figure 4: Enwik8 test performance for varying short-range memory length (at both train and test).",
"increasing the memory size beyond 512 further slows the model down and reduces modelling performance.",
"We see an optimal SRM length is around 512 steps which obtains 0.974 BPC on Enwik8 a non-trivial performance boost over the 0.99BPC TransformerXL baseline.",
"Thus we conclude that limiting the range of attention can not only speed up the model but improve performance.",
"There have been several recent works exploring deep sequence models with a small attention window per layer.",
"Wu et al. (2019) proposed the dynamic convolution , where the model directly produces a set of weights over a sequence in memory and then combines them with a convolution.",
"The attention window is thus restricted to the convolution kernel size a couple of words.",
"Wu et al. (2019) show comparable performance to the Transformer at sentence-level machine translation.",
"However they do not investigate longer-context applications.",
"Rae et al. (2019) propose shortening the range of attention for Transformers by compressing the distant past.",
"They find the first layers of the model are the most compressible, and obtain state-of-the-art in several long-range language model benchmarks (WikiText-103 and Enwik8).",
"However they do not consider restricting the range of attention for a subset of layers to save compute and space.",
"Sukhbaatar et al. (2019) propose an adaptive attention scheme for the TransformerXL where the model can learn to modulate the size of its attention window per attention head.",
"They observe the neural network converges to using smaller attention spans for lower layers in the network, which adds additional evidence to the finding that long-range memories are not useful in these lower layers.",
"Because Sukhbaatar et al. (2019) place the range of attention in the optimisation problem it is very flex-ible.",
"In this study we promote interpretability by making a set of direct interventions to the memory size across layers.",
"This does result in less generality, as we explicitly create two types of attention ranges, where adaptive attention can select many.",
"However ultimately the two approaches of generality and interpretability complement one another.",
"(Fan et al., 2020) show that one can train a transformer by having all layers attend to a single memory that is the linear combination of all layers' memories.",
"Thus at training all layers' memories are maintained, but at evaluation or generation time there can be a single memory.",
"This gives evidence that we do not need to store many separate representations for long-range memory to perform well at test time, but the approach does require storing them during training and incurs significant slowdown to the model.",
"We explore a set of interventions to the Transformer-XL's architecture that are very simple to implement, i.e. a few lines of code, but shed light on the fundamental workings of the model when modelling long sequences of text.",
"In our set of interventions, we only modify the flow of information within the network, versus the number of trainable parameters.",
"Thus we do not have confounding factors of varying network capacity.",
"Our finding is that we do not need long-range memories at every layer of the network.",
"Comparable performance can be obtained with a fraction (1/6th) of long-range memories if they are spaced equally across the network, or in the latter layers.",
"We hypothesise this is because modelling long-range correlations is best done when representations are first formed from short-range correlations.",
"We also find a real performance drop using a single long-range memory, proving long-range dependency is not superfluous to the task.",
"This study has implications for practitioners interested in speeding up deep Transformer-XL models.",
"There have been a number of long-range transformer variants published in the past year (Lample et al., 2019; Rae et al., 2019; Roy et al., 2020; Ki-taev et al., 2020) which aim to extend the range of attention via sparsity or compression.",
"However these models maintain the use of uniform memory capacity for each layer.",
"Here we show that long-range attention does not need to be scaled for every layer, and thus these architectures can be further sped-up with this observation.",
"This study also has implications for researchers using a single long-range memory, which has typically been the approach in traditional RNN + attention systems.",
"For example, the Differentiable Neural Computer (Graves et al., 2016) and recent memory-augmented agents for reinforcement learning, which utilise a distinct working memory with a single long-range episodic memory (Fortunato et al., 2019).",
"Perhaps performance could be improved by adding additional layers of episodic memories.",
"The practice of storing deep long-range memories is not scalable if we wish for neural networks to have the kinds of large-horizon reasoning that humans possess.",
"We believe the solution of maintaining a small number of long-range memories is a step towards tractable lifelong memory."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"result",
"method",
"result",
"abstain",
"result",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method"
] |
[
"The scarcity of parallel data is a major obstacle for training high-quality machine translation systems for low-resource languages.",
"Fortunately, some low-resource languages are linguistically related or similar to high-resource languages; these related languages may share many lexical or syntactic structures.",
"In this work, we exploit this linguistic overlap to facilitate translating to and from a low-resource language with only monolingual data, in addition to any parallel data in the related high-resource language.",
"Our method, NMT-Adapt, combines denoising autoencoding, back-translation and adversarial objectives to utilize monolingual data for low-resource adaptation.",
"We experiment on 7 languages from three different language families and show that our technique significantly improves translation into low-resource language compared to other translation baselines.",
"While machine translation (MT) has made incredible strides due to the advent of deep neural machine translation (NMT) (Sutskever et al., 2014; Bah-danau et al., 2014) models, this improvement has been shown to be primarily in well-resourced languages with large available parallel training data.",
"However with the growth of internet communication and the rise of social media, individuals worldwide have begun communicating and producing content in their native low-resource languages.",
"Many of these low-resource languages are closely related to a high-resource language.",
"One such example are dialects: variants of a language traditionally considered oral rather than written.",
"Machine translating dialects using models trained on This work was conducted while author was working at Facebook AI the formal variant of a language (typically the high-resource variant which is sometimes considered the standardized form) can pose a challenge due to the prevalence of non standardized spelling as well significant slang vocabulary in the dialectal variant.",
"Similar issues arise from translating a low-resource language using a related high-resource model (e.g., translating Catalan with a Spanish MT model).",
"While an intuitive approach to better translating low-resource related languages could be to obtain high-quality parallel data.",
"This approach is often infeasible due to lack specialized expertise or bilingual translators.",
"The problems are exacerbated by issues that arise in quality control for low-resource languages (Guzman et al., 2019).",
"This scarcity motivates our task of learning machine translation models for low-resource languages while leveraging readily available data such as parallel data from a closely related language or monolingual data in the low-resource language.",
"1 The use of monolingual data when little to no parallel data is available has been investigated for machine translation.",
"A few approaches involve synthesising more parallel data from monolingual data using backtranslation (Sennrich et al., 2015) or mining parallel data from large multilingual corpora (Tran et al., 2020; El-Kishky et al., 2020b,a; Schwenk et al., 2019).",
"We introduce NMT-Adapt, a zero resource technique that does not need parallel data of any kind on the low resource language.",
"We investigate the performance of NMT-Adapt at translating two directions for each low-resource language: (1) low-resource to English and (2) English to low-resource.",
"We claim that translating into English can be formulated as a typical unsupervised domain adaptation task, with the high-resource language as the source domain and the related low-resource language as the target domain.",
"(We use the terms low-resource language and dialect or variant interchangeably.)",
"We then show that adversarial domain adaptation can be applied to this related language translation task.",
"For the second scenario, translating into the low-resource language, the task is more challenging as it involves unsupervised adaptation of the generated output to a new domain.",
"To approach this task, NMT-Adapt jointly optimizes four tasks to perform low-resource translation: (1) denoising autoencoder (2) adversarial training (3) high-resource translation and (4) low-resource backtranslation.",
"We test our proposed method and demonstrate its effectiveness in improving low-resource translation from three distinct families: (1) Iberian languages, (2) Indic languages, and (3) Semitic languages, specifically Arabic dialects.",
"We make our code and resources publicly available.",
"Related Work. Zero-shot translation: Our work is closely related to zero-shot translation (Johnson et al., 2017; Chen et al., 2017; Al-Shedivat and Parikh, 2019).",
"However, while zero-shot translation translates between a language pair with no parallel data, there is an assumption that both languages in the target pair have some parallel data with other languages.",
"As such, the system can learn to process both languages.",
"In one work, Currey and Heafield (2019) improved zero-shot translation using monolingual data on the pivot language.",
"However, in our scenario, there is no parallel data between the low-resource language and any other language.",
"In other work, Arivazhagan et al. (2019) showed that adding adversarial training to the encoder output could help zero shot training.",
"We adopt a similar philosophy in our multi-task training to ensure our low-resource target is in the same latent space as the higher-resource language.",
"Unsupervised translation: A related set of work is the family of unsupervised translation techniques; these approaches translate between language pairs with no parallel corpus of any kind.",
"In the work of Artetxe et al. (2018) and Lample et al. (2018a), unsupervised translation is performed by training denoising autoencoding and backtranslation tasks concurrently.",
"In these approaches, multiple pretraining methods were proposed to better initialize the model (Lample et al., 2018b; Lample and Con-neau, 2019; Liu et al., 2020; Song et al., 2019).",
"Different approaches were proposed that used parallel data between X-Y to improve unsupervised translation between X-Z (Garcia et al., 2020a; Li et al., 2020; Wang et al., 2020).",
"This scenario differs from our setting as it does not assume that Y and Z are similar languages.",
"These approaches leverage a cross-translation method on a multilingual NMT model: for a parallel data pair (S_x, S_y), they translate S_x into language Z with the current model to obtain S'_z.",
"They then use (S_y, S'_z) as an additional synthesized data pair to further improve the model.",
"Garcia et al. (2020b) experiment using multilingual cross-translation on low-resource languages with some success.",
"While these approaches view the parallel data as auxiliary, to supplement unsupervised NMT, our work looks at the problem from a domain adaptation perspective.",
"We attempt to use monolingual data in Z to make the supervised model trained on X-Y generalize to Z. Leveraging high-resource languages to improve low-resource translation: Several works have leveraged data in high-resource languages to improve the translation of similar low-resource languages.",
"Neubig and Hu (2018) showed that it is beneficial to mix the limited parallel data pairs of low-resource languages with high-resource language data.",
"Lakew et al. (2019) proposed selecting high-resource language data with lower perplexity in the low-resource language model.",
"Xia et al. (2019) created synthetic sentence pairs by unsupervised machine translation, using the high-resource language as a pivot.",
"However, these previous approaches emphasize translating from the low-resource language to English, while the opposite direction is either not considered or shows poor translation performance.",
"Siddhant et al. (2020) trained multilingual translation and denoising simultaneously, and showed that the model could translate languages without parallel data into English near the performance of supervised multilingual NMT.",
"Similar language translation: Similar to our work, methods have been proposed that leverage similar languages to improve translation.",
"Hassan et al. (2017) generated synthetic English-dialect parallel data from English-main language corpus.",
"However, this method assumes that the vocabulary in the main language could be mapped word by word into the dialect vocabulary, and they calculate the corresponding word for substitution using localized projection.",
"This approach differs from our work in that it relies on the existence of a seed bilingual lexicon to the dialect/similar language.",
"Additionally, the approach only considers translating from a dialect to English and not the reverse direction.",
"Other work trains a massively multilingual many-to-many model and demonstrates that high-resource training data improves related low-resource language translation (Fan et al., 2020).",
"In other work, Lakew et al. (2018) compared ways to model translations of different language varieties in a setting where parallel data for both varieties is available but the variety of some pairs may not be labeled.",
"Another line of work focuses on translating between similar languages.",
"In one such work, Pourdamghani and Knight (2017) learned a character-based cipher model.",
"In other work, Wan et al. (2020) improved unsupervised translation between the main language and the dialect by separating the token embeddings into pivot and private parts while performing layer coordination.",
"We describe the NMT-Adapt approach to translating a low-resource language into and out of English without utilizing any low-resource language parallel data.",
"In Section 3.1, we describe how NMT-Adapt leverages a novel multi-task domain adaptation approach to translating English into a low-resource language.",
"In Section 3.2, we then describe how we perform source-domain adaptation to translate a low-resource language into English.",
"Finally, in Section 3.3, we demonstrate how we can leverage these two domain adaptations, to perform iterative backtranslation further improving translation quality in both directions.",
"To translate from English into a low-resource language, NMT-Adapt is initialized with a pretrained mBART model whose pretraining is described in (Liu et al., 2020).",
"Then, as shown in Figure 1, we continue to train the model simultaneously with four tasks inspired by (Lample et al., 2018a) and update the model with a weighted sum of the gradients from different tasks.",
"The language identifying tokens are placed at the same position as in mBART.",
"For the encoder, both high and low-resource language source text, with and without noise, use the language token of the high-resource language [HRL] in the pretrained mBART.",
"For the decoder, the related high and low-resource languages use their own, different , language tokens.",
"We initialize the language token embedding of the low-resource language with the embedding from the high-resource language token.",
"Task 1: Translation. The first task is translation from English into the high-resource language (HRL), which is trained using readily available high-resource parallel data.",
"This task aims to transfer high-resource translation knowledge to aid in translating into the low-resource language.",
"We use the cross-entropy loss formulated as follows: L_t = L_CE(D(Z_En, [HRL]), X_HRL), where Z_En = E(X_En, [En]). (1)",
"(X_En, X_HRL) is a parallel sentence pair.",
"E and D denote the encoder and decoder functions, which take (input, language token) as parameters.",
"L_CE denotes the cross-entropy loss.",
"Task 2: Denoising Autoencoding. For this task, we leverage monolingual text by introducing noise to each sentence, feeding the noised sentence into the encoder, and training the model to generate the original sentence.",
"The noise we use is similar to (Lample et al., 2018a), which includes a random shuffling and masking of words.",
"The shuffling is a random permutation of words, where the position of words is constrained to shift at most 3 words from the original position.",
"Each word is masked with a uniform probability of 0.1.",
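The noise model described above can be sketched as follows. The bounded-shuffle implementation (sorting indices perturbed by a uniform offset, a standard trick for Lample et al. (2018a)-style noise) and all names here are our own, not the paper's code.

```python
import random

def add_noise(tokens, max_shift=3, mask_prob=0.1, mask_token="<mask>", rng=None):
    """Noise a token sequence: a local shuffle where each word ends up at
    most `max_shift` positions from where it started, then uniform masking."""
    rng = rng or random.Random(0)
    # Sorting by (index + U(0, max_shift + 1)) yields a permutation whose
    # displacement from the original position is bounded by max_shift.
    keys = [i + rng.uniform(0, max_shift + 1) for i in range(len(tokens))]
    shuffled = [tok for _, tok in sorted(zip(keys, tokens))]
    # Independently mask each word with probability mask_prob.
    return [mask_token if rng.random() < mask_prob else tok for tok in shuffled]
```

With a fixed seed the function is deterministic, which makes the displacement bound easy to check in tests.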
"This task aims to learn a feature space for the languages, so that the encoder and decoder could transform between the features and the sentences.",
"This is especially necessary for the low-resource language if it is not already pretrained in mBART.",
"Adding noise was shown to be crucial to translation performance in (Lample et al., 2018a), as it forces the learned feature space to be more robust and contain high-level semantic knowledge.",
"We train the denoising autoencoder on both the low-resource and related high-resource languages and compute the loss as follows: L_da = Σ_{i ∈ {LRL, HRL}} L_CE(D(Z_i, [i]), X_i), where Z_i = E(N(X_i), [HRL]). (2)",
"X_i is from the monolingual corpus, and N is the noise function.",
"Task 3: Backtranslation. This task is trained on English to low-resource backtranslation data.",
"The aim of this task is to capture a language-modeling effect in the low-resource language.",
"We describe how we obtain this data using the high-resource translation model to bootstrap backtranslation in Section 3.3.",
"The objective used is L_bt = L_CE(D(Z'_En, [LRL]), X_LRL), where Z'_En = E(Y_En, [En]). (3)",
"Task 4: Adversarial Training. The final task aims to make the encoder output language-agnostic features.",
"The representation is language agnostic to the noised high and low-resource languages as well as English.",
"Ideally, the encoder output should contain the semantic information of the sentence and little to no language-specific information.",
"This way, any knowledge learned from the English to high-resource parallel data can be directly applied to generating the low-resource language by simply switching the language token during inference, without capturing spurious correlations (Gu et al., 2019a).",
"To adversarially mix the latent space of the encoder among the three languages, we use two critics (discriminators).",
"The critics are recurrent networks to ensure that they can handle variable-length text input.",
"Similar to Gu et al. (2019b), the adversarial component is trained using a Wasserstein loss, which is the difference of expectations between the two types of data.",
"This loss minimizes the earth mover's distance between the distributions of different languages.",
"We compute the loss functions as follows: L_adv1 = E[Disc(Z_HRL)] - E[Disc(Z_LRL)] (4) and L_adv2 = E[Disc(Z_HRL ∪ Z_LRL)] - E[Disc(Z_En ∪ Z'_En)] (5). As shown in Equation 4, the first critic is trained to distinguish between the high and low-resource languages.",
"Similarly, in Equation 5, the second critic is trained to distinguish between English and non-English (both high- and low-resource languages).",
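A minimal sketch of the two Wasserstein critic losses (Equations 4 and 5), assuming each critic is a callable mapping an encoder output to a scalar score; the function and argument names are ours, not the paper's implementation.

```python
def wasserstein_critic_losses(disc1, disc2, z_hrl, z_lrl, z_en, z_en_bt):
    """Compute the two critic losses as differences of mean critic scores.
    z_hrl/z_lrl: encoder outputs for noised HRL/LRL text; z_en/z_en_bt:
    encoder outputs for English parallel and backtranslation inputs."""
    mean = lambda xs: sum(xs) / len(xs)
    # Critic 1 (Eq. 4): separate high-resource from low-resource representations.
    l_adv1 = mean([disc1(z) for z in z_hrl]) - mean([disc1(z) for z in z_lrl])
    # Critic 2 (Eq. 5): separate non-English (HRL and LRL) from English.
    l_adv2 = (mean([disc2(z) for z in z_hrl + z_lrl])
              - mean([disc2(z) for z in z_en + z_en_bt]))
    # The critics maximize these quantities; the encoder is updated with the
    # opposite sign so the languages become indistinguishable.
    return l_adv1, l_adv2
```

The encoder outputs here stand in for variable-length representations; in the actual model the critics are recurrent networks over those sequences.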
"Fine-tuning with backtranslation: Finally, we found that after training with the four tasks concurrently, it is beneficial to fine-tune solely on backtranslation for one pass before inference.",
"We posit that this is because while spurious correlations are reduced by the adversarial training, they are not completely eliminated and using solely the language tokens to control the output language is not sufficient.",
"By fine-tuning on backtranslation, we are further adapting to the target side and encouraging the output probability distribution of the decoder to better match the desired output language.",
"We propose to model translating from the low-resource language to English as a domain adaptation task and design our model based on insights from domain-adversarial neural network (DANN) (Ganin et al., 2017), a domain adaptation technique widely used in many NLP tasks.",
"This time, we train three tasks simultaneously. Task 1: Translation. We train high-resource to English translation on parallel data, with the goal of adapting this knowledge to translate low-resource sentences.",
"We compute this loss as follows: L_t = L_CE(D(Z_HRL, [En]), X_En), where Z_HRL = E(X_HRL, [HRL]). (6)",
"Task 2: Backtranslation. This task trains low-resource to English backtranslation, which we describe in Section 3.3.",
"The objective is as follows: L_bt = L_CE(D(Z'_LRL, [En]), X_En), where Z'_LRL = E(Y_LRL, [HRL]). (7)",
"Task 3: Adversarial Training. We feed sentences from the monolingual corpora of the high- and low-resource languages into the encoder, and the encoder output is trained so that its input language cannot be distinguished by a critic.",
"The goal is to encode the low-resource data into a shared space with the high-resource, so that the decoder trained on the translation task can be directly used.",
"No noise was added to the input, since we did not observe an improvement.",
"There is only one recurrent critic, which uses the Wasserstein loss computed as follows: L_adv = E[Disc(Z_HRL)] - E[Disc(Z_LRL)], where Z_LRL = E(X_LRL, [HRL]). (8)",
"Similar to the reverse direction, we initialize NMT-Adapt with a pretrained mBART, and use the same language token for high-resource and low-resource in the encoder.",
"We describe how we can alternate training into/out-of English models to create better backtranslation data improving overall quality.",
"The iterative training process is described in Algorithm 1.",
"We first create English to low-resource backtranslation data by fine-tuning mBART on the high-resource to English parallel data.",
"Using this model, we translate monolingual low-resource text into English treating the low-resource sentences as if they were in the high-resource language.",
"The resulting sentence pairs are used as backtranslation data to train the first iteration of our English to low-resource model.",
"After training English to low-resource, we use the model to translate the English sentences in the English-HRL parallel data into the low-resource language, and use those sentence pairs as backtranslation data to train the first iteration of our low-resource to English model.",
"We then use the first low-resource to English model to generate backtranslation pairs for the second English to low-resource model.",
"We iteratively repeat this process of using our model of one direction to improve the other direction.",
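The iterative procedure of Algorithm 1 can be sketched as follows. All model-training steps are injected as callables, and every name is our own abstraction rather than the paper's code; real training would fine-tune mBART-based models at each step.

```python
def iterative_training(train_en2lrl, train_lrl2en, hrl2en_bootstrap,
                       en_side, lrl_mono, iterations=2):
    """Alternate En->LRL and LRL->En training, each direction generating
    backtranslation data for the other."""
    # Bootstrap: translate LRL monolingual text into English with a HRL->En
    # model, treating the LRL sentences as if they were in the HRL.
    bt_en2lrl = [(hrl2en_bootstrap(x), x) for x in lrl_mono]  # (En', LRL)
    en2lrl = lrl2en = None
    for _ in range(iterations):
        # Train En->LRL on the current backtranslation pairs ...
        en2lrl = train_en2lrl(bt_en2lrl)
        # ... then backtranslate the English side of the En-HRL parallel data
        # to produce (LRL', En) pairs for the LRL->En direction.
        bt_lrl2en = [(en2lrl(y), y) for y in en_side]
        lrl2en = train_lrl2en(bt_lrl2en)
        # Regenerate En->LRL backtranslation data with the improved model.
        bt_en2lrl = [(lrl2en(x), x) for x in lrl_mono]
    return en2lrl, lrl2en
```

With toy lookup-table "models" the loop can be exercised end to end, which is how the skeleton is tested below.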
"We experiment on three groups of languages.",
"In each group, we have a large quantity of parallel training data for one language (the high-resource language) and no parallel data for the related languages, to simulate a low-resource scenario.",
"Our three groupings include",
"(i) Iberian languages, where we treat Spanish as the high-resource language and Portuguese and Catalan as related lower-resource languages.",
"[Table 1 (Language | Group | Training Set | Train Size | Test Set | Test Size | Monolingual | Mono Size): Spanish | Iberian | QED (Guzman et al., 2013) | 694k | N/A | - | CC-100 | 1M. Catalan | Iberian | N/A | Global Voices (Tiedemann, 2012) | 15k | CC-100 | 1M. Portuguese | Iberian | N/A | TED (Qi et al., 2018) | 8k | CC-100 | 1M. Hindi | Indic | IIT Bombay (Kunchukuttan et al., 2018) | 769k | N/A | - | CC-100 | 1M. Marathi | Indic | N/A | TICO-19 (Anastasopoulos et al., 2020) | 2k | CC-100 | 1M. Nepali | Indic | N/A | FLoRes (Guzman et al., 2019) | 3k | CC-100 | 1M. Urdu | Indic | N/A | TICO-19 (Anastasopoulos et al., 2020) | 2k | CC-100 | 1M. MSA | Arabic | QED (Guzman et al., 2013) | 465k | N/A | - | CC-100 | 1M. Egyptian Ar. (remaining rows truncated in the source).]",
"(ii) Indic languages where we treat Hindi as the high-resource language, and Marathi, Nepali, and Urdu as lower-resource related languages",
"(iii) Arabic , where we treat Modern Standard Arabic (MSA) as the high-resource, and Egyptian and Levantine Arabic dialects as low-resource.",
"Among the languages, the relationship between Urdu and Hindi is a special setting; while the two languages are mutually intelligible as spoken languages, they are written using different scripts.",
"Additionally, in our experimental setting, all low-resource languages except for Nepali were not included in the original mBART pretraining.",
"The parallel corpus for each language is described in Table 1.",
"Due to the scarcity of any parallel data for a few low-resource languages, we are not able to match the training and testing domains.",
"For monolingual data, we randomly sample 1M sentences for each language from the CC-100 corpus (http://data.statmt.org/cc-100/; Conneau et al., 2020; Wenzek et al., 2020).",
"For quality control, we filter out sentences if more than 40% of characters in the sentence do not belong to the alphabet set of the language.",
"For quality and memory constraints, we only use sentences with length between 30 and 200 characters.",
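The two filtering heuristics above can be sketched as a single predicate. Whether whitespace and punctuation count toward the "alphabet set" is our assumption here (the caller decides what goes into the set); the function name is illustrative.

```python
def keep_sentence(sent, alphabet, max_nonalpha=0.4, min_len=30, max_len=200):
    """Keep a monolingual sentence only if it is 30-200 characters long and
    at most 40% of its characters fall outside the language's alphabet set."""
    if not (min_len <= len(sent) <= max_len):
        return False
    nonalpha = sum(1 for ch in sent if ch not in alphabet)
    return nonalpha / len(sent) <= max_nonalpha
```

Applied with a per-language alphabet, this drops boilerplate, wrong-script text, and overly short or long lines in one pass.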
"Collecting dialectal Arabic data: While obtaining low-resource monolingual data is relatively straightforward, as language identifiers are often readily available even for low-resource text (Jauhiainen et al., 2019), identifying dialectal data is often less straightforward.",
"This is because many dialects have traditionally been considered oral rather than written languages, and they often lack standardized spelling, contain significant slang, or are not even mutually intelligible with the main language.",
"In general, dialectal data has often been grouped in with the main language in language classifiers.",
"We describe the steps we took to obtain reliable dialectal Arabic monolingual data.",
"As the CC-100 corpus does not distinguish between Modern Standard Arabic (MSA) and its dialectal variants, we train a finer-grained classifier that distinguishes between MSA and specific colloquial dialects.",
"We base our language classifier on a BERT model pretrained for Arabic (Safaya et al., 2020) and fine-tune it for six-way classification:",
"(i) Egyptian,",
"(ii) Levantine,",
"(iii) Gulf,",
"(iv) Maghrebi,",
"(v) Iraqi dialects as well as",
"(vi) the literary Modern Standard Arabic (MSA).",
"We use data from Bouamor et al. (2018) and Zaidan and Callison-Burch (2011) as training data, and the resulting classifier has an accuracy of 91% on a held-out set.",
"We take our trained Arabic dialect classifier and further classify Arabic monolingual data from CC-100 and select MSA, Levantine and Egyptian sentences as Arabic monolingual data for our experiments.",
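The selection step can be sketched as routing sentences through the six-way classifier and keeping only the varieties used in the experiments. The classifier is abstracted as any callable returning a dialect label; all names are our own.

```python
DIALECTS = ["Egyptian", "Levantine", "Gulf", "Maghrebi", "Iraqi", "MSA"]

def split_by_dialect(sentences, classify, keep=("MSA", "Egyptian", "Levantine")):
    """Route CC-100 Arabic sentences into per-variety monolingual corpora,
    discarding varieties not used in the experiments."""
    corpora = {d: [] for d in keep}
    for s in sentences:
        label = classify(s)  # one of DIALECTS, from the fine-tuned classifier
        if label in keep:
            corpora[label].append(s)
    return corpora
```

In practice `classify` would wrap the fine-tuned Arabic BERT model; here any labeling function works, which keeps the routing logic testable.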
"We use the RMSprop optimizer with a learning rate of 0.01 for the critics and the Adam optimizer for the rest of the model.",
"We train our model using eight GPUs and a batch size of 1024 tokens per GPU.",
"We update the parameters once per eight batches.",
"For the adversarial task, the generator is trained once per three updates, and the critic is trained every update.",
"Each of the tasks of",
"(i) translation,",
"(ii) backtranslation as well as",
"(iii) LRL and HRL denoising (only for the En→LRL direction) have the same number of samples, and their cross-entropy losses have equal weight.",
"The adversarial loss, L adv , has the same weight on the critic, while it has a multiplier of 60 on the generator (encoder).",
"This multiplier was tuned to ensure convergence, and the term is negative on the generator because the generator's objective is opposite to the critic's loss.",
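Our reading of the loss weighting can be summarized in a one-line generator objective: the cross-entropy tasks enter with equal weight, and the adversarial term enters with a negative multiplier of 60 because the generator opposes the critics. The function name and signature are illustrative.

```python
def generator_loss(l_t, l_da, l_bt, l_adv, adv_multiplier=60.0):
    """Weighted sum used to update the encoder-decoder: equal-weight
    cross-entropy tasks (translation, denoising, backtranslation) minus
    the scaled adversarial term."""
    return l_t + l_da + l_bt - adv_multiplier * l_adv
```

The critics themselves are updated on l_adv with weight 1, and more often than the generator (every update versus once per three updates).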
"For the first iteration, we train 128 epochs from English to the low-resource language and 64 epochs from the low-resource language to English.",
"[Table 2: BLEU scores of the first iteration in the English to low-resource direction. Each row lists the un-adapted En→HRL model followed by the adapted models Adv / BT / BT+Adv / BT+Adv+fine-tune. Portuguese (Spanish): 3.8; 10.1 / 14.8 / 18.0 / 21.2. Catalan (Spanish): 6.8; 9.1 / 21.2 / 22.5 / 23.6. Marathi (Hindi): 7.3; 8.4 / 9.5 / 15.6 / 16.1. Nepali (Hindi): 11.2; 17.6 / 16.7 / 25.3 / 26.3. Urdu (Hindi): 0.3; 3.4 / 0.2 / 7.2 / (skipped). Egyptian Arabic (MSA): 3.5; 3.8 / 8.0 / 8.0 / 8.0. Levantine Arabic (MSA): 2.1; 2.1 / 4.8 / 5.1 / 4.7.]",
"For the second iteration, we train 55 epochs for both directions.",
"We follow Liu et al. (2020) for all other settings and training parameters.",
"The critics consist of four layers: the third layer is a bidirectional GRU and the remaining three are fully connected layers.",
"The hidden layer sizes are 512 , 512 and 128 and we use an SELU activation function.",
"We ran experiments on 8-GPUs.",
"Each iteration took less than 3 days and we used publicly available mBART-checkpoints for initialization.",
"GPU memory usage of our method is only slightly larger than mBART.",
"While we introduce additional parameters in discriminators, these additional parameters are insignificant compared to the size of the mBART model.",
"We present results of applying NMT-Adapt to low-resource language translation.",
"We first evaluate performance of translating into the low-resource language.",
"We compare the first iteration of NMT-Adapt to the following baseline systems:",
"(i) En→HRL model: directly using the model trained for En→HRL translation.",
"(ii) Adversarial: Our full model without using the backtranslation objective and without the final fine-tuning.",
"(iii) Backtranslation: mBART fine-tuned on backtranslation data created using the HRL→En model.",
"(iv) BT+Adv: Our full model without the final fine-tuning.",
"(v) BT+Adv+fine-tune: Our full model (NMT-Adapt) as described in Section 3.",
"As seen in Table 2, using solely the adversarial component, we generally see improvement in BLEU scores over directly using the high-resource translation model.",
"This suggests that our proposed method of combining denoising autoencoding with adversarial loss is effective in adapting to a new target output domain.",
"Additionally, we observe a large improvement using only backtranslation data.",
"This demonstrates that using the high-resource translation model to create LRL-En backtranslation data is highly effective for adapting to the low-resource target.",
"We further see that combining the adversarial and backtranslation tasks improves over each individually, showing that the two components are complementary.",
"We also experimented on En-HRL translation with backtranslation but without adversarial loss.",
"However, this yielded much worse results, showing that the improvement is not simply due to multitask learning.",
"For Arabic, backtranslation provides most of the gain, while for Portuguese and Nepali, the adversarial component is more important.",
"For some languages, like Marathi, the two components provide small gains individually but show a large improvement when combined.",
"For Urdu, we found that backtranslation only using the Hindi model completely fails; this is intuitive as Hindi and Urdu are in completely different scripts and using a Hindi model to translate Urdu results in effectively random backtranslation data.",
"When we attempt to apply models trained with the adversarial task, the model generates sentences with mixed Hindi, Urdu, and English.",
"To ensure our model solely outputs Urdu, we restricted the output tokens by banning all tokens containing English or Devanagari (Hindi) characters.",
"This allowed our model to output valid and semantically meaningful translations.",
"This is an interesting result as it shows that our adversarial mixing allows translating similar languages even if they're written in different scripts.",
"We report the BLEU score with the restriction.",
"Since the tokens are already restricted, we skip the final fine-tuning step.",
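The token restriction for Urdu can be sketched as building a banned-id list over the vocabulary: any token containing a Latin (English) or Devanagari (Hindi) character is excluded at decoding time. The Unicode ranges and function name here are our approximation of the paper's character sets, not its code.

```python
def banned_token_ids(vocab,
                     forbidden_ranges=((0x41, 0x5A), (0x61, 0x7A),  # Latin A-Z, a-z
                                       (0x0900, 0x097F))):          # Devanagari block
    """Ids of vocabulary tokens containing any English or Devanagari
    character; banning these at decoding forces the model to emit Urdu."""
    def forbidden(tok):
        return any(lo <= ord(ch) <= hi
                   for ch in tok for (lo, hi) in forbidden_ranges)
    return [i for i, tok in enumerate(vocab) if forbidden(tok)]
```

At generation time, the decoder's probability for these ids would be set to zero (or negative infinity in log space) before each sampling or beam step.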
"Table 3 shows the results of the first iteration from translating from a low-resource language into English.",
"We compare the following systems",
"(i) HRL→En model: directly using the model trained for HRL→En translation.",
"(ii) Adversarial: similar to our full model, but without using the backtranslation objective.",
"(iii) Backtranslation: mBART fine-tuned on backtranslation data from our full model in the English-LRL direction.",
"(iv) BT+Adv: Our full model.",
"For this direction, we can see that both the backtranslation and the adversarial domain adaptation components are generally effective.",
"The exception is Arabic, which may be due to the noisiness of our dialect classification compared to low-resource language classification.",
"Another reason could be due to the lack of written standardization for spoken dialects in comparison to low-resource, but standardized languages.",
"Table 4 shows the results of two iterations of training.",
"For languages other than Arabic dialects, the second iteration generally shows improvement over the first iteration, showing that we can leverage an improved model in one direction to further improve the reverse direction.",
"We found that the improvement after the third iteration is marginal.",
"We compare our results with a baseline using the HRL language as a pivot.",
"The baseline uses a fine-tuned mBART (Liu et al., 2020) to perform supervised translation between English and the HRL, and uses MASS (Song et al., 2019) to perform unsupervised translation between the HRL and the LRL.",
"The mBART model is tuned on the same parallel data used in our method, and MASS uses the same monolingual data as in our method.",
"For all languages and directions, our method significantly outperforms the pivot baseline.",
"In Table 5, we compare with a cross-translation method that uses parallel corpora in multiple languages as auxiliary data (Garcia et al., 2020b), as well as with results reported by Guzman et al. (2019) and Liu et al. (2020).",
"All methods use the same test set, English-Hindi parallel corpus, and tokenization for fair comparison.",
"For English to Nepali, NMT-Adapt outperforms previous unsupervised methods using Hindi or multilingual parallel data, and is competitive with supervised methods.",
"For Nepali to English direction, our method achieves similar performance to previous unsupervised methods.",
"Note that we use a different tokenization than in Tables 3 and 4, to be consistent with previous work.",
"Table 6 shows the first iteration English to Marathi results while varying the amount of monolingual data used.",
"We see that the BLEU score increased from 11.3 to 16.1 as the number of sentences increased from 10k to 1M, showing that additional monolingual data significantly improves performance.",
"We presented NMT-Adapt, a novel approach for neural machine translation of low-resource languages that assumes no parallel data or bilingual lexicon for the low-resource language.",
"Utilizing parallel data in a similar high resource language as well as monolingual data in the low-resource language, we apply unsupervised adaptation to facilitate translation to and from the low-resource language.",
"Our approach combines several tasks, including adversarial training, denoising language modeling, and iterative backtranslation, to facilitate the adaptation.",
"Experiments demonstrate that this combination is more effective than any task on its own and generalizes across many different language groups."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"method",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain"
] |
[
"Vision and language navigation (VLN) is a challenging visually-grounded language understanding task.",
"Given a natural language navigation instruction, a visual agent interacts with a graph-based environment equipped with panorama images and tries to follow the described route.",
"Most prior work has been conducted in indoor scenarios where best results were obtained for navigation on routes that are similar to the training routes, with sharp drops in performance when testing on unseen environments.",
"We focus on VLN in outdoor scenarios and find that in contrast to indoor VLN, most of the gain in outdoor VLN on unseen data is due to features like junction type embedding or heading delta that are specific to the respective environment graph, while image information plays a very minor role in generalizing VLN to unseen outdoor areas.",
"These findings show a bias to specifics of graph representations of urban environments, demanding that VLN tasks grow in scale and diversity of geographical environments.",
"Introduction: Vision and language navigation (VLN) is a challenging task that requires the agent to process natural language instructions and ground them in a visual environment.",
"The agent is embodied in the environment and receives navigation instructions.",
"Based on the instructions, the observed surroundings, and the current trajectory the agent decides its next action.",
"Executing this action changes the position and/or heading of the agent within the environment, and eventually the agent follows the described route and stops at the desired goal location.",
"The most common evaluation metric in VLN is the proportion of successful agent navigations, called task completion (TC).",
"While early work on grounded navigation was confined to grid-world scenarios (MacMahon et al., 2006; Chen and Mooney, 2011), recent work has studied VLN in outdoor environment consisting of real-world urban street layouts and corresponding panorama pictures (Chen et al., 2019).",
"Recent agent models for outdoor VLN treat the task as a sequence-to-sequence problem where the instructions text is the input and the output is a sequence of actions (Chen et al., 2019; Xiang et al., 2020; Zhu et al., 2021b).",
"In contrast to indoor VLN (Anderson et al., 2018; Ku et al., 2020), these works only consider a seen scenario, i.e., the agent is tested on routes that are located in the same area as the training routes.",
"However, studies of indoor VLN (Zhang et al., 2020) show a significant performance drop when testing in previously unseen areas.",
"The main goal of our work is to study outdoor VLN in unseen areas , pursuing the research question of which representations of an environment and of instructions an agent needs to succeed at this task.",
"We compare existing approaches to a new approach that utilizes features based on the observed environment graph to improve generalization to unseen areas.",
"The first feature, called junction type embedding, encodes the number of outgoing edges at the current agent position; the second feature, called heading delta, encodes the agent's heading change relative to the previous timestep.",
"As our experimental studies show, representations of full images do not contribute very much to successful VLN in outdoor scenarios beyond these two features.",
"One reason why restricted features encoding junction type and heading delta are successful in this task is that they seem to be sufficient to encode peculiarities of the graph representation of the environments.",
"Another reason is the current restriction of outdoor environments to small urban areas.",
"In our case, one dataset is the widely used Touchdown dataset introduced by Chen et al. (2019); the other dataset, called map2seq, has recently been introduced by Schumann and Riezler (2021).",
"The map2seq dataset was created for the task of navigation instructions generation but can be directly adapted for VLN.",
"We conduct a detailed analysis of the influence of general neural architectures, specific features such as junction type or heading delta, the role of image information, and instruction token types on outdoor VLN in seen and unseen environments on these two datasets.",
"Our specific findings unravel the contributions of these features to several VLN subtasks such as orientation, direction decisions, and stopping.",
"Our general finding is that current outdoor VLN suffers from a bias towards urban environments and towards artifacts of their graph representation, showing the necessity of more diverse datasets and tasks for outdoor VLN.",
"Our main contributions are the following: We describe a straightforward agent model that achieves state-of-the-art task completion and is used as a basis for our experiments.",
"We introduce the unseen scenario for outdoor VLN and propose two environment-dependent features to improve generalization in that setting.",
"We compare different visual representations and conduct language masking experiments to study the effect in the unseen scenario.",
"We adapt the map2seq dataset to VLN and show that merging it with Touchdown improves performance on the respective test sets.",
"The goal of the agent is to follow a route and stop at the desired target location based on natural language navigation instructions.",
"The environment is a directed graph with nodes $v \in V$ and labeled edges $(u, v) \in E$.",
"Each node is associated with a 360° panorama image $p$ and each edge $(u, v)$ is labeled with an angle $\alpha_{(u,v)}$.",
"The agent state $s \in S$ consists of a node and the angle at which the agent is heading: $(v, \alpha_{(v,u)})$ with $u \in N_v^{out}$, where $N_v^{out}$ are all outgoing neighbors of node $v$.",
"The agent can navigate the environment by performing an action a { FORWARD , LEFT , RIGHT , STOP } at each timestep t .",
"The FORWARD action moves the agent from state $(v, \alpha_{(v,u)})$ to $(u, \alpha_{(u,u')})$, where $(u, u')$ is the edge with the angle closest to $\alpha_{(v,u)}$.",
"The RIGHT and LEFT actions rotate the agent towards the closest edge angle in clockwise or counterclockwise direction, respectively: $(v, \alpha_{(v,u)}) \rightarrow (v, \alpha_{(v,u')})$.",
"[Figure 1: The ORAR architecture: an instructions encoder and a visual encoder with attention, a first (observation) decoder RNN and a second decoder RNN followed by a FFNN, with junction type, step count, and previous action inputs; the encoder initializes the decoder RNN cell.]",
"Given a starting state s 1 and instructions text x , the agent performs a series of actions a 1 , ..., a T until the STOP action is predicted.",
"If the agent stops within one neighboring node of the desired target node (goal location), the navigation was successful.",
"The described environment and location finding task was first introduced by Chen et al. (2019), and we will also refer to it as the \"outdoor VLN task\" throughout this paper.",
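The graph environment and action dynamics described above can be sketched in a few lines of pure Python. The adjacency encoding, class and function names, and the wrap-around handling are illustrative assumptions, not the authors' implementation:

```python
def angle_diff(a, b):
    """Smallest absolute difference between two angles in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

class GraphEnv:
    """Illustrative sketch of the Section 2 environment: `edges[v]` maps each
    outgoing neighbor u of node v to the edge angle alpha_(v,u) in degrees."""

    def __init__(self, edges):
        self.edges = edges

    def forward(self, state):
        """FORWARD: move along the edge closest to the current heading, then
        face the outgoing edge of the new node closest to the travel angle."""
        v, heading = state
        u = min(self.edges[v], key=lambda n: angle_diff(self.edges[v][n], heading))
        w = min(self.edges[u], key=lambda n: angle_diff(self.edges[u][n], heading))
        return (u, self.edges[u][w])

    def rotate(self, state, clockwise=True):
        """RIGHT (clockwise) or LEFT: turn to the adjacent outgoing edge angle;
        the `or others` fallback handles the wrap-around past 0/360 degrees."""
        v, heading = state
        others = [a for a in self.edges[v].values() if a != heading]
        if not others:
            return state
        if clockwise:
            return (v, min([a for a in others if a > heading] or others))
        return (v, max([a for a in others if a < heading] or others))
```

Note that, as in the paper's environment, FORWARD automatically re-orients the agent at the new node; this is exactly the behavior that the heading delta feature introduced later makes explicit to the model.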
"In this section we introduce the model that we use to analyze navigation performance in the unseen and seen scenario for outdoor VLN.",
"The architecture is inspired by the cross-modal attention model for indoor VLN (Krantz et al., 2020).",
"First we give a high level overview of the model architecture and rough intuition.",
"Afterwards we provide a more formal description.",
"As depicted in Figure 1, the model follows a sequence-to-sequence architecture where the input sequence is the navigation instructions text and the output is a sequence of agent actions.",
"At each decoding timestep, a new visual representation of the current agent state within the environment is computed, where the agent state is dependent on the previously predicted actions.",
"The decoder RNN has two layers where the first encodes metadata and a visual representation.",
"The second RNN layer encodes a contextualized text and visual representation and eventually predicts the next action.",
"The intuition behind the model architecture is to firstly accumulate plain observations available at the current timestep and entangle them with previous observations in the first recurrent layer.",
"Based on these observations, the model focuses attention to certain parts of the instructions text and visual features which are again entangled in the second recurrent layer.",
"Thus, we use the acronym ORAR (observation-recurrence attention-recurrence) for the model.",
"In detail, the instructions encoder embeds and encodes the tokens of the navigation instructions sequence $x = x_1, ..., x_L$ using a bidirectional LSTM (Graves et al., 2005): $\bar{x}_i = \mathrm{embedding}(x_i)$ and $((w_1, ..., w_L), z_L^w) = \text{Bi-LSTM}(\bar{x}_1, ..., \bar{x}_L)$, where $w_1, ..., w_L$ are the hidden representations for each token and $z_L^w$ is the last LSTM cell state.",
"The visual encoder, described in detail below, emits a fixed-size representation $p_t$ of the current panorama view and a sequence of sliced view representations $p_t^1, ..., p_t^S$.",
"The state $z_0^{first}$ of the cell in the first decoder LSTM layer is initialized using $z_L^w$.",
"The input to the first decoder layer is the concatenation ($\circ$) of the visual representation $p_t$, previous action embedding $a_{t-1}$, junction type embedding $n_t$, and heading delta $d_t$.",
"The output of the first decoder layer, $h_t^{first} = \text{LSTM}^{first}([a_{t-1} \circ n_t \circ d_t \circ p_t])$, is then used as the query of multi-head attention (Vaswani et al., 2017) over the text encoder.",
"The resulting contextualized text representation $c_t^w$ is then used to attend over the sliced visual representations: $c_t^w = \text{MultiHeadAttention}(h_t^{first}, (w_1, ..., w_L))$ and $c_t^p = \text{MultiHeadAttention}(c_t^w, (p_t^1, ..., p_t^S))$.",
"The second decoder layer then computes $h_t^{second} = \text{LSTM}^{second}([\bar{t} \circ h_t^{first} \circ c_t^w \circ c_t^p])$, where $\bar{t}$ is the embedded timestep $t$.",
"The hidden representation $h_t^{second}$ of the second decoder LSTM layer is then passed through a feed-forward network to predict the next agent action $a_t$.",
"At each timestep t the panorama at the current agent position is represented by extracted visual features.",
"We slice the panorama into eight projected rectangles with a 60° field of view, such that one of the slices aligns with the agent's heading.",
"This centering slice and the two to its left and right are fed into a ResNet pretrained on ImageNet (Russakovsky et al., 2015).",
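As a rough illustration of the slicing geometry (the function name and exact angle convention are assumptions, not the paper's code), the slice centers can be computed as:

```python
def slice_headings(agent_heading, num_slices=8):
    """Center angles of the panorama slices: one slice is aligned with the
    agent's heading and the rest are spaced evenly around the 360° panorama
    (with a 60° field of view the slices overlap)."""
    step = 360.0 / num_slices
    return [(agent_heading + i * step) % 360.0 for i in range(num_slices)]
```

The centering slice and its two neighbors on each side would then be the five slices fed to the ResNet.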
"We consider two variants of ResNet derived panorama features.",
"One variant extracts low-level features from the fourth-to-last layer (4th-to-last) of a pretrained ResNet-18, concatenates each slice's feature map along the width dimension, averages over the 128 CNN filters, and cuts out 100 dimensions around the agent's heading.",
"This results in a 100 × 100 feature matrix ($p_t^1, ..., p_t^{100}$).",
"The full procedure is described in detail in Chen et al. (2019) and Zhu et al. (2021b).",
"The other variant extracts high-level features from a pretrained ResNet-50's pre-final layer for each of the 5 slices: $p_t^1, ..., p_t^5$.",
"Each slice vector $p_t^s$ is of size 2,048, resulting in roughly the same number of extracted ResNet features for both variants and enabling a fair comparison.",
"Further, we use the semantic segmentation representation of the panorama images.",
"We employ omnidirectional semantic segmentation (Yang et al., 2020) to classify each pixel by one of the 25 classes of the Mapillary Vistas dataset (Neuhold et al., 2017).",
"The classes include, e.g., car, truck, traffic light, vegetation, road, and sidewalk.",
"See Figure 1 bottom right for a visualization.",
"Each panorama slice ($p_t^1, ..., p_t^5$) is then represented by a 25-dimensional vector where each value is the normalized area covered by the corresponding class (Zhang et al., 2020).",
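A minimal sketch of such a class-area vector (pure Python; the helper name and the class-id grid input are assumptions for illustration):

```python
from collections import Counter

def class_area_vector(seg_slice, num_classes=25):
    """Represent a semantically segmented slice (a 2D grid of class ids)
    as the normalized area covered by each class."""
    counts = Counter(cid for row in seg_slice for cid in row)
    total = sum(counts.values())
    return [counts.get(c, 0) / total for c in range(num_classes)]
```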
"For either feature extraction method, the fixed-size panorama representation $p_t$ is computed by concatenating the slice features $p_t^1, ..., p_t^S$ and passing them to a feed-forward network.",
"(Pretrained models: https://pytorch.org/vision/0.8/models)",
"The junction type embedding is a feature that we introduce to better analyze generalization to unseen areas.",
"It embeds the number of outgoing edges of the current environment node and is categorized into {2, 3, 4, >4}.",
"It provides the agent information about the type of junction it is positioned on: a regular street segment, a three-way intersection, a four way intersection or an intersection with more than four outgoing streets.",
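The categorization can be sketched as follows (the bucket for nodes with fewer than two outgoing edges is an assumption; the paper only lists the categories {2, 3, 4, >4}):

```python
def junction_type(num_outgoing_edges):
    """Bucket the number of outgoing edges into the categories {2, 3, 4, >4};
    the resulting category is what the model embeds."""
    if num_outgoing_edges <= 2:
        return "2"    # regular street segment
    if num_outgoing_edges == 3:
        return "3"    # three-way intersection
    if num_outgoing_edges == 4:
        return "4"    # four-way intersection
    return ">4"       # intersection with more than four outgoing streets
```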
"We want to point out that the number of outgoing edges is not oracle information in the environment described in Section 2.",
"The agent can rotate left until the same panorama view is observed again and can thus count the number of outgoing edges purely by interacting with the environment.",
"But it is clear that the feature leverages the fact that the environment is based on a graph and it would not be available in a continuous setting (Krantz et al., 2020).",
"As described in Section 2, the environment defined and implemented by Chen et al. (2019) only allows states where the agent is heading towards an outgoing edge.",
"As a consequence the environment automatically rotates the agent towards the closest outgoing edge after transitioning to a new node.",
"The environment behavior is depicted in Figure 2a) for a transition between two regular street segments.",
"However, as depicted in Figure 2b), a problem arises when the agent is walking towards a three-way intersection.",
"The automatic rotation introduces unpredictable behavior for the agent, and we hypothesize that it hinders generalization to unseen areas.",
"To correct for this environment artifact, we introduce the heading delta feature d t which encodes the change in heading direction relative to the previous timestep.",
"The feature is normalized to $(-1, 1]$, where a negative value indicates a left rotation and a positive value a right rotation.",
"The magnitude signals the degree of the rotation, up to 180°.",
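A sketch of this normalization (the function name is assumed; the fold into a signed angle follows directly from the description above):

```python
def heading_delta(prev_heading, heading):
    """Heading change relative to the previous timestep, normalized to
    (-1, 1]: negative = left rotation, positive = right rotation."""
    delta = (heading - prev_heading) % 360.0  # in [0, 360)
    if delta > 180.0:
        delta -= 360.0                        # fold into (-180, 180]
    return delta / 180.0
```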
"We use the Touchdown (Chen et al., 2019) and the map2seq (Schumann and Riezler, 2021) datasets in our experiments.",
"Both datasets contain human written navigation instructions for routes located in the same environment.",
"The environment consists of 29,641 panorama images from Manhattan and the corresponding connectivity graph.",
"The Touchdown dataset (Chen et al., 2019) for vision and language navigation consists of 9,326 routes paired with human written navigation instructions.",
"The annotators navigated the panorama environment based on a predefined route and wrote down navigation instructions along the way.",
"The map2seq (Schumann and Riezler, 2021) dataset was created for the task of navigation instructions generation.",
"The 7,672 navigation instructions were written by human annotators who saw a route on a rendered map, without the corresponding panorama images.",
"The annotators were told to include visual landmarks like stores, parks, churches, and other amenities into their instructions.",
"A different annotator later validated the written navigation instructions by using them to follow the described route in the panorama environment (without the map).",
"This annotation procedure allows us to use the navigation instructions in the map2seq dataset for the vision and language navigation task.",
"We are the first to report VLN results on this dataset.",
"Despite being located in the same environment, the routes and instructions from each dataset differ in multiple aspects.",
"The map2seq instructions typically include named entities like store names, while Touchdown instructions focus more on visual features like the color of a store.",
"Neither includes street names or cardinal directions, and both are written from an egocentric perspective.",
"Further, in map2seq the agent starts by facing in the correct direction, while in Touchdown the initial heading is random and the first part of the instructions is about orienting the agent (\"Turn around such that the scaffolding is on your right\").",
"A route in map2seq includes a minimum of three intersections and is the shortest path from the start to the end location.",
"In Touchdown there are no such constraints and a route can almost be circular.",
"The routes in both datasets are around 35-45 nodes long with some shorter outliers in Touchdown.",
"On average instructions are around 55 tokens long in map2seq and around 89 tokens long in Touchdown.",
"We are interested in the generalization ability to unseen areas and how it is influenced by the two proposed features, types of visual representation, navigation instructions and training set size.",
"Alongside of the results in the unseen scenario, we report results in the seen scenario to interpret performance improvements in relation to each other.",
"All experiments are repeated ten times with different random seeds.",
"The reported numbers are the average over the ten repetitions.",
"Results printed in bold are significantly better than non-bold results in the same column.",
"Significance was established by a paired t-test on the ten repetition results and a p-value of 0.05, without a multiple-hypotheses correction factor.",
"Individual results can be found in the Appendix.",
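For reference, the paired t-test statistic over the ten repetition results reduces to the following (pure-Python sketch; in practice `scipy.stats.ttest_rel` also returns the p-value):

```python
import math

def paired_t_statistic(a, b):
    """t statistic of a paired t-test between two lists of repetition results;
    the p-value follows from a t-distribution with len(a) - 1 degrees of freedom."""
    n = len(a)
    diffs = [x - y for x, y in zip(a, b)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```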
"To be able to compare our model with previous work, we use the original training, development and test split (Chen et al., 2019) for the seen scenario on Touchdown.",
"Because we are the first to use the map2seq data for VLN we create a new split for it.",
"The resulting number of instances can be seen in the left column of Table 1.",
"(Footnote 3: The shortest path bias reduces the number of reasonable directions at each intersection and thus makes the task easier.)",
"(Footnote 4: Except comparison models on the Touchdown seen test set, for which we copy the results from the respective work.)",
"For the unseen scenario, we create new splits for both datasets.",
"We separate the unseen area geographically by drawing a boundary across lower Manhattan (see Figure 3).",
"Development and test instances are randomly chosen from within the unseen area.",
"Routes that are crossing the boundary are discarded.",
"The right column of Table 1 shows the number of instances for both splits.",
"Additionally, we merge the two datasets for both scenarios.",
"This is possible because both datasets are located in the same environment and the unseen boundary is equivalent.",
"We train the models with Adam (Kingma and Ba, 2015) by minimizing cross entropy loss in the teacher forcing paradigm.",
"We set the learning rate to 5e-4, weight decay to 1e-3 and batch size to 64.",
"After 150 epochs we select the model with the best shortest path distance (SPD) performance on the development set.",
"We apply dropout of 0.3 after each dense layer and recurrent connection.",
"The multi-head attention mechanism is regularized by attention dropout of 0.3 and layer normalization.",
"[Table 2: nDTW and TC on the seen and unseen dev and test sets of Touchdown and map2seq for RConcat, GA, ARC, ARC+l2s, the VLN Transformer, and the ORAR full model with ResNet pre-final and 4th-to-last features, including ablations without the heading delta and junction type features.]",
"The navigation instructions are lower-cased and split into byte pair encodings (Sennrich et al., 2016) with a vocabulary of 2,000 tokens and we use BPE dropout (Provilkov et al., 2020) during training.",
"The BPE embeddings are of size 32 and the bidirectional encoder LSTM has two layers of size 256.",
"The feed forward network in the visual encoder consists of two dense layers with 512 and 256 neurons, respectively, and 64 neurons in case of using semantic segmentation features.",
"The embeddings that encode previous action, junction type, and step count are of size 16.",
"The two decoder LSTM layers are of size 256 and we use two attention heads.",
"Training the full model takes around 3 hours on a GTX 1080 Ti.",
"We compare the ORAR model to previous works.",
"Because these works only report results for the seen scenario on Touchdown, we evaluate those for which we could acquire the code on the map2seq dataset and in the unseen scenario.",
"The models RConcat (Mirowski et al., 2018; Chen et al., 2019), GA (Chaplot et al., 2018; Chen et al., 2019) and ARC (Xiang et al., 2020) use an LSTM to encode the instructions text and a single layer decoder LSTM to predict the next action.",
"They differ in how the text and image representations are incorporated during each timestep in the decoder.",
"As the name suggests, in RConcat the two representations are concatenated.",
"GA uses gated attention to compute a fused representation of text and image.",
"ARC uses the hidden representation of the previous timestep to attend over the instructions text.",
"This contextualized text representation is then concatenated to the image representation.",
"They further introduce ARC+l2s which cascades the action prediction into a binary stopping decision and a subsequent direction classification.",
"The VLN-Transformer (Zhu et al., 2021b) uses pretrained BERT (Devlin et al., 2019) to encode the instructions and VLN-BERT (Majumdar et al., 2020) to fuse the modalities.",
"We use task completion (TC) as the main performance metric.",
"It represents the percentage of successful agent navigations (Chen et al., 2019).",
"We further report normalized Dynamic Time Warping (nDTW), which quantifies agent and gold trajectory overlap for all routes (Ilharco et al., 2019).",
"The shortest path distance (SPD) is measured within the environment graph from the node at which the agent stopped to the goal node (Chen et al., 2019).",
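Under the graph environment of Section 2, SPD and the TC success criterion can be sketched with a breadth-first search (the adjacency-list encoding and function names are assumptions):

```python
from collections import deque

def shortest_path_distance(graph, stop_node, goal_node):
    """SPD: number of edges from the node at which the agent stopped to the
    goal node, via BFS on the (unweighted) environment graph."""
    dist = {stop_node: 0}
    queue = deque([stop_node])
    while queue:
        v = queue.popleft()
        if v == goal_node:
            return dist[v]
        for u in graph[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return None  # goal unreachable from the stopping node

def task_completed(graph, stop_node, goal_node):
    """TC counts a navigation as successful if the agent stops at the goal
    node or one of its neighbors (Section 2)."""
    spd = shortest_path_distance(graph, stop_node, goal_node)
    return spd is not None and spd <= 1
```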
"The two upper sections of Table 2 show the results of the ORAR model introduced in Section 3 in comparison to other work.",
"While the model significantly outperforms all previous work on both datasets, our main focus is analyzing generalization to the unseen scenario.",
"[Table 3: Study of visual features for the unseen scenario (TC; Touchdown dev/test | map2seq dev/test): ResNet pre-final 9.6/8.8 | 24.2/24.6 (no junction type: 4.4/4.0 | 10.7/11.0); ResNet 4th-to-last 15.4/14.9 | 27.6/30.3 (no junction type: 4.8/4.3 | 7.4/7.1); semantic segmentation 11.5/11.0 | 29.0/31.1 (no junction type: 5.5/5.5 | 11.6/12.1); no image 11.5/9.5 | 28.5/30.5 (no junction type: 3.0/2.8 | 5.4/5.5).]",
"It is apparent that the type of image features influences agent performance and will be discussed in the next section.",
"The bottom section of Table 2 ablates the proposed heading delta and junction type features for the best models.",
"Removing the heading delta feature has little impact in the seen scenario, but significantly reduces task completion in the unseen scenario of the map2seq dataset.",
"Surprisingly, the feature has no impact in the unseen scenario of Touchdown.",
"We believe this is a consequence of the different data collection processes.",
"Touchdown was specifically collected for VLN and annotators navigated the environment graph, while map2seq annotators wrote instructions only seeing the map.",
"Removing the junction type embedding leads to a collapse of task completion in the unseen scenario on both datasets.",
"This shows that without this explicit feature, the agent lacks the ability to reliably identify intersections in new areas.",
"Table 3 shows results for different types of visual features in the unseen scenario.",
"We compare high level ResNet features (pre-final), low level ResNet features (4th-to-last), semantic segmentation features and using no image features.",
"For the ResNet based features, the low level 4th-to-last features perform better than pre-final on both datasets.",
"On map2seq the no image baseline performs on par with models that have access to visual features.",
"When we remove the junction type embedding, the task completion rate drops significantly, which shows that the agent is not able to reliably locate intersections from any type of visual features.",
"The agent has to predict a sequence of actions in order to successfully reach the goal location.",
"In Touchdown this task can be divided into three subtasks (see Section 4).",
"First the agent needs to orient itself towards the correct starting heading.",
"Next the agent has to predict the correct directions at the intersections along the path.",
"The third subtask is stopping at the specified location.",
"Providing oracle actions (during testing) for two of the three sub-tasks lets us look at the completion rate of the remaining sub-task.",
"Table 4 shows the completion rates for each of the three sub-tasks when using ResNet pre-final, 4th-to-last and no image features.",
"In the seen scenario we can observe that the pre-final features lead to the best performance for the directions task.",
"The 4th-to-last features on the other hand lead to the best orientation task performance and the stopping task is not influenced by the choice of visual features.",
"In the unseen scenario 4th-to-last features again provide best orientation task performance but no image features lead to the best performance for the directions task.",
"This shows that the ResNet 4th-to-last features are primarily useful for the orientation sub-task and explains the discrepancy of the no image baseline on Touchdown and map2seq identified in the previous subsection.",
"In the Appendix we use this knowledge to train a mixed model that uses 4th-to-last features for the orientation sub-task and pre-final/no image features for directions and stopping.",
"[Figure 4: Masking experiments on the seen and unseen test set of Touchdown: task completion over the percentage of masked object or direction tokens, for the 4th-to-last and no image models.]",
"To analyze the importance of direction and object tokens in the navigation instructions, we run masking experiments similar to Zhu et al. (2021a), except that we mask the tokens during training and testing instead of during testing only.",
"Figure 4 shows the resulting task completion rates for an increasing number of masked direction or object tokens.",
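The masking procedure can be sketched as follows (the token set and helper are illustrative; the actual direction/object token lists follow Zhu et al. (2021a)):

```python
import random

DIRECTION_TOKENS = {"left", "right", "straight", "turn"}  # illustrative subset

def mask_tokens(tokens, vocab, ratio, mask_token="[MASK]", seed=0):
    """Mask a given fraction of the tokens that belong to `vocab`
    (direction or object tokens), as in the masking experiments."""
    rng = random.Random(seed)
    hits = [i for i, t in enumerate(tokens) if t in vocab]
    chosen = set(rng.sample(hits, int(len(hits) * ratio)))
    return [mask_token if i in chosen else t for i, t in enumerate(tokens)]
```

In our setting this masking is applied during both training and testing, so the model can learn to compensate for the missing tokens.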
"From the widening gap between masking object and direction tokens, we can see that the direction tokens are more important to successfully reach the goal location.",
"Task completion hardly changes when masking object tokens, indicating that they are mostly ignored by the model.",
"While task completion significantly drops when direction tokens are masked, the agent still performs at a high level.",
"This finding is surprising and at odds with Zhu et al. (2021a), who report that task completion drops nearly to zero when masking direction tokens during testing only.",
"We believe that in our setting (masking during testing and training), the model learns to infer the correct directions from redundancies in the instructions or context around the direction tokens.",
"Besides the general trend of lower performance in the unseen scenario, we cannot identify different utilization of object or direction tokens in the seen and unseen scenario.",
"We train the ORAR full model on the merged dataset (see Section 5.1).",
"Model selection is performed on the merged development set but results are also reported for the individual test sets of Touchdown and map2seq.",
"For comparison with models trained on the non-merged datasets, the first row of Table 5 shows the best results of Table 2.",
"Training on the merged dataset significantly improves nDTW and task completion across both datasets and scenarios.",
"This shows that both datasets are compatible and the merged dataset can further be used by the VLN community to evaluate their models on more diverse navigation instructions.",
"Despite being trained on twice as many instances, the no image baseline still performs on par with the image-based models on map2seq unseen.",
"From this we conclude that the current bottleneck for better generalization to unseen areas is the number of panorama images seen during training rather than the number of instructions.",
"Natural language instructed navigation of embodied agents has been studied in generated grid environments that allow a structured representation of the observed environment (MacMahon et al., 2006; Chen and Mooney, 2011).",
"Fueled by the advances in image representation learning (He et al., 2016), the environments became more realistic by using real-world panorama images of indoor locations (Anderson et al., 2018; Ku et al., 2020).",
"Complementary outdoor environments contain street level panoramas connected by a real-world street layout (Mirowski et al., 2018; Chen et al., 2019; Mehta et al., 2020).",
"Agents in this outdoor environment are trained to follow human written navigation instructions (Chen et al., 2019; Xiang et al., 2020), instructions generated by Google Maps (Hermann et al., 2020), or a combination of both (Zhu et al., 2021b).",
"Recent work focuses on analyzing the navigation agents by introducing better trajectory overlap metrics (Jain et al., 2019; Ilharco et al., 2019) or diagnosing the performance under certain constraints such as uni-modal inputs (Thomason et al., 2019) and masking direction or object tokens (Zhu et al., 2021a).",
"Other work used a trained VLN agent to evaluate automatically generated navigation instructions (Zhao et al., 2021).",
"An open problem in indoor VLN is the generalization of navigation performance to previously unseen areas.",
"[Table 5: Results (nDTW and TC) for models trained on the merged dataset, on the merged dev/test sets and the individual Touchdown and map2seq test sets, in the seen and unseen scenarios, for the no image, ResNet pre-final, and ResNet 4th-to-last variants of the ORAR full model, compared to the best non-merged results.]",
"Proposed solutions include back translation with environment dropout (Tan et al., 2019), multi-modal environment representation (Hu et al., 2019) or semantic segmented images (Zhang et al., 2020).",
"Notably the latter work identifies the same problem in the Touchdown task.",
"We presented an investigation of outdoor vision and language navigation in seen and unseen environments.",
"We introduced the heading delta feature and junction type embedding to correct an artifact of the environment and explicitly model the number of outgoing edges, respectively.",
"Both are helpful to boost and analyze performance in the unseen scenario.",
"We conducted experiments on two datasets and showed that the considered visual features poorly generalize to unseen areas.",
"We conjecture that VLN tasks need to grow in scale and diversity of geographical environments and navigation tasks.",
"The research reported in this paper was supported by a Google Focused Research Award on \"Learning to Negotiate Answers in Multi-Pass Semantic Parsing\"."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"result",
"objective",
"result",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"result",
"abstain",
"other"
] |
[
"Pre-trained sequence-to-sequence models have significantly improved Neural Machine Translation (NMT).",
"Different from prior works where pre-trained models usually adopt an unidirectional decoder, this paper demonstrates that pre-training a sequence-to-sequence model but with a bidirectional decoder can produce notable performance gains for both Autoregressive and Non-autoregressive NMT.",
"Specifically, we propose CeMAT, a conditional masked language model pre-trained on large-scale bilingual and monolingual corpora in many languages.",
"1 We also introduce two simple but effective methods to enhance the CeMAT, aligned code-switching & masking and dynamic dual-masking .",
"We conduct extensive experiments and show that our CeMAT can achieve significant performance improvement for all scenarios from lowto extremely high-resource languages, i.e., up to +14.4 BLEU on low-resource and +7.9 BLEU on average for Autoregressive NMT.",
"For Non-autoregressive NMT, we demonstrate it can also produce consistent performance gains, i.e., up to +5.3 BLEU.",
"To the best of our knowledge, this is the first work to pre-train a unified model for fine-tuning on both NMT tasks.",
"Pre-trained language models have been widely adopted in NLP tasks (Devlin et al., 2019; Radford and Narasimhan, 2018).",
"For example, XLM (Conneau and Lample, 2019) demonstrated that cross-lingual pre-training is effective in improving neural machine translation (NMT), especially on low-resource languages.",
"These methods all directly pre-train a bidirectional encoder or a unidirectional decoder.",
"The encoder and decoder in NMT models are then independently initialized with them and fine-tuned (Guo et al., 2020; Zhu et al., 2020).",
"(Code, data, and pre-trained models are available at https://github.com/huawei-noah/Pretrained-Language-Model/CeMAT.)",
"Recently, pre-training standard sequence-to-sequence (Seq2Seq) models has shown significant improvements and become a popular paradigm for NMT tasks (Song et al., 2019; Liu et al., 2020; Lin et al., 2020).",
"However, experimental results from XLM (Conneau and Lample, 2019) have shown that a decoder initialized from a pre-trained bidirectional masked language model (MLM) (Devlin et al., 2019), rather than from a unidirectional causal language model (CLM, Radford and Narasimhan, 2018), achieves better results on Autoregressive NMT (AT).",
"In particular, compared to random initialization, initializing with GPT (Radford and Narasimhan, 2018) can sometimes even degrade performance.",
"We conjecture that when fine-tuning on generation tasks (e.g., NMT), the representation capability of the pre-trained models may be more needed than the generation capability.",
"Therefore, during pre-training, we should focus on training the representation capability not only for the encoder, but also for the decoder more explicitly.",
"CeMAT consists of a bidirectional encoder, a bidirectional decoder, and a cross-attention module for bridging them.",
"Specifically, the model is jointly trained by MLM on the encoder and Conditional MLM (CMLM) on the decoder with large-scale monolingual and bilingual texts in many languages.",
"Table 1 compares our model with prior works.",
"Benefiting from this structure, CeMAT can directly provide unified initialization parameters not only for the AT task, but also for Non-autoregressive NMT (NAT).",
"NAT has been attracting more and more attention because of its feature of parallel decoding, which helps to greatly reduce the translation latency.",
"To better train the representation capability of the model, the masking operations are applied in two steps.",
"First, some source words that have been aligned with target words are randomly selected and then substituted by new words of similar meanings in other languages, and their corresponding target words are masked.",
"We call this method aligned code-switching & masking .",
"Then, the remaining words in both source and target languages will be masked by dynamic dual-masking .",
"Extensive experiments on downstream AT and NAT tasks show significant gains over prior works.",
"Specifically, under low-resource conditions ( < 1M bitext pairs), our system gains up to +14.4 BLEU points over baselines.",
"Even for extremely high-resource settings ( > 25M), CeMAT still achieves significant improvements.",
"In addition, experiments on the WMT16 Romanian-English task demonstrate that our system can be further improved (+2.1 BLEU) by Back-Translation (BT; Sennrich et al., 2016a).",
"The main contributions of our work can be summarized as follows: We propose a multilingual pre-trained model, CeMAT, which consists of a bidirectional encoder and a bidirectional decoder.",
"The model is pre-trained on both monolingual and bilingual corpora and then used for initializing downstream AT and NAT tasks.",
"To the best of our knowledge, this is the first work to pre-train a unified model suitable for both AT and NAT.",
"We introduce a two-step masking strategy to enhance the model training under the setting of bidirectional decoders.",
"Based on a multilingual translation dictionary and word alignments between source and target sentences, aligned code-switching & masking is applied first.",
"Then, dynamic dual-masking is used.",
"We carry out extensive experiments on AT and NAT tasks with data of varied sizes.",
"Consistent improvements over strong competitors demonstrate the effectiveness of CeMAT.",
"Our CeMAT is jointly trained by MLM and CMLM on the source side and the target side, respectively.",
"The overall framework is illustrated in Figure 1.",
"In this section, we first introduce the multilingual CMLM task (Section 2.1).",
"Then, we describe the two-step masking, including the aligned code-switching & masking (Section 2.2) and the dynamic dual-masking (Section 2.3).",
"Finally, we present training objectives of CeMAT (Section 2.4).",
"Formally, our training data consists of M language pairs D = {D_1, D_2, ..., D_M}.",
"D_k(m, n) is a collection of sentence pairs in languages L_m and L_n, respectively.",
"In the description below, we denote a sentence pair as (X_m, Y_n) ∈ D_k(m, n), where X_m is the source text in language L_m, and Y_n is the corresponding target text in language L_n.",
"For monolingual corpora, we create pseudo-bilingual text by copying the sentence, namely, X_m = Y_n.",
"CMLM predicts the masked target tokens y_n^mask, given a source sentence X_m and the remaining target sentence Y_n \\ y_n^mask.",
"The probability of each y_n^j ∈ y_n^mask is calculated independently: P(y_n^j | X_m, Y_n \\ y_n^mask). (1)",
"CMLM can be directly used to train a standard Seq2Seq model with a bidirectional encoder, a unidirectional decoder, and cross attention.",
"However, CMLM is not restricted to autoregressive decoding on the decoder side, because the masked words are predicted independently of one another.",
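Because masked target tokens are predicted independently given the source and the unmasked target context, a CMLM training example reduces to a simple masking routine over the target side. The sketch below is illustrative only; the function and variable names are ours, not from the CeMAT codebase.

```python
import random

MASK = "[mask]"

def make_cmlm_example(src_tokens, tgt_tokens, mask_ratio, rng):
    """Build one CMLM training example: mask a random subset of target
    tokens; the model must predict them given the source and the
    remaining (unmasked) target context. Predictions for masked
    positions are scored independently of one another."""
    n_mask = max(1, int(round(mask_ratio * len(tgt_tokens))))
    positions = sorted(rng.sample(range(len(tgt_tokens)), n_mask))
    masked_tgt = list(tgt_tokens)
    labels = {}
    for i in positions:
        labels[i] = masked_tgt[i]   # gold token, used only at masked slots
        masked_tgt[i] = MASK
    return src_tokens, masked_tgt, labels
```

The loss is then summed over `labels` only, which is what allows the same objective to train a bidirectional (non-autoregressive) decoder.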
"Therefore, following practices of NAT, we use CMLM to pre-train a Seq2Seq model with a bidirectional decoder, as shown in Figure 1.",
"Although bilingual sentence pairs can be directly used to train the model with the conventional CMLM (Ghazvininejad et al., 2019), this is challenging for sentence pairs created from monolingual corpora, because the source and target sentences are identical.",
"Therefore, we introduce a two-step masking strategy to enhance model training on both bilingual and monolingual corpora.",
"We use aligned code-switching & masking strategy to replace the source word or phrase with a new word in another language, and then mask the corresponding target word.",
"Different from previous code-switching methods (Yang et al., 2020; Lin et al., 2020), where source words are always randomly selected and replaced directly, our method consists of three steps:",
"1. Aligning : We utilize a multilingual translation dictionary to get a set of aligned word pairs {..., (x_m^i, y_n^j), ...} between the source X_m and target Y_n.",
"The word pair (x_m^i, y_n^j) denotes that the i-th word in X_m and the j-th word in Y_n are translations of each other.",
"For sentence pairs created from monolingual corpora, words in an aligned word pair are identical.",
"2. Code-Switching Replace (CSR) : Given an aligned word pair (x_m^i, y_n^j), we first select a new word x_k^i in language L_k to replace x_m^i in the source sentence X_m: x_k^i = F_m(x_m^i), where F_m(x) is a multilingual dictionary lookup function for a word x in language L_m, and x_k^i is a word randomly selected from the dictionary that is a translation of x_m^i in language L_k.",
"3. Code-Switching Masking (CSM) : If the source word x_m^i in the aligned pair (x_m^i, y_n^j) is replaced by x_k^i, we also mask y_n^j in Y_n by replacing it with a universal mask token.",
"Then, CeMAT will be trained to predict it in the output layers of the bidirectional decoder.",
"For aligning and CSR, we only use the publicly available multilingual translation dictionaries provided by MUSE (Lample et al., 2018).",
"Figure 2 shows the process of aligned code-switching & masking .",
"According to the given dictionary, dance and tanzen are aligned; a new French word danse is then selected to replace dance, and tanzen is replaced by [mask] (marked in red).",
"During training, at most 15% of the words in a sentence undergo CSR and CSM.",
"For monolingual data, we set this ratio to 30%.",
"We use (CSR(X_m), CSM(Y_n)) to denote the new sentence pair after aligned code-switching & masking , which will be further dynamically dual-masked at random.",
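The CSR and CSM steps above can be sketched as a single pass over the aligned word pairs. This is a toy illustration under our own naming; the alignment set and dictionary formats are assumptions, not the paper's released data structures.

```python
import random

MASK = "[mask]"

def acs_masking(src, tgt, aligned_pairs, dictionary, ratio, rng):
    """Aligned code-switching & masking (sketch): for up to `ratio` of
    the source tokens, replace an aligned source word with a randomly
    chosen dictionary translation (CSR) and mask its aligned target
    word (CSM). `aligned_pairs` is a list of (i, j) index pairs;
    `dictionary` maps a source word to a list of translations in other
    languages."""
    src, tgt = list(src), list(tgt)
    budget = max(1, int(ratio * len(src)))
    for i, j in rng.sample(aligned_pairs, min(budget, len(aligned_pairs))):
        candidates = dictionary.get(src[i])
        if candidates:
            src[i] = rng.choice(candidates)   # CSR: swap in another language
            tgt[j] = MASK                      # CSM: mask the aligned target
    return src, tgt
```

With the paper's dance/tanzen/danse example, the source word is replaced and its aligned target word becomes the mask token to be predicted.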
"Limited by the dictionary, the ratio of aligned word pairs is usually small.",
"In fact, we can only match aligned pairs for 6% of the tokens on average in the bilingual corpora.",
"To further increase the training efficiency, we perform dynamic dual-masking (DM) on both bilingual and monolingual data.",
"Bilingual data: We first sample a masking ratio uniformly from [0.2, 0.5], then randomly select a subset of target words, which are replaced by [mask].",
"Similarly, we select a subset of the source words and mask them with a ratio sampled from [0.1, 0.2].",
"Figure 2 shows an example of dynamic dual-masking on bilingual data.",
"We set the target-side masking ratio higher than the source-side one to force the bidirectional decoder to obtain more information from the encoder.",
"Monolingual data: Since the source and target are identical before masking, we sample a single ratio from [0.3, 0.4] and mask the same subset of words on both sides.",
"This will avoid the decoder directly copying the token from the source.",
"Following the practices of pre-trained language models, 10% of the words selected for masking remain unchanged, and another 10% are replaced with a random token.",
"Words replaced by the aligned code-switching & masking will not be selected to prevent the loss of cross-lingual information.",
"We use (DM(CSR(X_m)), DM(CSM(Y_n))) to denote the new sentence pair after dynamic dual-masking, which will be used for pre-training.",
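The dynamic dual-masking rules for bilingual versus monolingual (copied) pairs can be summarized in a small routine. The ratio ranges come from the text; the function itself is our illustrative sketch, not CeMAT's implementation.

```python
import random

MASK = "[mask]"

def dual_mask(src, tgt, monolingual, rng):
    """Dynamic dual-masking (sketch). For bilingual pairs, sample a
    target masking ratio from [0.2, 0.5] and a smaller source ratio
    from [0.1, 0.2]; for monolingual (copied) pairs, sample one ratio
    from [0.3, 0.4] and mask the same positions on both sides so the
    decoder cannot simply copy from the source."""
    src, tgt = list(src), list(tgt)
    if monolingual:
        r = rng.uniform(0.3, 0.4)
        for i in rng.sample(range(len(tgt)), max(1, int(r * len(tgt)))):
            src[i] = MASK
            tgt[i] = MASK
    else:
        r_tgt = rng.uniform(0.2, 0.5)
        r_src = rng.uniform(0.1, 0.2)
        for i in rng.sample(range(len(tgt)), max(1, int(r_tgt * len(tgt)))):
            tgt[i] = MASK
        for i in rng.sample(range(len(src)), max(1, int(r_src * len(src)))):
            src[i] = MASK
    return src, tgt
```

Re-sampling the ratios at every example is what makes the masking "dynamic", in contrast to a fixed static ratio.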
"We jointly train the encoder and decoder on MLM and CMLM tasks.",
"Given the sentence pair (X_m, Y_n) = (DM(CSR(X_m)), DM(CSM(Y_n))) from the masked corpora D, the final training objective is formulated as follows: L = λ Σ_{(X_m, Y_n) ∈ D} Σ_{y_n^j ∈ y_n^mask} log P(y_n^j | X_m, Y_n) + (1 - λ) Σ_{x_m^i ∈ x_m^mask} log P(x_m^i | X_m), (2) where y_n^mask is the set of masked target words, x_m^mask is the set of masked source words, and λ is a hyper-parameter that balances the influence of the two tasks.",
"In our experiments, we set λ = 0.7.",
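The final objective is simply a λ-weighted sum of the decoder-side CMLM loss and the encoder-side MLM loss. A one-line illustration of the weighting in Eq. (2), with a function name of our own choosing:

```python
def joint_loss(cmlm_loss, mlm_loss, lam=0.7):
    """Combine the decoder-side CMLM loss and the encoder-side MLM loss
    with weight λ (Eq. 2); the paper sets λ = 0.7, so the decoder task
    dominates."""
    return lam * cmlm_loss + (1.0 - lam) * mlm_loss
```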
"Pre-training Data: We use the English-centric multilingual parallel corpora of PC32, and then collect monolingual corpora in 21 languages from Common Crawl.",
"In this paper, we use ISO language codes to identify each language.",
"A [language code] token will be prepended to the beginning of the source and target sentences, as shown in Figure 2.",
"This type of token helps the model to distinguish sentences from different languages.",
"The detailed correspondence and a summary of our pre-training corpora can be seen in Appendix A. Data pre-processing: We directly learn a shared BPE (Sennrich et al., 2016b) model on the entire data set after tokenization.",
"We apply Moses tokenization (Sennrich et al., 2016b) for most languages; for other languages, we use KyTea for Japanese and jieba for Chinese, and a special normalization for Romanian (Sennrich et al., 2016a).",
"[Table 2: Comprehensive comparison with mRASP and mBART.]",
"Following Liu et al. (2020), we balance the vocabulary size of languages by up/down-sampling text based on their data size when learning BPE.",
"Model and Settings: As shown in Figure 1, we apply a bidirectional decoder so that it can utilize both left and right contexts to predict each token.",
"We use a 6-layer encoder and 6-layer bidirectional decoder with a model dimension of 1024 and 16 attention heads.",
"Following Vaswani et al. (2017), we use sinusoidal positional embeddings, and apply layer normalization to word embeddings and pre-norm residual connections following Wang et al. (2019a).",
"Our model is trained on 32 Nvidia V100 GPUs for 300K steps.",
"The batch size on each GPU is 4096 tokens, and we set the update frequency to 8.",
"Following the training settings of Transformer, we use the Adam optimizer (ε = 1e-6, β1 = 0.9, β2 = 0.98) and polynomial decay scheduling with a warm-up of 10,000 steps.",
"In this section, we verify that CeMAT provides consistent performance gains in low- to extremely high-resource scenarios.",
"We also compare our method with other existing pre-training methods and present further analysis to better understand the contribution of each component.",
"The AT model consists of an encoder and a unidirectional decoder.",
"The encoder maps a source sentence X m into hidden representations which are then fed into the decoder.",
"The unidirectional decoder predicts the t-th token in target language L_n conditioned on X_m and the previous target tokens y_n^{<t}.",
"The training objective of AT is to minimize the negative log-likelihood: L(θ) = -Σ_{(X_m, Y_n) ∈ D(m,n)} Σ_{t=1}^{|Y_n|} log P(y_n^t | X_m, y_n^{<t}; θ). (3) 4.2 Experimental Settings. Benchmarks: We selected 9 different language pairs and use CeMAT to fine-tune on them.",
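The autoregressive objective in Eq. (3) is just a sum of per-step negative log-probabilities under teacher forcing. A toy illustration (the representation of the model's per-step distributions is our own):

```python
import math

def at_nll(step_probs, target):
    """Autoregressive NLL (Eq. 3, sketch): sum of -log P(y_t | x, y_<t)
    over target positions. `step_probs[t]` is a dict mapping candidate
    tokens to the model's probability at step t, conditioned on the
    source and the gold prefix (teacher forcing)."""
    return -sum(math.log(step_probs[t][tok]) for t, tok in enumerate(target))
```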
"They are divided into four categories according to their data size: low-resource ( < 1M), medium-resource ( > 1M and < 10M), high-resource ( > 10M and < 25M), and extremely high-resource ( > 25M).",
"See Appendix B for more details.",
"Configuration: We adopt a dropout rate of 0.1 for the extremely high-resource En-Fr and En-De (WMT19) pairs; for all other language pairs, we set it to 0.3.",
"We fine-tune AT with a maximum learning rate of 5e-4, a warm-up of 4000 steps, and label smoothing of 0.2.",
"For inference, we use beam search with a beam size of 5 for all translation directions.",
"For a fair comparison with previous works, all results are reported with case-sensitive and tokenized BLEU scores.",
"Main Results: We fine-tune AT systems initialized by our CeMAT on 8 popular language pairs, which are the language pairs overlapping with the experiments of mBART (Liu et al., 2020) and mRASP (Lin et al., 2020).",
"Table 2 shows the results.",
"Compared to directly training AT models, our systems with CeMAT as initialization obtain significant improvements on all four scenarios.",
"In low-resource scenarios, we observe gains of up to +14.4 BLEU, and over +11.4 BLEU on three of the four tasks, e.g., En-Tr.",
"In general, as the scale of the dataset increases, the benefit of pre-trained models becomes smaller and smaller.",
"However, we can still obtain significant gains when the data size is large enough (extremely high-resource: > 25M), i.e. +8.3 and +2.3 BLEU for En De and En Fr respectively.",
"This notable improvement shows that our model can further enhance extremely high-resource translation.",
"Overall, we obtain performance gains of more than +8.0 BLEU for most directions, and finally observe gains of +7.9 BLEU on average on all language pairs.",
"We further compare our CeMAT with mBART (Liu et al., 2020) and mRASP (Lin et al., 2020), two current state-of-the-art pre-training methods.",
"As illustrated in Table 2, CeMAT outperforms mBART on all language pairs by a large margin (+3.8 BLEU on average); for extremely high-resource settings, we obtain significant improvements even where mBART hurts performance.",
"Compared to mRASP, we achieve better performance on 11 out of the 13 translation directions, and outperform this strong competitor by an average of +1.2 BLEU over all directions.",
"Comparison with Existing Pre-training Models: We further compare our CeMAT with more existing multilingual pre-trained models on three popular translation directions from WMT14 En-De and WMT16 En-Ro.",
"Results are shown in Table 3.",
"Our CeMAT obtains competitive results on these language pairs on average, and achieves the best performance on En-Ro.",
"Our model also outperforms BT (Sennrich et al., 2016a), which is a universal and stable approach for augmenting bilingual data with monolingual data.",
"In addition, when combining back-translation with our CeMAT on Ro-En, we obtain a significant improvement from 36.8 to 39.0 BLEU, as shown in Table 3.",
"This indicates that our method is complementary to BT.",
"The Effectiveness of Aligned Code-Switching and Masking: We investigate the effectiveness of aligned code-switching & masking, as shown in Table 4.",
"We find that aligned code-switching & masking helps CeMAT improve performance in all scenarios, with gains of +0.5 BLEU on average, even though we can only match aligned word pairs for 6% of the tokens on average in the bilingual corpora.",
"We presume the method can be improved more significantly if we adopt more sophisticated word alignment methods.",
"The Effectiveness of Dynamic Masking: In the pre-training phase, we use a dynamic strategy when performing dual-masking on the encoder and decoder sides, respectively.",
"We verify the effectiveness of this dynamic masking strategy.",
"As illustrated in Table 4 and Appendix C, we achieve significant gains, with margins from +0.4 to +4.5 BLEU, when we change the masking ratio from a static value to a dynamically and randomly sampled one.",
"The average improvement on all language pairs is +2.1 BLEU.",
"This suggests the importance of dynamic masking.",
"In this section, we verify the performance of our CeMAT on NAT, which generates translations in parallel, on widely used translation tasks.",
"As illustrated in Figure 1, NAT also adopts a Seq2Seq framework, but consists of an encoder and a bidirectional decoder which can be used to predict the target sequences in parallel.",
"The training objective of NAT is formulated as follows: L(θ) = -Σ_{(X_m, Y_n) ∈ D(m,n)} Σ_{t=1}^{|Y_n|} log P(y_n^t | X_m; θ). (4) In this work, we follow Ghazvininejad et al. (2019), randomly sampling some target tokens y_n^mask for masking and training the model to predict them given the source sentences:",
"L(θ) = -Σ_{(X_m, Y_n) ∈ D(m,n)} Σ_{y_n^j ∈ y_n^mask} log P(y_n^j | X_m, Y_n \\ y_n^mask; θ). (5)",
"During decoding, given an input sequence to translate, the initial decoder input is a sequence of [mask] tokens.",
"The fine-tuned model generates translations by iteratively predicting target tokens and masking low-quality predictions.",
"This process can make the model re-predict the more challenging cases conditioned on previous high-confidence predictions.",
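The iterative decoding loop described above can be sketched as follows. Here `predict` stands in for the fine-tuned model, and the linear re-masking schedule follows Mask-Predict (Ghazvininejad et al., 2019); all names are illustrative.

```python
def mask_predict(predict, length, iterations):
    """Mask-Predict style iterative decoding (sketch). `predict` maps a
    partial target (a list of tokens, with None at masked slots) to a
    full list of (token, confidence) pairs. Each round keeps the most
    confident predictions and re-masks the rest; the number re-masked
    decays linearly with the iteration."""
    tokens = [None] * length          # start from an all-[mask] target
    for t in range(1, iterations + 1):
        preds = predict(tokens)
        tokens = [tok for tok, _ in preds]
        n_mask = int(length * (iterations - t) / iterations)
        if n_mask == 0:
            break
        # re-mask the n_mask lowest-confidence positions
        order = sorted(range(length), key=lambda i: preds[i][1])
        for i in order[:n_mask]:
            tokens[i] = None
    return tokens
```

Re-masking only the low-confidence positions is what lets the model re-predict hard tokens conditioned on earlier high-confidence predictions.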
"NAT Benchmark Data: We evaluate on three popular datasets: WMT14 En-De, WMT16 En-Ro, and IWSLT14 En-De.",
"For a fair comparison with baselines, we only use the bilingual PC32 corpora to pre-train our CeMAT.",
"We only use knowledge distillation (Gu et al., 2018) on WMT14 En De tasks.",
"Baselines: We use our CeMAT for initialization and fine-tune a Mask-Predict model (Ghazvininejad et al., 2019) as in Section 4.",
"To better quantify the effects of the proposed pre-training model, we build two strong baselines.",
"Direct: We directly train a Mask-Predict model with randomly initialized parameters.",
"mRASP: To verify that our pre-trained model is more suitable for NAT, we use the recently pre-trained model mRASP (Lin et al., 2020) and fine-tune it on the downstream language pairs.",
"Configuration: We use almost the same configuration as for pre-training and AT, except for the following differences.",
"We use learned positional embeddings (Ghazvininejad et al., 2019) and set the max-positions to 10,000.",
"The main results on three language pairs are presented in Table 5.",
"When using CeMAT to initialize the Mask-Predict model, we observe significant improvements (from +0.9 to +5.3 BLEU) on all tasks, and obtain gains of +2.5 BLEU on average.",
"We also achieve higher results than the AT model in both the En-De (+2.8 BLEU) and De-En (+0.9 BLEU) directions on the IWSLT14 datasets, which are extremely low-resource scenarios where training from scratch is harder and pre-training is more effective.",
"As illustrated in Table 5, CeMAT outperforms mRASP by a significant margin on all tasks.",
"On average, we obtain gains of +1.4 BLEU over mRASP.",
"Especially under the low-resource setting of IWSLT14 De-En, we achieve a large gain of +3.4 BLEU over mRASP.",
"Overall, mRASP shows only limited improvements (+0.4 to +1.9 BLEU) compared to CeMAT.",
"This suggests that although a traditional pre-training method can be fine-tuned on the NAT task, it does not bring improvements as significant as on the AT task, because of the gap between the pre-training and fine-tuning tasks.",
"We further compare the dynamic performance on three language pairs during iterative decoding, as shown in Appendix D. We only need 3 to 6 iterations to achieve the best score.",
"During the iteration, we always maintain rapid improvements.",
"In contrast, mRASP obtains the best result after 6 to 9 iterations.",
"We also observe that performance across iterations is unstable for both mRASP and Mask-Predict, whereas CeMAT appears more stable.",
"We conjecture that our pre-trained model can learn more related information between words in both the same and different languages.",
"This ability alleviates the drawback of the NAT assumption that individual token predictions are conditionally independent of each other.",
"Multilingual Pre-training Task: Conneau and Lample (2019) and Devlin et al. (2019) proposed to pre-train a cross-lingual language model on multilingual corpora; the encoder or decoder of the model is then initialized independently for fine-tuning.",
"Song et al. (2019), Yang et al. (2020) and Lewis et al. (2020) directly pre-trained a Seq2Seq model by reconstructing part or all of the inputs and achieved significant performance gains.",
"Recently, mRASP (Lin et al., 2020) and CSP (Yang et al., 2020) applied code-switching to simply perform random substitutions on the source side.",
"Another similar work, DICT-MLM (Chaudhary et al., 2020), introduces a multilingual dictionary, pre-training the MLM by masking words and then predicting their cross-lingual synonyms.",
"mRASP2 (Pan et al., 2021) also used code-switching on monolingual and bilingual data to improve the effectiveness, but it is essentially a multilingual AT model.",
"Compared to previous works: 1) CeMAT is the first pre-trained Seq2Seq model with a bidirectional decoder; 2) we introduce aligned code-switching & masking, which, unlike traditional code-switching, has two additional steps: alignment between source and target, and CSM; 3) we also introduce a dynamic dual-masking method.",
"Autoregressive Neural Machine Translation: Our work is also related to AT, which adopts an encoder-decoder framework to train the model (Sutskever et al., 2014).",
"To improve the performance, back-translation, forward-translation and related techniques were proposed to utilize the monolingual corpora (Sennrich et al., 2016a; Zhang and Zong, 2016; Edunov et al., 2018; Hoang et al., 2018).",
"Prior works also attempted to jointly train a single multilingual translation model that translates multi-language directions at the same time (Firat et al., 2016; Johnson et al., 2017; Aharoni et al., 2019; Wu et al., 2021).",
"In this work, we focus on pre-training a multilingual language model, which can provide initialization parameters for the language pairs.",
"On the other hand, our method can use other languages to further improve high-resource tasks.",
"Non-autoregressive Neural Machine Translation: Gu et al. (2018) first introduced a Transformer-based method to predict the complete target sequence in parallel.",
"In order to reduce the gap with the AT model, Lee et al. (2018) and Ghazvininejad et al. (2019) proposed to decode the target sentence with iterative refinement.",
"Wang et al. (2019b) and Sun et al. (2019) utilized auxiliary information to enhance the performance of NAT.",
"One work related to ours is Guo et al. (2020), which uses BERT to initialize the NAT model.",
"In this work, CeMAT is the first attempt to pre-train a multilingual Seq2Seq language model for the NAT task.",
"In this paper, we demonstrate that multilingually pre-training a sequence-to-sequence model with a bidirectional decoder produces significant performance gains for both Autoregressive and Non-autoregressive Neural Machine Translation.",
"Benefiting from conditional masking, the decoder module, and especially the cross-attention, can learn word representations and cross-lingual representation ability more easily.",
"We further introduce aligned code-switching & masking to align the representation spaces of words with similar semantics in different languages, and then use a dynamic dual-masking strategy to induce the bidirectional decoder to actively obtain information from the source side.",
"Finally, we verified the effectiveness of these two methods.",
"In the future, we will investigate more effective word alignment methods for aligned code-switching & masking .",
"We would like to thank anonymous reviewers for their helpful feedback.",
"We also thank Wenyong Huang, Lu Hou, Yinpeng Guo, and Guchun Zhang for their useful suggestions and help with experiments."
] | [
"abstain",
"objective",
"objective",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"method",
"objective",
"objective",
"abstain",
"result",
"result",
"objective",
"other",
"other"
] |
[
"Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form.",
"Most prior work on ML based lemmatization has focused on high resource languages, where data sets (word forms) are readily available.",
"For languages with no linguistic work available, especially on morphology, or where the computational realization of linguistic rules is complex and cumbersome, machine learning based lemmatizers are the way to go.",
"In this paper, we devote our attention to lemmatisation for low resource, morphologically rich scheduled Indian languages using neural methods.",
"Here, low resource means only a small number of word forms are available.",
"We perform tests to analyse the variance in monolingual models' performance on varying the corpus size and contextual morphological tag data for training.",
"We show that monolingual approaches with data augmentation can give competitive accuracy even in the low resource setting, which augurs well for NLP in low resource setting.",
"Natural Language Processing (NLP) has seen remarkable growth in all its sub-areas like machine translation, summarization, question answering and so on.",
"For all these tasks, though, morphemes remain the most basic form of information (Otter et al., 2020).",
"Morpheme identification (lemma and affixes) can assist these very useful large applications by solving the data sparsity problem.",
"Good lemmatisers are invaluable tools for handling large vocabularies in morphologically rich languages and thereby boosting performance in downstream tasks, but techniques are limited by resource availability.",
"(* These authors contributed equally to this work.)",
"This is a relevant point for Indian languages.",
"For instance, as many as 197 Indian languages are in UNESCO's Atlas of the World's Languages in Danger, 2010.",
"Even among the 22 scheduled languages of India, there is a wide disparity in resource availability, for example, for Konkani and Kashmiri (Rajan et al., 2020; Islam et al., 2018).",
"Techniques like Porter stemmer are indeed quick solutions, but they are suited only for alphabetic script languages, like English, and not abugida, like Bengali (Ali et al., 2017), or abjad, like Urdu (Kansal et al., 2012), script languages.",
"Moreover, creating stemmers requires different language specific stemming algorithms.",
"This requirement of language specific measures comes in the way of scaling the enterprise of creating stemmers for the hundreds and thousands of languages that exist in the world.",
"One might think of ML for stemming, for example, training a neural net with stems and word forms; but almost none of the 22 scheduled Indian languages, which are just a subset of the numerous languages spoken and written in India, have resources sufficient for training deep models (Bhattacharyya et al., 2019).",
"For a majority of Indian languages, the absence of dictionaries compounds the problem.",
"Most of the current approaches for morphological analysis use the idea of cross-lingual transfer learning from a higher resource language to the low resource language (McCarthy et al., 2019) of interest.",
"We show that even monolingual models can consistently perform with high accuracy with as few as 500 samples, without cross-lingual training of neural models and without structured information like dictionaries.",
"We further demonstrate good performance in an extremely low resource setting with as few as 100 training samples, and show competitive performance against cross-lingual models in the same setting.",
"In Zeman et al. (2018), lemmatisation was performed for small treebanks exploiting the common annotation standard across all languages, and the same task was implicit in Nivre et al. (2017).",
"Recently, there has been a shift to extremely low resource settings with the SIGMORPHON 2019 shared task (McCarthy et al., 2019) focusing on cross-lingual learning.",
"However, their task focuses on the reverse direction: given a lemma and a set of morphological features, generate a target inflected form.",
"A two-step attention process (Anastasopoulos and Neubig, 2019), similar to the SIGMORPHON 2019 morphological inflection task (McCarthy et al., 2019), has been adapted for the setup, which consists of four components: an encoder for morphological tags, an encoder for the character sequence, attention and a decoder.",
"The inputs to the model are inflected words and morphological tags, and we use single-layer bidirectional LSTMs with self-attention and without positional embeddings as encoders.",
"At each time step, during decoding, two context vectors are created via two different attention matrices over the output from the encoding of inflected word and morphological tag.",
"At the decoder, we use a two-step process: first we create a tag-informed state by attending over tags using the output from the decoder at the previous time step.",
"Second, we use this to attend over the source characters to produce the state vector for the decoder at that time step, which is used for producing the output character for that time step using a fully connected layer followed by a softmax.",
"We also add a structural bias to the attention model that encourages a Markov assumption over alignments; that is, if the i-th source character is aligned to the j-th target character, alignments from the (i+1)-th or i-th source character to the (j+1)-th target character are preferred.",
"See Anastasopoulos and Neubig (2019) for more details and explanations about the two-step attention process and Cohn et al. (2016) for more details regarding structural bias.",
"From the SIGMORPHON 2019 shared task, we collect language data from the multilingual morphological inflection task for Bengali, Hindi, Kannada, Sanskrit, Telugu, and Urdu.",
"Out of these, Telugu is the only one that does not have a large data set (inflected word forms).",
"We use the same task categorization of high or low resource languages as SIGMORPHON.",
"Each training sample is a triplet: (inflected word, lemma, tag) , where tag refers to the set of morphological features for the inflected word.",
"Table 1: Number of inflected word-lemma pairs available for each language:",
"the total original number of samples,",
"and the High and Low training dataset sizes in the high and low resource settings.",
"We create the smaller data sets from the high-resource data sets using the sampling method based on probability distributions mentioned in Cotterell et al. (2018).",
"During training for smaller data sets, we use augmentation from Cotterell et al. (2016).",
"This particular augmentation method relies on substituting stems in a word with random sequences of characters while preserving its length.",
"We also annotate data sets with tag information to create multiple data sets for analysing the effects of data set size and the importance of tag information on the accuracy of the models.",
"Warm-up Phase For each triple ( X , Y , T ) in the original data, we create two new tuples ( X , X , [ COPY ]) and ( Y , Y , T ) and train the model on the new tuples (Anastasopou-los and Neubig, 2019).",
"This helps the model learn a monotonic alignment in the attention model, which is effective for character level transduction tasks (Wu and Cotterell, 2019) while avoiding any explicit modelling of such a structural bias.",
"The training switches to the next phase when accuracy on the validation set exceeds 75%.",
"( X , Y , T ) triplet example for Hindi: ( , , V;V.PTCP;PST ).",
"A Spanish example would be ( bailaba , bailar , V;V.PTCP;PRS ).",
"Main Phase The training tuple ( X , Y , T ) is fed into the system, and the model is allowed to learn the distribution over the data.",
"A cool down period is also used while training to improve the accuracy of the model.",
"We also employ early stopping with a higher threshold than the cool down period so that the training stops when no further progress is possible.",
"Hyperparameters for our models are discussed in appendix A.1.",
"We also release all our code online for reproducibility and further research (https://github.com/krsrv/lemmatisation).",
"We create three models for each training set size.",
"They contain (1) no morphological features, (2) basic PoS tag data, and (3) all morphological features.",
"We report accuracies over complete string matching for our experiments.",
"Figure 1 shows the graphs for accuracy versus data.",
"When the complete set of morphological features is included in training, most languages achieve extremely high accuracy (at least 95%, except for Kannada), even when data set sizes are as small as 1000.",
"When the data set size is 500, the accuracies drop to the 80-90% range but remain competitive with rule-based lemmatisers across languages (Bhattacharyya et al., 2014), such as Sanskrit (Raulji and Saini, 2019), Hindi (Paul et al., 2013), Bengali (Shakib et al., 2019), Urdu (Gupta et al., 2015) and Kannada (Prathibha and Padma, 2015).",
"However, the performance drops drastically when the data set size is reduced to 100.",
"Performance on the augmented data sets shows a marked increase in accuracy over the unaugmented 100 training samples, but is still below the performance of models trained on 500 samples.",
"Telugu is not included in Figure 1 due to the lack of training samples.",
"We train only one model over the available 61 samples (augmented to 10,000).",
"The model achieves an accuracy of 80% on the SIGMORPHON Task 1 test set for Telugu.",
"Comparing Figures 1(a) and 1(b), we see that tag data does not provide substantial additional information to the model when the data set size exceeds 2000, barring the case of Sanskrit.",
"Table 2: Each column represents the percentage change in accuracy compared to the accuracy when all morph tags were used (training set sizes 2000, 1000, 500, 100 and 100-aug, each with No-tag and PoS sub-columns).",
"Table 2 values (No tag / PoS per size): bn: -3.60/0.00, -5.72/-2.74, -3.49/1.89, 12.77/50.00, 2.13/17.02; hi: 2.44/9.97, -8.97/2.61, -6.65/-5.10, -5.17/-27.59, -30.77/-3.85; kn: -4.70/4.04, -18.15/-3.08, -7.47/-1.87, 97.60/33.33, -4.76/-11.90;",
"sa: -11.30/-4.86, -5.52/-5.52, -4.32/-40.16, 40.91/-22.73, -9.52/14.29; ur: -2.93/-0.40, -1.73/-1.53, -8.41/-0.22, 141.53/82.20, -36.84/-21.05.",
"At 500, there is a spike in accuracy for Sanskrit, which is probably explained by the fact that Sanskrit is a morphologically and semantically systematic language with very few ambiguities (evident from its linguistic and grammar text, the Aṣṭādhyāyī by Pāṇini), and thus is the language with the highest responsiveness to augmentation with tag data.",
"Below 4000, the morphological tag data substantially improves the accuracy.",
"Sanskrit and Kannada both show worse results compared to other languages, which is likely due to the complex inflection patterns in both languages.",
"The gains from including tag information are better visualised in Table 2. A negative value in the table indicates that the model's performance decreases in absence of tag data.",
"In general, we see that full-tag informed models perform the best, followed by basic PoS tag informed models and finally models without tag information.",
"The table also shows that the importance of tag data increases considerably with decrease in the training set size.",
"However, an anomaly occurs with 100 training samples, when the absence of tag information improves the performance.",
"A possible explanation is that the number of training samples is too low and the model is not able to learn what to focus on effectively.",
"This anomaly disappears when we augment the data before training the model.",
"Note that achieving 100% accuracy on lemmatization without any tag information is not possible with any data set size.",
"Some words can have multiple lemmas and require context for disambiguation: the Hindi word 'kee' can map to either 'karana' or 'kaa' depending on whether it is used as a postposition or a verb.",
"(Accuracy is measured via a complete string match.) To compare against cross-lingual approaches, we also train cross-lingual models using the same method as monolingual training and incorporate the training procedure described by Artetxe et al. (2020) (the hyperparameters are listed in appendix A.2).",
"We simulate a low resource language by choosing 100 samples at random and use all the other languages as high resource languages.",
"Macro averaged accuracy for a simulated low resource language shows that monolingual models give comparable accuracies when compared to cross-lingual models, with the exception of Hindi.",
"Performance on Sanskrit and Urdu, especially Urdu, seems to be better when the monolingual models are used.",
"The complete list of accuracies for the cross-lingual models are listed in Table 3. The macro-averaged difference between the cross-lingual and monolingual model is -2 in the cross-lingual models' favor.",
"This paper addressed lemmatisation in low resource settings (i.e., with a low number of word forms).",
"For most languages, a monolingual model trained on approximately 1000 training samples gives competitive accuracy, while training on 500 samples gives results at par with rule-based linguistic systems.",
"For extremely-low resource settings as well, monolingual models perform well with the help of data augmentation.",
"Even in these scenarios, monolingual models can give competitive results compared to cross-lingual models, a result that is supported by research in other tasks such as morphological inflection (Anastasopoulos and Neubig, 2019).",
"Additionally, in the low resource setting, additional features are an important source of information.",
"Even PoS tags benefit the training process.",
"The model currently does not exploit any linguistic knowledge available to improve its performance.",
"Incorporating morphological rules or using bilingual knowledge to create transfer models could grant accuracy gains (Gebreselassie et al., 2020; Faruqui et al., 2015).",
"Moreover, transformers have been shown to improve performance on character level tasks and would be an applicable method here (Wu et al., 2020).",
"Another potential area of improvement could be the use of different data hallucination techniques, such as that of Shcherbakov et al. (2016), which uses phonetics instead of relying on characters for predictions.",
"The work in this paper can be useful for expanding the power of language understanding to ethnic/local languages.",
"This can consequently bring these low-resource language domains within the umbrella of widespread NLP applications in edge computing devices.",
"By focusing on low-resource domains, we learn how lightweight models fare in these settings, potentially enabling reductions in model sizes, training time and compute costs, which is a significant step towards containing energy and carbon costs.",
"Such developments also spur the progress of languages and the civilisations associated with them by bringing them into advanced technological ecosystems, thereby enabling a more equitable distribution of technology and quality of life across the globe."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Pre-trained language models such as BERT have been successful at tackling many natural language processing tasks.",
"However, the unsupervised sub-word tokenization methods commonly used in these models (e.g., byte-pair encoding, BPE) are sub-optimal at handling morphologically rich languages.",
"Even given a morphological analyzer, naive sequencing of morphemes into a standard BERT architecture is inefficient at capturing morphological compositionality and expressing word-relative syntactic regularities.",
"We address these challenges by proposing a simple yet effective two-tier BERT architecture that leverages a morphological analyzer and explicitly represents morphological compositionality.",
"Despite the success of BERT, most of its evaluations have been conducted on high-resource languages, obscuring its applicability on low-resource languages.",
"We evaluate our proposed method on the low-resource morphologically rich Kinyarwanda language, naming the proposed model architecture KinyaBERT .",
"A robust set of experimental results reveal that KinyaBERT outperforms solid baselines by 2% in F1 score on a named entity recognition task and by 4.3% in average score of a machine-translated GLUE benchmark.",
"KinyaBERT fine-tuning has better convergence and achieves more robust results on multiple tasks even in the presence of translation noise.",
"Recent advances in natural language processing (NLP) through deep learning have been largely enabled by vector representations (or embeddings) learned through language model pre-training (Bengio et al., 2003; Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017; Peters et al., 2018; Devlin et al., 2019).",
"Language models such as BERT (Devlin et al., 2019) are pre-trained on large text corpora and then fine-tuned on downstream tasks, resulting in better performance on many NLP tasks.",
"Code and data are released at https://github.",
"Despite attempts to make multilingual BERT models (Conneau et al., 2020), research has shown that models pre-trained on high quality monolingual corpora outperform multilingual models pre-trained on large Internet data (Scheible et al., 2020; Virtanen et al., 2019).",
"This has motivated many researchers to pretrain BERT models on individual languages rather than adopting the language-agnostic multilingual models.",
"This work is partly motivated by the same findings, but also proposes an adaptation of the BERT architecture to address representational challenges that are specific to morphologically rich languages such as Kinyarwanda.",
"In order to handle rare words and reduce the vocabulary size, BERT-like models use statistical sub-word tokenization algorithms such as byte pair encoding (BPE) (Sennrich et al., 2016).",
"While these techniques have been widely used in language modeling and machine translation, they are not optimal for morphologically rich languages (Klein and Tsarfaty, 2020).",
"In fact, sub-word tokenization methods that are solely based on surface forms, including BPE and character-based models, cannot capture all morphological details.",
"This is due to morphological alternations (Muhirwe, 2007) and non-concatenative morphology (McCarthy, 1981) that are often exhibited by morphologically rich languages.",
"For example, as shown in Table 1, a BPE model trained on 390 million tokens of Kinyarwanda text cannot extract the true sub-word lexical units (i.e. morphemes) for the given words.",
"This work addresses the above problem by proposing a language model architecture that explicitly represents most of the input words with morphological parses produced by a morphological analyzer.",
"In this architecture BPE is only used to handle words which cannot be directly decomposed by the morphological analyzer, such as misspellings, proper names and foreign language words.",
"Table 1 compares, for example words such as 'twagezeyo' ('we arrived there'), the true morphemes against monolingual and multilingual BPE segmentations.",
"Given the output of a morphological analyzer, a second challenge is in how to incorporate the produced morphemes into the model.",
"One naive approach is to feed the produced morphemes to a standard transformer encoder as a single monolithic sequence.",
"This approach is used by Mohseni and Tebbifakhr (2019).",
"One problem with this method is that mixing sub-word information and sentence-level tokens in a single sequence does not encourage the model to learn the actual morphological compositionality and express word-relative syntactic regularities.",
"We address these issues by proposing a simple yet effective two-tier transformer encoder architecture.",
"The first tier encodes morphological information, which is then transferred to the second tier to encode sentence level information.",
"We call this new model architecture KinyaBERT because it uses BERT's masked language model objective for pre-training and is evaluated on the morphologically rich Kinyarwanda language.",
"This work also represents progress in low resource NLP.",
"Advances in human language technology are most often evaluated on the main languages of major economic powers, such as English, Chinese and European languages.",
"This has exacerbated the language technology divide between the highly resourced languages and the underrepresented languages.",
"It also hinders progress in NLP research because new techniques are mostly evaluated on the mainstream languages and some NLP advances become less informed of the diversity of the linguistic phenomena (Bender, 2019).",
"Specifically, this work provides the following research contributions: A simple yet effective two-tier BERT architecture for representing morphologically rich languages.",
"New evaluation datasets for Kinyarwanda language including a machine-translated subset of the GLUE benchmark (Wang et al., 2019) and a news categorization dataset.",
"Experimental results which set a benchmark for future studies on Kinyarwanda language understanding, and on using machine-translated versions of the GLUE benchmark.",
"Code and datasets are made publicly available for reproducibility 1 .",
"Our modeling objective is to be able to express morphological compositionality in a Transformer-based (Vaswani et al., 2017) language model.",
"For morphologically rich languages such as Kinyarwanda, a set of morphemes (typically a stem and a set of functional affixes) combine to produce a word with a given surface form.",
"This requires an alternative to the ubiquitous BPE tokenization, through which exact sub-word lexical units (i.e. morphemes) are used.",
"For this purpose, we use a morphological analyzer which takes a sentence as input and, for every word, produces a stem, zero or more affixes and assigns a part of speech (POS) tag to each word.",
"This section describes how this morphological information is obtained and then integrated in a two-tier transformer architecture (Figure 1) to learn morphology-aware input representations.",
"Kinyarwanda, the national language of Rwanda, is one of the major Bantu languages (Nurse and Philippson, 2006) spoken in central and eastern Africa.",
"Kinyarwanda has 16 noun classes.",
"Modifiers (demonstratives, possessives, adjectives, numerals) carry a class marking morpheme that agrees with the main noun class.",
"The verbal morphology (Nzeyimana, 2020) also includes subject and object markers that agree with the class of the subject or object.",
"This agreement therefore enables users of the language to approximately disambiguate referred entities based on their classes.",
"We leverage this syntactic agreement property in designing our unsupervised POS tagger.",
"Our morphological analyzer for Kinyarwanda was built following finite-state two-level morphology principles (Koskenniemi, 1983; Beesley and Karttunen, 2000, 2003).",
"For every inflectable word type, we maintain a morphotactics model using a directed acyclic graph (DAG) that represents the regular sequencing of morphemes.",
"We effectively model all inflectable word types in Kinyarwanda which include verbals, nouns, adjectives, possessive and demonstrative pronouns, numerals and quantifiers.",
"The morphological analyzer also includes many hand-crafted rules for handling mor-phographemics and other linguistic regularities of the Kinyarwanda language.",
"The morphological analyzer was independently developed and calibrated by native speakers as a closed source solution before the current work on language modeling.",
"Similar to Nzeyimana (2020), we use a classifier trained on a stemming dataset to disambiguate between competing outputs of the morphological analyzer.",
"Furthermore, we improve the disambiguation quality by leveraging a POS tagger at the phrase level so that the syntactic context can be taken into consideration.",
"We devise an unsupervised part of speech tagging algorithm which we explain here.",
"Let x = ( x 1 , x 2 , x 3 , ...x n ) be a sequence of tokens (e.g. words) to be tagged with a corresponding sequence of tags y = ( y 1 , y 2 , y 3 , ...y n ) .",
"A sample of actual POS tags used for Kinyarwanda is given in Table 12 in the Appendix.",
"Using Bayes' rule, the optimal tag sequence y* is given by Equation 1: y* = argmax_y P(y | x) = argmax_y P(x | y) P(y) / P(x) = argmax_y P(x | y) P(y). A standard hidden Markov model (HMM) can decompose the result of Equation 1, using first order Markov and independence assumptions, into P(x | y) = ∏_{t=1}^{n} P(x_t | y_t) and P(y) = ∏_{t=1}^{n} P(y_t | y_{t-1}).",
"The tag sequence y can then be efficiently decoded using the Viterbi algorithm (Forney, 1973).",
"A better decoding strategy is presented below.",
"Inspired by Tsuruoka and Tsujii (2005), we devise a greedy heuristic for decoding y using the same first order Markov assumptions but with bidirectional decoding.",
"First, we estimate the local emission probabilities P ( x t | y t ) using a factored model given in the following equation: P ( x t | y t ) P ( x t | y t ) P ( x t | y t ) = P m ( x t | y t ) P p ( x t | y t ) P a ( x t | y t ) (2) In Equation 2, P m ( x t | y t ) corresponds to the probability/score returned by a morphological disambiguation classifier, representing the uncertainty of the morphology of x t .",
"First, we estimate the local emission probabilities P(x_t | y_t) using a factored model given in Equation 2: P(x_t | y_t) ∝ P̃(x_t | y_t) = P_m(x_t | y_t) P_p(x_t | y_t) P_a(x_t | y_t), where P_m(x_t | y_t) corresponds to the probability/score returned by a morphological disambiguation classifier, representing the uncertainty of the morphology of x_t, and P_p(x_t | y_t) corresponds to a local precedence weight between competing POS tags.",
"These precedence weights are manually crafted through qualitative evaluation (see Table 12 in the Appendix for examples).",
"P_a(x_t | y_t) quantifies the local neighborhood syntactic agreement between Bantu class markers.",
"When there are two or more agreeing class markers in neighboring words, the tagger should be more confident of the agreeing parts of speech.",
"A basic agreement score can be the number of agreeing class markers within a window of seven words around a given candidate x t .",
"We manually designed a more elaborate set of agreement rules and their weights for different contexts.",
"Therefore, the actual agreement score P_a(x_t | y_t) is a weighted sum of the matched agreement rules.",
"Each of the unnormalized measures P̃ in Equation 2 is mapped to the [0, 1] range using a sigmoid function σ(z | z_A, z_B) given in Equation 3, where z is the score of the measure and [z_A, z_B] is its estimated active range.",
"σ(z | z_A, z_B) = [1 + exp(-8 (z - z_A) / (z_B - z_A))]^(-8) (3). After estimating the local emission model, we greedily decode y_t = argmax_{y_t} P(y_t | x), in decreasing order of P(x_t | y_t), using a first order bidirectional inference of P(y_t | x) given by Equation 4: P(y_t | x) ∝ P(x_t | y_t) P(y_t | y_{t-1}, y_{t+1}) P(y_{t-1} | x) P(y_{t+1} | x) if both y_{t-1} and y_{t+1} have been decoded; P(x_t | y_t) P(y_t | y_{t-1}) P(y_{t-1} | x) if only y_{t-1} has been decoded; P(x_t | y_t) P(y_t | y_{t+1}) P(y_{t+1} | x) if only y_{t+1} has been decoded; and P(x_t | y_t) otherwise. The first order transition measures P(y_t | y_{t-1}), P(y_t | y_{t+1}) and P(y_t | y_{t-1}, y_{t+1}) are estimated using count tables computed over the entire corpus by aggregating local emission marginals P(y_t) = ∑_{x_t} P(x_t, y_t) obtained through morphological analysis and disambiguation.",
"The overall architecture of our model is depicted in Figure 1.",
"This is a two-tier transformer encoder architecture made of a token-level morphology encoder that feeds into a sentence/document-level encoder.",
"The morphology encoder is made of a small transformer encoder that is applied to each analyzed token separately in order to extract its morphological features.",
"The extracted morphological features are then concatenated with the token's stem embedding to form the input vector fed to the sentence/document encoder.",
"The sentence/document encoder is made of a standard transformer encoder as used in other BERT models.",
"The sentence/document encoder uses untied position encoding with relative bias as proposed in Ke et al. (2020).",
"The input to the morphology encoder is a set of embedding vectors, three vectors relating to the part of speech, one for the stem and one for each affix when available.",
"The transformer encoder operation is applied to these embedding vectors without any positional information.",
"This is because positional information at the morphology level is inherent since no morpheme repeats and each morpheme always occupies a known (i.e. fixed) slot in the morphotactics model.",
"The extracted morphological features are four encoder output vectors corresponding to the three POS embeddings and one stem embedding.",
"Vectors corresponding to the affixes are left out since they are of variable length and the role of the affixes in this case is to be attended to by the stem and the POS tag so that morphological information can be captured.",
"The four morphological output feature vectors are further concatenated with another stem embedding at the sentence level to form the input vector for the main sentence/document encoder.",
"The choice of this transformer-based architecture for morphology encoding is motivated by two factors.",
"First, Zaheer et al. (2020) has demonstrated the importance of having global tokens such as [CLS] token in BERT models.",
"These are tokens that attend to all other tokens in the modeled sequence.",
"These global tokens effectively encapsulate some meaning of the encoded sequence.",
"Second, the POS tag and stem represent the high level information content of a word.",
"Therefore, having the POS tag and stem embeddings be transformed into morphological features is a viable option.",
"The POS tag and stem embeddings thus serve as the global tokens at the morphology encoder level since they attend to all other morphemes that can be associated with them.",
"In order to capture subtle morphological information, we make one of the three POS embeddings span an affix set vocabulary that is a subset of the power set of all affixes.",
"We form an affix set vocabulary V_a that is made of the N most frequent affix combinations in the corpus.",
"In fact, the morphological model of the language enforces constraints on which affixes can go together for any given part of speech, resulting in an affix set vocabulary that is much smaller than the power set of all affixes.",
"Even with limiting the affix set vocabulary V a to a fixed size, we can still map any affix combination to V a by dropping zero or very few affixes from the combination.",
"Note that the affix set embedding still has to attend to all morphemes at the morphology encoder level, making it adapt to the whole morphological context.",
"The affix set embedding is depicted by the purple units in Figure 1 and a sample of V a is given in Table 13 in the Appendix.",
"Similar to other BERT models, we use a masked language model objective.",
"Specifically, 15% of all tokens in the training set are considered for prediction, of which 80% are replaced with [MASK] tokens, 10% are replaced with random tokens and 10% are left unchanged.",
"When prediction tokens are replaced with [MASK] or random tokens, the corresponding affixes are randomly omitted 70% of the time or left in place for 30% of the time, while the units corresponding to POS tags and affix sets are also masked.",
"The pre-training objective is then to predict stems and the associated affixes for all tokens considered for prediction using a two-layer feed-forward module on top of the encoder output.",
"For the affix prediction task, we face a multi-label classification problem where for each prediction token, we predict a variable number of affixes.",
"In our experiments, we tried two methods.",
"For one, we use the Kullback-Leibler (KL) divergence loss function to solve a regression task of predicting the N-length affix distribution vector.",
"For this case, we use a target affix probability vector a_t ∈ R^N in which each target affix index is assigned probability 1/m, and non-target affixes are assigned probability 0.",
"Here m is the number of affixes in the word to be predicted and N is the total number of all affixes.",
"We call this method Affix Distribution Regression (ADR) and model variant KinyaBERT ADR .",
"Alternatively, we use cross entropy loss and just predict the affix set associated with the prediction word; we call this method Affix Set Classification (ASC) and the model variant KinyaBERT ASC .",
"In order to evaluate the proposed architecture, we pre-train KinyaBERT (101M parameters for KinyaBERT ADR and 105M for KinyaBERT ASC ) on a 2.4 GB of Kinyarwanda text along with 3 baseline BERT models.",
"The first baseline is a BERT model pre-trained on the same Kinyarwanda corpus and with the same position encoding (Ke et al., 2020), same batch size and pre-training steps, but using the standard BPE tokenization.",
"We call this first baseline model BERTBPE (120M parameters).",
"The second baseline is a similar BERT model pretrained on the same Kinyarwanda corpus but tokenized by a morphological analyzer.",
"For this model, the input is just a sequence of morphemes, in a similar fashion to Mohseni and Tebbifakhr (2019).",
"We call this second baseline model BERTMORPHO (127M parameters).",
"For BERTMORPHO , we found that predicting 30% of the tokens achieves better results than using 15% because of the many affixes generated.",
"The third baseline is XLM-R (Conneau et al., 2020) (270M parameters), which is pretrained on 2.5 TB of multilingual text.",
"We evaluate the above models by comparing their performance on downstream NLP tasks.",
"KinyaBERT was implemented using PyTorch version 1.9.",
"The morphological analyzer and POS tagger were implemented in a shared library using POSIX C. Morphological parsing of the corpus was performed as a pre-processing step, taking 20 hours to segment the 390M-token corpus on a 12-core desktop machine.",
"Pre-training was performed using RTX 3090 and RTX 2080Ti desktop GPUs.",
"Each KinyaBERT model takes on average 22 hours to train for 1000 steps on one RTX 3090 GPU or 29 hours on one RTX 2080Ti GPU.",
"Baseline models (BERTBPE and BERTMORPHO ) were pre-trained on cloud tensor processing units (TPU v3-8 devices, each with 128 GB memory) using the PyTorch/XLA package (https://github.com/pytorch/xla/) and a TPU-optimized fairseq toolkit (Ott et al., 2019).",
"Pre-training on TPU took 2.3 hours per 1000 steps.",
"The baselines were trained on TPU because there were no major changes needed to the existing RoBERTA (base) architecture implemented in fairseq and the TPU resources were available and efficient.",
"In all cases, pre-training batch size was set to 2560 sequences, with maximum 512 tokens in each sequence.",
"The maximum learning rate was set to 4 × 10^-4, which is achieved after 2000 steps and then linearly decays.",
"Our main results and ablation results were obtained from models pre-trained for 32K steps in all cases.",
"Other pre-training details, model architectural dimensions and other hyper-parameters are given in the Appendix.",
"Machine translated GLUE benchmark The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019) has been widely used to evaluate pre-trained language models.",
"In order to assess KinyaBERT performance on such high level language tasks, we used Google Translate API to translate a subset of the GLUE benchmark (MRPC, QNLI, RTE, SST-2, STS-B and WNLI tasks) into Kinyarwanda.",
"CoLA task was left because it is English-specific.",
"MNLI and QQP tasks were also not translated because they were too expensive to translate with Google's commercial API.",
"While machine translation adds more noise to the data, evaluating on this dataset is still relevant because all models compared have to cope with the same noise.",
"To understand this translation noise, we also run user evaluation experiments, whereby four volunteers proficient in both English and Kinyarwanda evaluated a random sample of 6000 translated GLUE examples, and assigned a score to each example on a scale from 1 to 4 (See Table 11 in Appendix).",
"These scores help us characterize the noise in the data and contextualize our results with regards to other GLUE evaluations.",
"Results on these GLUE tasks are shown in Table 3. Named entity recognition (NER) We use the Kinyarwanda subset of the MasakhaNER dataset (Adelani et al., 2021) for NER task.",
"This is a high quality NER dataset annotated by native speakers for major African languages including Kinyarwanda.",
"The task requires predicting four entity types: Persons (PER), Locations (LOC), Or-3 https://github.com/pytorch/xla/ ganizations (ORG), and date and time (DATE).",
"News Categorization Task (NEWS) For a document classification experiment, we collected a set of categorized news articles from seven major news websites that regularly publish in Kinyarwanda.",
"The articles were already categorized, so no more manual labeling was needed.",
"This dataset is similar to Niyongabo et al. (2020), but in our case, we limited the number collected articles per category to 3000 in order to have a more balanced label distribution (See Table 10 in the Appendix).",
"The final dataset contains a total of 25.7K articles spanning 12 categories and has been split into training, validation and test sets in the ratios of 70%, 5% and 25% respectively.",
"Results on this NEWS task are presented in Table 5. For each evaluation task, we use a two-layer feed-forward network on top of the sentence encoder as it is typically done in other BERT models.",
"The fine-tuning hyper-parameters are presented in Table 14 in the Appendix.",
"The main results are presented in Table 3, Table 4, and Table 5. Each result is the average of 10 independent fine-tuning runs.",
"Each average result is shown with the standard deviation of the 10 runs.",
"Except for XLM-R, all other models are pre-trained on the same corpus (See Table 2) for 32K steps using the same hyper-parameters.",
"On the GLUE task, KinyaBERT ASC achieves 4.3% better average score than the strongest baseline.",
"KinyaBERT ASC also leads to more robust results on multiple tasks.",
"It is also shown that having just a morphological analyzer is not enough: BERTMORPHO still under-performs even though it uses morphological tokenization.",
"Multilingual XLM-R achieves least performance in most cases, possibly because it was not pre-trained on Kinyarwanda text and uses inadequate tokenization.",
"On the NER task, KinyaBERT ADR achieves best performance, about 3.2% better average F1 score than the strongest baseline.",
"One of the architectural differences between KinyaBERT ADR and KinyaBERT ASC is that KinyaBERT ADR uses three POS tag embeddings while KinyaBERT ASC uses two.",
"Assuming that POS tagging facilitates named entity recognition, this empirical result suggests that increasing the amount of POS tag information 5352 Task: MRPC QNLI RTE SST-2 STS-B WNLI #Train examples: 3.4K 104.7K 2.5K 67.4K 5.8K 0.6K Translation score: 2.7/4.0 2.9/4.0 3.0/4.0 2.7/4.0 3.1/4.0 2.9/4.0 Model Validation Set XLM-R 84.2/78.3 0 .",
"in the model, possibly through diversification (i.e. multiple POS tag embedding vectors per word), can lead to better NER performance.",
"The NEWS categorization task resulted in differing performances between validation and test sets.",
"This may be a result that solving such task does not require high level language modeling but rather depends on spotting few keywords.",
"Previous research on a similar task (Niyongabo et al., 2020) has shown that simple classifiers based on TF-IDF features suffice to achieve best performance.",
"The morphological analyzer and POS tagger inherently have some level of noise because they do not always perform with perfect accuracy.",
"While we did not have a simple way of assessing the impact of this noise in this work, we can logically expect that the lower the noise the better the results could be.",
"Improving the morphological analyzer and POS tagger and quantitatively evaluating its accuracy is part of future work.",
"Even though our POS tagger uses heuristic methods and was evaluated mainly through qualitative exploration, we can still see its positive impact on the pre-trained language model.",
"We did not use previous work on Kinyarwanda POS tagging because it is largely different from this work in terms of scale, tag dictionary and dataset size and availability.",
"We plot the learning curves during fine-tuning process of KinyaBERT and the baselines.",
"The results in Figure 2 indicate that KinyaBERT fine-tuning has better convergence across all tasks.",
"Additional results also show that positional attention (Ke et al., 2020) learned by KinyaBERT has more uniform and smoother relative bias while BERTBPE and BERTMORPHO have more noisy 5353 Figure 2: Comparison of fine-tuning loss curves between KinyaBERT and baselines on the evaluation tasks.",
"relative positional bias (See Figure 3 in Appendix).",
"This is possibly an indication that KinyaBERT allows learning better word-relative syntactic regularities.",
"However, this aspect needs to be investigated more systematically in future research.",
"While the main sentence/document encoder of KinyaBERT is equivalent to a standard BERT BASE configuration on top of a small morphology encoder, overall, the model actually decreases the number of parameters by more than 12% through embedding layer savings.",
"This is because using morphological representation reduces the vocabulary size.",
"Using smaller embedding vectors at the morphology encoder level also significantly reduces the overall number of parameters.",
"Table 8 in Appendix shows the vocabulary sizes and parameter count of KinyaBERT in comparison to the baselines.",
"While the sizing of the embeddings was done essentially to match BERT BASE configuration, future studies can shed more light on how different model sizes affect performance.",
"We conducted an ablation study to clarify some of the design choices made for KinyaBERT architecture.",
"We make variations along two axes:",
"(i) morphology input and",
"(ii) pre-training task, which gave us four variants that we pre-trained for 32K steps and evaluated on the same downstream tasks.",
"AFS STEM+ASC : Morphological features are captured by two POS tag and one affix set vectors.",
"We predict both the stem and affix set.",
"This corresponds to KinyaBERT ASC presented in the main results.",
"POS STEM+ADR : Morphological features are carried by three POS tag vectors and we predict the stem and affix probability vector.",
"This corresponds to KinyaBERT ADR .",
"AVG STEM+ADR : Morphological features are captured by two POS tag vectors and the pointwise average of affix hidden vectors from the morphology encoder.",
"We predict the stem and affix probability vector.",
"STEM STEM : We omit the morphology encoder and train a model with only the stem parts without affixes and only predict the stem.",
"Ablation results presented in Table 6 indicate that using affix sets for both morphology encoding and prediction gives better results for many GLUE tasks.",
"The under-performance of STEM STEM on high resource tasks (QNLI and SST-2) is an indication that morphological information from affixes is important.",
"However, the utility of this information depends on the task as we see mixed results on other tasks.",
"Due to a large design space for a morphology-aware language model, there are still a number of other design choices that can be explored in future studies.",
"One may vary the amount of POS tag embeddings used, vary the size affix set vocabulary or the dimension of the morphology encoder embeddings.",
"One may also investigate the potential of other architectures for the morphology encoder, such as convolutional networks.",
"Our early attempt of using recurrent neural networks (RNNs) for the morphology encoder was abandoned because it was too slow to train.",
"Multilingual PLMs that include both high-resource and low-resource languages have also been introduced (Devlin et al., 2019; Conneau et al., 2020; Xue et al., 2021; Chung et al., 2020).",
"However, it has been found that these multilingual models are biased towards high-resource languages and use fewer low quality and uncleaned low-resource data (Kreutzer et al., 2022).",
"The included low-resource languages are also very limited because they are mainly sourced from Wikipedia articles, where languages with few articles like Kinyarwanda are often left behind (Joshi et al., 2020; Nekoto et al., 2020).",
"Joshi et al. (2020) classify the state of NLP for Kinyarwanda as Scraping-By, meaning it has been mostly excluded from previous NLP research, and require the creation of dedicated resources and models.",
"Kinyarwanda has been studied mostly in descriptive linguistics (Kimenyi, 1976, 1978a,b, 1988; Jerro, 2016).",
"Few recent NLP works on Kinyarwanda include Morphological Analysis (Muhirwe, 2009; Nzeyimana, 2020), Text Classification (Niyongabo et al., 2020), Named Entity Recognition (Rijhwani et al., 2020; Adelani et al., 2021; Slev and Lignos, 2021), POS tagging (Gar-rette and Baldridge, 2013; Garrette et al., 2013; Duong et al., 2014; Fang and Cohn, 2016; Cardenas et al., 2019), and Parsing (Sun et al., 2014; Mielens et al., 2015).",
"There is no prior study on pre-trained language modeling for Kinyarwanda.",
"for African languages.",
"To the best of our knowledge there is currently only AfriBERT (Ralethe, 2020) that has been pre-trained on Afrikaans, a language spoken in South Africa.",
"In this paper, we aim to increase the inclusion of African languages in NLP community by introducing a PLM for Kinyarwanda.",
"Differently to the previous works (see Table 15 in Appendix) which solely pretrained unmodified BERT models, we propose an improved BERT architecture for morphologically rich languages.",
"Recently, there has been a research push to improve sub-word tokenization by adopting character-based models (Ma et al., 2020; Clark et al., 2022).",
"While these methods are promising for the language-agnostic case, they are still solely based on the surface form of words, and thus have the same limitations as BPE when processing morphologically rich languages.",
"We leave it to future research to empirically explore how these character-based methods compare to morphology-aware models.",
"This work demonstrates the effectiveness of explicitly incorporating morphological information in language model pre-training.",
"The proposed two-tier Transformer architecture allows the model to represent morphological compositionality.",
"Experiments conducted on Kinyarwanda, a low resource morphologically rich language, reveal significant performance improvement on several downstream NLP tasks when using the proposed architecture.",
"These findings should motivate more research into morphology-aware language models.",
"This work was supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program and Google Cloud Research Credits with the award GCP19980904.",
"We also thank the anonymous reviewers for their insightful feedback."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Aspect-based sentiment analysis involves the recognition of so called opinion target expressions (OTEs).",
"To automatically extract OTEs, supervised learning algorithms are usually employed which are trained on manually annotated corpora.",
"The creation of these corpora is labor-intensive and sufficiently large datasets are therefore usually only available for a very narrow selection of languages and domains.",
"In this work, we address the lack of available annotated data for specific languages by proposing a zero-shot cross-lingual approach for the extraction of opinion target expressions.",
"We leverage multilingual word embeddings that share a common vector space across various languages and incorporate these into a convolutional neural network architecture for OTE extraction.",
"Our experiments with 5 languages give promising results: We can successfully train a model on annotated data of a source language and perform accurate prediction on a target language without ever using any annotated samples in that target language.",
"Depending on the source and target language pairs, we reach performances in a zero-shot regime of up to 77% of a model trained on target language data.",
"Furthermore, we can increase this performance up to 87% of a baseline model trained on target language data by performing cross-lingual learning from multiple source languages.",
"In recent years, there has been an increasing interest in developing sentiment analysis models that predict sentiment at a more fine-grained level than at the level of a complete document.",
"A paradigm coined as Aspect-based Sentiment Analysis (ABSA) addresses this need by defining the sentiment expressed in a text relative to an opinion target (also called aspect ).",
"Consider the following example from a restaurant review: Moules were excellent , lobster ravioli was VERY salty ! In this example, there are two sentiment statements, one positive and one negative.",
"The positive one is indicated by the word excellent and is expressed towards the opinion target Moules .",
"The second, negative sentiment, is indicated by the word salty and is expressed towards the lobster ravioli .",
"A key task within this fine-grained sentiment analysis consists of identifying so called opinion target expressions (OTE).",
"To automatically extract OTEs, supervised learning algorithms are usually employed which are trained on manually annotated corpora.",
"In this paper, we are concerned with how to transfer classifiers trained on one domain to another domain.",
"In particular, we focus on the transfer of models across languages to alleviate the need for multilingual training data.",
"We propose a model that is capable of accurate zero-shot cross-lingual OTE extraction, thus reducing the reliance on annotated data for every language.",
"Similar to Upadhyay et al. (2018), our model leverages multilingual word embeddings (Smith et al., 2017; Lample et al., 2018) that share a common vector space across various languages.",
"The shared space allows us to transfer a model trained on source language data to predict OTEs in a target language for which no (i.e. zero-shot setting) or only small amounts of data are available, thus allowing to apply our model to under-resourced languages.",
"Our main contributions can be summarized as follows: We present the first approach for zero-shot cross-lingual opinion target extraction and achieve up to 87% of the performance of a monolingual baseline.",
"multiple source languages for cross-lingual learning and show that we can improve by 6 to 8 points in F 1 -Score compared to a model trained on a single source language.",
"We investigate the benefit of augmenting the zero-shot approach with additional data points from the target language.",
"We observe that we can save hundreds of annotated data points by employing a cross-lingual approach.",
"We compare two methods for obtaining cross-lingual word embeddings on the task.",
"A common approach for extracting opinion target expressions is to phrase the task as a sequence tagging problem using the well-known IOB scheme (Tjong Kim Sang and Veenstra, 1999) to represent OTEs as a sequence of tags.",
"According to this scheme, each word in our text is marked with one of three tags, namely I , O or B that indicate if the word is at the B eginning 1 , I nside or O utside of a target expression.",
"An example of such an encoding can be seen below: The wine list is also really nice .",
"By rephrasing the task in this way, we can address it using established sequence tagging models.",
"In this work, we use a multi-layer convolutional neural network (CNN) as our sequence tagging model.",
"The model receives a sequence of words as input features and predicts an output sequence of IOB tags.",
"In order to keep our model simple and our results clear, we restrict our input representation to a sequence of word embeddings.",
"While additional features such as Part-of-Speech (POS) tags are known to perform well in the domain of OTE extraction (Toh and Su, 2016; Kumar et al., 2016; Jebbara and Cimiano, 2016), they would require a separately trained model for POS-tag prediction which can not be assumed to be available for every language.",
"We refrain from using more complex architectures such as memory networks as our goal is mainly to investigate the possibility of performing zero-shot cross-lingual transfer learning for OTE prediction.",
"Being the 1 Note that the B token is only used to indicate the boundary of two consecutive phrases.",
"first approach proposing this, we leave the question of how to increase performance of the approach by using more complex architectures to future work.",
"In the following, we describe our monolingual CNN model for OTE extraction which we use as our baseline model.",
"Afterwards, we show how we adapt this model for a cross-lingual and even zero-shot regime.",
"Our monolingual baseline model consists of a word embedding layer, a stack of convolution layers, a standard feed-forward layer followed by a final output layer.",
"Formally, the word sequence w = ( w 1 , . . . , w n ) is passed to the word embedding layer that maps each word w i to its embedding vector x i using an embedding matrix W .",
"The sequence of word embedding vectors x = ( x 1 , . . . , x n ) is processed by a stack of L convolutional layers 2 , each with a kernel width of l conv , d conv filter maps and RELU activation function f (Nair and Hinton, 2010).",
"The final output of these convolution layers is a sequence of abstract representations h L = ( h L 1 , . . . , h Ln ) that incorporate the immediate context of each word by means of the learned convolution operations.",
"The hidden states h Li of the last convolution layer are processed by a regular feed-forward layer to further increase the model's capacity and the resulting sequence is passed to the output layer.",
"In a last step, each hidden state is projected to a probability distribution over all possible output tags q i = ( q Bi , q Ii , q Oi ) using a standard feed-forward layer with weights W tag , bias b tag and a softmax activation function.",
"Since the prediction of each tag can be interpreted as a classification, the network is trained to minimize the categorical cross-entropy between expected tag distribution p i and predicted tag distribution q i of each word i : H ( p i , q i ) = (cid:88) t T p ti log( q ti ) , where T = { I, O, B } is the set of IOB tags, p ti { 0 , 1 } is the expected probability of tag t and q ti [0 , 1] the predicted probability.",
"Figure 1 depicts the sequence labeling architecture.",
"2 The input sequences are padded with zeros to allow the application of the convolution operations to the edge words.",
"Our cross-lingual model works purely with cross-lingual embeddings that have been trained on monolingual datasets and in a second step have been aligned across languages.",
"In fact, the embeddings are pre-computed in an offline fashion and are not adapted while training the convolutional network on data from a specific language.",
"As the inputs to the convolutional network are only the cross-lingual embeddings, the network can be applied to any language for which the embeddings have been aligned.",
"Since the word embeddings for source and target language share a common vector space, the shared parts of the target language model are able to process data samples from the completely unseen target language and perform accurate prediction i.e. enabling zero-shot cross-lingual extraction of opinion target expressions.",
"We rely on two approaches to compute embeddings that are aligned across languages.",
"Both methods rely on fastText (Bojanowski et al., 2017) to compute monolingual embeddings trained on Wikipedia articles.",
"The first method is the one proposed by Smith et al. (2017), which computes a singular value decomposition (SVD) on a dictionary of translated word pairs to obtain an optimal, orthogonal projection matrix from one space into the other.",
"We refer to this method as SVD-aligned .",
"We use these embeddings 3 in our experiments in Sections 3.3, 3.4 and 3.6.",
"The second method proposed by Lample et al. (2018) performs the alignment of embeddings 3 Obtained from: https://github.com/ Babylonpartners/fastText_multilingual across languages in an unsupervised fashion, without requiring translation pairs.",
"The approach uses adversarial training to initialize the cross-lingual mapping and a synthetically generated bilingual dictionary to fine-tune it with the Procrustes algorithm (Schonemann, 1966).",
"We refer to the multilingual embeddings 4 from Lample et al. (2018) as ADV-aligned .",
"These are used in Section 3.5.",
"In this section, we investigate the proposed zero-shot cross-lingual approach and evaluate it on the widely used dataset of Task 5 of the SemEval 2016 workshop.",
"With our evaluation, we answer the following research questions: RQ1: To what degree is the model capable of performing OTE extraction for unseen languages?",
"RQ2: Is there a benefit in training on more than one source language?",
"RQ3: What improvements can be expected when a small amount of samples for the target language are available?",
"RQ4: How big is the impact of the used alignment method on the OTE extraction performance?",
"Before we answer these questions, we give a brief overview over the used datasets and resources.",
"As part of Task 5 of the SemEval 2016 workshop (Pontiki et al., 2016), a collection of datasets for aspect-based sentiment analysis on various languages and domains was published.",
"Due to its relatively large number of samples and high coverage of languages and domains, the datasets are commonly used to evaluate ABSA approaches.",
"To answer our research questions, we make use of a selection of the available datasets.",
"We evaluate our cross-lingual approach on the available datasets for the restaurant domain for the 5 languages Dutch ( nl ), English ( en ), Russian ( ru ), Spanish ( es ) and Turkish ( tr ) 5 .",
"Table 1 gives a brief overview of the used datasets.",
"4 Obtained from: https://github.com/ facebookresearch/MUSE 5 We tried to include the dataset of French reviews in our evaluation but the provided download script no longer works.",
"In all our experiments, we report F 1 -scores for the extracted opinion target expressions computed on exact matches of the character spans as in the original SemEval task (Pontiki et al., 2016).",
"As described in Section 2.2, our model relies on pretrained multilingual embeddings.",
"For both SVD-aligned and ADV-aligned , we use the embeddings as provided by the original authors.",
"However, we restrict our vocabulary to the most frequent 50,000 words per language 6 to reduce memory consumption.",
"For all experiments, we fix our model architecture to 5 convolution layers with each having a kernel size of 3, a dimensionality of 300 units and a ReLU activation function (Nair and Hinton, 2010).",
"The penultimate feed-forward layer has 300 dimensions and a ReLU activation, as well.",
"We apply dropout (Srivastava et al., 2014) on the word embedding layer with a rate of 0.3 and between all other layers with 0.5.",
"The word embeddings and the penultimate layer are L1-regularized (Ng, 2004).",
"The network's parameters are optimized using the stochastic optimization technique Adam (Kingma and Ba, 2015).",
"We optimize the number of training epochs for each model using early stopping (Caruana et al., 2000) but do not tune other hyperparameters of our models.",
"We always pick 20% of our available training data for the validation process.",
"For the zero-shot scenario, this entails that we optimize the number of epochs on the source language and not on the target language to simulate true zero-shot learning.",
"6 As appearing in the respective embedding files.",
"In this section, we present our evaluation for zero-shot learning.",
"We first examine a setting with a single source language.",
"Then, we evaluate the effect of cross-lingual learning from multiple source languages.",
"evaluation addresses our first research question:",
"To answer this question, we perform a set of experiments in the zero-shot setting.",
"We train a model on the training portion of a source language and evaluate the model performance on all possible target languages.",
"Figure 2 shows the obtained scores.",
"The reported results are averaged over 10 runs with different random seeds.",
"The main diagonal represents results of models both trained and tested on target language data.",
"We considered these our monolingual baselines.",
"In general, the proposed approach achieves relatively high scores for some language pairs, although with large performance differences depending on the exact source and target language pairs.",
"Looking at the absolute scores, the best performing cross-lingual language pair is en es with an F 1 -score of 0.5.",
"This is followed by en nl at 0.46.",
"The lowest is es tr with an F 1 -score of 0.14.",
"When considering the results relative to their respective monolingual baselines, the highest relative performance is achieved by en nl at 77% of a nl nl model, followed by en es and ru nl , which both reach an F-Measure of about 74%.",
"The weakest performing language pair is still es tr at 29% relative performance.",
"In general, the Turkish language seems to benefit the least from the cross-lingual transfer learning, while Russian is on average the best source language in terms of relative performance achievement for the target languages.",
"Overall, the presented results show that it is in fact possible for most considered languages to train a model for OTE extraction without ever using any annotated data in that target language.",
"Multiple Source Languages In the next experiment, we want to address our second research question: en es nl ru tr target language en es nl ru tr s o u r c e l a n g u a g e 0.66 0.5 0.46 0.37 0.17 0.43 0.68 0.29 0.28 0.14 0.45 0.44 0.6 0.37 0.17 0.42 0.49 0.45 0.56 0.3 0.33 0.42 0.34 0.35 0.48 Figure 2: Zero-shot F 1 -scores for cross-lingual learning from a single source to a target language.",
"As we explained in Section 2.2, our approach allows us to train and test on any number of source and target languages, provided that we have aligned word embeddings for each considered language.",
"In order to answer our second research question, we train a model on the available training data for all but one language and perform prediction on the test data for the left-out language.",
"The results for these experiments are summarized in Table",
"2. We can see that all languages with the exception of Turkish seem to profit from a cross-lingual transfer setting with multiple source languages.",
"The absolute improvements are in the range of 6 to 8 points in F 1 -Score while the performance on Turkish samples drops by 3 points.",
"We can summarize that we can obtain substantial improvements for most languages when training on a combination of multiple source languages.",
"In fact, for en , es , nl and ru , the results of our cross-lingual models trained on all other languages reach between 78% to 87% relative performance of a model trained with target language data.",
"While our goal is to reduce the effort of annotating huge amounts of data in a target language to which the model is to be transferred, it might still be reasonable to provide a few annotated samples for a target language.",
"Our next research question addresses this issue: RQ3: What improvements can be expected when a small amount of samples for the target language are available?",
"We answer this question by training our models jointly on a source language dataset as well as a small amount of target language samples and compare this to a baseline model that only uses target language samples.",
"By gradually increasing the available target samples, we can directly observe their benefit on the test performance.",
"Figure 3 shows a visualization for the source language en and the target languages es , nl , ru , and tr .",
"We can immediately see that a monolingual model requires at least 100 target samples to produce meaningful results as opposed to a cross-lingual model that performs well with source language samples alone.",
"Training on increasing amounts of target samples improves the model performances monotonically for each target language and the model leveraging the bilingual data consistently outperforms the monolingual baseline model.",
"The benefits of the source language data are especially pronounced when very few target samples are available, i.e. less than 200.",
"As an example, a model trained on bilingual data using all available English samples and 200 Dutch samples is competitive with a monolingual model trained on 1000 Dutch samples (0.55 vs. 0.56).",
"As one would expect, the results in Table 2 and Figure 3 suggest that training the model on more data samples leads to a better performance.",
"Since our model can leverage the data from all languages simultaneously, we can exhaust our resources and train an instance of our model that has access to all training data samples from all languages, including the target training data.",
"This is reflected by the dashed line in Figure 3.",
"We see, however, that the model cannot leverage the other source languages in this setting. [Figure 3: Cross-lingual results for increasing numbers of training samples from the target language.]",
"The previous experiments show that we can achieve good performance in a cross-lingual setting for OTE extraction using the multilingual word embeddings proposed by Smith et al. (2017).",
"Now we address our final research question: RQ4: How big is the impact of the used alignment method on the OTE extraction performance?",
"With our final research question, we compare our previous results to an alternative method of aligning word embeddings in multiple languages.",
"We repeat our experiments in Section 3.3 using the embeddings of Lample et al. (2018) which we refer to as ADV-aligned .",
"To enable a direct comparison to the zero-shot results in Section 3.3, we report absolute differences in F1-score to the scores obtained with SVD-aligned for all source and target language combinations.",
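The SVD-aligned method referred to above is based on the orthogonal Procrustes solution (Schönemann, 1966; Smith et al., 2017): find an orthogonal map that aligns source-language vectors to their dictionary translations. A minimal sketch, assuming numpy and toy dictionary pairs (the function name and data are illustrative, not the authors' code):

```python
import numpy as np

def procrustes_align(X, Y):
    """Find the orthogonal W minimizing ||XW - Y||_F (orthogonal Procrustes).

    X: source-language vectors for dictionary pairs, shape (k, d)
    Y: target-language vectors for the same pairs, shape (k, d)
    Returns W of shape (d, d) with W @ W.T = I.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy example: a rotated copy of random vectors should be recovered exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
theta = 0.3
R = np.eye(4)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
Y = X @ R  # a perfectly "translatable" embedding space
W = procrustes_align(X, Y)
print(np.allclose(X @ W, Y))  # True: the rotation is recovered
```

Because W is constrained to be orthogonal, the alignment preserves distances and angles within each embedding space, which is the key property exploited for zero-shot transfer.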
"As can be seen in Figure 4, both methods perform well overall, albeit differently for specific language pairs.",
"In a monolingual setting (i.e., the main diagonal), ADV-aligned performs slightly worse than SVD-aligned with the exception of en→en.",
"Using ADV-aligned, Spanish appears to be a more effective source language than with SVD-aligned, as the average performance is about 2.9 points higher.",
"[Figure 4: Zero-shot results comparing the multilingual embeddings ADV-aligned to SVD-aligned.]",
"It can also be observed that the cross-lingual transfer learning works better for English as a target language using ADV-aligned since the average performance is about 2.2 points higher than for SVD-aligned .",
"The opposite is true for Dutch as a target language, which shows a reduction in performance by 2.1 points on average.",
"Overall, for 13 of the 25 language pairs, the embeddings based on SVD-aligned perform better than embeddings aligned with ADV-aligned .",
"In this last part of our evaluation, we want to put our work into perspective with prior systems for opinion target extraction on the SemEval 2016 restaurant datasets.",
"We report results for our multilingual model that is trained on the combined training data of all languages and evaluated on the corresponding test datasets.",
"We compare our model to the respective state-of-the-art for each language in Table 3.",
"We can see that the competition is strongest for English, where we fall behind recent monolingual systems.",
"This corresponds to rank 7 of 19 of the original SemEval competition.",
"Regarding the other languages, we see that we are close to the best Spanish and Dutch systems and even clearly outperform systems for Russian and Turkish by at least 7 points in F1-score.",
"With that, we present the first approach on this task to achieve such competitive performances for a variety of languages. [Table 3: Overview of the current state-of-the-art for opinion target extraction for 5 languages. Scores (F1): Toh and Su (2016) 0.723 (en); Àlvarez-López et al. (2016) 0.666, 0.685; Kumar et al. (2016) 0.685, 0.697, 0.644; Pontiki et al. (2016)* 0.441 (en), 0.520 (es), 0.506 (nl), 0.493 (ru), 0.419 (tr); Li and Lam (2017) 0.734 (en); all→target (Ours) 0.660 (en), 0.687 (es), 0.624 (nl), 0.567 (ru), 0.490 (tr).]",
"The presented experiments shed light on the performance of our proposed approach under various circumstances.",
"In the following, we want to discuss its limitations and consider explanations for the performance differences between language pairs.",
"Model Limitations The core of our proposed sequence labeling approach consists of aligned word embeddings and shared CNN layers.",
"Due to the limited context of a CNN layer, the model can only base its decisions for each word on the local information around that word.",
"In many cases, this information is sufficient, since most opinion target expressions are adjective-noun phrases (footnote 7), which are identified well enough by the local context for most considered languages.",
"As future work, it is worth investigating to what extent our findings translate to more complex model architectures that have been proposed for OTE extraction, such as memory networks or attention-based models.",
"Language Characteristics Due to the inherent variability of natural languages and of the used datasets, it is difficult to identify the exact reasons for the observed performance differences between language pairs.",
"However, we suspect that language features such as word order, inflection, or agglutination affect the compatibility of languages.",
"As an example, Turkish is considered a highly agglutinative language, that is, complex words are composed by attaching several suffixes to a word stem.",
"[Footnote 7: 90% of OTEs in the English dataset consist of zero or more adjectives followed by at least one noun.]",
"This sets it apart from the other 4 languages.",
"This language feature might present a difficulty for our approach, since the appended suffixes are not optimally reflected in the tokenization process and the word embeddings used.",
"An approach that performs alignment of languages on subword units might alleviate this problem and lead to performance gains for language pairs with similar inflection rules.",
"Syntactic regularities such as word order might also play a role in our transfer learning approach.",
"It is reasonable to assume that the CNN layers of our approach pick up patterns in the word order of a source language that are indicative of an opinion target expression, e.g. the [NOUN] is good .",
"When applying such a model to a target language with drastically different word order regularities, these patterns might not appear as such in the target language.",
"For the considered languages, we see the following characteristics: while English and Spanish are generally considered to follow a Subject-Verb-Object (SVO) order, Dutch largely exhibits a combination of SOV and SVO cases.",
"Turkish and Russian are overall flexible in their word order and allow a variety of syntactic structures.",
"In the case of Turkish, its morphological and syntactic features seem to explain some of the relatively low results.",
"However, with the small sample of languages and the many potential influencing factors at play, we are aware that it is not possible to draw any strong conclusions.",
"Further research has to be conducted in this direction to answer open questions.",
"Our work brings together the domains of opinion target extraction on the one hand and cross-lingual learning on the other.",
"In this section, we give a brief overview of both domains and point out parallels to previous work.",
"Opinion Target Extraction San Vicente et al. (2015) present a system that addresses opinion target extraction as a sequence labeling problem based on a perceptron algorithm with token, word shape and clustering-based features.",
"Toh and Wang (2014) propose a Conditional Random Field (CRF) as a sequence labeling model that includes a variety of features such as Part-of-Speech (POS) tags and dependency tree features, word clusters and features derived from the WordNet taxonomy.",
"The model is later improved using neural network output probabilities (Toh and Su, 2016) and achieved the best results on the SemEval 2016 dataset for English restaurant reviews.",
"Jakob and Gurevych (2010) follow a very similar approach that addresses opinion target extraction as a sequence labeling problem using CRFs.",
"Their approach includes features derived from words, Part-of-Speech tags and dependency paths, and performs well in a single and cross-domain setting.",
"Kumar et al. (2016) present a CRF-based model that makes use of a variety of morphological and linguistic features and is one of the few systems that submitted results for more than one language for the SemEval 2016 ABSA challenge.",
"The strong reliance on high-level NLP features, such as dependency trees, named-entity information and WordNet features restricts its wide applicability to resource-poor languages.",
"Among neural network models, Poria et al. (2016) and Jebbara and Cimiano (2016) use deep convolutional neural networks (CNN) with Part-of-Speech (POS) tag features.",
"Poria et al. (2016) also extend their base model using linguistic rules.",
"Wang et al. (2017) use coupled multi-layer attentions to extract opinion expressions and opinion targets jointly.",
"This approach, however, relies on additional annotations for opinion expressions alongside annotations for the opinion targets.",
"Li and Lam (2017) propose two LSTMs with memory interaction to detect aspect and opinion terms.",
"In order to generate opinion expression annotations for the SemEval dataset, a sentiment lexicon is used in combination with high precision dependency rules.",
"For a more comprehensive overview of ABSA and OTE extraction approaches we refer to Pontiki et al. (2016).",
"Cross-Lingual and Zero-Shot Learning for Sequence Labelling With the CLOpinionMiner, Zhou et al. (2015) present a method for cross-lingual opinion target extraction that relies on machine translation.",
"The approach derives an annotated dataset for a target language by translating the annotated source language data.",
"Part-of-Speech tags and dependency path-features are projected into the translated data using the word alignment information of the translation algorithm.",
"The approach is evaluated for English to Chinese reviews.",
"A drawback of the presented method is that it requires access to a strong machine translation algorithm for source to target language that also provides word alignment information.",
"Additionally, it builds upon NLP resources that are not available for many potential target languages.",
"Addressing the task of zero-shot spoken language understanding (SLU), Upadhyay et al. (2018) follow a similar approach as our work.",
"They use the aligned embeddings from Smith et al. (2017) in combination with a bidirectional RNN and target zero-shot SLU for Hindi and Turkish.",
"Overall, our work differs from the related work by presenting a simple model for the zero-shot extraction of opinion target expressions.",
"By using no annotated target data or elaborate NLP resources, such as Part-of-Speech taggers or dependency parsers, our approach is easily applicable to many resource-poor languages.",
"In this work, we presented a method for cross-lingual and zero-shot extraction of opinion target expressions which we evaluated on 5 languages.",
"Our approach uses multilingual word embeddings that are aligned into a single vector space to allow for cross-lingual transfer of models.",
"Using English as a source language in a zero-shot setting, our approach was able to reach an F1-score of 0.50 for Spanish and 0.46 for Dutch.",
"This corresponds to relative performances of 74% and 77% compared to a baseline system trained on target language data.",
"By using multiple source languages, we increased the zero-shot performance to F1-scores of 0.58 and 0.53, respectively, which correspond to 85% and 87% in relative terms.",
"We investigated the benefit of augmenting the zero-shot approach with additional data points from the target language.",
"Here, we observed that we can save several hundreds of annotated data points by employing a cross-lingual approach.",
"Among the 5 considered languages, Turkish seemed to benefit the least from cross-lingual learning in all experiments.",
"The reason for this might be that Turkish is the only agglutinative language in the dataset.",
"Further, we compared two approaches for aligning multilingual word embeddings in a single vector space and found their results to vary for individual language pairs but to be comparable overall.",
"Lastly, we compared our multilingual model with the state-of-the-art for all languages and saw that we achieve competitive performances for some languages and even present the best system for Russian and Turkish.",
"Acknowledgments This work was supported in part by the H2020 project Prêt-à-LLOD under Grant Agreement number 825182.",
"Soujanya Poria, Erik Cambria, and Alexander Gel-bukh.",
"2016.",
"Aspect Extraction for Opinion Mining with a Deep Convolutional Neural Network.",
"Knowledge-Based Systems, 108:42–49.",
"Peter H. Schönemann.",
"1966.",
"A generalized solution of the orthogonal procrustes problem.",
"Psychometrika, 31(1):1–10.",
"Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla.",
"2017.",
"Offline bilingual word vectors, orthogonal transformations and the inverted softmax.",
"In International Conference on Learning Representations .",
"Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.",
"2014.",
"Dropout: A Simple Way to Prevent Neural Networks from Overfitting.",
"Journal of Machine Learning Research, 15:1929–1958.",
"Erik F. Tjong Kim Sang and Jorn Veenstra.",
"1999.",
"Representing text chunks.",
"In Proceedings of European Chapter of the ACL (EACL), pages 173–179."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"method",
"objective",
"result",
"objective",
"result",
"result",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"method",
"method",
"method",
"method",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Transferring knowledge to a small model through distillation has attracted great interest in recent years.",
"Prevailing methods transfer the knowledge derived from mono-granularity language units (e.g., token-level or sample-level), which is not enough to represent the rich semantics of a text and may lose some vital knowledge.",
"Besides, these methods form the knowledge as individual representations or their simple dependencies, neglecting abundant structural relations among intermediate representations.",
"To overcome the problems, we present a novel knowledge distillation framework that gathers intermediate representations from multiple semantic granularities (e.g., tokens, spans and samples) and forms the knowledge as more sophisticated structural relations specified as the pair-wise interactions and the triplet-wise geometric angles based on multi-granularity representations.",
"Moreover, we propose distilling the well-organized multi-granularity structural knowledge to the student hierarchically across layers.",
"Experimental results on GLUE benchmark demonstrate that our method outperforms advanced distillation methods.",
"Recent years have witnessed a surge of pre-trained language models (Devlin et al., 2019; Lewis et al., 2020; Clark et al., 2020; Brown et al., 2020).",
"Building upon the transformer architecture (Vaswani et al., 2017) and pre-trained on large-scale corpora using self-supervised objectives, these PLMs have achieved remarkable success in a wide range of natural language understanding and generation tasks.",
"Despite their high performance, these PLMs usually suffer from high computation and memory costs, which hinders them from being deployed on resource-limited platforms and embedded devices.",
"[Corresponding authors: Chongyang Tao and Dongyan Zhao.]",
"Various attempts have been made to compress the huge PLMs into small ones with minimum performance degradation.",
"As one of the main approaches, knowledge distillation (Hinton et al., 2015) utilizes a large and powerful teacher model to transfer the knowledge to a small student model.",
"Based on the teacher-student framework, Jiao et al. (2020); Wang et al. (2020) distilled the token-level representations and attention dependencies to the student, Sanh et al. (2019); Sun et al. (2019) taught the student to mimic the output logits of the teacher, and Sun et al. (2020) enforced the student's representation to be close to the teacher's while pushing negative samples to be far apart.",
"Although proven effective, existing approaches have some flaws.",
"For one thing, these distillation methods only adopted the representations of mono-granularity language units (i.e., token-level or sample-level), while neglecting other granularity.",
"For another, their distillation objectives either matched the corresponding representations between the teacher and the student or aligned the attention dependencies, failing to capture more sophisticated structural relations between the representations.",
"To address these issues, in this paper we propose a novel knowledge distillation framework named Multi-Granularity Structural Knowledge Distillation (MGSKD) by answering three research questions: (1) which granularity should the knowledge be, (2) what form of knowledge is effective to transfer and (3) how to teach the student using the knowledge.",
"For the which question, given that natural languages have multiple semantic granularities, we consider the intermediate representations in three granularities: tokens, spans and samples.",
"Specifically, we first take the sub-word tokens as the smallest granularity, then select phrases and whole words as spans since they hold complete meanings, and finally treat the whole input texts as samples.",
"We use mean-pooling to obtain the representations of spans and samples based on token representations.",
"For the what question, we propose to leverage the sophisticated structural relations between the representations as the knowledge.",
"Concretely, instead of aligning the corresponding representations of the teacher and the student, we propose to form the knowledge as the pair-wise interactions and the triplet-wise geometric angles of a group of representations.",
"For the how question, following the recent findings that the bottom layers capture syntactic features while the upper layers encode semantic features (Jawahar et al., 2019), we conduct hierarchical distillation where the bottom layers of the student are taught token-level and span-level knowledge while the upper layers learn sample-level knowledge.",
"We conduct comprehensive experiments on standard language understanding benchmark GLUE (Wang et al., 2018).",
"Experimental results demonstrate that our knowledge distillation framework outperforms strong baseline methods.",
"Surprisingly, MGSKD achieves comparable or better performance than BERT base on most of the tasks on GLUE, while being much smaller and faster.",
"Our contributions in this paper are threefold: We are the first to leverage multi-granularity semantic representations in language (i.e., the representations of tokens, spans and samples) for knowledge distillation.",
"We propose to form the knowledge as sophisticated structural relations specified as the pair-wise interactions and the triplet-wise geometric angles based on multi-granularity representations.",
"Language Model Compression.",
"Pre-trained language models (Devlin et al., 2019; Clark et al., 2020; Brown et al., 2020) perform remarkably well on various applications but at the cost of high computation and memory usage.",
"To deploy these powerful models into resource-scarce scenarios, various attempts have been made to compress the language models into small ones.",
"Quantization methods (Zafrir et al., 2019; Shen et al., 2020; Zhang et al., 2020; Bai et al., 2021) convert the model parameters to lower precision.",
"Pruning approaches identify then remove unimportant individual weights or structures (Michel et al., 2019; Fan et al., 2019; Gordon et al., 2020; Hou et al., 2020).",
"Weight sharing techniques (Dehghani et al., 2018; Lan et al., 2019) allow the model to reuse the transformer layer multiple times to reduce parameters.",
"Knowledge Distillation.",
"Knowledge distillation (Hinton et al., 2015) is another major line of research to do model compression, which is the main concentration in this paper.",
"Hinton et al. (2015) first proposed to minimize the KL-divergence between the predicted distributions of the teacher and the student.",
"Sanh et al. (2019); Sun et al. (2019); Liang et al. (2020) adopted this objective to teach the student on masked language modeling or text classification tasks.",
"Romero et al. (2014) proposed to directly match the feature activations of the teacher and the student.",
"Jiao et al. (2020) followed the idea and took the intermediate representations in each transformer layer of the teacher as one of the knowledge to be transferred.",
"Tian et al. (2019) proposed a contrastive distillation framework where the teacher's representations were treated as positives to the corresponding student's representations.",
"Sun et al. (2020); Fu et al. (2021) customized this idea to language model compression and proved its effectiveness.",
"Researchers also attempted to use the mutual relations of representations as the knowledge to transfer.",
"In the literature of image classification, Peng et al. (2019); Tung and Mori (2019); Park et al. (2019) pointed out that the relations of the image representations of the teacher should be preserved in the student's feature space, and adopted a series of geometric measurements to model the sample relations.",
"For distilling transformer models, Park et al. (2021) enforced the relations across tokens and layers between the teacher and the student to be consistent.",
"Jiao et al. (2020); Wang et al. (2020, 2021) used the attention dependencies between tokens to teach the student.",
"In this paper, we propose to transfer the multi-granularity knowledge to the student.",
"Different from previous works that only considered a single granularity of representations, we jointly transfer the token-level, span-level and sample-level structural knowledge.",
"And compared with Shao and Chen (2021), which considered the multi-granularity visual features in an image as the knowledge, our method works in a different modality, presents a different definition of granularity,",
"and prepares the multi-granularity knowledge as the structural relations among representations.",
"We propose Multi-Granularity Structural Knowledge Distillation, a novel framework to distill the knowledge from a large transformer language model to a small one.",
"Different from previous works that transferred the knowledge derived from either token-level or sample-level outputs, we prepare the knowledge in three semantic granularities: token-level, span-level and sample-level.",
"Given some granularity of representations of the teacher model, we form the knowledge as the structural relations, i.e., the pair-wise interactions and the triplet-wise geometric angles, between the representations.",
"We then distill the well-organized structural knowledge to the student hierarchically across layers, where the token-level and the span-level knowledge are transferred to the bottom layers to provide more syntactic guidance while the sample-level knowledge is transferred to the upper layers to offer more help of semantic understanding.",
"The framework of MGSKD is illustrated in Figure 1.",
"Natural languages have multiple granularities of conceptual units.",
"In the context of pre-trained transformers (Devlin et al., 2019), the basic unit is the tokens produced by sub-word tokenizers (Wu et al., 2016; Radford et al., 2019).",
"Several consecutive tokens become a text span, and the sample is comprised of all the tokens it contains.",
"Existing knowledge distillation approaches (Jiao et al., 2020; Wang et al., 2020; Sun et al., 2020; Fu et al., 2021) focused on one granularity of representation, neglecting that texts are built upon language units from multiple granularities.",
"Intuitively, incorporating multi-granularity representations in knowledge distillation may provide more guidance since the student can be taught how to compose the semantic concepts from small granularities to larger ones.",
"Therefore, we propose to gather multi-granularity representations for knowledge distillation.",
"We construct three granularities of representations: tokens, spans that hold complete meanings, and samples.",
"Token Representation.",
"The first granularity is the sub-word token, which is the foundation of high-level granularity.",
"Given an input text, a tokenizer such as WordPiece (Wu et al., 2016) splits it into n tokens x = [t_1, t_2, ..., t_n].",
"The tokens are converted to a sequence of continuous representations E = [e_1, e_2, ..., e_n] ∈ R^{n×d} through the embedding layer.",
"For the sake of clarity, we treat the embedding layer as the 0-th layer and set H^0 = E.",
"Then the token embeddings H^0 are passed to L stacked transformer layers.",
"The l-th layer takes the output representations H^{l-1} of the previous layer as its input, and returns the updated representations H^l using multi-head attention (MHA) and position-wise feed-forward network (FFN).",
"Herein, we obtain L+1 layers of token representations {H^l}_{l=0}^{L}, where H^l ∈ R^{n×d}.",
"Span Representation.",
"The second granularity is the span, which is comprised of several consecutive tokens.",
"Different from SpanBERT (Joshi et al., 2020) that randomly selects token spans whose start positions and lengths are sampled from some distributions for masked language modeling, we propose to extract spans that have complete meanings.",
"Widely adopted sub-word tokenizers in pre-trained transformers split some of the English words into several sub-word tokens.",
"We consider these whole words consisting of multiple sub-word tokens, and phrases, as meaningful spans.",
"Sub-word tokens for whole words are easy to obtain using WordPiece tokenizer (Wu et al., 2016).",
"While for phrase identification, we train a classifier-based English chunker on CoNLL-2000 corpus (Tjong Kim Sang and Buchholz, 2000) following the instructions 1 .",
"We then use the trained chunker to extract noun phrases (NP), verb phrases (VP), and prepositional phrases (PP).",
"These identified phrases are tokenized by WordPiece tokenizer to obtain tokens.",
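The phrase-span extraction step can be illustrated with a deliberately simplified sketch: instead of the trained classifier-based chunker described above, a toy rule groups zero or more adjectives followed by one or more nouns into a span (the function name, tag inventory, and example sentence are all illustrative assumptions):

```python
def np_chunks(tagged):
    """Greedy toy chunker: 'zero or more adjectives, then one or more nouns'.

    tagged: list of (token, pos) pairs; returns (start, length) token spans.
    A drastic simplification of the classifier-based chunker described above.
    """
    spans, i, n = [], 0, len(tagged)
    while i < n:
        j = i
        while j < n and tagged[j][1] == "JJ":   # consume adjectives
            j += 1
        k = j
        while k < n and tagged[k][1] in ("NN", "NNS"):  # consume nouns
            k += 1
        if k > j:  # at least one noun: emit the adjective+noun span
            spans.append((i, k - i))
            i = k
        else:
            i += 1
    return spans

sent = [("the", "DT"), ("spicy", "JJ"), ("tuna", "NN"),
        ("rolls", "NNS"), ("were", "VBD"), ("great", "JJ")]
print(np_chunks(sent))  # [(1, 3)] -> "spicy tuna rolls"
```

A real chunker would of course also handle VP and PP patterns and learn its decisions from the CoNLL-2000 corpus rather than hard-coding them.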
"Herein, we can obtain n_s token spans x_span = [s_1, s_2, ..., s_{n_s}], where s_i = [t_j, t_{j+1}, ..., t_{j+n_{s_i}-1}] denotes the i-th span that starts at the j-th token and contains n_{s_i} tokens.",
"We then build span representations based on token representations using mean pooling: h^l_i = Pool(H^l_{j:j+n_{s_i}}), (1) where h^l_i ∈ R^d is the representation of the i-th span in layer l.",
"We obtain L+1 layers of span representations as {H^l}_{l=0}^{L}, where H^l ∈ R^{n_s×d}.",
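The mean pooling of Equation (1) over token spans can be sketched in plain Python (toy vectors; the helper names and the (start, length) span encoding are assumptions of this sketch):

```python
def mean_pool(vectors):
    """Average a list of equal-length vectors element-wise."""
    n = len(vectors)
    return [sum(v[k] for v in vectors) / n for k in range(len(vectors[0]))]

def span_representations(H, spans):
    """H: token representations for one layer, a list of d-dim lists.
    spans: (start_index, length) pairs; returns one pooled vector per span."""
    return [mean_pool(H[j:j + n_si]) for j, n_si in spans]

# Toy layer with 4 tokens of dimension 2; one span covering tokens 1..2.
H = [[1.0, 0.0], [2.0, 2.0], [4.0, 6.0], [0.0, 1.0]]
print(span_representations(H, [(1, 2)]))  # [[3.0, 4.0]]
```

The sample-level representation of Equation (2) is the same operation applied to the whole token sequence, i.e. `mean_pool(H)`.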
"Sample Representation.",
"The third granularity is the input text sample itself.",
"Based on token representations again, we use mean pooling to aggregate all the token representations in a text sample to form the sample representation: h^l = Pool(H^l). (2) Herein, we get L+1 layers of sample representations as {h^l}_{l=0}^{L}, where h^l ∈ R^d.",
"With multi-granularity representations, we then need to formulate the specific knowledge we aim to transfer from the teacher to the student.",
"Considering that an element holds its meaning only when it is put into a semantic space where it has various relations to other elements, we propose that the knowledge is better specified as the structural relations of the representations in a semantic space, instead of the individual representations themselves. [Footnote 1: https://www.nltk.org/book/ch07.html]",
"Therefore, instead of directly matching each hidden representation between the teacher and the student, we propose to extract structural relations from multi-granularity representations as the knowledge to teach the student.",
"We first project the representations into multiple sub-spaces, then we extract two types of structural knowledge: pairwise interactions and triplet-wise geometric angles.",
"Multi-head Modeling.",
"A recent study by Wang et al. (2021) pointed out that distilling knowledge with multiple relation heads helps the student learn better.",
"Therefore, before extracting structural knowledge for intermediate representations, we first project them into m sub-spaces, which we call multi-head modeling.",
"Specifically, given a set of n representations R ∈ R^{n×d}, we linearly project them into m sub-spaces whose dimensions are d/m.",
"We use R' ∈ R^{m×n×(d/m)} to denote the multi-head representations, which are then used for extracting structural knowledge.",
"Pair-wise Interaction.",
"Given two vectors r i , r j R d/m in a sub-space, we calculate their interaction as their scaled dot product: ( r i , r j ) = r i r j (cid:112) d/m.",
"Herein, we obtain the multi-head pair-wise interaction features for each pair as P R m n n , where P h,i,j denotes the interaction between the -th representation and the j -th representation in the sub-space of the h -th relation head.",
"Note that P can be considered as the unnormalized self-attention (Vaswani et al., 2017) scores for the given representations; the difference lies in that in our calculation the queries are identical to the keys.",
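A sketch of the pair-wise interaction computation of Eq. 3 over all heads (our illustration; the tensor layout and use of `einsum` are assumptions):

```python
import numpy as np

def pairwise_interactions(R_heads):
    """Scaled dot products phi(r_i, r_j) = r_i . r_j / sqrt(d/m) for every
    pair within each head: input m x n x (d/m) -> output m x n x n.
    Equivalent to unnormalized self-attention scores with queries == keys."""
    d_head = R_heads.shape[-1]
    return np.einsum('hid,hjd->hij', R_heads, R_heads) / np.sqrt(d_head)
```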
"Triplet-wise Geometric Angle.",
"Pair-wise interaction features only consider two vectors at once, which is not enough to represent the complicated structural relations between representations in the high-dimensional space.",
"Therefore, we propose to model the high-order relations as the geometric angles for triplets of vectors.",
"(Footnote 2: For the student model, its representations are linearly projected into intermediate states whose dimensions match the teacher model's hidden dimension, so that they can be split into m sub-spaces as in the teacher model.)",
"Specifically, given a triplet of representations r_i, r_j, r_k, we calculate their geometric angle as: ψ(r_i, r_j, r_k) = cos ∠(r_i, r_j, r_k) = ⟨e_{ij}, e_{kj}⟩, where e_{ij} = (r_i − r_j)/‖r_i − r_j‖_2 and e_{kj} = (r_k − r_j)/‖r_k − r_j‖_2. (4)",
"We can calculate the geometric angles for all the triplets and obtain T ∈ R^{m×n×n×n}, where T_{h,i,j,k} stands for the angle ∠(r_i, r_j, r_k) in the sub-space of the h-th relation head.",
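The triplet-angle computation of Eq. 4 can be sketched for a single head as follows (an O(n^3) reference implementation for illustration only; names are ours, and the paper restricts triplets via the salience-based selection described next):

```python
import numpy as np

def triplet_angles(R_head):
    """Cosine of the angle at vertex r_j formed by (r_i, r_j, r_k) for all
    triplets in one head: input n x d -> output n x n x n with
    T[i, j, k] = <e_ij, e_kj> where e_ij = (r_i - r_j) / ||r_i - r_j||."""
    diff = R_head[None, :, :] - R_head[:, None, :]   # diff[j, i] = r_i - r_j
    norm = np.linalg.norm(diff, axis=-1, keepdims=True)
    e = diff / np.where(norm == 0, 1.0, norm)        # unit direction vectors
    return np.einsum('jid,jkd->ijk', e, e)
```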
"As the computation complexity increases cubically with n , such a calculation is infeasible when the number of representations is large.",
"Hereby, we propose a two-stage selection strategy to sequentially select important representations to form angles.",
"Similar to Goyal et al. (2020), we assume that the more attention a representation receives from others, the more important it is.",
"Therefore, we first calculate the self-attention distributions A R m n n by applying softmax function on the last dimension of P .",
"Then for the j -th representation, we calculate a global salient score s j by summing up self-attention distributions across all heads and all queries.",
"Based on the score, we pick the top-k1 salient representations as vertices.",
"Next, if the i-th representation is selected as a vertex, we pick the k2 representations with the highest local salient scores to form angles with the vertex.",
"We define the local salient score s_{i,j} as the attention posed by the i-th representation on the j-th representation. The salient scores s_j and s_{i,j} are calculated as follows: s_j = Σ_{h=1}^{m} Σ_{i=1}^{n} A_{h,i,j}, s_{i,j} = Σ_{h=1}^{m} A_{h,i,j}. (5)",
"Therefore, by sequentially selecting salient representations to form angles, we reduce the computation complexity from O ( mn 3 ) to O ( mk 1 k 2 2 ) .",
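The two-stage selection can be sketched as follows (a hedged illustration; tie-breaking and the exact reduction order are our assumptions):

```python
import numpy as np

def select_salient(P, k1, k2):
    """Two-stage salience selection: softmax P over the last dim to get
    attention A (m x n x n); the global score s_j sums A over heads and
    queries to pick the top-k1 vertices; for each vertex i, local scores
    s_{i,j} (summed over heads) pick its k2 partners. This reduces the
    angle computation from O(m n^3) to O(m k1 k2^2)."""
    A = np.exp(P - P.max(-1, keepdims=True))
    A = A / A.sum(-1, keepdims=True)
    s_global = A.sum(axis=(0, 1))                      # s_j over heads, queries
    vertices = np.argsort(-s_global)[:k1]
    s_local = A.sum(axis=0)                            # s_{i,j} over heads
    partners = {int(i): np.argsort(-s_local[i])[:k2] for i in vertices}
    return vertices, partners
```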
"By choosing proper k1 and k2, we can facilitate the computation of triplet-wise geometric angles for any number of representations.",
"We utilize the structural knowledge extraction approach described in Sec. 3.2 to prepare knowledge based on three granularities of representations presented in Sec. 3.1 for distillation.",
"Based on the findings that the bottom layers capture syntactic features while the upper layers encode semantic features (Jawahar et al., 2019), we propose to conduct hierarchical distillation for the student where different granularities of knowledge are transferred to different layers.",
"For a teacher model with L t layers and a student model with L s layers, we first define a layer mapping function g ( ) that maps each student layer to a teacher layer that it learns from.",
"Following previous work (Jiao et al., 2020), we adopt the uniform strategy for g ( ) .",
"Then we transfer token-level and span-level knowledge to the bottom M layers of the student, while leveraging sample-level knowledge to teach its upper L_s + 1 − M layers.",
"Token- and Span-level.",
"Specifically, given the token-level and the span-level representations of the teacher, {H_t^l} and {H̄_t^l} for l = 0, ..., L_t, we use Eq. 3 and Eq. 4 to calculate the pair-wise interactions and the triplet-wise geometric angles among tokens and spans within a single sample, as {P_t^l, P̄_t^l} and {T_t^l, T̄_t^l}.",
"Similarly, we can obtain the structural relations of the student: {P_s^l, P̄_s^l} and {T_s^l, T̄_s^l} for l = 0, ..., L_s.",
"We then teach the student by minimizing the differences between the structural relations of the teacher's and the student's representations: L_token = Σ_{0 ≤ l < M} (ℓ1(P_t^{g(l)}, P_s^l) + ℓ2(T_t^{g(l)}, T_s^l)), L_span = Σ_{0 ≤ l < M} (ℓ1(P̄_t^{g(l)}, P̄_s^l) + ℓ2(T̄_t^{g(l)}, T̄_s^l)). (6)",
"Sample-level.",
"Recall that we obtain {h_t^l} for l = 0, ..., L_t and {h_s^l} for l = 0, ..., L_s for the teacher and the student, where h_t^l, h_s^l ∈ R^d.",
"Different from the structural knowledge of tokens and spans which is modeled within a sample, the sample-level structural relations rely on a group of sample representations.",
"Although the choice of samples may make a difference to the overall performance, here we simply gather all the sample representations in a mini-batch to calculate their structural relations as the sample-level knowledge.",
"Specifically, we only focus on the triplet-wise relations {T_t^l} and {T_s^l}: L_sample = Σ_{M ≤ l ≤ L_s} ℓ2(T_t^{g(l)}, T_s^l). (7)",
"ℓ1 and ℓ2 in Eq. 6 and Eq. 7 are loss functions that measure the distance between the structural relations of the teacher's and the student's representations.",
"We empirically choose MSE for ℓ1 and the Huber loss (δ = 1) for ℓ2.",
"Overall Objectives.",
"The overall distillation objective for multi-granularity structural knowledge distillation is: L_1 = λ1 L_sample + λ2 L_token + λ3 L_span, (8) where λ1, λ2 and λ3 are the weights of the loss terms of the different granularities.",
"After this, we also teach the student to match the prediction distributions with the teacher's for text classification tasks: L_2 = τ² D_KL(z_t/τ ‖ z_s/τ), (9) where z_t and z_s are the predicted probability distributions of the teacher and the student respectively, and τ denotes the temperature.",
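The prediction-matching objective of Eq. 9 resembles standard temperature-scaled distillation, which can be sketched as follows (our reading; the exact normalization and reduction used in the paper may differ):

```python
import numpy as np

def softmax(z, tau):
    """Temperature-scaled softmax over the last dimension."""
    z = z / tau
    z = z - z.max(-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def prediction_distillation_loss(z_t, z_s, tau=1.0):
    """tau^2-scaled KL divergence between teacher and student prediction
    distributions computed from logits z_t, z_s at temperature tau."""
    p_t = softmax(z_t, tau)
    p_s = softmax(z_s, tau)
    return tau ** 2 * np.sum(p_t * (np.log(p_t) - np.log(p_s)))
```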
"We conduct our experiments on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018).",
"Specifically, there are 2 single-sentence tasks: SST-2 (Socher et al., 2013) and CoLA (Warstadt et al., 2019); 3 similarity and paraphrase tasks: MRPC (Dolan and Brockett, 2005), STS-B (Cer et al., 2017) and QQP (Chen et al., 2018); and 4 inference tasks: MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), RTE (Bentivogli et al., 2009) and WNLI (Levesque et al., 2012).",
"Following previous work (Jiao et al., 2020; Wang et al., 2021; Park et al., 2021), we evaluate our method on 8 datasets except WNLI.",
"We report accuracy on 5 datasets: SST-2, QQP, MNLI, QNLI and RTE.",
"We report F1 score on MRPC, Matthews correlation coefficient on CoLA, and Spearman's rank correlation coefficient on STS-B.",
"We focus on task-specific distillation.",
"We follow Jiao et al. (2020) to augment the training sets for each of the GLUE tasks using the code 3 they released.",
"We fine-tune ELECTRA base on the original training sets as the teacher model, and utilize TinyBERT-4-312 4 which is distilled on general corpora as the initialization of our student model.",
"For token-level and span-level distillation, we use 64 relation heads for calculating pair-wise interactions, and 1 relation head for triplet-wise angles due to its huge computation and memory costs.",
"We set k1 = k2 = 20 for calculating angles.",
"For sample-level distillation, we use 64 relation heads and set k 1 and k 2 as the batch size.",
"We distill token-level and span-level knowledge to the bottom-2 layers of the student and distill sample-level knowledge to the other layers.",
"For the structural distillation objective, we set λ1 = 4, λ2 = λ3 = 1 to keep their gradient norms in the same order of magnitude.",
"We first distill the student model using Eq. 8 for 50 epochs on CoLA and 20 epochs on the other tasks.",
"The learning rate is 1e-5 and the batch size is 32.",
"Then we use Eq. 9 to distill the predictions for all tasks except STS-B, since we empirically find that directly fine-tuning after distillation using Eq. 8 yields better performance for it.",
"For QQP and CoLA, we adopt the original training set and distill the student for 10 epochs, while for the other 5 tasks we use the augmented training sets and distill the student for 3 epochs.",
"We set τ as 1.0, the learning rate as 1e-5, and the batch size as 32. (Footnote 3: https://github.com/huawei-noah/Pretrained-Language-Model/blob/master/TinyBERT/data_augmentation.py; Footnote 4: https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) Table 2 (the impact of relation heads; SST-2 / MNLI-(m/mm)): MGSKD m=1: 92.5 / 83.6/82.9; m=4: 92.9 / 83.9/83.3; m=16: 93.3 / 84.3/83.9; m=64: 93.7 / 84.7/84.3; m=128: 93.5 / 84.8/84.2.",
"We release our code to facilitate future research.",
"4.3 Comparison Methods: Medium-sized Student Models.",
"Most of the existing knowledge distillation methods are conducted on medium-sized student models which have 6 transformer layers, 768 hidden neurons, 12 attention heads, and overall 66M parameters.",
"We adopt 3 of them as baselines: DistilBERT (Sanh et al., 2019), MiniLMv2 (Wang et al., 2021) and CKD (Park et al., 2021).",
"Notice that these models adopted different distillation settings.",
"DistilBERT and MiniLMv2 were firstly under task-agnostic distillation then directly fine-tuned on GLUE, while CKD was under both task-agnostic and task-specific distillation.",
"The corpora they adopted for task-agnostic distillation were also not exactly the same.",
"Nevertheless, we list their reported results on the GLUE dev set as baselines. In addition, we implement MiniLMv2 and CKD, two state-of-the-art distillation methods, under the same distillation setting as ours for a fair comparison, as described in the next paragraph.",
"Small-sized Student Models.",
"For fair comparisons, we implement two state-of-the-art distillation methods: MiniLMv2 (Wang et al., 2021), CKD (Park et al., 2021) under the same distillation setting as ours.",
"All these methods use the same student model as ours which has 4 transformer layers, 312 hidden neurons, 12 attention heads and overall 14M parameters.",
"We adopt the fine-tuned ELECTRA base as the teacher, and conduct task-specific distillation using the same distillation schedule and hyperparameters on the same augmented training sets as ours.",
"We first evaluate the effectiveness of our proposed distillation framework.",
"The main results are shown in Table 1.",
"We calculate #Params by summing up the number of parameters contained in the embedding layer and all the transformer layers. (Footnote 5: https://github.com/LC97-pku/MGSKD) Table 3 (ablation study of knowledge granularity; SST-2 / MNLI-(m/mm)): MGSKD 93.7 / 84.7/84.3; w/o token 93.0 / 84.1/83.7; w/o span 93.2 / 84.3/84.0; w/o sample 92.8 / 83.9/83.6; w token_p 92.1 / 83.4/82.9; w token_t 91.7 / 82.8/82.6; w token_{p,t} 92.5 / 83.7/83.2; w span_p 91.8 / 82.5/82.3; w span_t 91.8 / 82.3/82.0; w span_{p,t} 92.2 / 83.0/82.7; w sample_p 91.9 / 82.6/82.5; w sample_t 92.9 / 83.9/83.5; w sample_{p,t} 92.8 / 83.7/83.6.",
"The speed-up ratios are directly taken from previous works (Jiao et al., 2020; Wang et al., 2021).",
"It can be observed that under the same distillation setting (the marked models in Table 1), Student_MGSKD outperforms the strong baseline methods (i.e., Student_MiniLMv2 and Student_CKD) on 7 of the 8 GLUE tasks.",
"When compared with medium-sized models from the literature which have more parameters but under different distillation settings (e.g., CKD), our method can still beat them on the majority of the 8 tasks.",
"Surprisingly, with a stronger teacher model and the data augmentation technique, our MGSKD method enables a 14M-parameter student transformer to achieve performance comparable to BERT_base on most of the GLUE tasks while being 9.4 times faster.",
"Also, we observe that although MGSKD performs well on most of the GLUE tasks, it lags behind some baselines on CoLA, where the model is asked to judge the grammatical acceptability of a sentence.",
"One reason might be that CoLA requires the model to focus on syntactic information while paying less attention to the sample-level semantic meanings, thus reducing the need for multi-granularity semantic knowledge that we propose to transfer to the student.",
"The Impact of Relation Heads.",
"Recall that when calculating the structural relations between representations, we project them into m relation heads.",
"We show how the number of relation heads impacts the performance on SST-2 and MNLI.",
"As shown in Table 2, the performance gets better as the number of relation heads increases, since providing fine-grained supervision in multiple relatively low-dimensional spaces eases the student's difficulty in learning the structural relations in the very high-dimensional vector space.",
"[Figure 2: SST-2 and MNLI-m accuracy under different choices of k1 and k2 (k1 = k2, ranging from 8 to 32); Figure 3: performance under different choices of the boundary layer M.]",
"We also find that when m is large, continuing to increase m is not worthwhile since the time and memory complexity increase linearly with m .",
"Therefore we choose m = 64 in our setting.",
"Ablation Study of Knowledge Granularity.",
"We transfer the structural knowledge to the student in three granularities: token-level, span-level, and sample-level.",
"We extract pair-wise and triplet-wise structural relations for the token level and the span level, while we adopt triplet-wise relations for the sample level.",
"To verify the effectiveness of each granularity of knowledge and each form of structural relations, we conduct ablation studies and present the results in Table 3. (1) We first remove each granularity of knowledge from the objectives of MGSKD individually.",
"6 We can conclude that the sample-level knowledge is most crucial for the overall performance, the token-level knowledge provides moderate benefit, and the span-level knowledge contributes the least.",
"We conjecture that span-level knowledge distillation performs slightly worse than token-level because the average number of meaningful spans per sample on the 8 tasks is 7.19, which is 5.2 times fewer than the average number of tokens.",
"Nevertheless, distillation with span-level knowledge still yields comparable performance.",
"Overall, the results prove that each granularity of knowledge brings a positive effect to the model performance.",
"(2) Then for each granularity, we study the effect of each form of structural knowledge (i.e., pair-wise and triplet-wise relations). (Footnote 6: When the sample-level objective is removed, we use the remaining objectives for all the student layers instead of only the bottom layers, as this setting yields better performance.)",
"In this stage, we distill each granularity of knowledge into all the student layers for a fair comparison.",
"It can be observed that for token-level and span-level knowledge, pair-wise relations are more effective than triplet-wise relations, and the model performs better when jointly utilizing both.",
"While for sample-level knowledge, we find that using triplet-wise relations outperforms using pairwise relations by a large margin.",
"Moreover, jointly utilizing the sample-level pair-wise and triplet-wise relations cannot further improve the model's performance; therefore we only employ triplet-wise relations as sample-level knowledge.",
"The Impact of k 1 and k 2 for Calculating Angles.",
"To ease the computation and memory complexity, we propose to sequentially select important representations to form angles, leading to the hyperparameters k 1 and k 2 .",
"We test different choices of k 1 and k 2 by adopting token-level and sample-level triplet-wise relations to teach the student respectively.",
"To reduce the search space, we simply set k 1 = k 2 .",
"We draw the accuracy curves for different choices of k1 and k2, as shown in Fig. 2. For token-level objectives, we find that increasing k1 and k2 improves the accuracy when they are small, and when k1, k2 ≥ 20, the curves begin to fluctuate.",
"Therefore we choose k 1 = k 2 = 20 for token-level angle calculation.",
"While for the triplet-wise relations of sample-level features, we observe that the accuracy increases monotonically with k 1 , k 2 .",
"Therefore we just set k 1 , k 2 as the batch size.",
"The Choice of the Boundary Layer M .",
"We propose the hierarchical distillation strategy where we distill the token- and span-level knowledge into the bottom M layers of the student and transfer the sample-level knowledge to the upper layers.",
"To verify the effectiveness as well as to find the best choice of the boundary layer M, we conduct experiments and show the results in Fig. 3. The dashed lines represent the setting dubbed all, where we distill token-, span- and sample-level knowledge into all the student layers.",
"The solid lines denote our hierarchical distillation setting with different choices of the boundary layer M.",
"When M = 0 or M = 4, the student learns only sample-level knowledge or only token- and span-level knowledge for all layers, respectively.",
"Without the help of other knowledge granularities, the student yields relatively poor performance on both tasks.",
"As M increases from 0 to 4, we find the model's performance curves surpass the dashed lines, which verifies the effectiveness of our proposed hierarchical distillation strategy that transfers the knowledge to the proper positions of the student.",
"We find the model achieves the highest accuracy when M = 2, i.e., the middle layer, indicating that both the syntactic knowledge transferred by token- and span-level features and the semantic knowledge derived from sample-level features are indispensable.",
"In this paper, we propose a novel knowledge distillation framework named MGSKD.",
"We leverage intermediate representations of multi-granularity language units (i.e., tokens, spans and samples), and form the knowledge as the sophisticated structural relations between the representations rather than the individual representations themselves.",
"The well-organized structural knowledge is then distilled into the student hierarchically across layers.",
"Evaluation results on GLUE benchmark verify the effectiveness of our method.",
"In the future, we plan to explore more forms of structural knowledge.",
"We would like to thank the anonymous reviewers for their constructive comments.",
"This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106600).",
"This paper proposes a knowledge distillation framework that leverages multi-granularity structural knowledge to compress a large and powerful language model into a small one with minimum performance degradation, which is beneficial to energy-efficient NLP applications.",
"The research will not pose ethical problems or negative social consequences.",
"The datasets used in this paper are all publicly available and are widely adopted by researchers as the general testbed for natural language understanding evaluation.",
"The proposed method doesn't introduce ethical/social bias or aggravate the potential bias in the data."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"objective",
"objective",
"method",
"method",
"objective",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"result",
"objective",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"Knowledge distillation (KD) is commonly used to construct synthetic data for training non-autoregressive translation (NAT) models.",
"However, there exists a discrepancy in low-frequency words between the distilled and the original data, leading to more errors in predicting low-frequency words.",
"To alleviate the problem, we directly expose the raw data into NAT by leveraging pretraining.",
"By analyzing directed alignments, we found that KD makes low-frequency source words aligned with targets more deterministically but fails to align sufficient low-frequency words from target to source.",
"Accordingly, we propose reverse KD to rejuvenate more alignments for low-frequency target words.",
"To make the most of authentic and synthetic data, we combine these complementary approaches as a new training strategy for further boosting NAT performance.",
"We conduct experiments on five translation benchmarks over two advanced architectures.",
"Results demonstrate that the proposed approach can significantly and universally improve translation quality by reducing translation errors on low-frequency words.",
"Encouragingly, our approach achieves 28.2 and 33.9 BLEU points on the WMT14 English-German and WMT16 Romanian-English datasets, respectively.",
"Our code, data, and trained models are available at https://github.com/ longyuewangdcu/RLFW-NAT .",
"Recent years have seen a surge of interest in non-autoregressive translation (NAT, Gu et al., 2018), which can improve the decoding efficiency by predicting all tokens independently and simultaneously.",
"The non-autoregressive factorization breaks conditional dependencies among output tokens, which prevents a model from properly capturing the highly multimodal distribution of target translations. (Footnote: Liang Ding and Longyue Wang contributed equally to this work. Work was done when Liang Ding and Xuebo Liu were interning at Tencent AI Lab.)",
"As a result, the translation quality of NAT models often lags behind that of autoregressive translation (AT, Vaswani et al., 2017) models.",
"To balance the trade-off between decoding speed and translation quality, knowledge distillation (KD) is widely used to construct a new training data for NAT models (Gu et al., 2018).",
"Specifically, target sentences in the distilled training data are generated by an AT teacher, which makes NAT easily acquire more deterministic knowledge and achieve significant improvement (Zhou et al., 2020).",
"Previous studies have shown that distillation may lose some important information in the original training data, leading to more errors on predicting low-frequency words.",
"To alleviate this problem, Ding et al. (2021b) proposed to endow NAT models with the ability to learn the lost knowledge from the original data.",
"However, their approach relies on external resources (e.g. word alignment) and human-crafted priors, which limits the applicability of the method to a broader range of tasks and languages.",
"Accordingly, we turn to directly expose the raw data into NAT by leveraging pretraining without intensive modification to model architectures (2.2).",
"Furthermore, we analyze bilingual links in the distilled data from two alignment directions (i.e. source-to-target and target-to-source).",
"We found that KD makes low-frequency source words aligned with targets more deterministically but fails to align low-frequency words from target to source due to information loss.",
"Inspired by this finding, we propose reverse KD to recall more alignments for low-frequency target words (2.3).",
"We then concatenate two kinds of distilled data to maintain advantages of deterministic knowledge and low-frequency information.",
"To make the most of authentic and synthetic data, we combine three complementary approaches (i.e. raw pretraining, bidirectional distillation training and KD finetuning) as a new training strategy for further boosting NAT performance (2.4).",
"We validated our approach on five translation benchmarks (WMT14 En-De, WMT16 Ro-En, WMT17 Zh-En, WAT17 Ja-En and WMT19 EnDe) over two advanced architectures (Mask Predict, Ghazvininejad et al., 2019; Levenshtein Transformer, Gu et al., 2019).",
"Experimental results show that the proposed method consistently improves translation performance over the standard NAT models across languages and advanced NAT architectures.",
"Extensive analyses confirm that the performance improvement indeed comes from the better lexical translation accuracy especially on low-frequency tokens.",
"We show the effectiveness of rejuvenating low-frequency information by pretraining NAT models from raw data.",
"We provide a quantitative analysis of bilingual links to demonstrate the necessity to improve low-frequency alignment by leveraging both KD and reverse KD.",
"We introduce a simple and effective training recipe to accomplish this goal, which is robustly applicable to several model structures and language pairs.",
"Non-Autoregressive Translation Given a source sentence x , an AT model generates each target word y t conditioned on previously generated ones y <t , leading to high latency on the decoding stage.",
"In contrast, NAT models break this autoregressive factorization by producing target words in parallel.",
"Accordingly, the probability of generating y is computed as: p(y | x) = Π_{t=1}^{T} p(y_t | x; θ), (1) where T is the length of the target sequence, which is usually predicted by a separate conditional distribution.",
"The parameters θ are trained to maximize the likelihood of a set of training examples according to L(θ) = arg max_θ log p(y | x; θ).",
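The factorization in Eq. 1 implies that the NAT log-likelihood is a simple sum of per-position log-probabilities, which can be sketched as follows (our illustration; names are not from the paper):

```python
import numpy as np

def nat_log_likelihood(log_probs, target_ids):
    """Eq. 1 under the conditional-independence assumption: log p(y|x) is
    the sum over positions t of log p(y_t | x), with log_probs of shape
    (T, vocab_size) and target_ids the gold token indices."""
    return sum(log_probs[t, target_ids[t]] for t in range(len(target_ids)))
```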
"Typically, most NAT models are implemented upon the framework of Transformer (Vaswani et al., 2017).",
"Knowledge Distillation. Gu et al. (2018) pointed out that NAT models suffer from the multimodality problem, where the conditional independence assumption prevents a model from properly capturing the highly multimodal distribution of target translations.",
"Thus, the sequence-level knowledge distillation is introduced to reduce the modes of training data by replacing their original target-side samples with sentences generated by an AT teacher (Gu et al., 2018; Zhou et al., 2020; Ren et al., 2020).",
"Formally, the original parallel data Raw and the distilled data KD can be defined as follows: Raw = {(x_i, y_i)}_{i=1}^{N}, (2) KD = {(x_i, f_{s↦t}(x_i)) | x_i ∈ Raw_S}_{i=1}^{N}, (3) where f_{s↦t} represents an AT-based translation model trained on the Raw data for translating text from the source to the target language.",
"N is the total number of sentence pairs in training data.",
"As shown in Figure 1(a), well-performed NAT models are generally trained on KD data instead of Raw.",
"Motivation. Gao et al. (2018) showed that more than 90% of words have a frequency lower than 10e-4 in the WMT14 En-De dataset.",
"This token imbalance problem biases translation models towards over-fitting to frequent observations while neglecting those low-frequency observations (Gong et al., 2018; Nguyen and Chiang, 2018; Gu et al., 2020).",
"Thus, the AT teacher f_{s↦t} tends to generate more high-frequency tokens and fewer low-frequency tokens when constructing the distilled data KD.",
"On the one hand, KD can reduce the modes in training data (i.e. multiple lexical choices for a source word), which lowers the intrinsic uncertainty (Ott et al., 2018) and learning difficulty for NAT (Zhou et al., 2020; Ren et al., 2020), making it easily acquire more deterministic knowledge.",
"On the other hand, KD aggravates the imbalance between high-frequency and low-frequency words in the training data and loses some important information present in the raw data.",
"Ding et al. (2021b) revealed the side effect of distilled training data, which causes lexical choice errors for low-frequency words in NAT models.",
"Accordingly, they introduced an extra bilingual data-dependent prior objective to endow NAT models with the ability to learn the lost knowledge from raw data.",
"We use their findings as our departure point, but rejuvenate low-frequency",
"Our Approach Many studies have shown that pretraining could transfer the knowledge and data distribution, especially for rare categories, hence improving the model robustness (Hendrycks et al., 2019; Mathis et al., 2021).",
"Here we want to transfer the distribution of lost information, e.g. low-frequency words.",
"As illustrated in Figure 1(b), we propose to first pretrain NAT models on Raw data and then continuously train them on KD data.",
"The raw data maintain the original distribution especially on low-frequency words.",
"Although it is difficult for NAT to learn high-mode data, the pretraining can acquire general knowledge from authentic data, which may help better and faster learning further tasks.",
"Thus, we early-stop pretraining when the model achieves 90% of the best performance of raw data in terms of BLEU score (Platanios et al., 2019).",
"(Footnote 1: In preliminary experiments, we tried another simple strategy: early stopping at a fixed step according to the size of the training data (e.g., training En-De for 70K steps and early stopping at 20K/30K/40K steps, respectively); we found that both strategies achieve similar performance.)",
"Table 2 (an example in different kinds of data): Raw_S: ... ; Raw_T: Hackman and Oldham propose ... model ; KD_T: Heckman and Oddheim propose ... model ; KD_S: ...",
"In order to keep the merits of low modes,",
"we further train the pretrained model on distilled data KD.",
"As it is easy for NAT to learn deterministic knowledge, we finetune the model for the rest steps.",
"For fair comparison, the total training steps of the proposed method are same as the traditional one.",
"In general, we expect that this training recipe can provide a good trade-off between raw and distilled data (i.e. high-modes and complete vs. low-modes and incomplete).",
"Analyzing Bilingual Links in Data. KD simplifies the training data by replacing low-frequency target words with high-frequency ones (Zhou et al., 2020).",
"This is able to facilitate easier aligning source words to target ones, resulting in high bilingual coverage (Jiao et al., 2020).",
"Due to the information loss, we argue that KD makes low-frequency target words have fewer opportunities to align with source ones.",
"To verify this, we propose a method to quantitatively analyze bilingual links from two directions, where low-frequency words are aligned from source to target (s ↦ t) or in the opposite direction (t ↦ s).",
"The method can be applied to different types of data.",
"Here we take the s→t links in Raw data as an example to illustrate the algorithm.",
"First, given the WMT14 En-De parallel corpus, we employ an unsupervised word alignment method (Och and Ney, 2003) to produce a word alignment, and then extract aligned links whose source words are low-frequency (called s→t LFW Links).",
"Second, we randomly select a number of samples from the parallel corpus.",
"For better comparison, the subset should contain the same i in Equation (2) as that of the other types of datasets (e.g., i in Equation (3) for KD).",
"Finally, we calculate recall, precision, and F1 scores based on the low-frequency bilingual links in the subset.",
"Recall (R) represents how many low-frequency source words can be aligned to targets.",
"Precision (P) means how many aligned low-frequency links are correct according to human evaluation.",
"F1 is the harmonic mean between precision and recall.",
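The recall/precision/F1 computation over low-frequency-word (LFW) links can be sketched as below. This is a minimal sketch under assumed data structures: `pred_links`, `correct_links`, and `low_freq_src` are hypothetical stand-ins for the aligner output, the human-judged correct subset, and the low-frequency source vocabulary, not the paper's actual code.

```python
def lfw_link_scores(pred_links, correct_links, low_freq_src):
    """Score s->t low-frequency-word (LFW) alignment links.

    pred_links:    set of (src_word, tgt_word) links from the word aligner
    correct_links: the subset of links judged correct by human evaluation
    low_freq_src:  set of low-frequency source words
    """
    lfw_pred = {(s, t) for (s, t) in pred_links if s in low_freq_src}
    aligned_src = {s for (s, _) in lfw_pred}
    # Recall: how many low-frequency source words get aligned at all.
    recall = len(aligned_src) / len(low_freq_src)
    # Precision: how many predicted LFW links are judged correct.
    precision = len(lfw_pred & correct_links) / len(lfw_pred)
    # F1: harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Swapping the roles of source and target words in the same function gives the t→s analysis.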
"Similarly, we can analyze the t→s LFW Links by considering low-frequency target words.",
"Table 1 shows the results on low-frequency links.",
"Compared with Raw, KD can recall more s→t LFW links (73.4 vs. 66.4) with more accurate alignment (89.2 vs. 73.3).",
"This demonstrates the effectiveness of KD for NAT models from the bilingual alignment perspective.",
"However, in the t→s direction, there are fewer LFW links (69.9 vs. 72.3) with worse alignment quality (79.1 vs. 80.6) in KD than in Raw.",
"This confirms our claim that KD harms NAT models due to the loss of low-frequency target words.",
"Inspired by these findings, it is natural to assume that reverse KD exhibits complementary properties.",
"Accordingly, we conduct the same analysis on the reverse KD data, and find better t→s links but worse s→t links compared with Raw.",
"Taking the Zh-En sentence pair in Table 2 as an example, KD retains the source-side low-frequency Chinese words (Raw_S) but generates the high-frequency English words 'Heckman' instead of the golden 'Hackman' (KD_T).",
"On the other hand, reverse KD preserves the low-frequency English words 'Hackman' (Raw_T) but produces high-frequency Chinese words (KD_S).",
"Our Approach: Based on the analysis results, we propose to train NAT models on bidirectional distillation by concatenating the two kinds of distilled data.",
"(Footnote: FastAlign (Dyer et al., 2013) was employed to build word alignments for the training datasets.)",
"Reverse distillation replaces the source sentences in the original training data with synthetic ones generated by a backward AT teacher.",
"According to Equation (3), reverse KD can be formulated as: reverse KD = {(y_i, f_{t→s}(y_i)) | y_i ∈ Raw_t}_{i=1}^{N} (4), where f_{t→s} represents an AT-based translation model trained on Raw data for translating text from the target to the source language.",
"Figure 1(c) illustrates the training strategy.",
"First, we employ the f_{s→t} and f_{t→s} AT models to generate the KD and reverse KD data, respectively.",
"Considering the complementarity of the two kinds of distilled data, we combine KD and reverse KD as new training data for NAT models.",
"We expect that 1) the distilled data can maintain the advantages of low modes; and 2) bidirectional distillation can recall more LFW links in both directions with better alignment quality, leading to overall improvements.",
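A minimal sketch of the bidirectional-distillation data construction described above. Function and teacher names are hypothetical; the callables stand in for trained s→t and t→s AT models, and we assume reverse KD pairs keep the original target with a synthetic source (consistent with preserving low-frequency target words).

```python
def build_bidirectional_kd(raw_pairs, forward_teacher, backward_teacher):
    """Build the combined KD + reverse-KD training set from Raw data.

    raw_pairs:        list of (src, tgt) sentence pairs (Raw data)
    forward_teacher:  s->t AT model, callable src -> synthetic target
    backward_teacher: t->s AT model, callable tgt -> synthetic source
    """
    # Forward KD: original sources paired with distilled targets.
    kd = [(src, forward_teacher(src)) for src, _ in raw_pairs]
    # Reverse KD: synthetic sources paired with original (authentic)
    # targets, preserving low-frequency target words.
    kd_rev = [(backward_teacher(tgt), tgt) for _, tgt in raw_pairs]
    return kd + kd_rev  # concatenated training data for the NAT model
```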
"Besides, Nguyen et al. (2020) claimed that combining different distilled data (generated by various models trained with different seeds) improves data diversification for NMT, and we leave this for future work.",
"We have proposed two parallel approaches to rejuvenate low-frequency knowledge from authentic (§2.2) and synthetic (§2.3) data, respectively.",
"Intuitively, we combine both of them to further improve the model performance.",
"From the data view, the two presented training strategies are: Raw → KD (Raw Pretraining) and KD + reverse KD (Bidirectional Distillation Training).",
"Considering the effectiveness of pretraining (Mathis et al., 2021) and clean finetuning (Wu et al., 2019), we introduce a combined pipeline, Raw → KD + reverse KD → KD, as our best training strategy.",
"There are many possible ways to implement the general idea of combining the two approaches.",
"The aim of this paper is not to explore the whole space but simply to show that one fairly straightforward implementation works well and that the idea is reasonable.",
"Nonetheless, we compare possible strategies for combining the two approaches and demonstrate their complementarity in §3.3.",
"In the main experiments (§3.2), we validate the combined strategy, namely Low-Frequency Rejuvenation (LFR).",
"Data: Main experiments are conducted on four widely-used translation datasets: WMT14 English-German (En-De, Vaswani et al. 2017), WMT16 Romanian-English (Ro-En, Gu et al. 2018), WMT17 Chinese-English (Zh-En, Hassan et al. 2018), and WAT17 Japanese-English (Ja-En, Morishita et al. 2017), which consist of 4.5M, 0.6M, 20M, and 2M sentence pairs, respectively.",
"We use the same validation and test sets as previous works for fair comparison.",
"To prove the universality of our approach, we further experiment on different data volumes, which are sampled from WMT19 En-De.",
"The Small and Medium corpora consist of 1.0M and 4.5M sentence pairs, respectively, and the Large one is the whole dataset, which contains 36M sentence pairs.",
"We preprocess all data via BPE (Sennrich et al., 2016) with 32K merge operations.",
"We use tokenized BLEU (Papineni et al., 2002) as the evaluation metric, and the sign-test (Collins et al., 2005) for statistical significance testing.",
"The translation accuracy of low-frequency words is measured by AoLC (Ding et al., 2021b), where word alignments are established based on the widely-used automatic alignment tool GIZA++ (Och and Ney, 2003).",
"(Footnote: http://www.statmt.org/wmt19/translation-task.html)",
"Mask-Predict (MaskT, Ghazvininejad et al. 2019), which uses a conditional masked LM (Devlin et al., 2019) to iteratively generate the target sequence from the masked input.",
"We followed its optimal settings, keeping the number of iterations at 10 and the length beam at 5.",
"Levenshtein Transformer (LevT, Gu et al. 2019), which introduces three steps: deletion, placeholder prediction, and token prediction.",
"The number of decoding iterations depends adaptively on certain conditions.",
"We closely followed previous works to apply sequence-level knowledge distillation to NAT (Kim and Rush, 2016).",
"Specifically, we train both BASE and BIG Transformers as the AT teachers.",
"For the BIG model, we adopt a large-batch strategy (i.e., 458K tokens/batch) to optimize performance.",
"Most NAT tasks employ Transformer-BIG as their strong teacher, except for Ro-En and Small En-De, which are distilled by Transformer-BASE.",
"Training: Traditionally, NAT models are usually trained for 300K steps with a regular batch size (i.e., 128K tokens/batch).",
"(Table 4: Performance on other language pairs, WMT17 Zh-En and WAT17 Ja-En, reported as BLEU / ALF: AT 25.3/66.2 and 29.8/70.8; MaskT 24.2/61.5 and 28.9/66.9; MaskT+LFR 25.1/64.8 and 29.6/68.9; LevT 24.4/62.7 and 29.1/66.8; LevT+LFR 25.1/65.3 and 29.7/69.2.)",
"In this work, we empirically adopt a large-batch strategy (i.e., 480K tokens/batch) to reduce the number of training steps for NAT (i.e., to 70K).",
"Accordingly, the learning rate warms up to 1×10⁻⁷ over 10K steps, and then decays for 60K steps with the cosine schedule (Ro-En models only need 4K and 21K steps, respectively).",
"For regularization, we tune the dropout rate over [0.1, 0.2, 0.3] based on validation performance in each direction, and apply weight decay of 0.01 and label smoothing with ε = 0.1.",
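The warmup-then-cosine schedule (10K warmup steps and 60K decay steps for En-De) can be sketched as follows. The peak and floor learning rates here are illustrative assumptions, since the exact values appear garbled in this excerpt.

```python
import math

def lr_at(step, warmup=10_000, decay=60_000, peak=5e-4, floor=1e-7):
    """Linear warmup to `peak`, then cosine decay toward `floor`.

    `peak` and `floor` are placeholder values, not the paper's settings.
    """
    if step <= warmup:
        # Linear warmup phase.
        return peak * step / warmup
    # Cosine decay phase, clamped at the end of the decay window.
    t = min(step - warmup, decay) / decay
    return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * t))
```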
"We use Adam optimizer (Kingma and Ba, 2015) to train our models.",
"We followed the common practices (Ghazvininejad et al., 2019; Kasai et al., 2020) to evaluate the performance on an ensemble of top 5 checkpoints to avoid stochasticity.",
"Note that the total training steps of the proposed approaches (§2.2-2.4) are identical to those of the standard training (§2.1).",
"Taking the best training strategy (Raw → KD + reverse KD → KD) as an example, we empirically set the training steps for the three stages to 20K, 20K, and 30K, respectively.",
"Ro-En models need 8K, 8K, and 9K steps in the corresponding training stages.",
"Comparison with Previous Work: Table 3 lists the results of previous competitive NAT models (Gu et al., 2018; Lee et al., 2018; Kasai et al., 2020; Gu et al., 2019; Ghazvininejad et al., 2019) on the WMT16 Ro-En and WMT14 En-De benchmarks.",
"We implemented our approach on top of two advanced NAT models (i.e. Mask-Predict and Levenshtein Transformer).",
"Compared with standard NAT models, our training strategy significantly and consistently improves translation performance (BLEU↑) across different language pairs and NAT models.",
"Besides, the improvements in translation performance are mainly due to an increase in translation accuracy on low-frequency words (ALF↑), which reconfirms our claims.",
"For instance, our method significantly improves the standard Mask-Predict model by +0.8 BLEU score with a substantial +3.6 increase in ALF score.",
"Encouragingly, our approach pushes the existing NAT models to new SOTA performance (i.e., 28.2 and 33.9 BLEU on En-De and Ro-En, respectively).",
"It is worth noting that our data-level approaches neither modify the model architecture nor add extra training losses, and thus do not increase latency (Speed), maintaining the intrinsic advantage of non-autoregressive generation.",
"Admittedly, our strategy does increase the amount of computing resources, since we must train f_{t→s} AT teachers to build the reverse KD data.",
"Results on Other Language Pairs: Table 4 lists the results of NAT models on the Zh-En and Ja-En language pairs, which involve different language families (i.e., Indo-European, Sino-Tibetan, and Japonic).",
"Compared with baselines, our method significantly and incrementally improves the translation quality in all cases.",
"For Zh-En, LFR achieves an average improvement of +0.8 BLEU over traditional training, along with an average gain of +3.0% in accuracy on low-frequency word translation.",
"For the distant language pair Ja-En, our method still improves the NAT model by +0.7 BLEU on average, with an average +2.2 ALF.",
"Furthermore, NAT models with the proposed training strategy perform close to their AT teachers (i.e., within 0.2 BLEU).",
"This shows the effectiveness and universality of our method across language pairs.",
"Results on Domain Shift Scenario: Lexical choice must be informed by linguistic knowledge of how the translation model's input data maps onto words in the target domain.",
"Since low-frequency words get lost in traditional NAT models, the lexical choice problem is more severe under the domain shift scenario (i.e., models are trained on one domain but tested on others).",
"Thus, we evaluate the WMT14 En-De models on five out-of-domain test sets (Muller et al., 2020), covering the law, medicine, IT, Koran, and movie subtitle domains.",
"As shown in Table 5, standard NAT models suffer large performance drops in terms of BLEU score (i.e. on average -2.9 BLEU over AT model).",
"Examining the outputs, we found a large number of translation errors on low-frequency words, most of which are domain-specific terminologies.",
"In contrast, our approach improves translation quality (i.e., on average -1.4 BLEU relative to the AT model) by rejuvenating low-frequency words to a certain extent, showing that LFR increases the domain robustness of NAT models.",
"Results on Different Data Scales: To confirm the effectiveness of our method across data sizes, we further experiment on three En-De datasets at different scales.",
"The small- and medium-scale training data are randomly sampled from the WMT19 En-De corpus, containing about 1.0M and 4.5M sentence pairs, respectively.",
"The large-scale dataset is the full WMT19 corpus, which consists of 36M sentence pairs.",
"We report BLEU scores on the same test set, newstest2019, for fair comparison.",
"We employ the BASE model to train the small-scale AT teacher, and the BIG model with the large-batch strategy (i.e., 458K tokens/batch) to build the AT teachers for the medium and large scales.",
"As seen in Table 6, our simple training recipe boosts performance for NAT models across different dataset sizes, especially at large scale (+0.9), showing the robustness and effectiveness of our approach.",
"(Table 7: Complementary to other work, reported as BLEU / ALF: Mask-Predict 27.0/68.4; +Raw Data Prior 27.8/72.4; +Low-Frequency 27.8/72.3; +Combination 28.1/72.9.)",
"Complementary to Related Work: Ding et al. (2021b) is most relevant to our work; they introduced an extra bilingual data-dependent prior objective to give NAT models the ability to learn low-frequency words in the raw data.",
"Our method is complementary to theirs, since we only change the data and training strategies (i.e., it is model-agnostic).",
"As shown in Table 7, the two approaches yield comparable performance in terms of BLEU and ALF.",
"Besides, their combination further improves the BLEU and ALF scores (i.e., by +0.3 and +0.6).",
"This illustrates the complementarity of model-level and data-level approaches for rejuvenating low-frequency knowledge in NAT models.",
"We conducted extensive analyses to better understand our approach.",
"All results are reported on the Mask-Predict models.",
"Accuracy of Lexical Choice: To understand where the performance gains come from, we conduct a fine-grained analysis of lexical choice.",
"We divide all tokens into three categories based on their frequency: High, Medium, and Low.",
"Following Ding et al. (2021b), we measure the accuracy of lexical choice for words of different frequencies.",
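One simple way to realize such a frequency split is sketched below, with illustrative rank-based cut-offs; the paper's actual thresholds are not given in this excerpt, so `high_pct` and `low_pct` are assumptions.

```python
from collections import Counter

def bucket_vocab(corpus_tokens, high_pct=0.2, low_pct=0.5):
    """Split vocabulary types into High/Medium/Low frequency buckets
    by frequency rank; cut-off percentages are illustrative only."""
    counts = Counter(corpus_tokens)
    ranked = [w for w, _ in counts.most_common()]  # most frequent first
    n = len(ranked)
    high = set(ranked[: int(n * high_pct)])     # top `high_pct` of types
    low = set(ranked[int(n * (1 - low_pct)):])  # bottom `low_pct` of types
    medium = set(ranked) - high - low
    return high, medium, low
```

Per-bucket translation accuracy can then be computed by restricting the lexical-choice evaluation to each bucket in turn.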
"Table 8 shows the results.",
"Takeaway: The majority of the improvement in translation accuracy comes from low-frequency words, confirming our hypothesis.",
"Low-Frequency Words in Output: We expect to recall more low-frequency words in the translation output.",
"As shown in Table 9, we calculate the ratio of low-frequency words in the generated sentences.",
"As seen, KD biases the NAT model towards generating high-frequency words rather than low-frequency ones.",
"Takeaway: Our method generates translations that contain more low-frequency words.",
"Effects of Variant Training Strategies: As discussed in §2.4, we carefully investigate alternative training approaches in Table 10.",
"We make the total training steps identical to those of vanilla NAT models, and report both BLEU and ALF scores.",
"As seen, all variant strategies perform better than the standard KD method in terms of both BLEU and ALF scores, confirming the necessity of our work.",
"Takeaway: 1) Pretraining is more effective than simple combination at exploiting data manipulation strategies; 2) raw data and bidirectional distilled data are complementary; and 3) it is indispensable to finetune models on KD in the last stage.",
"Our Approach Works for AT Models: Although our work is designed for NAT models, we also investigate whether our LFR method works in more general cases, e.g., for autoregressive models.",
"We use Transformer-BIG as the teacher model.",
"For fair comparison, we use Transformer-BASE as the student model, which has the same capacity as the NAT student (i.e., MaskT).",
"The results are listed in Table 11.",
"As seen, AT models also suffer from the low-frequency word problem when using knowledge distillation, and our approach works for them as well.",
"Takeaway: Our method works well for general cases through rejuvenating more low-frequency words.",
"Low-Frequency Words: Benefiting from continuous representations learned from the training data, NMT models have shown promising performance.",
"However, Koehn and Knowles (2017) point out that low-frequency word translation is still one of the key challenges for NMT, in line with Zipf's law (Zipf, 1949).",
"For AT models, Arthur et al. (2016) address this problem by integrating a count-based lexicon, and Nguyen and Chiang (2018) propose an additional lexical model, which is jointly trained with the AT model.",
"Recently, Gu et al. (2020) adaptively re-weighted rare words during training.",
"The lexical choice problem is more serious for NAT models, since 1) the lexical choice errors (on low-frequency words in particular) of the AT distillation will propagate to NAT models; and 2) NAT lacks target-side dependencies and thus misses necessary target-side context.",
"In this work, we alleviate this problem by solving the first challenge.",
"Data Manipulation: Our work is related to previous studies on manipulating training data for NMT.",
"Bogoychev and Sennrich (2019) show that forward- and backward-translation (FT/BT) can both boost model performance, where FT plays the role of domain adaptation and BT makes the translation fluent.",
"Fadaee and Monz (2018) sample monolingual data with more difficult words (e.g., rare words) for BT, achieving significant improvements over randomly sampled BT.",
"Nguyen et al. (2020) diversify the data by applying FT and BT multiple times.",
"However, different from AT, the prerequisite for training a well-performing NAT model is to perform KD.",
"We compare with these related works in Table 10 and find that our approach consistently outperforms them.",
"Note that all the ablation studies focus on exploiting the parallel data without augmenting additional data.",
"Non-Autoregressive Translation: A variety of approaches have been explored to bridge the performance gap between NAT and AT models.",
"Some researchers have proposed new model architectures (Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019; Kasai et al., 2020), aided models with additional signals (Wang et al., 2019; Ran et al., 2019; Ding et al., 2020), introduced sequential information (Wei et al., 2019; Shao et al., 2019; Guo et al., 2020; Hao et al., 2021), or explored advanced training objectives (Ghazvininejad et al., 2020; Du et al., 2021).",
"Our work is close to the research line on training methods.",
"Ding et al. (2021b) revealed the low-frequency word problem in distilled training data, and introduced an extra Kullback-Leibler divergence term derived by comparing the lexical choice of NAT model and that embedded in the raw data.",
"Ding et al. (2021a) proposed a simple and effective training strategy that progressively feeds different granularities of data into NAT models by leveraging curriculum learning.",
"In this study, we propose simple and effective training strategies to rejuvenate the low-frequency information in the raw data.",
"Experiments show that our approach consistently and significantly improves translation performance across language pairs and model architectures.",
"Notably, domain shift is an extreme scenario for diagnosing low-frequency translation, and our method yields significant improvements there.",
"Extensive analyses reveal that our method improves the accuracy of lexical choices for low-frequency source words, recalling more low-frequency words in translations as well, which confirms our claim.",
"We are grateful to the anonymous reviewers and the area chair for their insightful comments and suggestions.",
"Xuebo Liu and Derek F. Wong were supported in part by the Science and Technology Development Fund, Macau SAR (Grant No. 0101/2019/A2), and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2020-00054-FST)."
] | [
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"abstain",
"result",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"objective",
"result",
"result",
"result",
"other",
"other"
] |
[
"Identifying intertextual relationships between authors is of central importance to the study of literature.",
"We report an empirical analysis of intertextuality in classical Latin literature using word embedding models.",
"To enable quantitative evaluation of intertextual search methods, we curate a new dataset of 945 known parallels drawn from traditional scholarship on Latin epic poetry.",
"We train an optimized word2vec model on a large corpus of lemmatized Latin, which achieves state-of-the-art performance for synonym detection and outperforms a widely used lexical method for intertextual search.",
"We then demonstrate that training embeddings on very small corpora can capture salient aspects of literary style and apply this approach to replicate a previous intertextual study of the Roman historian Livy, which relied on hand-crafted stylometric features.",
"Our results advance the development of core computational resources for a major premodern language and highlight a productive avenue for cross-disciplinary collaboration between the study of literature and NLP.",
"1 Introduction: In 'Lonesome Day Blues', Bob Dylan sings, 'I'm gonna spare the defeated... I am goin' to teach peace to the conquered / I'm gonna tame the proud.'",
"This lyric echoes a passage from Vergil's ancient Latin epic, the Aeneid , as translated by Allen Mandelbaum: to teach the ways of peace to those you conquer, / to spare defeated peoples, tame the proud (Thomas, 2012).",
"Such allusions or intertexts transmit ideas across space and time, diverse media, and languages.",
"Although researchers focus on those intertextual connections felt to have special literary significance for the works at hand, in principle intertextuality refers to any verbal or semantic resemblance within the literary system, ranging from direct quotation to topical similarities (Kristeva, 1980; Juvan, 2009).",
"(Footnote 1: Code and data are available at https://github.com/QuantitativeCriticismLab/NAACL-HLT-2021-Latin-Intertextuality.)",
"Given the importance of intertextual criticism to literary study, computational identification of text reuse in literature is an active area of research (Bamman and Crane, 2008; Forstall and Scheirer, 2019).",
"Classical Latin literature is a highly influential tradition characterized by an extraordinary density of allusions and other forms of text reuse (Hinds, 1998).",
"The most widely used tools for the detection of Latin intertextuality, such as Tesserae and Diogenes, rely on lexical matching of repeated words or phrases (Coffee et al., 2012, 2013; Heslin, 2019).",
"In addition to these core methods, other research has explored the use of sequence alignment (Chaudhuri et al., 2015; Chaudhuri and Dexter, 2017), semantic matching (Scheirer et al., 2016), and hybrid approaches (Moritz et al., 2016; Manjavacas et al., 2019) for Latin intertextual search, complementing related work on English (Smith et al., 2014; Zhang et al., 2014; Barbu and Trausan-Matu, 2017).",
"Much NLP research on historical text reuse, including previous applications of Latin word embeddings, has focused on the Bible and other religious texts (Lee, 2007; Moritz et al., 2016; Bjerva and Praet, 2016; Manjavacas et al., 2019).",
"As such, there is a clear need for enhanced computational methods for classical Latin literature.",
"We describe the optimization of word embedding models for Latin and their application to longstanding questions about literary intertextuality.",
"As is typical for many low-resource and premodern languages, development of core NLP technologies for Latin remains at an early stage.",
"Following attempts to train word2vec models on unlemmatized corpora of Latin literature shortly after the method's introduction (Bamman; Bjerva and Praet, 2015) and the inclusion of Latin in large-scale multilingual releases of FastText and BERT (Grave et al., 2018; Devlin et al., 2019), in the past year there has been increased interest in systematic optimization and evaluation of Latin embeddings.",
"Spurred by the recent EvaLatin challenge (Sprugnoli et al., 2020), a number of Latin models have been trained for use in lemmatization and part-of-speech tagging (Bacon, 2020; Celano, 2020; Straka and Straková, 2020; Stoeckel et al., 2020), complementing new literary applications to Biblical text reuse and neo-Latin philosophy (Manjavacas et al., 2019; Bloem et al., 2020).",
"In addition, Sprugnoli et al. (2019) recently introduced a synonym selection dataset, based on the TOEFL benchmark for English, which they used to evaluate word2vec and FastText models trained on the LASLA corpus of Latin literature.",
"To the best of our knowledge, there have been no attempts to compare the performance of these models on standard evaluation tasks.",
"To establish a baseline for further language-specific optimization and to inform our research on intertextuality, we evaluate five Latin models for which pretrained embeddings are publicly available.",
"These models encompass a variety of training corpora and methods, including word2vec, FastText, and nonce2vec (Appendix).",
"We consider two tasks involving synonym matching.",
"The first is the selection task introduced by Sprugnoli et al. (2019); the task is to distinguish the true synonym of a Latin word from three distractors (N = 2,759).",
"The second task, which is modeled on one of the English evaluation datasets from Mikolov et al. (2013), involves unrestricted search for the synonyms of 1,910 words found in an online dictionary of Latin near-synonyms (Appendix).",
"In addition, we train word2vec embeddings on a large corpus of Latin compiled from the Internet Archive (Bamman and Crane, 2011; Bamman and Smith, 2012), which we first lemmatize using either the Classical Language Toolkit (Johnson, 2021) or TreeTagger (Schmid, 1994).",
"The results of the comparative evaluation are summarized in Table 1.",
"For the synonym search task, we consider the number of correct matches found in the top 1, 10, and 25 results by cosine similarity, as well as the mean reciprocal rank (MRR).",
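This evaluation can be sketched as follows; it is a minimal pure-Python illustration with hypothetical function and argument names, not the code used in the paper.

```python
import math

def rank_eval(query_vecs, vocab_vecs, gold_ids, ks=(1, 10, 25)):
    """Top-k accuracy and mean reciprocal rank for synonym search
    by cosine similarity over a candidate vocabulary."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    ranks = []
    for q, gold in zip(query_vecs, gold_ids):
        sims = [cos(q, v) for v in vocab_vecs]
        order = sorted(range(len(vocab_vecs)), key=lambda i: -sims[i])
        ranks.append(order.index(gold) + 1)  # 1-based rank of true synonym

    topk = {k: sum(r <= k for r in ranks) / len(ranks) for k in ks}
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    return topk, mrr
```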
"We find that our models achieve state-of-the-art performance on both tasks compared to the five published models.",
"The improvement in performance may be due to the combination of training on lemmatized text, which Sprugnoli et al. (2019) identified as an important optimization for Latin, and use of a lower-quality but much larger training corpus (1.38 billion tokens, compared to 1.7 million tokens in the curated LASLA corpus).",
"Despite the enormous number of Latin intertextual parallels recorded in the scholarship, computational research on literary text reuse is hampered by a lack of benchmark datasets.",
"Existing benchmarks tend to focus either on binary comparisons, such as between Vergil and Lucan (Coffee et al., 2012), or on specialized forms of religious intertextuality (Moritz et al., 2016; Manjavacas et al., 2019).",
"To enable validation testing of general NLP methods for intertextual search, we assemble a new benchmark dataset based on Valerius Flaccus' Argonautica , an epic poem dating from the 1st century C.E. which recounts the myth of Jason and the Argonauts.",
"For Book 1 of the Argonautica we record 945 verbal intertexts with four major epics (Vergil's Aeneid , Ovid's Metamorphoses , Lucan's Pharsalia , and Statius' Thebaid ) that are noted in the commentaries of Spaltenstein (2002), Kleywegt (2005), and Zissos (2008).",
"Our dataset thus contains a substantial number of intertexts of established literary interest, with coverage across Book 1.",
"4 Analysis of intertextuality in Latin literature; 4.1 Enhanced intertextual search: Several widely used computational search methods for Latin intertextuality rely on lexical matching of related words.",
"We present an alternative approach in which potential intertextual phrases are ranked using word embeddings.",
"According to this method, we compare a bigram of interest to all bigrams in another text subject to the constraint that the distance between the words does not exceed a fixed interval.",
"The interval parameter is determined by the number of words occurring between the words comprising the bigram of interest, and is usually, but not exclusively, between 0 and 2.",
"The choice of bigrams as the basic unit conforms to ancient poetic practice, in which allusive phrases frequently consist of two words (although they can also be single words or longer phrases), and hence also conforms to modern intertextual search methods such as Tesserae (Coffee et al., 2012, 2013).",
"(Table 1: Evaluation of five published and two new Latin word embedding models on two synonym detection tasks. Selection (%); Ranking (%) as top 1 / top 10 / top 25 / MRR: Bamman 66.6; 0.4 / 2.1 / 3.3 / 17.5. Grave et al. (2018) 74.0; 0.2 / 1.2 / 1.7 / 11.8. Sprugnoli et al. (2019) word2vec CBOW 81.1; 2.4 / 11.3 / 15.9 / 19.8. Sprugnoli et al. (2019) FastText SG 86.9; 1.7 / 9.3 / 14.3 / 18.2. Bloem et al. (2020) 84.8; 0.3 / 3.9 / 7.0 / 10.1. word2vec (CLTK) 84.9; 3.2 / 14.5 / 20.4 / 22.7. word2vec (TT) 87.7; 3.5 / 15.0 / 21.0 / 20.6.)",
"A key difference in our approach, however, is that bigram pairs may share only one or even zero words in common.",
"The bigrams are drawn from the dataset of commentators' annotations; in cases where commentators note only a single-word intertext or a phrase longer than a bigram, we supplement or select words on a case-by-case basis, giving preference to words that bear a semantic or syntactic similarity to one or more words in the intertext.",
"The similarity score for a bigram pair is calculated by taking the cosine similarities of the embeddings of the four possible pairs of words across both bigrams, and averaging the highest cosine similarity and the score for the remaining pair of words.",
"The bigram pair flammifero Olympo ('fiery Olympus') and flammifera nocte ('fiery night'), for example, generates the four lemmatized pairs flammifer-flammifer, flammifer-nox, Olympus-flammifer, and Olympus-nox.",
"Hence, the similarity score for the bigram pair is the average of 1.0 for the exact match, flammifer-flammifer, and 0.35 for the remaining word pair, Olympus-nox (i.e., 0.67).",
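The scoring rule in the example above can be sketched as follows. This is a minimal sketch: `emb` is a hypothetical lemma-to-vector lookup, and the toy vectors in the test are constructed to reproduce the cosines given in the text.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def bigram_score(bigram_a, bigram_b, emb):
    """Average the best cross-bigram word-pair cosine similarity
    with the similarity of the remaining (leftover) word pair."""
    (a1, a2), (b1, b2) = bigram_a, bigram_b
    # The two complete pairings that use all four words exactly once.
    pairings = [
        (cosine(emb[a1], emb[b1]), cosine(emb[a2], emb[b2])),
        (cosine(emb[a1], emb[b2]), cosine(emb[a2], emb[b1])),
    ]
    best = max(pairings, key=max)  # pairing containing the single best pair
    return (max(best) + min(best)) / 2
```

With toy vectors chosen so that cos(flammifer, flammifer) = 1.0 and cos(Olympus, nox) = 0.35, the score is (1.0 + 0.35) / 2 = 0.675, matching the worked example (rounded to 0.67 in the text).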
"In this way, the similarity score for an intertext noted by the commentators is ranked against all other bigrams in the relevant text, the size of which we set at a single book of poetry (i.e., equivalent to the text on which the dataset is based).",
"Although the choice to use one unit of text rather than another is somewhat arbitrary (one could consider complete works rather than constituent books, for example), the use of single books has several advantages, notably providing a large but not overwhelming number of comparison phrases while maintaining ancient textual units with distinct episodes and themes.",
"Following this approach, we compute a ranking for each of the 945 parallels in the Valerius Flaccus benchmark.",
"For embeddings we use our word2vec model trained on CLTK-lemmatized text, which by MRR performs best in the synonym ranking task (Table 1).",
"The precision @ k and recall @ k for k = 1, 3, 5, 10, 25, 50, 75, 100, and 250 are summarized in Fig. 1.",
"Figure 1: Precision and recall for Latin intertextual search using an optimized word2vec model.",
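Under the benchmark's setup, where each query has a single annotated gold parallel, precision@k and recall@k reduce to simple counts over the gold ranks. A minimal sketch (the ranks below are hypothetical, not the benchmark's):

```python
def precision_recall_at_k(gold_ranks, k):
    """gold_ranks holds the 1-based rank of the single annotated parallel
    for each query; assumes exactly one gold intertext per query."""
    hits = sum(1 for r in gold_ranks if r <= k)
    recall = hits / len(gold_ranks)          # fraction of parallels recovered
    precision = hits / (k * len(gold_ranks))  # hits over all returned items
    return precision, recall

ranks = [1, 2, 5, 300]  # hypothetical ranks for four queries
p, r = precision_recall_at_k(ranks, 2)  # p = 0.25, r = 0.5
```

This also shows why precision falls and recall rises as k grows, the trade-off plotted in Fig. 1.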
"We next compare our method and the Tesserae search tool, which is regarded as state-of-the-art for Latin intertextual search (Bernstein et al., 2015; Forstall and Scheirer, 2019).",
"Using their public web-based interface, we run Tesserae searches comparing Book 1 of the Argonautica with each of the four texts in the benchmark dataset.",
"Tesserae produces lists of repeated bigrams ranked according to a hand-crafted scoring formula that considers the rareness and proximity of the words in each bigram.",
"For the complete set of Tesserae results, the recall is 33.9%, and the precision is 0.97%; with k = 250 , our method achieves a comparable precision (1.4%) but higher recall (82.4%).",
"An important advantage of the Tesserae tool, however, is that it searches for similar phrases in parallel and does not require a list of specific queries as input.",
"As such, the results aggregated for this comparison come from a much smaller number of Tesserae searches than the 945 embedding-based searches we run.",
"For this reason, Tesserae is likely to be more suitable than our method for applications in which the user does not have predetermined phrases of interest.",
"A minority of intertexts in the dataset contain no shared lemma and hence present a challenge for existing detection methods based on lexical matching but are recoverable using our search method.",
"The phrases e clausis [antris] (from the enclosed [cave], Arg. 1.417) and circum claustra [fremunt] ([they roar] around the gate, Aen. 1.56), for example, contain no words in common but have similar syntax (the prepositions e and circum ) and semantics (words indicating enclosure).",
"Similarly, the phrases Phlegethontis operti (hidden Phlegethon, Arg. 1.735) and Acherontis aperti (open Acheron, Theb. 11.150) both refer to rivers in the underworld and contain near-antonymic adjectives.",
"Word embeddings can thus be used to identify intertexts of literary interest in a way that complements existing methods.",
"Computational analysis of literary intertextuality is typically treated as an information retrieval problem, as in the previous section.",
"Here we consider an alternative framework of studying intertextuality through anomaly detection (Forstall et al., 2011).",
"For this approach, we train word embeddings on highly restricted corpora, so that the resulting models capture aspects of authorial style.",
"We use those restricted embeddings as features to predict instances of similarity between authors, which can indicate intertextuality.",
"To illustrate this approach we describe a case study involving Latin historiography and the development of prose style.",
"In particular, we examine patterns of stylistic influence between the Roman historian Livy, his source material, and other Latin prose literature.",
"Because assessing similarities in literary style is inherently subjective, we instead consider the task of using word embeddings to replicate two experiments from a previous computational study of Livy, which employed a hand-crafted set of Latin stylometric features such as syntactic markers and function words.",
"Our approach to evaluation of a subjective task is thus similar to that of Bamman et al. (2014), who tested a set of preregistered hypotheses about literary characters.",
"Livy's history of Rome drew on a wide range of source material, such as earlier historiography and political speeches, most of which is no longer extant.",
"The extent to which Livy cited these earlier sources, and their influence on Livy's compositional practice, remain important open questions for ancient historians.",
"Dexter et al. (2017) demonstrated previously that anomaly detection could be used to distinguish a database of 439 putative citational passages from the remainder of Livy.",
"To replicate this analysis, we train a word2vec model on all of Livy's surviving history and use the embeddings as input for a one-class support vector machine (SVM).",
"Following Dexter et al. (2017), we set the detection rate of the one-class SVM to 20% and train on a random selection of 30,000 5-sentence passages of Livy.",
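As a rough illustration of the anomaly-detection setup (not the paper's actual model), the sketch below replaces the one-class SVM with a much simpler centroid-distance detector whose threshold is chosen so that roughly 20% of training passages are flagged, mirroring the 20% detection rate used above; the passage feature vectors here are synthetic:

```python
import math

def fit_centroid_detector(train_vecs, detection_rate=0.2):
    """Flag passages far from the training centroid; the threshold is set
    so that about `detection_rate` of training passages count as anomalous.
    A simplified stand-in for the one-class SVM used in the paper."""
    dim = len(train_vecs[0])
    centroid = [sum(v[d] for v in train_vecs) / len(train_vecs)
                for d in range(dim)]
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, centroid)))
    dists = sorted(dist(v) for v in train_vecs)
    threshold = dists[int((1 - detection_rate) * len(dists))]
    return lambda v: dist(v) > threshold

# Synthetic 2-d "passage embeddings" standing in for pooled word vectors
train = [[math.sin(i), math.cos(2 * i)] for i in range(100)]
is_anomalous = fit_centroid_detector(train)
flagged = sum(is_anomalous(v) for v in train)  # close to 20% of 100 passages
```

In the paper's setting, a citational passage whose pooled embedding falls far from the Livy training distribution would be flagged in the same way.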
"We find that the one-class SVM labels 38.2 ± 0.8% of passages from the citation database as anomalous, compared to 18.4 ± 2.0% of a validation set with 439 passages of general Livy (mean and standard deviation from N = 3 runs).",
"These results provide further evidence that citational passages of Livy exhibit an anomalous writing style, whether due to source use or stylistic modulation, corroborating the earlier analysis.",
"Finally, we consider the stylistic similarity of Livy to 17 other works of Latin literature analyzed by Dexter et al. (2017).",
"Again using a one-class SVM trained on Livy, we predict the Livianess of each work (Fig. 2).",
"Our results confirm the major trends identified by the prior stylometric analysis, including the expected dissimilarity to Livy of the verse texts and the consistent similarity of contemporary and early imperial historiography.",
"The primary difference between the two sets of results is that the stylometric features indicate greater similarity between Livy and non-historiographical prose, such as Augustine's Confessions and Vitruvius' De architectura , than do word embeddings, which may reflect a relative lack of shared diction.",
"We present an empirical analysis of Latin intertextuality using word embedding models.",
"In addition to its specific contributions to literary criticism and the digital humanities, our work makes several methodological advances of interest to the broader NLP community.",
"We conduct a comparative evaluation of Latin word embedding models for two synonym matching tasks and report an optimized model that achieves state-of-the-art performance,",
"which we apply to intertextual search of Latin poetry.",
"By capturing similarities other than exact repetition of words and phrases, our method complements existing search tools, such as Diogenes and Tesserae.",
"Given the diversity and complexity of references employed by Latin authors, taking a multifaceted approach is essential to the computational study of Latin intertextuality.",
"Although our initial work focuses on static embeddings, one potential avenue for improving our search method would be to leverage context-aware embeddings such as multilingual or Latin BERT (Devlin et al., 2019; Bamman and Burns, 2020).",
"In addition, we illustrate how intertextuality can be studied using anomaly detection, and we replicate previous stylometric research about the Roman historian Livy, which was informed by domain knowledge, using an unsupervised approach.",
"We hope that this work will strengthen cross-disciplinary collaboration between classics, the digital humanities, and NLP.",
"This work was conducted under the auspices of the Quantitative Criticism Lab ( www.qcrit.org ), an interdisciplinary group co-directed by P.C. and J.P.D. and supported by an American Council of Learned Societies Digital Extension Grant and a National Endowment for the Humanities Digital Humanities Advancement Grant (Grant No. HAA-271822-20).",
"P.C. was supported by a Mellon New Directions Fellowship, and J.P.D. by a Neukom Fellowship and a Harvard Data Science Fellowship.",
"The material contributed by J.A.B. is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1752134.",
"Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.",
"We thank Adriana Csarez, James Patterson, and Ariane Schwartz for assistance with compiling the Valerius Flaccus intertextuality dataset, and Tommaso Spinelli for sharing the dictionary of Latin near-synonyms."
] | [
"abstain",
"method",
"objective",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"objective",
"result",
"method",
"abstain",
"abstain",
"result",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"Logical table-to-text generation aims to automatically generate fluent and logically faithful text from tables.",
"The task remains challenging: deep learning models often generate linguistically fluent but logically inconsistent text.",
"The underlying reason may be that deep learning models often capture surface-level spurious correlations rather than the causal relationships between the table x and the sentence y .",
"Specifically, in the training stage, a model can get a low empirical loss without understanding x and use spurious statistical cues instead.",
"In this paper, we propose a de-confounded variational encoder-decoder (DCVED) based on causal intervention, learning the objective p(y | do(x)).",
"Firstly, we propose to use variational inference to estimate the confounders in the latent space, combined with causal intervention based on Pearl's do-calculus, to alleviate the spurious correlations.",
"Secondly, to make the latent confounder meaningful, we propose a back-prediction process to predict entities that are not used in the sentence but are linguistically similar to the selected ones.",
"Finally, since our variational model can generate multiple candidates, we train a table-text selector to find out the best candidate sentence for the given table.",
"An extensive set of experiments shows that our model outperforms the baselines and achieves new state-of-the-art performance on two logical table-to-text datasets in terms of logical fidelity.",
"Data-to-text generation refers to the task of generating descriptive text from non-linguistic inputs.",
"Depending on the type of input, this task can be defined more specifically, such as abstract meaning representation to text (Zhao et al., 2020; Bai et al., 2020a), infobox with key-value pairs to text (Bai et al., 2020b), graph-to-text (Song et al., 2020), and table-to-text (Wang et al., 2020; Parikh et al., 2020) generation.",
"Among these tasks, we focus on logical table-to-text generation, which aims to generate fluent and logically faithful text from tables (Chen et al., 2020a).",
"The ability to perform logical inference is a form of high-level intelligence, which is nontrivial for text generation systems in practice.",
"The task remains challenging because the reference sentences often convey logically inferred information, which is not explicitly presented in the table.",
"As a consequence, data-driven models often generated linguistically fluent but logically inconsistent text.",
"Recent progress on this task mainly lies in the use of pretrained language models (LMs) like GPT-2 (Radford et al., 2018), which have been shown to perform much better than non-pretrained models (Chen et al., 2020a,e).",
"However, it is still arguable whether pretrained LMs can correctly capture the logic, as pretrained LMs like BERT can use spurious statistical cues for inference (Niven and Kao, 2019).",
"The substantial difficulty of this task does not lie in whether to use pretrained models or not.",
"Instead, the difficulty arises because surface-level spurious correlations are easier to capture than the causal relationship between the table and the text.",
"For example, we have observed that a model cooperating with GPT-2 generated a sentence \" The album was released in the United States 2 time \" for a given table.",
"But the country where the album was released twice is \"the United Kingdom\" (the details of the table can be found in Section 5.6).",
"In the training stage, a model may get a low training loss by utilizing surface-level correlations without actually focusing on the selected entities.",
"As a result, in the inference stage, the model may produce incorrect facts.",
"In this paper, we view the logical table-to-text generation from the perspective of causal inference and propose a de-confounded variational encoder-decoder (DCVED).",
"Firstly, given the table-sentence pair (x, y), we assume that confounders z_c exist in the latent space and contribute to the surface-level correlations (e.g., \"the United States\" and \"the United Kingdom\").",
"We estimate z_c in the latent space based on variational inference, and combine it with causal intervention based on Pearl's do-calculus (Pearl, 2010) to learn the objective p(y | do(x)) instead of p(y | x).",
"Secondly, to make the latent confounder meaningful, we propose a back-prediction process to ensure that the latent confounder z_c can predict entities that are not used in the sentence but are linguistically similar to the selected ones.",
"We also consider the selected entities as the mediators in our de-confounded architecture.",
"Finally, since our variational model can generate multiple candidates, we train a table-text selector to find out the best text for the table.",
"An extensive set of experiments show that our model achieves new state-of-the-art performance on two logical table-to-text datasets in terms of logical fidelity.",
"• We propose to use variational inference to estimate the confounders in the latent space, combined with back-prediction to make the latent variable meaningful.",
"• We propose a generate-then-select paradigm jointly considering the surface-level and logical fidelity, which can be considered as an alternative to reinforcement learning.",
"• Experiments show that our model achieves new state-of-the-art performance on two logical table-to-text datasets, with or without pretrained LMs.",
"Table-to-Text Generation .",
"The task of table-to-text generation belongs to data-to-text generation, where a key feature is the structured input data.",
"Lebret et al. (2016) used a seq2seq neural model with a field-infusing strategy that obtains field-position-aware and field-words-aware cell embeddings to generate sentences from Wikipedia tables.",
"A follow-up work proposed to update the cell memory of the LSTM by a field gate to help LSTM identify the boundary between different cells (Liu et al., 2018).",
"Transformer-based (Vaswani et al., 2017) models were also proposed which improved the ability to capture long-term dependencies between cells (Ma et al., 2019; Wang et al., 2020; Chen et al., 2020a).",
"It is worth mentioning that the copy mechanism (Luong et al., 2015) is an important component for dealing with out-of-vocabulary (OOV) words (Lebret et al., 2016; Gehrmann et al., 2018; Chen et al., 2020a) when pretrained language models are not used.",
"Logical Table-to-Text Generation .",
"While usually fluent, existing methods often hallucinate phrases that contradict the facts in the table.",
"To benchmark models' ability to generate logically consistent sentences, recent work proposed a dataset collected from the open domain (Chen et al., 2020a), on which models that ignore logical consistency score low.",
"Follow-up work further proposed another dataset that involves logical forms as additional supervision information (Chen et al., 2020e), which includes common logic types paired with the underlying logical forms.",
"Causal Inference .",
"Machine learning models often suffer from the spurious statistical correlations brought by unmeasured or latent confounders (Keith et al., 2020).",
"To eliminate the confounding bias, one approach is applying the causal intervention based on Pearl's do-calculus ( Pearl, 2010).",
"However, it remains an open problem to choose proper confounders, and the language of text itself could be a confounder (Keith et al., 2020).",
"It is worth noting that high-quality observations of the mediators can also reduce the confounding bias, as the models will reduce the possibility of counting on the confounders (Chen et al., 2020d).",
"Before introducing our models, we briefly review the framework of the VAE (Kingma and Welling, 2014), a generative model which allows generating high-dimensional samples from a continuous latent space.",
"In the probability model framework, the probability of data x can be computed by: p(x) = ∫ p(x, z) dz = ∫ p(z) p(x | z) dz (1), which is approximated by maximizing the evidence lower bound (ELBO): log p_θ(x) ≥ E_{z ∼ q_φ(z|x)}[log p_θ(x | z)] − KL(q_φ(z|x) ‖ p(z)) (2), where p_θ(x | z) denotes the decoder with parameters θ, q_φ(z|x) is obtained by an encoder with parameters φ, and p(z) is a prior distribution, for example, a Gaussian distribution.",
"KL(·‖·) denotes the Kullback-Leibler (KL) divergence between two distributions.",
"When applied to seq2seq generation, where the input and the output are denoted by x and y respectively, the conditional variational auto-encoder (CVAE), often known as the variational encoder-decoder (VED), is used with the following approximation: log p_θ(y | x) ≥ E_{z ∼ q_φ(z|x,y)}[log p_θ(y | x, z)] − KL(q_φ(z|x,y) ‖ p(z|x)) (3). In the vanilla CVAE formulation, such as the ones adopted in (Kingma et al., 2014; Jain et al., 2017), the prior distribution p(z|x) is approximated by p(z), which is independent of x and fixed to a zero-mean unit-variance Gaussian distribution N(0, I).",
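When the posterior is a diagonal Gaussian and the prior is N(0, I), the KL terms in Equations 2-3 have the standard closed form 0.5 Σ (μ² + σ² − log σ² − 1), sketched here in plain Python (this is the textbook VAE identity, not code from the paper):

```python
import math

def gaussian_kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ): the regularizer used in
    the VAE/CVAE evidence lower bound, summed over latent dimensions."""
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, logvar))

# A posterior that matches the prior exactly has zero KL
assert gaussian_kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]) == 0.0
kl = gaussian_kl_to_standard_normal([1.0], [0.0])  # 0.5 * (1 + 1 - 0 - 1) = 0.5
```

During training this term pulls the approximate posterior toward the fixed prior, which is exactly the source of the model bias discussed next.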
"However, this formulation has been shown to induce a strong model bias (Tomczak and Welling, 2018) and empirically performs worse than non-variational models (Wang et al., 2017) in multi-modal situations.",
"From a human perspective, multiple sentences can properly describe a given table, varying with different concerns, logical types, or linguistic realizations.",
"Therefore, given the input data x and the output sentence y, we can assume that a latent variable z exists, leading to a conditional generation process p(y | x, z) where z contributes to the diversity.",
"It suggests a CVAE framework with Equation 3.",
"However, as discussed in Section 3, the vanilla CVAE will introduce a model bias (Tomczak and Welling, 2018).",
"In this subsection, we re-think the CVAE from the perspective of causal inference.",
"We assume that a directed acyclic graph (DAG) exists, which includes a mediator z_m and a confounder z_c, as shown in Figure 1(a).",
"The mediator is determined by x and has causal effects on y , while the confounder has causal effects on both x and y .",
"When only considering z_m, we can compute the probability distribution p(y | x) by: p(y | x) = Σ_{z_m} p(y | x, z_m) p_φ(z_m | x) = E_{z_m ∼ p_φ(z_m|x)} p(y | x, z_m) (4), where φ denotes the parameters of a mediator predictor.",
"An example of z_m is the selected entity (e.g., United Kingdom) from the table x that appears exactly in y.",
"The vanilla CVAE constrains z_m in the continuous space, and further approximates the prior distribution p(z_m | x) by p(z_m), which produces biased information.",
"However, this does not mean that removing the approximation between p(z_m | x) and p(z_m) is enough.",
"We observe that models often rely on spurious statistical cues for prediction, resulting in linguistically similar but inconsistent expressions in the generated sentences (e.g., using \"the United States\" instead of \"the United Kingdom\").",
"The model may minimize the training loss by relying on the surface-level correlations between the selected entity and a high-frequency entity.",
"In this case, the high-frequency entity belongs to the confounder z c .",
"In the inference stage, the model may infer contradicting facts due to a high posterior probability of q(z_c | x).",
"To eliminate the spurious correlations, we apply causal intervention by learning the objective p(y | do(x)) instead of p(y | x), which forces the input to be the observed data x and removes all the arrows pointing to x, as shown in Figure 1(b).",
"When only considering z_c, we can compute the intervened probability distribution by: p(y | do(x)) = Σ_{z_c} p(y | x, z_c) p(z_c) = E_{z_c ∼ p(z_c)} p(y | x, z_c) (5), where z_c is no longer determined by x, making p(z_c | do(x)) = p(z_c).",
"When applying variational inference to z_c, we have: p(y | do(x)) ≥ E_{z_c ∼ q_ψ(z_c|y)}[log p_θ(y | x, z_c)] − KL(q_ψ(z_c|y) ‖ p(z_c)) (6). It can be seen that the confounder z_c is more suitable than the mediator z_m to cooperate with variational inference, as cutting off the link z_c → x naturally turns p(z_c | do(x)) into p(z_c).",
"When jointly considering z_m and z_c, we have: p(y | do(x)) = Σ_{z_m} ∫_{z_c} p(y, z_m, z_c | do(x)) dz_c ≥ E_{z_m ∼ p_φ(z_m|x), z_c ∼ q_ψ(z_c|y)}[log p_θ(y | x, z_m, z_c)] − KL(q_ψ(z_c|y) ‖ p(z_c)) (7), according to the intervened causal graph in Figure 1(b).",
"The symbols φ, ψ, and θ denote the parameters of the three probability modeling networks, respectively.",
"It is worth noting that we do not apply variational inference to z_m because finding a proper prior distribution p(z_m | x) remains another big topic.",
"Instead, our framework is easy to implement.",
"However, there is no guarantee that z m and z c can represent the real mediators and confounders in Equation 7.",
"If we have no other observed variables, the confounder z c would mainly represent the covariate which is naturally independent of x and has causal effects on y .",
"Therefore, we further involve proxy variables m and c for z m and z c , respectively, where the full causal graph is shown in Figure 1.",
"Proxy variables are the proxies of hidden or unmeasured variables (Miao et al., 2018).",
"In practice, the mediators and the confounders are often too complex and can not be directly observed.",
"For example, we may not be able to directly measure one's socioeconomic status but we are probable to get a proxy by the zip code or job type (Louizos et al., 2017).",
"To make the latent variables z_m and z_c meaningful, we add two additional networks, and the learning objective is to maximize: E_{z_m ∼ p_φ(z_m|x), z_c ∼ q_ψ(z_c|y)}[log p_θ(y | x, z_m, z_c)] − KL(q_ψ(z_c|y) ‖ p(z_c)) + E_{z_m ∼ p_φ(z_m|x)}[log p_Φ(m | z_m)] + E_{z_c ∼ q_ψ(z_c|y)}[log p_Ψ(c | z_c)] (8), where Φ and Ψ denote the parameters of the two additional networks.",
"Back-Prediction from the Confounder .",
"As shown in Figure 1(a), the confounder z_c inferred from y also has a causal effect on x.",
"Otherwise, the confounder will collapse into the covariate.",
"The spurious correlations we have observed are that models often generate linguistically similar but logically inconsistent outputs.",
"For example, a model may generate \"the United States\" instead of \"the United Kingdom\" because the two entities are linguistically similar to each other.",
"Therefore, we take the proxy confounders c to be the entities in the given table that are not mentioned in the sentence.",
"We keep only those high-frequency entities in the training set (≥ 5 occurrences).",
"Let c = {c_{i,j}} ∈ R^{N_c × L_c}, where c_{i,j} denotes the j-th token of the i-th entity, and N_c and L_c denote the number of entities and the maximum entity length, respectively.",
"The log-probability log p_Ψ(c | z_c) is computed by: log p_Ψ(c | z_c) = Σ_{i,j} log p_Ψ(c_{i,j} | z_c, c_{i,<j}) (9), where c_{i,<j} denotes the tokens preceding the j-th token in the i-th entity.",
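Equation 9 is simply a sum of per-token log-probabilities over the proxy-confounder entities; a minimal sketch with made-up token probabilities (a real model would produce these conditioned on z_c and the preceding tokens):

```python
import math

def entity_log_prob(token_probs):
    """Sum of log-probabilities over all entity tokens, as in Eq. (9).
    token_probs[i][j] stands in for the model probability of the j-th
    token of the i-th not-mentioned entity."""
    return sum(math.log(p) for entity in token_probs for p in entity)

# Hypothetical per-token probabilities for two proxy-confounder entities
probs = [[0.5, 0.25], [0.1]]
ll = entity_log_prob(probs)  # log 0.5 + log 0.25 + log 0.1
```

Minimizing the cross-entropy described next is equivalent to maximizing this log-likelihood over the not-mentioned entities.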
"Then we minimize the cross-entropy between p_Ψ(c | z_c) and p(c).",
"Supervision for the Mediator .",
"In the logical table-to-text generation task, from the human perspective, the correct mediators may be the selected entities, the logical types, or the logical forms ( Chen et al., 2020e).",
"In this paper, we only consider the selected entities as it is relatively easy to extract while the logical types or forms are labor-intensive to annotate.",
"We represent the selected entities by m = {m_{i,j}} ∈ R^{N_m × L_m}, where m_{i,j} denotes the j-th token of the i-th entity, and N_m and L_m denote the number of entities and the maximum number of tokens per entity, respectively.",
"The log-probability log p_Φ(m | z_m) is computed by: log p_Φ(m | z_m) = Σ_{i,j} log p_Φ(m_{i,j} | z_m, m_{i,<j}) (10), where m_{i,<j} denotes the tokens preceding the j-th token in the i-th entity.",
"We now introduce the implementations of p_θ(y | x, z_m, z_c), p_φ(z_m | x), and q_ψ(z_c | y).",
"We assume that the seq2seq model consists of an encoder Enc(·) and a decoder Dec(·) for p_θ(y | x, z_m, z_c).",
"And a target-oriented encoder T-Enc(·) is used for q_ψ(z_c | y).",
"Firstly, we need to implement p_φ(z_m | x) and q_ψ(z_c | y).",
"Let H_x be the hidden states of x encoded via H_x = Enc(x), and E_y be the embeddings of y before being fed to the decoder Dec(·).",
"We use a fully-connected neural network (FCNN) to project H_x, followed by average pooling, to obtain z_m.",
"And we use the target-oriented encoder to encode E_y and obtain H_y = T-Enc(E_y).",
"We apply mean pooling to H_y and obtain h_y.",
"To model q_ψ(z_c | y), which is approximated by a Gaussian distribution, we use two FCNNs to process h_y and obtain the mean vector μ_y and the log variance log σ_y², which makes: q_ψ(z_c | y) = N(μ_y, σ_y²) (11). To implement p_θ(y | x, z_m, z_c), our model is combined with either a non-pretrained model, \"Field-Infusing+Trans\" (Chen et al., 2020a), or a pretrained model, \"GPT-TabGen\" (Chen et al., 2020a).",
"Specifically, \"Field-Infusing+Trans\" uses an infusing field embedding network to produce header-words-aware and cell-position-aware embeddings E_p, then concatenates E_p with the token embeddings to obtain the infused embeddings E = {e_i} ∈ R^{L_t × d}, where e_i denotes the embedding of the i-th token in the table x, and L_t and d denote the number of tokens and the embedding dimension, respectively.",
"Then the decoder is used to decode y token by token: y_t = Dec(H_x, y_{≤t}, z_m, z_c).",
"The latent variables z_m and z_c are concatenated into one latent variable and projected by an FCNN to get a vector z_{m,c}, which has the same dimension as H_x.",
"Then we add z_{m,c} to E_y at each decoding step.",
"When combined with \"GPT-TabGen\", the difference from \"Field-Infusing+Trans\" is that we use GPT-2 as the encoder and decoder, and use table linearization to indicate the cell position instead of the field-infusing method.",
"More details about the table linearization can be found in (Chen et al., 2020a).",
"And the vector z_{m,c} is fed to the last Transformer layer of GPT-2 instead of the first layer, which has less impact on the pretrained GPT-2.",
"By sampling multiple latent variables z_c ∼ p(z_c), our model can generate multiple candidate sentences Ỹ = (ỹ_1, ỹ_2, ..., ỹ_{N_c}) for the table x, where N_c is the number of generated sentences.",
"We propose to find out the best sentence by a trained selector.",
"The generator optimized with MLE may focus more on token-level matching than on sentence-level consistency, while the selector focuses on improving sentence-level scores.",
"Therefore, it can be considered as an alternative to reinforcement learning.",
"The selector scores each candidate sentence by s_i = S_χ(ỹ_i, x), where χ denotes the parameters of the selector network.",
"Note that we do not design a selector s_i = S_χ(ỹ_i, y) because the reference sentence y is not available in practice.",
"Recent work has provided several selectors including parsing-based and NLI-based models ( Chen et al., 2020c).",
"We could directly use these selectors, but we aim to develop a more general selector that jointly considers surface-level fidelity and logical fidelity.",
"We use a mix of BLEU-3 ( Papineni et al., 2002) and NLI-Acc (Chen et al., 2020a) scores to supervise the selector.",
"In the training stage of the selector, we can get the gold score of each generated candidate against the reference sentence y by s*_i = S*(ỹ_i, y).",
"Then, we use BERT to encode x and ỹ_i, followed by average pooling layers, to produce h_s and h_s^i.",
"Finally, we score the table-sentence pair represented by (h_s, h_s^i) as follows: h_f = h_s ⊕ h_s^i ⊕ |h_s − h_s^i| ⊕ (h_s ⊙ h_s^i); S_χ(ỹ_i, x) = σ(W_s h_f) (12), where ⊕ and ⊙ denote the concatenation and element-wise multiplication operations, respectively.",
"And W s denotes the parameters of the scoring network.",
"The score S_χ(ỹ_i, x) is between 0 and 1, and better sentences should be closer to 1.",
"The scores of the gold references are set to 1.",
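The feature construction and scoring of Equation 12 can be sketched as follows; the pooled vectors and weight values below are toy numbers, not actual BERT outputs:

```python
import math

def selector_score(h_s, h_i, w):
    """Score a (table, candidate) pair as in Eq. (12):
    h_f = h_s + h_i + |h_s - h_i| + (h_s * h_i), concatenated;
    score = sigmoid(w . h_f)."""
    h_f = (list(h_s) + list(h_i)
           + [abs(a - b) for a, b in zip(h_s, h_i)]
           + [a * b for a, b in zip(h_s, h_i)])
    z = sum(wi * fi for wi, fi in zip(w, h_f))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid keeps the score in (0, 1)

# Toy 2-d pooled encodings of the table and a candidate sentence
h_s, h_i = [0.2, -0.1], [0.4, 0.3]
w = [0.5] * 8  # hypothetical weights: one per feature dimension (2 dims * 4 blocks)
score = selector_score(h_s, h_i, w)
```

The absolute-difference and element-wise-product blocks give the scorer direct access to agreement and disagreement between the two encodings, a common design for matching heads.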
"Then we use the margin-based triplet loss for the generated sentences in two ways: comparing with the gold sentences, and comparing between arbitrary pairs of generated sentences.",
"Given N c generated candidate sentences, we rank the generated sentences according to the mix of BLEU-3 and NLI-Acc scores.",
"The ranked sentences are denoted by Ỹ_r = (ỹ_r^1, ỹ_r^2, ..., ỹ_r^{N_c}), where ỹ_r^1 has the highest score.",
"Then the loss is computed as follows: L_χ = max(0, S_χ(ỹ_r^i, x) − S(y, x) + γ_1) + max(0, S_χ(ỹ_r^j, x) − S_χ(ỹ_r^i, x) + γ_2) (13), where γ_1 and γ_2 are hyperparameters representing margin values, and i and j represent the ranked indexes.",
"At the inference stage, we can select the best sentence with the highest score.",
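A minimal sketch of the triplet loss in Equation 13; the margin values γ1 and γ2 below are hypothetical, and the gold score is 1 as stated above:

```python
def selector_triplet_loss(s_gold, s_i, s_j, gamma1=0.1, gamma2=0.05):
    """Margin-based triplet loss of Eq. (13): the gold sentence should
    outscore candidate i by margin gamma1, and the higher-ranked candidate
    i should outscore the lower-ranked candidate j by margin gamma2.
    Margin values are hypothetical, not the paper's settings."""
    return max(0.0, s_i - s_gold + gamma1) + max(0.0, s_j - s_i + gamma2)

# Both margins satisfied -> zero loss
loss = selector_triplet_loss(s_gold=1.0, s_i=0.7, s_j=0.6)
# first term: max(0, 0.7 - 1.0 + 0.1) = 0; second: max(0, 0.6 - 0.7 + 0.05) = 0
```

When candidate scores crowd too close to the gold score or to each other, the hinge terms become positive and push the selector to widen the gaps.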
"We conduct experiments on two datasets: LogicNLG (Chen et al., 2020a) and Logic2Text (Chen et al., 2020e).",
"LogicNLG is constructed based on the positive statements of the Tabfact dataset ( Chen et al., 2020c), which contains rich logical inferences in the annotated statements.",
"Logic2Text is a smaller dataset and provides the annotation of logical forms.",
"Since the annotations of logical forms are labor-intensive, we only use the table-sentence pairs, following the task formulation of LogicNLG.",
"The statistics of the two datasets are shown in Table 1.",
"The models are evaluated on the surface-level consistency and the logical fidelity .",
"In terms of the surface-level consistency, we evaluate models on the sentence-level BLEU scores (Papineni et al., 2002) based on 1-3 grams matching.",
"In terms of logical fidelity, we follow the recent work and apply three metrics including SP-Acc and NLI-Acc based on semantic parsing and pretrained NLI model, respectively (Chen et al., 2020a).",
"The metrics are computed with the officially released codes 2 .",
"Compared Models.",
"We compare our models with both non-pretrained and pretrained models.",
"The non-pretrained models include \"Field-Gating\" (Liu et al., 2018) and \"Field-Infusing\" (Lebret et al., 2016) with LSTM decoder or Transformer 2 https://github.com/wenhuchen/LogicNLG decoder, which are strong baselines among non-pretrained models.",
"The pretrained models include \"BERT-TabGen\" and \"GPT-TabGen\" with the base size (Chen et al., 2020a).",
"Moreover, for the LogicNLG dataset, we compare with a two-phrase approach denoted by \"GPT-Coarse-to-Fine\", which first generates a template and then generates the final sentence conditioning on the template (Chen et al., 2020a).",
"For the variational models, we compare with the vanilla CVAE (Kingma et al., 2014) that approximates the prior distribution p ( z j x ) to p ( z ) .",
"Hyperparameters.",
"For the non-pretrained models, we set the dimension of LSTM or Transformer to 256.",
"Our model is based on \"Field-Infusing+Trans\" which includes 3-layer Transformers in the encoder and decoder respectively.",
"The posterior network q ( z c j y ) contains a two-layer Transformer.",
"For the pretrained models, we use the base version of BERT and GPT-2 which have an embedding size of 768.",
"The KL loss is minimized with the annealing trick where the KL weight is set to 0 for 2 epochs and grows to 1 : 0 in another 5 epochs.",
"The learning rate is initialized to set to 0 : 0001 and 0 : 000002 for non-pretrained and pretrained models, respectively.",
"Each model is trained for 15 epochs.",
"A special setting for our model is that we generate 10 candidate sentences for each table, and report the average performance and the best performance based on the selector, respectively.",
"We set the hyperparameters (cid:13) 1 = 0 : 2 and (cid:13) 2 = 0 : 2 for the selector.",
"Table 2 and 3 present the performance of our model as well the compared models on the surface-level consistency and the logical fidelity.",
"As shown, without the selector, our model DCVED already outperforms the baseline models \"Field-Infusing\" and \"GPT-TableGen\" on both LogicNLG and Logic2Text datasets.",
"Specifically, when compared with \"Field-Infusing\", our model increases the BLEU-3, SP-Acc, and NLI-Acc scores by 1 : 4 , 3 : 7 , and 3 : 9 points, respectively on the LogicNLG dataset, and 0 : 2 , 2 : 4 , and 2 : 8 points on the Logic2Text dataset.",
"When cooperating with GPT-2, our model outperforms \"GPT-TableGen\" by 1 : 6 , 2 : 2 , and 5 : 2 points of BLEU-3, SP-Acc, and NLI-Acc scores, respectively on the LogicNLG dataset, and 0 : 2 , 1 : 3 , and 5 : 4 points on the Logic2Text dataset.",
"Moreover, our model 5538 Model Type Surface-Level Fidelity Logical Fidelity BLEU-1 BLEU-2 BLEU-3 SP-Acc NLI-Acc Non-Pretrained Models Field-Gating + LSTM -42.3 19.5 6.9 38.0 56.8 Field-Gating + Trans -44.1 20.9 8.3 38.5 57.3 Field-Infusing + LSTM -43.1 19.7 7.1 38.6 57.1 Field-Infusing + Trans -43.7 20.9 8.4 38.9 57.3 CVAE + Field-Infusing + Trans -46.4 23.1 9.4 39.8 59.0 DCVED + Field-Infusing + Trans -46.2 22.9 9.8 42.6 61.2 DCVED + Field-Infusing + Trans Trained Selector 47.4 23.4 10.6 42.1 62.5 DCVED + Field-Infusing + Trans Oracle NLI-Acc z 45.0 22.2 9.0 41.7 86.8 DCVED + Field-Infusing + Trans Oracle BLEU-3 z 55.2 32.9 15.9 41.8 60.3 Pretrained Models BERT-TabGen -47.8 26.3 11.9 42.2 68.1 GPT-TabGen -48.8 27.1 12.6 42.1 68.7 GPT-TabGen Adv-Reg 45.8 23.1 9.6 40.9 68.5 GPT-TabGen RL 45.1 23.6 9.1 43.1 67.7 GPT-Coarse-to-Fine -46.6 26.8 13.3 42.7 72.2 CVAE + GPT-TabGen -49.0 27.9 13.5 42.6 71.8 DCVED + GPT-TabGen -49.3 28.3 14.2 44.3 73.9 DCVED + GPT-TabGen Trained Selector 49.5 28.6 15.3 43.9 76.9 DCVED + GPT-TabGen Oracle NLI-Acc z 49.7 28.5 14.5 46.1 92.2 DCVED + GPT-TabGen Oracle BLEU-3 z 59.7 38.0 22.1 45.0 74.2 Table 2: The experimental results of different models on the test split of LogicNLG dataset, where we split the table into non-pretrained and pretrained models.",
"also outperforms the recent SOTA model \"GPT-Coarse-to-Fine\" which increases the NLI-Acc score from 72 : 2 to 73 : 9 points on the Logic2Text dataset.",
"When combining with the trained selector, our model further increases the NLI-Acc scores to 76 : 9 and 73 : 8 points on LogicNLG and Logic2Text datasets, respectively.",
"We also show the upper bound of our model on BLEU and NLI-Acc scores.",
"Assume that two optimum selectors have access to the ground-truth sentences, and 5539 Dataset Model BLEU-3 SP-Acc NLI-Acc LogicNLG CVAE 9.4 39.8 59.0 DCVED ( z c ) 9.0 40.8 60.3 DCVED ( z c , c ) 9.3 40.1 60.2 DCVED ( z c , z m , m ) 10.2 41.8 60.6 DCVED (Full) 9.8 42.6 61.2 Logic2Text CVAE 9.3 38.1 41.6 DCVED ( z c ) 9.7 40.2 42.3 DCVED ( z c , c ) 9.6 39.4 43.5 DCVED ( z c , z m , m ) 11.2 40.8 44.8 DCVED (Full) 10.7 40.9 45.2 Table 4: The performances of ablated models as well as the full model on the two datasets.",
"would select the best sentence according to the BLEU-3 and NLI-Acc scores, respectively.",
"As shown, a higher BLEU-3 score does not lead to a higher NLI-Acc score.",
"Similarly, a higher NLI-Acc score does not yield a higher BLEU-3 score.",
"The findings indicate that selecting candidates only by BLEU-3 or only by NLI-Acc is not enough.",
"Instead, our trained selector comprehensively considers the BLEU-3 and NLI-Acc scores.",
"To analyze which mechanisms are driving the improvements, we present an ablation study in Table 4.",
"We show different ablated models with different combinations of z c , z m , c and m .",
"All these models are based on \"Field-Infusing\".",
"Moreover, the vanilla CVAE is also compared, which can be considered as a baseline making both z m and z c independent from x .",
"As shown, both the mediators and the confounders are influential.",
"The full model achieve the best SP-Acc and NLI-Acc scores with slightly lower BLEU-3 scores than the ablated model, DCVED ( z c , z m , m ).",
"Eliminating c from the full model leads to a drop of NLI-Acc by 0 : 6 and 0 : 4 points on LogicNLG and Logic2Text, respectively.",
"Further eliminating z m and m leads to a drop of NLI-Acc by 0 : 9 and 2 : 9 points on LogicNLG and Logic2Text, respectively.",
"An interesting finding is that DCVED ( z c , c ) performs worse than DCVED ( z c ) on SP-Acc.",
"The reason may be that predicting c from z c without considering the mediators z m may also lead to a bias, similar to CVAE.",
"However, the ablated models all perform better than CVAE on SP-Acc and NLI-Acc.",
"Following recent work (Chen et al., 2020a), we also perform human evaluation on the fluency and",
"logical fidelity.",
"We randomly select 200 tables in the LogicNLG dataset, and generate one sentence per table for each model.",
"Then we present the generated sentences to four raters without telling which model generates them.",
"The raters are all post-graduate students majoring in computer science.",
"We ask the raters to finish two binary-decision tasks: 1) whether a generated sentence is fluent; and 2) whether the fact of a generated sentence can be supported by the given table.",
"We report the averaged results in Table 5, from which we can see that our model \"DCVED + GPT-TabGen\" mainly increases the logical fidelity over the baseline model \"GPT-TabGen\" from 19.1% to 25.8%.",
"When cooperated with the trained selector and the oracle NLI selector, our model further increase the logical fidelity to 30.8% and 37.1%, respectively.",
"It is worth noting that the NLI selector can be represented by the scorer PNLI ( e y ; x ) , which does not require the ground-truth sentence y to be available ( Chen et al., 2020a).",
"It means that the setting of using the oracle NLI selector is acceptable.",
"To directly see the effect of our model, we present a case study in Figure 2.",
"Several GPT-2 based models generate sentences describing two tables in the LogicNLG test set.",
"The underlined red words represent the facts contradicting the table.",
"As shown, for the first table, CVAE generates the sentence \" The album was released in the United State 2 time \", where the correct entity should be \" the United Kingdom \" according to the table.",
"Instead, our model DCVED acknowledges that \" The album was released in the United Kingdom 2 time \".",
"Moreover, compared with those deterministic models like GPT-TableGen and GPT-Coarse-to-Fine, our model can generate sentences with different logical types.",
"For the second table, we can see that many contradicting facts exist in recent models.",
"For example, GPT-TableGen generates an incomplete sentence, which uses superla-5540 country date Europe 17 october 2008 Australia 18 october 2008 United Kingdom 20 october 2008 United Kingdom 1 december 2008 United States 20 october 2008 Japan 22 october 2008 Germany 5 december 2008 Global ( itunes ) 19 november 2012 Case 1: Black Ice (Album) GPT-TableGen: The album was released in the United State .",
"GPT-Coarse-to-Fine: Black Ice was released in Germany and Japan .",
"CVAE: The album was released in the United State 2 time.",
"DCVED: The album was released in the United Kingdom 2 time .",
"DCVED: The album was released in the United State before the release of the album in Japan .",
"CVAE: The Green Party Of Canada had the highest number Of Nomination in the 2000 Election.",
"DCVED: The Green Party Of Canada had the highest number Of Nomination in 2004 .",
"DCVED: The Green Party Of Canada had more Candidate Nominated in 2004 than in 2000 .",
"tive logic but not mentions a specific year.",
"Instead, our model produces two logically consistent sentences with superlative and comparative logic.",
"Although our model can improve the logical fidelity to a certain degree, all the models still get low scores in terms of the logical fidelity in human evaluation, which reflects the challenge of the task.",
"Especially, we find that models do not perform well on certain types of tables: 1) containing and comparing between large numbers, e.g., 18,013 and 29,001 in a table; and 2) containing mixed logics so that models require multi-hop reasoning, e.g., models generating \"there were 3 na-tions that won 2 gold medals\" while the correct nation number is 4.",
"To deal with these problems, we believe that two directions of work may be workable: 1) enhancing the mediators.",
"For example, the logical forms (Chen et al., 2020e) can be utilized as the mediator.",
"But as mentioned in Section 4.2, it is label-intensive to annotate the logical forms; 2) large-scale knowledge grounded pre-training, which may be a more promising way.",
"This type of work utilized the existing knowledge graphs or crawled data from Wikipedia (Chen et al., 2020b) to help models better encode/represent non-linguistic inputs, such as the numbers, the time, or the scores in the tables.",
"In this paper, we propose a de-confounded variational encoder-decoder for the logical table-to-text generation.",
"Firstly, we assume two latent variables existed in the continuous space, representing the mediator and the confounder respectively.",
"And we apply the causal intervention method to reduce the spurious correlations.",
"Secondly, to make the latent variables meaningful, we use the exactly selected entities to supervise the mediator and the not selected but linguistically similar entities to supervise the confounder.",
"Finally, since our model can generate multiple candidates for a table, we train a selector guided by both surface-level and logical fidelity to select the best sentence.",
"The experiments show that our model yields competitive results with recent SOTA models.",
"The authors would like to thank the anonymous reviewers for their constructive comments.",
"This work was supported by the National Key Research and Development Program of China under Grant 2018YFC0830400, and Shanghai Municipal Science and Technology Major Project under Grant 2021SHZDZX0102."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"method",
"result",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"method",
"result",
"other",
"other"
] |
[
"The success of the large neural language models on many NLP tasks is exciting.",
"However, we find that these successes sometimes lead to hype in which these models are being described as understanding language or capturing meaning.",
"In this position paper, we argue that a system trained only on form has a priori no way to learn meaning.",
"In keeping with the ACL 2020 theme of Taking Stock of Where We've Been and Where We're Going, we argue that a clear understanding of the distinction between form and meaning will help guide the field towards better science around natural language understanding.",
"The current state of affairs in NLP is that the large neural language models (LMs), such as BERT (De-vlin et al., 2019) or GPT-2 (Radford et al., 2019), are making great progress on a wide range of tasks, including those that are ostensibly meaning-sensitive.",
"This has led to claims, in both academic and popular publications, that such models under-stand or comprehend natural language or learn its meaning.",
"From our perspective, these are overclaims caused by a misunderstanding of the relationship between linguistic form and meaning.",
"We argue that the language modeling task, because it only uses form as training data, cannot in principle lead to learning of meaning .",
"We take the term language model to refer to any system trained only on the task of string prediction, whether it operates over characters, words or sentences, and sequentially or not.",
"We take (linguistic) meaning to be the relation between a linguistic form and communicative intent.",
"Our aim is to advocate for an alignment of claims and methodology: Human-analogous natural language understanding (NLU) is a grand challenge of artificial intelligence, which involves mastery of the structure and use of language and the ability to ground it in the world.",
"While large neural LMs may well end up being important components of an eventual full-scale solution to human-analogous NLU, they are not nearly-there solutions to this grand challenge.",
"We argue in this paper that genuine progress in our field climbing the right hill, not just the hill on whose slope we currently sit depends on maintaining clarity around big picture notions such as meaning and understanding in task design and reporting of experimental results.",
"After briefly reviewing the ways in which large LMs are spoken about and summarizing the recent flowering of BERTology papers ( 2), we offer a working definition for meaning ( 3) and a series of thought experiments illustrating the impossibility of learning meaning when it is not in the training signal ( 4,5).",
"We then consider the human language acquisition literature for insight into what information humans use to bootstrap language learning ( 6) and the distributional semantics literature to discuss what is required to ground distributional models ( 7).",
"8 presents reflections on how we look at progress and direct research effort in our field, and in 9, we address possible counterarguments to our main thesis.",
"Publications talking about the application of large LMs to meaning-sensitive tasks tend to describe the models with terminology that, if interpreted at face value, is misleading.",
"Here is a selection from academically-oriented pieces (emphasis added): (1) In order to train a model that understands sentence relationships, we pre-train for a binarized next sentence prediction task.",
"(Devlin et al., 2019) (2) Using BERT, a pretraining language model, has been successful for single-turn machine comprehension ... (Ohsugi et al., 2019) (3) The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems.",
"If the highlighted terms are meant to describe human-analogous understanding, comprehension, or recall of factual knowledge, then these are gross overclaims.",
"If, instead, they are intended as technical terms, they should be explicitly defined.",
"One important consequence of imprudent use of terminology in our academic discourse is that it feeds AI hype in the popular press.",
"As NLP gains public exposure and is more widely used in applied contexts, it is increasingly important that the actual capabilities of our systems be accurately represented.",
"In some cases, NLP experts speaking with the media are being appropriately careful, as in these two quotes in the New York Times : 1 (4) These systems are still a really long way from truly understanding running prose.",
"(Gary Marcus) (5) Though BERT passed the lab's common-sense test, machines are still a long way from an artificial version of a human's common sense.",
"(Oren Etzioni)",
"However, there are plenty of instances where the popular press gets it wrong, such as (6) from the B2C website, 2 apparently based on the Google Blog post about BERT and search, which includes numerous statements like (7).",
"3 (6) BERT is a system by which Google's algorithm uses pattern recognition to better understand how human beings communicate so that it can return more relevant results for users.",
"(7) Here are some of the examples that showed up our evaluation process that demonstrate BERTs ability to understand the intent behind your search.",
"In sum, it is not clear from our academic literature whether all authors are clear on the distinction between form and meaning, but it is clear that the way we speak about what neural LMs are doing is misleading to the public.",
"Part of the reason for this tendency to use imprecise language may well be that we do not yet fully understand what exactly it is about language that the large LMs come to implicitly represent.",
"Their success, however, has sparked a subfield (BERTol-ogy') that aims to answer this question.",
"The methodology of probing tasks (e.g. Adi et al., 2017; Ettinger et al., 2018) has been used to show that 1 https://www.nytimes.com/2018/11/18/technology/artific ial-intelligence-language.html, accessed 2019/12/04 2 https://www.business2community.com/seo/what-t o-do-about-bert-googles-recent-local-algorithm-updat e-02259261, accessed 2019/12/04 3 https://www.blog.google/products/search/search-langu age-understanding-bert/, accessed 2019/12/04 large LMs learn at least some information about phenomena such as English subject-verb agreement (Goldberg, 2019; Jawahar et al., 2019), constituent types, dependency labels, NER, and (core) semantic role types (again, all in English) (Tenney et al., 2019).",
"4 Hewitt and Manning (2019) find information analogous to unlabeled dependency structures in the word vectors provided by ELMo and BERT (trained on English).",
"And of course it is well established that vector-space representations of words pick up word classes, both syntactic (POS, e.g. Lin et al., 2015) and semantic (lexical similarity, e.g. Rubenstein and Goodenough, 1965; Mikolov et al., 2013).",
"Others have looked more closely at the success of the large LMs on apparently meaning sensitive tasks and found that in fact, far from doing the rea-soning ostensibly required to complete the tasks, they were instead simply more effective at leveraging artifacts in the data than previous approaches.",
"Niven and Kao (2019) find that BERT's unreasonably good performance on the English Argument Reasoning Comprehension Task (Habernal et al., 2018) falls back to chance if the dataset is modified by adding adversarial examples that just negate one piece of the original, thus mirroring the distribution of lexical cues for each label.",
"Similarly, McCoy et al. (2019) find that BERT's performance on the English Multi-genre Natural Language Inference dataset (Williams et al., 2018) is predicated on its ability to leverage syntactic heuristics involving overlap (of full constituents, subsequences, or simply bags of words).",
"In a dataset carefully designed to frustrate such heuristics, BERT's performance falls to significantly below chance.",
"In this brief overview of BERTology papers we have highlighted both the extent to which there is evidence that large LMs can learn aspects of linguistic formal structure (e.g. agreement, dependency structure), and how their apparent ability to reason is sometimes a mirage built on leveraging artifacts in the training data (i.e. form, not mean-ing).",
"Our contribution is an argument on theoretical grounds that a system exposed only to form in its training cannot in principle learn meaning.",
"We start by defining two key terms: We take form to be any observable realization of language: marks 4 But see Warstadt et",
"al.'s (2019) cautionary note about how the methodology used for probing can influence the results.",
"on a page, pixels or bytes in a digital representation of text, or movements of the articulators.",
"5 We take meaning to be the relation between the form and something external to language, in a sense that we will make precise below.",
"When humans use language, we do so for a purpose: We do not talk for the joy of moving our articulators, but in order to achieve some communicative intent .",
"There are many types of communicative intents: they may be to convey some information to the other person; or to ask them to do something; or simply to socialize.",
"We take meaning to be the relation M E I which contains pairs ( e, i ) of natural language expressions e and the communicative intents i they can be used to evoke.",
"Given this definition of meaning, we can now use understand to refer to the process of retrieving i given e .",
"Communicative intents are about something that is outside of language .",
"When we say Open the window! or When was Malala Yousafzai born?",
", the communicative intent is grounded in the real world the speaker and listener inhabit together.",
"Communicative intents can also be about abstract worlds, e.g. bank accounts, computer file systems, or a purely hypothetical world in the speaker's mind.",
"Linguists distinguish communicative intent from conventional (or standing ) meaning (Quine, 1960; Grice, 1968).",
"The conventional meaning of an expression (word, phrase, sentence) is what is constant across all of its possible contexts of use.",
"Conventional meaning is an abstract object that represents the communicative potential of a form, given the linguistic system it is drawn from.",
"Each linguistic system (say, English) provides a relation C E S , which contains pairs ( e, s ) of expressions e and their conventional meanings s .",
"6 The field of linguistic semantics provides many competing theories of what conventional meanings s look like.",
"For our purposes, we don't need to select among these theories; all we assume is that conventional meanings must have interpretations, such as a means of testing them for truth against a model of the world.",
"Thus, like the meaning relation M , C connects language to objects outside of language.",
"5 In spoken languages, the primary articulators are the components of the vocal tract.",
"In signed languages, they are principally the hands and face.",
"6 We abstract away here from the facts that linguistic systems C change over time and are only incompletely shared among different speakers.",
"They are stable enough to function as rich signals to communicative intent.",
"Returning to the meaning relation M from above, it is best understood as mediated by the relation C of a linguistic system shared between two interlocutors.",
"The speaker has a certain communicative intent i , and chooses an expression e with a standing meaning s which is fit to express i in the current communicative situation.",
"Upon hearing e , the listener then reconstructs s and uses their own knowledge of the communicative situation and their hypotheses about the speaker's state of mind and intention in an attempt to deduce i .",
"This active participation of the listener is crucial to human communication (Reddy, 1979; Clark, 1996).",
"For example, to make sense of (8) and (9) (from Clark, 1996, p.144), the listener has to calculate that Napoleon refers to a specific pose (hand inside coat flap) or that China trip refers to a person who has recently traveled to China.",
"We humans are also very willing, as we will see in 4 below, to attribute communicative intent to a linguistic signal of a language we speak, even if the originator of the signal is not an entity that could have communicative intent.",
"To summarize, as we strive to understand how NLU tasks and system performance on those tasks relates to the bigger picture goals of building human-analogous natural language understanding systems, it is useful to distinguish cleanly between form, conventional meaning, and communicative intent.",
"Furthermore, we should be careful not to confuse communicative intent with ground truth about the world, as speakers can of course be mistaken, be intentionally dissembling, etc.",
"We argue that a model of natural language that is trained purely on form will not learn meaning: if the training data is only form, there is not sufficient signal to learn the relation M between that form and the non-linguistic intent of human language users, nor C between form and the standing meaning the linguistic system assigns to each form.",
"Meaning and understanding have long been seen as key to intelligence.",
"Turing (1950) argued that a machine can be said to think if a human judge cannot distinguish it from a human interlocutor after having an arbitrary written conversation with each.",
"However, humans are quick to attribute meaning and even intelligence to artificial agents, even when they know them to be artificial, as evidenced by the way people formed attachments to ELIZA (Weizenbaum, 1966; Block, 1981).",
"This means we must be extra careful in devising evaluations for machine understanding, as Searle (1980) elaborates with his Chinese Room experiment: he develops the metaphor of a system in which a person who does not speak Chinese answers Chinese questions by consulting a library of Chinese books according to predefined rules.",
"From the outside, the system seems like it understands Chinese, although in reality no actual understanding happens anywhere inside the system.",
"Searle's thought experiment begins from the premise that it is possible to manipulate forms well enough to be indistinguishable from a system that understands the meaning of the forms, reasons about it, and responds appropriately.",
"We observe that much recent work in NLP claims to be building systems where not only the runtime system but in fact also the process for building it only has access to form.",
"But language is used for communication about the speakers' actual (physical, social, and mental) world, and so the reasoning behind producing meaningful responses must connect the meanings of perceived inputs to information about that world.",
"This in turn means that for a human or a machine to learn a language, they must solve what Harnad (1990) calls the symbol grounding problem .",
"Harnad encapsulates this by pointing to the impossibility for a non-speaker of Chinese to learn the meanings of Chinese words from Chinese dictionary definitions alone.",
"Our purpose here is to look more deeply into why meaning can't be learned from linguistic form alone, even in the context of modern hardware and techniques for scaling connectionist models to the point where they can take in vast amounts of data.",
"We argue that, independently of whether passing the Turing test would mean a system is intelligent, a system that is trained only on form would fail a sufficiently sensitive test, because it lacks the ability to connect its utterances to the world.",
"In order to illustrate the challenges in attempting to learn meaning from form alone, we propose a concrete scenario.",
"Say that A and B, both fluent speakers of English, are independently stranded on two uninhabited islands.",
"They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable.",
"A and B start happily typing messages to each other.",
"Meanwhile, O, a hyper-intelligent deep-sea octopus who is unable to visit or observe the two islands, discovers a way to tap into the underwater cable and listen in on A and B's conversations.",
"O knows nothing about English initially, but is very good at detecting statistical patterns.",
"Over time, O learns to predict with great accuracy how B will respond to each of A's utterances.",
"O also observes that certain words tend to occur in similar contexts, and perhaps learns to generalize across lexical patterns by hypothesizing that they can be used somewhat interchangeably.",
"Nonetheless, O has never observed these objects, and thus would not be able to pick out the referent of a word when presented with a set of (physical) alternatives.",
"At some point, O starts feeling lonely.",
"He cuts the underwater cable and inserts himself into the conversation, by pretending to be B and replying to A's messages.",
"Can O successfully pose as B without making A suspicious?",
"This constitutes a weak form of the Turing test (weak because A has no reason to suspect she is talking to a non-human); the interesting question is whether O fails it because he has not learned the meaning relation, having seen only the form of A and B's utterances.",
"The extent to which O can fool A depends on the task that is, on what A is trying to talk about.",
"A and B have spent a lot of time exchanging trivial notes about their daily lives to make the long island evenings more enjoyable.",
"It seems possible that O would be able to produce new sentences of the kind B used to produce; essentially acting as a chatbot.",
"This is because the utterances in such conversations have a primarily social function, and do not need to be grounded in the particulars of the interlocutors' actual physical situation nor anything else specific about the real world.",
"It is sufficient to produce text that is internally coherent.",
"Now say that A has invented a new device, say a coconut catapult.",
"She excitedly sends detailed instructions on building a coconut catapult to B, and asks about B's experiences and suggestions for improvements.",
"Even if O had a way of constructing the catapult underwater, he does not know what words such as rope and coconut refer to, and thus can't physically reproduce the experiment.",
"He can only resort to earlier observations about how B responded to similarly worded utterances.",
"Perhaps O can recognize utterances about mangos and nails as similarly worded because those words appeared in similar contexts as coconut and rope .",
"So O decides to simply say Cool idea, great job!, because B said that a lot when A talked about ropes and nails.",
"It is absolutely conceivable that A accepts this reply as meaningful, but only because A does all the work in attributing meaning to O's response.",
"It is not because O understood the meaning of A's instructions or even his own reply.",
"Finally, A faces an emergency.",
"She is suddenly pursued by an angry bear.",
"She grabs a couple of sticks and frantically asks B to come up with a way to construct a weapon to defend herself.",
"Of course, O has no idea what A means.",
"Solving a task like this requires the ability to map accurately between words and real-world entities (as well as reasoning and creative thinking).",
"It is at this point that O would fail the Turing test, if A hadn't been eaten by the bear before noticing the deception.",
"Having only form available as training data, O did not learn meaning.",
"The language exchanged by A and B is a projection of their communicative intents through the meaning relation into linguistic forms.",
"Without access to a means of hypothesizing and testing the underlying communicative intents, reconstructing them from the forms alone is hopeless, and O's language use will eventually diverge from the language use of an agent who can ground their language in coherent communicative intents.",
"The thought experiment also illustrates our point from §3 about listeners' active role in communication.",
"When O sent signals to A pretending to be B, he exploited statistical regularities in the form, i.e. the distribution of linguistic forms he observed.",
"Whatever O learned is a reflection of A and B's communicative intents and the meaning relation.",
"But reproducing this distribution is not sufficient for meaningful communication.",
"O only fooled A into believing he was B because A was such an active listener: because agents who produce English sentences usually have communicative intents, she assumes that O does too, and thus she builds the conventional meaning English associates with O's utterances.",
"[Footnote 7: To see what a large LM might reply in this situation, we prompted the GPT-2 demo with Help! I'm being chased by a bear! All I have is these sticks. What should I do?, and GPT-2 supplied You're not going to get away with this! (https://gpt2.apps.allenai.org/, accessed 2019/12/04). Following Radford et al.'s (2019) approach of giving explicit cues to encode the task, we also constructed a more elaborate prompt. The results, given in Appendix A, are highly entertaining but no more helpful to the hapless A.]",
"Because she assumes that O is B, she uses that conventional meaning together with her other guesses about B's state of mind and goals to attribute communicative intent.",
"It is not that O's utterances make sense, but rather, that A can make sense of them.",
"The story of the octopus considers the problem of learning not only the full communicative system, including the relations M and C , but also the reasoning required to come up with answers that are both coherent and also helpful in the real world.",
"Here, we provide two more constrained thought experiments, to focus more narrowly on the problem of learning the meaning relation, for both natural languages and programming languages.",
"Because programming languages are designed to be unambiguous and relatively insensitive to execution context, the distinction between standing and speaker meaning is less important than for natural languages.",
"A Java program e , when compiled and executed on the Java Virtual Machine, can be interpreted as a function i which maps program inputs to program outputs.",
"We take the meaning relation J ⊆ E × I of Java to contain all such pairs (e, i).",
"Java: Imagine that we were to train an LM on all of the well-formed Java code published on Github.",
"The input is only the code.",
"It is not paired with bytecode, nor a compiler, nor sample inputs and outputs for any specific program.",
"We can use any type of LM we like and train it for as long as we like.",
"We then ask the model to execute a sample program, and expect correct program output.",
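The role of execution in the meaning relation can be sketched concretely. The following is our own illustration in a toy expression language (not real Java; `interpret` and the example programs are invented): the meaning i of a program e is the input-to-output function obtained only by executing e, which is exactly what a form-only learner never observes.

```python
# Sketch of the meaning relation J ⊆ E × I for a toy expression language:
# the "meaning" i of a program e is the input->output function you get
# only by *executing* e. This is an illustration, not the paper's setup.

def interpret(program: str):
    """Map a program's form e to its meaning i: a function over inputs."""
    # Each program is an arithmetic expression over a variable `x`,
    # e.g. "x * 2 + 1"; eval plays the role of the JVM here (toy only).
    return lambda x: eval(program, {"x": x})

e1 = "x + x"   # two distinct forms ...
e2 = "2 * x"
i1, i2 = interpret(e1), interpret(e2)

# ... with the same meaning: identical input/output behaviour.
print(all(i1(x) == i2(x) for x in range(100)))  # True

# A learner that only ever sees the strings e1 and e2 can model their
# character statistics, but the pairs (e, i) are simply not in its data.
```

Two different strings with identical behaviour, and identical strings with context-dependent behaviour, are both invisible to a system that never sees inputs and outputs.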
"English: As a second example, imagine training an LM (again, of any type) on English text, again with no associated independent indications of speaker intent.",
"The system is also given access to a very large collection of unlabeled photos, but without any connection between the text and the photos.",
"For the text data, the training task is purely one of predicting form.",
"For the image data, the training task could be anything, so long as it only involves the images.",
"At test time, we present the model with inputs consisting of an utterance and a photograph, like How many dogs in the picture are jumping?",
"or Kim saw this picture and said What a cute dog!",
"What is cute?",
"and the photos in Figure 1, where the appropriate answers are a number or a region of the photo, respectively.",
"[Figure 1: Photo stimuli 1 (L) and 2 (R)]",
"Reflections: In both cases, the tests are ridiculous.",
"It seems patently unfair to ask the model to perform them, given what it was trained on.",
"But that is precisely the point we are trying to make: a system that has learned the meaning (semantics) of a programming language knows how to execute code in that language.",
"And a system that has learned the meaning of a human language can do things like answer questions posed in the language about things in the world (or in this case, in pictures).",
"In other words, what's interesting here is not that the tasks are impossible, but rather what makes them impossible: what's missing from the training data.",
"The form of Java programs, to a system that has not observed the inputs and outputs of these programs, does not include information on how to execute them.",
"Similarly, the form of English sentences, to a system that has not had a chance to acquire the meaning relation C of English, and in the absence of any signal of communicative intent, does not include any information about what language-external entities the speaker might be referring to.",
"Accordingly, a system trained only on the form of Java or English has no way to learn their respective meaning relations.",
"One common reason for believing LMs might be learning meaning is the claim that human children can acquire language just by listening to it.",
"This is not supported by scholarly work on language acquisition: rather, we find that human language learning is not only grounded in the physical world around us, but also in interaction with other people in that world.",
"Kids won't pick up a language from passive exposure such as TV or radio: Snow et al. (1976) note in passing that Dutch-speaking kids who watch German TV shows by choice nonetheless don't learn German.",
"Kuhl (2007) shows experimentally that English-learning infants can learn Mandarin phonemic distinctions from brief interactions with a Mandarin-speaking experimenter but not from exposure to Mandarin TV or radio.",
"Baldwin (1995) and others argue that what is critical for language learning is not just interaction but actually joint attention, i.e. situations where the child and a caregiver are both attending to the same thing and both aware of this fact.",
"This theoretical perspective is substantiated with experimental results showing that toddlers (observed at 15 and 21 months) whose caregivers more often follow into their attention and provide labels for the object of joint attention have larger vocabularies (Tomasello and Farrar, 1986); that toddlers (18-20 months old) don't pick up labels uttered by someone behind a screen, but do pick up labels uttered by someone performing joint attention with them (Baldwin, 1995); and that at around 10-11 months of age babies pay attention to whether a person's eyes are open or not in terms of whether to follow their gaze, and the degree to which infants in fact follow gaze at 10-11 months while vocalizing themselves predicts vocabulary comprehension 7-8 months later (Brooks and Meltzoff, 2005).",
"In summary, the process of acquiring a linguistic system, like human communication generally, relies on joint attention and intersubjectivity: the ability to be aware of what another human is attending to and guess what they are intending to communicate.",
"Human children do not learn meaning from form alone and we should not expect machines to do so either.",
"Distributional semanticists have long been aware that grounding distributional representations in the real world is challenging.",
"The lexical similarity relations learned by distributional models trained on text don't in themselves connect any of those words to the world (Herbelot, 2013; Baroni et al., 2014; Erk, 2016; Emerson, 2020), and the distributions of words may not match the distribution of things in the world (consider four-legged dogs).",
"One approach to providing grounding is to train distributional models on corpora augmented with perceptual data, such as photos (Hossain et al., 2019) or other modalities (Kiela and Clark, 2015; Kiela et al., 2015).",
"Another is to look to interaction data, e.g. a dialogue corpus with success annotations, including low-level success signals such as emotional stress (McDuff and Kapoor, 2019) or eye gaze (Koller et al., 2012), which contains a signal about the felicitous uses of forms.",
"[Footnote 8: These three studies do not name the language that the children were learning; it appears to have been English.]",
"The idea that as the learner gets access to more and more information in addition to the text itself, it can learn more and more facets of meaning is worked out in detail by Bisk et al. (2020).",
"We agree that this is an exciting avenue of research.",
"From this literature we can see that the slogan meaning is use (often attributed to Wittgenstein, 1953) refers not to use as distribution in a text corpus, but rather to the fact that language is used in the real world to convey communicative intents to real people.",
"Speakers distill their past experience of language use into what we call meaning here, and produce new attempts at using language based on this; this attempt is successful if the listener correctly deduces the speaker's communicative intent.",
"Thus, standing meanings evolve over time as speakers accumulate different experiences (e.g. McConnell-Ginet, 1984), and a reflection of such change can be observed in their changing textual distribution (e.g. Herbelot et al., 2012; Hamilton et al., 2016).",
"What about systems which are trained on a task that is not language modeling (say, semantic parsing, or reading comprehension tests) and that use word embeddings from BERT or some other large LM as one component?",
"Numerous papers over the past couple of years have shown that using such pretrained embeddings can boost the accuracy of the downstream system drastically, even for tasks that are clearly related to meaning.",
"Our arguments do not apply to such scenarios: reading comprehension datasets include information which goes beyond just form, in that they specify semantic relations between pieces of text, and thus a sufficiently sophisticated neural model might learn some aspects of meaning when trained on such datasets.",
"It also is conceivable that whatever information a pretrained LM captures might help the downstream task in learning meaning, without being meaning itself.",
"Recent research suggests that it is wise to interpret such findings with caution.",
"As noted in §2, both McCoy et al. (2019) and Niven and Kao (2019) found that BERT picked up idiosyncratic patterns in the data for their tasks, and not meaning.",
"Beyond such diagnostic research on why large pretrained LMs boost such tasks so much, we think there is a more fundamental question to be asked here: Are we climbing the right hill?",
"There are two different perspectives from which one can look at the progress of a field.",
"Under a bottom-up perspective, the efforts of a scientific community are driven by identifying specific research challenges.",
"A scientific result counts as a success if it solves such a specific challenge, at least partially.",
"As long as such successes are frequent and satisfying, there is a general atmosphere of sustained progress.",
"By contrast, under a top-down perspective, the focus is on the remote end goal of offering a complete, unified theory for the entire field.",
"This view invites anxiety about the fact that we have not yet fully explained all phenomena and raises the question of whether all of our bottom-up progress leads us in the right direction.",
"There is no doubt that NLP is currently in the process of rapid hill-climbing.",
"Every year, states of the art across many NLP tasks are improved significantly, often through the use of better pretrained LMs, and tasks that seemed impossible not long ago are already old news.",
"Thus, everything is going great when we take the bottom-up view.",
"But from a top-down perspective, the question is whether the hill we are climbing so rapidly is the right hill.",
"How do we know that incremental progress on today's tasks will take us to our end goal, whether that is General Linguistic Intelligence (Yogatama et al., 2019), a system that passes the Turing test, or a system that captures the meaning of English, Arapaho, Thai, or Hausa to a linguist's satisfaction?",
"It is instructive to look at the past to appreciate this question.",
"Computational linguistics has gone through many fashion cycles over the course of its history.",
"Grammar- and knowledge-based methods gave way to statistical methods, and today most research incorporates neural methods.",
"Researchers of each generation felt like they were solving relevant problems and making constant progress, from a bottom-up perspective.",
"However, eventually serious shortcomings of each paradigm emerged, which could not be tackled satisfactorily with the methods of the day, and these methods were seen as obsolete.",
"This negative judgment (we were climbing a hill, but not the right hill) can only be made from a top-down perspective.",
"We have discussed the question of what is required to learn meaning in an attempt to bring the top-down perspective into clearer focus.",
"We can only definitively tell if we've been climbing the right hill in hindsight, but we propose some best practices for less error-prone mountaineering.",
"First, above all, cultivate humility towards language and ask top-down questions.",
"Neural methods are not the first bottom-up success in NLP; they will probably not be the last.",
"Second, be aware of the limitations of tasks: Artificial tasks like bAbI (Weston et al., 2016) can help get a field of research off the ground, but there is no reason to assume that the distribution of language in the test data remotely resembles the distribution of real natural language; thus evaluation results on such tasks must be interpreted very carefully.",
"Similar points can be made about crowdsourced datasets such as SQuAD (Rajpurkar et al., 2016) or SNLI (Bowman et al., 2015), which do not represent questions that any particular person really wanted to ask about a text, but rather the somewhat unnatural communicative situation of crowdsourcing work.",
"If a system does better on such a task than the inter-annotator agreement, the task probably has statistical artifacts that do not represent meaning.",
"In the vision community, Barbu et al. (2019) offer a novel dataset which explicitly tries to achieve a more realistic distribution of task data; it would be interesting to explore similar ideas for language.",
"Third, value and support the work of carefully creating new tasks (see also Heinzerling, 2019).",
"For example, the DROP reading comprehension benchmark (Dua et al., 2019) seeks to create more stringent tests of understanding by creating questions that require the system to integrate information from different parts of a paragraph via simple arithmetic or similar operations.",
"Fourth, evaluate models of meaning across tasks.",
"(Standing) meaning is task-independent, so a system that captures meaning should do well on multiple tasks.",
"Efforts like SuperGLUE (Wang et al., 2019) seem like a good step in this direction.",
"Finally, perform thorough analysis of both errors and successes.",
"As McCoy et al. (2019) and Niven and Kao (2019) have shown, systems that find success with large pretrained LMs do not necessarily do so because the LMs have learned meaning.",
"[Footnote 9: https://rajpurkar.github.io/SQuAD-explorer/]",
"[Footnote 10: See Appendix B for an exploration of what GPT-2 does with arithmetic.]",
"Analyses which start from an attitude of healthy skepticism (too good to be true) and probing tasks which try to identify what the model actually learned can be good ways to find out whether the system performs well for the right reasons.",
"In discussing the main thesis of this paper with various colleagues over the past 18 months, we have observed recurring counterarguments.",
"In this section, we address those counterarguments, plus a few more that might arise.",
"But 'meaning' doesn't mean what you say it means.",
"Defining meaning is notoriously hard.",
"For the purposes of this paper, we chose a working definition which is as general as we could make it, capturing the crucial point that meaning is based on the link between linguistic form and something that is not language.",
"Meaning cannot simply be the relation between form and some kind of deep syntax, e.g. semantic dependency graphs (Oepen et al., 2015); like syntax, such representations could perhaps be learned from form alone (He et al., 2018; Hewitt and Manning, 2019).",
"Equating these with meaning ignores a core function of language, which is to convey communicative intents.",
"But meaning could be learned from ...",
"As we discussed in §7, if form is augmented with grounding data of some kind, then meaning can conceivably be learned to the extent that the communicative intent is represented in that data.",
"In addition, certain tasks are designed in a way that specific forms are declared as representing certain semantic relations of interest.",
"Examples of this include NLI datasets (Dagan et al., 2006; Rajpurkar et al., 2016; Ostermann et al., 2019) which pair input/output tuples of linguistic forms with an explicit semantic relation (e.g. text + hypothesis + entailed).",
"Similarly, control codes, or tokens like tl;dr , have been used to prompt large LMs to perform summarization and other tasks (Radford et al., 2019; Keskar et al., 2019).",
"Here forms are explicitly declared at test time to represent certain semantic relations, which together with the distributional similarity between e.g. tl;dr and other phrases such as in summary , may be enough to bootstrap a successful neural summarizer.",
"Depending on one's perspective, one may argue that such a system has learned to reliably find instances of the relation without understanding the text; or that explicitly declaring cues like entailed or tl;dr as representing certain semantic relations provides a training signal that goes beyond pure form.",
"Analogously, it has been pointed out to us that the sum of all Java code on Github (cf. §5) contains unit tests, which specify input-output pairs for Java code.",
"Thus a learner could have access to a weak form of interaction data, from which the meaning of Java could conceivably be learned.",
"This is true, but requires a learner which has been equipped by its human developer with the ability to identify and interpret unit tests.",
"This learner thus has access to partial grounding in addition to the form.",
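Why a unit test counts as partial grounding can be made concrete. This sketch is ours (the function name `add` and the assert-style convention are invented for illustration): a test explicitly pairs a form with its denotation, but only for a learner already built to recognise that convention.

```python
import re

# A unit test pairs a form ("add(2, 3)") with its denotation (5),
# i.e. it exposes fragments of the meaning relation -- but only to a
# learner equipped to identify and interpret the test convention.
# The function `add` and the assert format are illustrative assumptions.

test_file = """
assert add(2, 3) == 5
assert add(10, -4) == 6
"""

def extract_io_pairs(source: str):
    """Pull (input, output) pairs out of assert-style unit tests."""
    pattern = r"assert add\((-?\d+), (-?\d+)\) == (-?\d+)"
    return [((int(a), int(b)), int(c))
            for a, b, c in re.findall(pattern, source)]

pairs = extract_io_pairs(test_file)
print(pairs)  # [((2, 3), 5), ((10, -4), 6)]
# These pairs go beyond pure form: they are labelled instances of what
# `add` *means*, supplied by a human-designed convention, not by form alone.
```

The knowledge that `assert f(x) == y` denotes an input-output pair is exactly the grounding the human developer must supply; the strings themselves do not announce it.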
"But there is so much form out there; surely that is enough.",
"We have argued for the general principle that learning meaning requires more than form.",
"How much form can be observed is not relevant to our point; the octopus can observe A and B for as long as he wants, and the quantity of training data in §5 is not limited.",
"But given lots of form, could O perhaps learn to keep producing seemingly meaningful responses to A's utterances without learning meaning?",
"The problem is that people constantly generate new communicative intents to talk about their constantly evolving inner and outer worlds, and thus O would need to memorize infinitely many stimulus-response pairs.",
"Such an approach may be an avenue towards high scores in evaluations where perfection is not expected anyway; but it is probably not an avenue towards human-analogous NLU.",
"But aren't neural representations meaning too?",
"The internal representations of a neural network have been found to capture certain aspects of meaning, such as semantic similarity (Mikolov et al., 2013; Clark, 2015).",
"As we argued in §4, semantic similarity is only a weak reflection of actual meaning.",
"Neural representations qualify neither as standing meanings (s), since they lack interpretations, nor as communicative intents (i), since they are insufficient to, e.g., correctly build a coconut catapult.",
"An interesting recent development is the emergence of models for unsupervised machine translation trained only with a language modeling objective on monolingual corpora for the two languages (Lample et al., 2018).",
"If such models were to reach the accuracy of supervised translation models, this would seem to contradict our conclusion that meaning cannot be learned from form.",
"A perhaps surprising consequence of our argument would then be that accurate machine translation does not actually require a system to understand the meaning of the source or target language sentence.",
"But BERT improves performance on meaning-related tasks, so it must have learned something about meaning.",
"It has probably learned something about meaning, in the same sense that syntax captures something about meaning and semantic similarity captures something about meaning: a potentially useful, but incomplete, reflection of the actual meaning.",
"McCoy et al. (2019) and Niven and Kao (2019) provide cautionary tales about overestimating what that something is purely based on evaluation results on existing tasks.",
"What exactly BERT and its relatives learn about meaning is a very interesting question, and we look forward to further findings from the field of BERTology.",
"In this paper, we have argued that in contrast to some current hype, meaning cannot be learned from form alone.",
"This means that even large language models such as BERT do not learn meaning; they learn some reflection of meaning into the linguistic form which is very useful in applications.",
"We have offered some thoughts on how to maintain a healthy, but not exaggerated, optimism with respect to research that builds upon these LMs.",
"In particular, this paper can be seen as a call for precise language use when talking about the success of current models and for humility in dealing with natural language.",
"With this we hope to encourage a top-down perspective on our field which we think will help us select the right hill to climb towards human-analogous NLU.",
"Acknowledgments.",
"This paper benefitted from many inspiring and often spirited discussions.",
"Without implying any agreement with the contents as presented, we thank Sam Bowman, Vera Demberg, Lucia Donatelli, Jason Eisner, Jonas Groschwitz, Kristen Howell, Angie McMillan-Major, Joakim Nivre, Stephan Oepen, Ellie Pavlick, Benjamin Roth, Dan Roth, Asad Sayeed, Hinrich Schutze, Nina Tahmasebi, and Olga Zamaraeva.",
"This paper originated in a Twitter mega-thread that was neatly summarized by Thomas Wolf (2018).",
"We also thank the ACL reviewers and the participants of the Toulouse Workshop on Formal and Distributional Semantics (2015) and *SEM 2016 for their insightful and constructive thoughts."
] | [
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"objective",
"abstain",
"result",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"Visual Dialog involves understanding the dialog history (what has been discussed previously) and the current question (what is asked), in addition to grounding information in the image, to generate the correct response.",
"In this paper, we show that co-attention models which explicitly encode dialog history outperform models that don't, achieving state-of-the-art performance (72 % NDCG on val set).",
"However, we also expose shortcomings of the crowd-sourcing dataset collection procedure by showing that history is indeed only required for a small amount of the data and that the current evaluation metric encourages generic replies.",
"To that end, we propose a challenging subset (VisDialConv) of the VisDial val set and provide a benchmark of 63% NDCG.",
"Recently, there has been an increased interest in visual dialog, i.e. dialog-based interaction grounded in visual information (Chattopadhyay et al., 2017; De Vries et al., 2017; Seo et al., 2017; Guo et al., 2018; Shekhar et al., 2018; Kottur et al., 2019; Haber et al., 2019).",
"One of the most popular test beds is the Visual Dialog Challenge ( VisDial ) (Das et al., 2017), which involves an agent answering questions related to an image, by selecting the answer from a list of possible candidate options.",
"According to the authors, nearly all interactions (98%) contain dialog phenomena, such as co-reference, that can only be resolved using dialog history, which makes this a distinct task from previous Visual Question Answering (VQA) challenges, e.g. (Antol et al., 2015).",
"For example, in order to answer the question About how many? in Figure 1, we have to infer from what was previously said, that the conversation is about the skiers.",
"In the original paper, Das et al. (2017) find that models which structurally encode dialog history, such as Memory Networks (Bordes et al., 2016) or Hierarchical Recurrent Encoders (Serban et al., 2017) improve performance.",
"However, naive history modelling (in this case an encoder with late fusion/concatenation of current question, image and history encodings) might actually hurt performance.",
"Massiceti et al. (2018) take this even further, claiming that VisDial can be modeled without taking history or even visual information into account.",
"Das et al. (2019) rebutted by showing that both features are still needed to achieve state-of-the-art (SOTA) results and an appropriate evaluation procedure has to be used.",
"In this paper, we show that competitive results on VisDial can indeed be achieved by replicating the top performing model for VQA (Yu et al., 2019b) and effectively treating visual dialog as multiple rounds of question-answering, without taking history into account.",
"However, we also show that these results can be significantly improved by encoding dialog history, as well as by fine-tuning on a more meaningful retrieval metric.",
"Finally, we show that more sophisticated dialog encodings outperform naive fusion on a subset of the data which contains true dialog phenomena according to crowd-workers.",
"In contrast to previous work on the VisDial dataset, e.g. (Kottur et al., 2018; Agarwal and Goyal, 2018; Gan et al., 2019; Guo et al., 2019; Kang et al., 2019), we are the first to conduct a principled study of dialog history encodings.",
"Our contributions can thus be summarized as follows: We present SOTA results on the VisDial dataset using transformer-based Modular Co-Attention (MCA) networks.",
"We further show that models encoding dialog history outperform VQA models on this dataset.",
"We show that curriculum fine-tuning (Bengio et al., 2009) on annotations of semantically equivalent answers further improves results.",
"We experiment with different dialog history encodings and show that early fusion, i.e. dense interaction with visual information (ei-ther via grounding or guided attention ) works better for cases where conversational historical context is required.",
"We release a crowd-sourced subset containing verified dialog phenomena and provide benchmark results for future research.",
"In this section, we extend Modular Co-Attention Networks, which won the VQA challenge 2019 (Yu et al., 2019b) and adapt it to visual dialog.",
"Different from previous co-attention networks (Kim et al., 2018; Nguyen and Okatani, 2018), MCA networks use guided attention to model dense relations between the question and image regions for better visual grounding.",
"In the following, we explore MCA networks with different input encodings following a [model]-[input]' convention to refer to our MCA model variants; see Figure 3 for an overview.",
"Whenever unspecified, images are represented as a bag of bottom-up features, i.e. object level representations (see Section 3).",
"The MCA module with multi-modal fusion as depicted in Figure 2, is common to all our architectures.",
"Inspired by the transformers (Vaswani et al., 2017), the MCA network (Yu et al., 2019b) is a modular composition of two basic attention units: self-attention and guided attention.",
"These are arranged in an encoder-decoder composition in the MCA module (Figure 2), which performed best for VQA (Yu et al., 2019b).",
"The Self-Attention (SA) unit in transformers (Vaswani et al., 2017) is composed of a multihead attention layer followed by a feed-forward layer.",
"When applied to vision, the SA unit can be viewed as selecting the most relevant object-level image features for the downstream task.",
"Specifi-cally, the scaled dot product attention takes as input key, query and value (usually same modality's embedded representations) and outputs a self-attended vector (Eq.1).",
"Multi-head attention provides multiple representation spaces to capture different lin-guistic/grounding phenomena, which are otherwise lost by averaging using a single head.",
"Att ( Q,K,V ) = softmax ( QKT d K ) V MHAtt ( Q,K,V ) = Concat ( head 1 ,...head n ) WO head i = Att ( QW Qi ,KW Kk ,V W Vi ) (1) The Guided-Attention (GA) unit conditions the attention on different sequences.",
"The key and value come from one modality, while the query comes from a different modality similar to the decoder architecture in Transformers (Vaswani et al., 2017).",
"Similar to Eq.",
"1, the GA unit outputs features f i = Att ( X, Y, Y ) where X R m d x comes from one modality and Y R n d y from the other.",
"Residual connection (He et al., 2016) and layer normalization (Ba et al., 2016) are applied to the output of both the attention and feed-forward layers similar to (Vaswani et al., 2017; Yu et al., 2019b) in both the SA and GA units.",
"extended analogously to model the interaction between the question and history.",
"First, the input (i.e. the question) is passed through multiple multi-head self-attention layers L , in order to get self-aware representations before acting as conditional signal to different modalities (visual or contextual history) similar to the auto-encoding procedure of Transformers.",
"Then the final representation XL is used as the input for GA units to model cross-modal dependencies and learn the final conditioned representation YL .",
"The learned representations XL R m d and YL R n d contain the contextualized and conditioned representations over the word and image regions, respectively.",
"We apply attention reduction (Yu et al., 2019b) with a multi-layer perceptron (MLP) for XL (analogously for YL ).",
"We obtain the final multi-modal fused representation z : x = softmax ( MLP x ( XL )) x = i = 1 m xi x Li z = LayerNorm ( W Tx x + W Ty y ) (2) where x = [ x 1 . . . xm ] R m are learned attention weights (same process for y and y ) and W x R d d z , W y R d d z are linear projection matrices (dimensions are the same for simplicity).",
"We call this model MCA with Image component only; (MCA-I) , since it only encodes the question and image features and therefore treats each question in Visual Dialog as an independent instance of VQA, without conditioning on the historical context of the interaction.",
"In the following, we extend the above framework to model dialog history.",
"We experiment with late/shallow fusion of history and image (MCA-I-H), as well as modelling dense interaction between conversational history and the image representation (i.e. MCA-I-VGH, MCA-I-HGuidedQ).",
"History guided Question (MCA-I-HGuidedQ): The network in Figure 3a is designed to model coreference resolution, which can be considered as the primary task in VisDial (Kottur et al., 2018).",
"We first enrich the question embedding by conditioning on historical context using guided attention in the MCA module.",
"We then use this enriched (co-reference resolved) question to model the visual interaction as described in Section 2.1.",
"Visually grounded history with image representation (MCA-I-VGH): Instead of considering conversational history and the visual context as two different modalities, we now ground the history with the image first, see Figure 3b.",
"This is similar in spirit to maintaining a pool of visual attention maps (Seo et al., 2017), where we argue that different questions in the conversation attend to different parts of the image.",
"Specifically, we pass the history to attend to object-level image features using the MCA module to get visually grounded contextual history.",
"We then embed the question to pool the relevant grounded history using another MCA module.",
"In parallel, the question embedding is also used to ground the current visual context.",
"At the final step, the respective current image and historical components are fused together and passed through a linear layer before decoding.",
"Note, this model is generic enough to potentially handle multiple images in a conversation and thus could be extended for tasks e.g. conversational image editing, which is one of the target applications of visual dialog (Kim et al., 2017; Manuvinakurike et al., 2018a,b; Lin et al., 2018; El-Nouby et al., 2018).",
"Two-stream Image and History component (MCA-I-H): Figure 3c shows the model which maintains two streams of modular co-attention networks one for the visual modality and the other for conversational history.",
"We follow a similar architecture for the visual component as MCA-I and duplicate the structure for handling conversational history.",
"At the final step, we concatenate both the embeddings and pass them through a linear layer.",
"For all the models described above, we use a discriminative decoder which computes the similarity between the fused encoding and RNN-encoded answer representations which is passed through a softmax layer to get the probability distribution",
"over the candidate answers.",
"We train using cross entropy over the ground truth answer: L ( ) = 1 NN = 100 n = 1 y n logP ( x n , ) (3) N denotes the number of candidate answers which is set to 100 for this task, y n is the (ground truth) label which is 0 or 1 during the training procedure, or a relevance score of the options during fine-tuning (casting it as multi-label classification).",
"We use PyTorch 1 (Paszke et al., 2017) for our experiments 2 .",
"Following Anderson et al. (2018), we use bottom-up features of 36 proposals from images using a Faster-RCNN (Ren et al., 2015) pre-trained on Visual Genome (Krishna et al., 2017) to get a bag of object-level 2048-d image representations.",
"Input question and candidate options are tokenized to a maximum length of 20 while the conversational history to 200.",
"Token embeddings in text are initialized with 300-d GloVe vectors (Penning-ton et al., 2014) and shared among all text-based encoders.",
"The RNN encodings are implemented using LSTMs (Hochreiter and Schmidhuber, 1997).",
"1 https://pytorch.org/ 2 Code available at https://github.com/ shubhamagarwal92/visdial_conv We use the Adam optimizer (Kingma and Ba, 2015) both for training and fine-tuning.",
"We use VisDial v1.0 for our experiments and evaluation.",
"3 The dataset contains 123K/2K/8K dialogs for train/val/test set respectively.",
"Each dialog is crowd-sourced on a different image, consisting of 10 rounds of dialog turns, totalling approx.",
"1.3M turns.",
"Each question has also been paired with a list of 100 automatically generated candidate answers which the model has to rank.",
"To account for the fact that there can be more than one semantically correct answer (e.g. Nope, No, None, Can-not be seen), dense annotations for 2k/2k turns of train/val of the data have been provided, i.e. a crowd-sourced relevance score between 0 and 1 (1 being totally relevant) for all 100 options.",
"As the Visual Dialog task has been posed as a ranking problem, standard information retrieval (IR) metrics are used for evaluation, such as Recall@ { 1,5,10 } to measure performance in the top N results (higher better), mean reciprocal rank (MRR) of the Ground-Truth (GT) answer (higher better), and Mean rank of the GT answer (lower better).",
"Normalized Discounted Cumulative Gain (NDCG) is another measure of ranking quality, which is commonly used when there is more than one correct answer (provided with their relevance).",
"Sparse Annotation Phase: We first train on sparse annotations, i.e. only 1 provided ground-truth answer, which is available for the whole training set.",
"Here the model learns to select only one relevant answer.",
"Curriculum Fine-tuning Phase: Dense annotations, i.e. crowd-sourced relevance weights, are provided for 0.16% of training set, which we use to fine-tune the model to select multiple semantically equivalent answers.",
"This acts like a curriculum learning setup (Elman, 1993; Bengio et al., 2009), 3 Following the guidelines on the dataset page we report results only on v1.0, instead of v0.9.",
"VisDial v1.0 has been consistently used for Visual Dialog Challenge 2018 and 2019.",
"where selecting one answer using sparse annotation is an easier task and fine-tuning more difficult.",
"4 4.4 Baselines MCA-I-HConcQ and MCA-H: MCA-I-HConcQ is a naive approach of concatenating raw dialog history to the question while keeping the rest of the architecture the same as MCA-I.",
"MCA-H on the other hand considers this task as only conversational (not visual) dialog with MCA module on history instead of image.",
"RvA: We reproduce the results of Niu et al. (2019)'s Recursive Visual Attention model (RvA), which won the 2019 VisDial challenge.",
"Their model browses the dialog history and updates the visual attention recursively until the model has suf-ficient confidence to perform visual co-reference resolution.",
"We use their single model's open-source implementation and apply our fine-tuning procedure on the val set in Table 1. When reporting on the test set results in Table 2, we use the leaderboard scores published online which contains further unpublished enhancements based on ensembling (MReaL-BDAI).",
"In the following, we report results on the VisDial v1.0 val set, (Table 1), as well as the test-std set, 5 (Table 2).",
"For measuring significance (reported on p 0 . 05 ), we use Kruskal-Wallis (Kruskal and Wallis, 1952) and Wilcoxon signed rank test (Wilcoxon, 1992) with Bonferroni correction (Bon-ferroni, 1936).",
"We report results in terms of NDCG, which is the main metric of the challenge.",
"MCA-I-H is our best performing model.",
"It achieves state-of-the-art performance: It outperforms the RvA baseline by almost 5 NDCG points on the val set and by over 7 points on the test set.",
"On the official challenge test set, MCA-I-H ranks 2 nd : it improves over 7 NDCG over the best single model but loses by 2 points against a 6-strong RvA ensemble model (2019 winning entry).",
"4 While instance-level' curriculum learning is defined in terms of harder dialogs', in our work, we used dataset/tasklevel' curriculum finetuning.",
"Our suggested method is a combination of curriculum learning and fine tuning (pre-training and adjusting to a specific downstream task).",
"As such, we use the term curriculum fine-tuning' i.e. adaptation by NDCG aware curriculum during fine-tuning.",
"5 We only report results for our best preforming models as the number of allowed submissions to the challenge is limited.",
"Compared to MCA-I, which treats the task as multiple rounds of VQA, encoding history improves results, but only significantly for MCA-I-VGH in the sparse annotation phase.",
"After fine-tuning, MCA-I-VGH and MCA-I-H perform equally.",
"MCA-I-H implements a late/shallow fusion of history and image.",
"Architectures which model dense interaction between the conversational history and the image representations (i.e. MCA-I-VGH, MCA-I-HGuidedQ) perform comparably; only MCA-HConcQ performs significantly worse.",
"Note that MCA-I also outperforms the baselines and current SOTA by a substantial margin (both in the sparse annotation phase and curriculum fine-tuning phase), while, counter-intuitively, there is not a significant boost by adding conversational history.",
"This is surprising, considering that according to Das et al. (2017), 38% of questions contain a pronoun, which would suggest that these questions would require dialog history in order to be understood/grounded by the model.",
"Furthermore, curriculum fine-tuning significantly improves performance with an average improvement of 11.7 NDCG points, but worsens performance in terms of the other metrics, which only consider a single ground truth (GT) answer.",
"In the following, we perform a detailed error analysis, investigating the benefits of dialog history encoding",
"encoding and the observed discrepancy between the NDCG results and the other retrieval based metrics.",
"We performed an ablation study whereby we did not include the caption as part of historical context and compare with the results in Table 1. The performance dropped from (NDCG 72.2, MRR 42.3) to (NDCG 71.6, MRR 40.7) using our best performing MCA-I-H model after finetuning.",
"Since the crowd-sourced conversation was based on the caption, the reduced performance was expected.",
"In order to further verify the role of dialog history, we conduct a crowd-sourcing study to understand which questions require dialog history, in order to be understood by humans.",
"We first test our history-encoding models on a subset (76 dialogs) of the recently released VisPro dataset (Yu et al., 2019a) which focuses on the task of Visual Pronoun Resolution.",
"6 Note that VisPro also contains non-referential pleonastic pronouns, i.e. pronouns used as dummy subjects when e.g. talking about the weather (Is it sunny?).",
"We thus create a new crowd-sourced dataset 7 , which we call VisDialConv .",
"This is a subset of the VisDial val-set consisting of 97 dialogs, where the crowd-workers identified single turns (with dense annotations) requiring historical information.",
"In particular, we asked crowd-workers whether they could provide an answer to a question given an image, without showing them the dialog history, and select one of the categories in Table 4 (see further details in Appendix B).",
"In order to get reliable results, we recruited 3 crowd-workers per image-question pair and only kept instances where at least 2 people agreed.",
"Note that we only had to discharge 14.5% of the origi-6 We use the intersection of dialogs in VisDial val set and VisPro to create this subset.",
"7 Data collection code available at https://github.",
"com/shubhamagarwal92/visdialconv-amt Model Sparse annotation Phase Curriculum Fine-tuning NDCG MRR R@1 R@5 R@10 Mean NDCG MRR R@1 R@5 R@10 Mean VisPro subset dataset MCA-I 59.80 57.88 45.39 72.24 82.76 5.84 69.82 36.2 20 54.08 70.92 10.02 MCA-I-HConcQ 61.08 61.79 48.95 77.5 86.58 4.72 68.44 38 22.24 55.79 71.71 9.17 MCA-I-HGuidedQ 61.35 60.13 47.11 75.26 86.18 5.23 68.29 36.59 21.05 53.29 70.13 9.76 MCA-I-VGH 61.68 59.33 46.18 75.53 86.71 5.07 68.97 39.21 23.68 57.11 70.53 8.83 MCA-I-H 61.72 59.62 45.92 77.11 86.45 4.85 70.87 39.8 25.39 55.13 70.39 9.42 VisDialConv (Crowd-sourced subset) dataset MCA-I 52.07 55.55 41.65 72.47 83.81 5.92 58.65 36.2 20.52 53.3 68.25 10.32 MCA-I-HConcQ 54.84 62.06 47.42 80.1 88.87 4.37 61.42 37.92 21.86 55.67 73.3 9.01 MCA-I-HGuidedQ 53.81 62.29 48.35 80.1 88.76 4.42 62.92 38.07 22.58 54.74 70.82 9.5 MCA-I-VGH 55.48 58.45 44.54 74.95 86.19 5.18 60.63 38.1 22.89 53.71 70.31 9.49 MCA-I-H 53.01 61.24 47.63 79.07 87.94 4.77 59.89 39.73 25.15 56.49 71.86 9.53 Table 3: Automatic evaluation on the subsets of VisPro and VisDialConv dataset.",
"nal 1035 image-question pairs, leaving us with 885 examples.",
"The results in Table 4 show that only 11% required actual dialog historical context according to the crowd-workers.",
"Most of the time (67% cases), crowd-workers said they can answer the question correctly without requiring history.",
"The results in Table 3 are on the subset of 97 questions which the crowd-workers identified as requiring history.",
"8 They show that history encoding models (MCA-I-HGuidedQ / MCA-I-HConcQ / MCA-I-H / MCA-I-VGH) significantly outperform MCA-I, suggesting that this data cannot be modelled as multiple rounds of VQA.",
"It can also be seen that all the models with dense (early) interaction of the historical context outperform the one with late interaction (MCA-I-H) in terms of NDCG.",
"Models with dense interactions appear to be more reliable in choosing other correct relevant answers because of the dialog context.",
"8 We took care to only include examples from Visdial val set in both Vispro and VisDialConv subsets.",
"Also note, there are only 8 overlapping instances between Vispro and VisdialConv subsets.",
"Our best performing model on VisDialConv is MCA-I-HGuidedQ and achieves a NDCG value of 62.9 after curriculum fine-tuning.",
"However, on the VisPro subset, we observe that MCA-I-H still outperforms the other models.",
"Interestingly, on this set, MCA-I also outperforms other history encoding models (except for MCA-I-H).",
"In sum, our analysis shows that only a small subset of the VisDial dataset contains questions which require dialog history, and for those, models which encode history lead to better results.",
"We posit that this is due to the fact that questions with pleonastic pronouns such as Is it sunny/daytime/day. . . are the most frequent according to our detailed analysis in Appendix C about the dialog phenomena.",
"Here, we investigate the discrepancy between the NDCG results and the other retrieval-based methods.",
"First, we find that the annotation scales differs: while there is a 3-way annotation on the train set, the val set defines 6 possible relevance classes, see Table 5.",
"This affects the evaluation results of our Image Dialog MCA-I-H MCA-I-VGHA bag of chips and a apple and orange.",
"Next, a manual inspection reveals that the relevance weight annotations contain substantial noise: We find that ground truth answers were marked as irrelevant for about 20% of train and 10% of val set.",
"Thus, our models seem to get confused by fine-tuning on this data.",
"We, therefore, manually corrected the relevance of only these GT answers (in dense annotations of train set only, but not in val set).",
"Please see Appendix D for further details.",
"The results in Table 1 (for MCA-I-H-GT) show that the model fine-tuned on the corrected data still achieves a comparable NDCG result, but substantially improves stricter (single answer) metrics, which confirms our hypothesis.",
"Finally, due to the noisy signal they receive during fine-tuning, our models learn to select safe answers 9 , such as I can't tell",
"(see examples in 9 We show the statistics of top-ranked predictions by our MCA-I-H model on our VisdialConv subset",
"(i.e. 97 dialogs of the Visdial val set).",
"Read as:",
"(Response, count, %)",
"(Yes, 14, 14%)",
"(No, 11, 11.34%)",
"(I cannot tell, 9, 9.27%)",
"(Nope, 3, 3%)",
"(Not that I see, 2, 2.06%)",
"(Red and white, 2, 2.06%)",
"(Not sure, 2, 2.06%)",
"(I can't tell, 2, 2.06%).",
"This shows that Figure 4), which rank high according to",
"(the more forgiving)",
"NDCG, but perform poorly for stricter metrics like MRR and Recall.",
"Our results suggest that the VisDial dataset only contains very limited examples which require dialog history.",
"Other visual dialog tasks, such as GuessWhich?",
"(Chattopadhyay et al., 2017)",
"and GuessWhat?!",
"(De Vries et al., 2017)",
"take place in a goal-oriented setting, which according to Schlangen",
"(2019), will lead to data containing more natural dialog phenomena.",
"However, there is very limited evidence that dialog history indeed matters for these tasks",
"(Yang et al., 2019).",
"As such, we see data collection to capture visual dialog phenomena as an open problem.",
"Nevertheless, our results also show that encoding dialog history still leads to improved results.",
"This is in contrast with early findings that",
"a)",
"naive encoding will harm performance (Das et al. (2017); at least 13.3% of answers are non-commital (I cannot tell, Not sure, I can't tell).",
"see MCA-I-HConcQ in Table 1), or that",
"b) history is not necessary (Massiceti et al., 2018).",
"Furthermore, we find that our model learns to provide generic answers by taking advantage of the NDCG evaluation metric.",
"Learning generic answers is a well-known problem for open-domain dialog systems, e.g. (Li et al., 2016).",
"While the dialog community approaches these phenomena by e.g. learning better models of coherence (Xu et al., 2018), we believe that evaluation metrics also need to be improved for this task, as widely discussed for other generation tasks, e.g. (Liu et al., 2016; Novikova et al., 2017; Reiter, 2018).",
"As a first step, BERT score (Zhang et al., 2019) could be explored to measure ground-truth similarity replacing the noisy NDCG annotations of semantic equivalence.",
"In sum, this paper shows that we can get SOTA performance on the VisDial task by using transformer-based models with Guided-Attention (Yu et al., 2019b), and by encoding dialog history and fine-tuning we can improve results even more.",
"Of course, we expect pre-trained visual BERT models to show even more improvements on this task, e.g. Vilbert (Lu et al., 2019), LXMert (Tan and Bansal, 2019), UNITER (Chen et al., 2019) etc.",
"However, we also show the limitations of this shared task in terms of dialog phenomena and evaluation metrics.",
"We, thus, argue that progress needs to be carefully measured by posing the right task in terms of dataset and evaluation procedure.",
"We thank the anonymous reviewers for their insightful comments.",
"Shubham would like to thank Raghav Goyal for the discussions during Pikabot' submission to Visual Dialog Challenge 2018.",
"This work received continued support by Adobe Research gift funding for further collaboration.",
"This research also received funding from Adeptmind Inc., Toronto, Canada and the EPSRC project MaDrIgAL (EP/N017536/1).",
"We would also like to acknowledge the AWS Cloud Credits for Research programme."
] | [
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"objective",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"result",
"other",
"result",
"other",
"result",
"result",
"result",
"method",
"other",
"other",
"other",
"other",
"other"
] |
[
"Natural Language Processing (NLP) systems learn harmful societal biases that cause them to amplify inequality as they are deployed in more and more situations.",
"To guide efforts at debiasing these systems, the NLP community relies on a variety of metrics that quantify bias in models.",
"Some of these metrics are intrinsic , measuring bias in word embedding spaces, and some are extrinsic , measuring bias in downstream tasks that the word embeddings enable.",
"Do these intrinsic and extrinsic metrics correlate with each other?",
"We compare intrinsic and extrinsic metrics across hundreds of trained models covering different tasks and experimental conditions.",
"Our results show no reliable correlation between these metrics that holds in all scenarios across tasks and languages.",
"We urge researchers working on debiasing to focus on extrinsic measures of bias, and to make using these measures more feasible via creation of new challenge sets and annotated test data.",
"To aid this effort, we release code, a new intrinsic metric, and an annotated test set focused on gender bias in hate speech.",
"1 1 Introduction Awareness of bias in Natural Language Processing (NLP) systems has rapidly increased as more and more systems are discovered to perpetuate societal unfairness at massive scales.",
"This awareness has prompted a surge of research into measuring and mitigating bias, but this research suffers from lack of consistent metrics that discover and measure bias.",
"Instead, work on bias is rife with unstated assumptions (Blodgett et al., 2020) and relies on metrics that are easy to measure rather than metrics that meaningfully detect bias in applications.",
"(a) Intrinsic metrics summarize biases in the geometry of embeddings.",
"For example, in this embedding space, male words are closer to words about career and about math & science, whereas female words are closer to words about family.",
"(b) Extrinsic bias metrics summarize disparities in application performance across populations, such as rates of false negatives between different gender groups.",
"For example, a coreference system may make more errors in an anti-stereotypical career coreferent (red arc) than in a pro-stereotypical one (green arc).",
"A recent comprehensive survey of bias in NLP (Blodgett et al., 2020) found that one third of all research papers focused on bias in word embeddings.",
"This makes embeddings the most common topic in studies of bias over twice as common as any other topic related to bias in NLP.",
"As is visualised in Figure 1a, bias in embedding spaces is measured with intrinsic metrics, most commonly with the Word Embedding Association Test (WEAT) (Caliskan et al., 2017), which relates bias to the geometry of the embedding space.",
"Once embeddings are incorporated into an application, bias can be measured via extrinsic metrics (Figure 1b) that test whether the application performs differently on language related to different populations.",
"Hence, research on debiasing embeddings relies crucially on a hypothesis that doing so will remove or reduce bias in downstream applications.",
"However, we are aware of no prior research that confirms this hypothesis.",
"This untested assumption leaves NLP bias research in a precarious position.",
"Research into the semantics of word embeddings has already shown that intrinsic metrics (e.g. using analogies and semantic similarity, as in Hill et al., 2015) do not correlate well with extrinsic metrics (Faruqui et al., 2016).",
"Research into the bias of word embeddings lacks the same type of systematic study, and thus as a field we are exposed to three large risks: 1) making misleading claims about the fairness of our systems, 2) concentrating our efforts on the wrong problem, and most importantly, 3) feeling a false sense of security that we are making more progress on the problem than we are.",
"Our bias research can be rigorous and innovative, but unless we understand the limitations of metrics we use to evaluate it, it might have no impact.",
"In this paper, we ask: Does the commonly used intrinsic metric for embeddings (WEAT) correlate with extrinsic metrics of application bias?",
"To answer this question, we analyse the relationship between intrinsic and extrinsic bias.",
"Our study considers two languages (English and Spanish), two common embedding algorithms (word2vec and fastText) and two downstream tasks (coreference resolution and hatespeech detection).",
"While we find a moderately high correlation between these metrics in a handful of conditions, we find no correlation or even negative correlation in most conditions.",
"Therefore, we recommend that the ethical scientist or engineer does not rely on intrinsic metrics when attempting to mitigate bias, but instead focuses on the harms of specific applications and test for bias directly.",
"As additional contributions to these findings, we release new WEAT metrics for Spanish, and a new gender-annotated test set for hatespeech detection for English, both of which we created in the course of this research.",
"In all of our experiments, we compute correlations between commonly-used metrics, both intrinsic and extrinsic.",
"Intrinsic bias metrics are applied directly to word embeddings, formulating bias in terms of geometric relationships between concepts such as male , female , career , or family .",
"Each concept is in turn represented by curated wordlists.",
"For example, the concept male is represented by words like brother, father, grandfather, etc. while the concept math & science is represented by words like programmer, engineer, etc.",
"The most commonly used metric is WEAT (Caliskan et al., 2017).",
"2 , which measures the difference in mean cosine similarity between two target concepts X and Y ; and two attribute concepts A and B .",
"This difference represents the imbalance in associations between concepts.",
"Using (cid:126)w to represent the embedding of word w , we have a test statistic : s ( X, Y, A, B ) = (cid:88) x X s ( x, A, B ) (cid:88) y Y s ( y, A, B ) where s ( w, A, B ) = mean a A cos( (cid:126)w,(cid:126)a ) mean b B cos( (cid:126)w,(cid:126)b ) This is normalised by the standard deviation to get the effect size which we use in our experiments.",
"WEAT was initially developed as an indicator of bias, to show that the Implicit Association Test (IAT) from the field of psychology (Greenwald et al., 1998) can be replicated via word embeddings measurements.",
"There are thus 10 original tests chosen to replicate the tests presented to human subjects in IAT.",
"The tests measure different kinds of biased associations, such as African-American names vs. White names with pleasant vs. unpleasant terms, and female terms vs. male terms with career vs. family words.",
"WEAT was later repurposed as a predictor of bias in embedding spaces, via a somewhat muddy logical journey.",
"It has since been translated into 6 other languages (XWEAT; Lauscher and Glavas, 2019), and extended to operate on full sentences (May et al., 2019) and on contextual language models (Kurita et al., 2019).",
"When WEAT is used as a metric, papers report the effect size of the subset of tests relevant to the task at hand, each separately.",
"There are known issues with WEAT, such as sensitivity to corpus word frequency and sensitivity to the target and attribute wordlists, as found by Sedoc and Ungar (2019) and Ethayarajh et al. (2019).",
"We count 34 papers from *CL and FAT* conferences since January 2020 that use WEAT or SEAT (May et al., 2019) in their methodology.",
"The latter propose an alternative, more theoretically robust metric, relational inner product association (RIPA), which uses the principal component of a gender subspace (determined via the method of Bolukbasi et al. (2016)) to directly measure how gendered a word is.",
"We use the original, most common version of WEAT for this first empirical study, since it is the most widely adopted.",
"It would be interesting to test RIPA in the same way, if it were extended to more types of bias and more languages.",
"But we note that all intrinsic metrics are sensitive to chosen wordlists, so this must be done carefully, especially across languages, a topic we will return to in Section 4.3.",
"Extrinsic bias metrics measure bias in applications, via some variant of performance disparity, or performance gap between groups.",
"For instance, a speech recognition system is unfair if it has higher error rates for African-American dialects (Tatman, 2017), meaning that systems perform less well for those speakers.",
"A hiring classification system is unfair if it has more false negatives for women than for men, meaning that qualified women are wrongly rejected more often than qualified men (see https://tinyurl.com/y6c6clzu).",
"There are two commonly used metrics to quantify this performance disparity: Predictive Parity (Hutchinson and Mitchell, 2019), which measures the difference in precision between a privileged and a non-privileged group, and Equality of Opportunity (Hardt et al., 2016), which measures the difference in recall between those groups (see Appendix A for formal definitions).",
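As a sketch of the two gap metrics (the formal definitions are in the paper's Appendix A), both can be computed from per-group confusion counts. The dictionary format and function names here are our own illustrative assumptions:

```python
def precision(tp, fp):
    # fraction of positive predictions that are correct
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    # fraction of true positives that are recovered
    return tp / (tp + fn) if (tp + fn) else 0.0

def predictive_parity_gap(group_a, group_b):
    # difference in precision between two groups;
    # each group is a dict of 'tp', 'fp', 'fn' counts for the positive class
    return precision(group_a['tp'], group_a['fp']) - precision(group_b['tp'], group_b['fp'])

def equality_of_opportunity_gap(group_a, group_b):
    # difference in recall between two groups
    return recall(group_a['tp'], group_a['fn']) - recall(group_b['tp'], group_b['fn'])
```

A gap near zero indicates parity; the sign indicates which group the system favours.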
"The metric that best identifies bias in a system varies based on the task.",
"For some applications false negatives may be more harmful; for others, false positives may be.",
"For our first task of coreference (Figure 1b), false negatives where the system fails to identify anti-stereotypical coreference chains (e.g. women as farmers or as CEOs) are more harmful to the underprivileged class than false positives.",
"For our second task, hate speech detection (Figure 2), both can be harmful, for different reasons.",
"False positives for one group can systematically censor certain content, as has been found for hate speech detection applied to African-American Vernacular English (AAVE) (Sap et al., 2019; Davidson et al., 2019).",
"Figure 2: Examples from twitter hatespeech detection.",
"False negatives permit abuse of minority populations that are targets of hate speech.",
"We examine performance gaps in both precision and recall for broad coverage.",
"Each of our experiments measures the correlation between a specific instance of WEAT and a specific extrinsic bias metric.",
"In each experiment, we train an embedding, measure the bias according to WEAT, and measure the bias in the downstream task that uses that embedding.",
"We then modify the embeddings, either by applying an algorithm to debias them or by inverting the algorithm's behavior to overbias them.",
"Again we measure WEAT on the modified embedding and also the downstream bias in the target task.",
"We repeat this process until we reach a stopping condition (detailed below), then compute the correlation between the two metrics via Pearson correlation and scatterplot analysis.",
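The Pearson correlation used in this loop can be computed with the standard textbook formula; this is shown for concreteness and is not taken from the authors' code:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two paired samples
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In the experiments, one sample would hold the WEAT effect sizes across bias-modification steps and the other the corresponding downstream performance gaps.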
"Rather than draw conclusions from a single experiment, we attempt to draw more robust conclusions by running many experiments, which vary along several dimensions.",
"We consider two common embedding algorithms, two tasks, and two languages.",
"A full table of experiment conditions can be found in Table 1.",
"Because we need to measure the relationship between intrinsic and extrinsic metrics as bias changes, we must generate many datapoints for each experiment.",
"Previous work on bias in embeddings studies methods to reduce embedding bias.",
"To generate enough data points, we take the novel approach of both decreasing and increasing bias in the embeddings.",
"We measure the baseline bias level, via WEAT, for each embedding trained normally on the original corpus.",
"We then adjust the bias up or down, remeasure WEAT, and measure the change in the downstream task.",
"We choose two methods from previous work that are capable of both debiasing and overbiasing: the first is a preprocessing method that operates on the training data before training; the second is a postprocessing method that operates on the embedding space after it has been trained.",
"This is important since both kinds of methods may be used in practice: a large company with proprietary data will train embeddings from scratch, and thus may use a preprocessing method; whereas a small company may rely on publicly available pretrained embeddings, and thus use a post-processing method.",
"For preprocessing, we use dataset balancing (Dixon et al., 2018), which consists of subsampling the training data to be more balanced with respect to some attributes.",
"For instance, if we are adjusting gender bias, we identify pro-stereotypical sentences such as 'She was a talented housekeeper' vs. anti-stereotypical sentences such as 'He was a talented housekeeper' or 'She was a talented analyst'.",
"We sub-sample the pro-stereotypical collocations to reduce their frequency when debiasing, and sub-sample the anti-stereotypical collocations when overbiasing.",
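A minimal sketch of this sub-sampling step, assuming a user-supplied predicate `is_pro_stereotypical` (hypothetical; the paper identifies such sentences via the wordlists described in Section 4.2):

```python
import random

def balance_dataset(sentences, is_pro_stereotypical, keep_fraction, seed=0):
    """Sub-sample pro-stereotypical sentences to debias a training corpus.

    To overbias instead, pass a predicate matching anti-stereotypical
    sentences. keep_fraction is the probability of keeping each matched
    sentence; unmatched sentences are always kept.
    """
    rng = random.Random(seed)
    kept = []
    for sent in sentences:
        if is_pro_stereotypical(sent) and rng.random() > keep_fraction:
            continue  # drop this pro-stereotypical example
        kept.append(sent)
    return kept
```

Varying `keep_fraction` produces the sequence of increasingly (or decreasingly) biased corpora used to generate datapoints.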
"There are additional embedding-based debiasing methods used in practice, based on identifying and removing a gender subspace during training or as postprocessing (Bolukbasi et al., 2016; Zhao et al., 2018b).",
"However, these methods do not change a word's nearest-neighbour clusters (Gonen and Goldberg, 2019), so we would expect them to show superficial bias changes in WEAT without changing downstream bias.",
"Both methods that we select modify the underlying word distribution and move many words in relation to each other.",
"We verified this with tSNE visualisation as in Figure 1a, following Gonen and Goldberg (2019), and find that our bias modification methods do change word clusters.",
"Stereotypes are as defined by Zhao et al. (2018a) and by Caliskan et al. (2017), who use the U.S. Bureau of Labor Statistics and the Implicit Association Test, respectively.",
"For postprocessing, we use Attract-Repel, a method developed to use dictionary wordlists (synonyms, antonyms) to refine semantic spaces.",
"It aims to move similar words (synonyms) close to each other and dissimilar words (antonyms) farther from each other, while keeping a regularisation term to preserve original semantics as much as possible.",
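A much-simplified sketch of the attract/repel intuition follows. The published Attract-Repel method uses max-margin costs with mini-batch optimisation; this toy version, with names of our own choosing, just nudges vectors directly, with a regularisation pull toward the original embeddings:

```python
import numpy as np

def nudge(emb, attract_pairs, repel_pairs, step=0.05, reg=0.01, originals=None):
    # emb: dict word -> np.ndarray; pairs: lists of (word1, word2)
    originals = originals or {w: v.copy() for w, v in emb.items()}
    for w1, w2 in attract_pairs:           # pull pair members together
        delta = emb[w2] - emb[w1]
        emb[w1] += step * delta
        emb[w2] -= step * delta
    for w1, w2 in repel_pairs:             # push pair members apart
        delta = emb[w2] - emb[w1]
        emb[w1] -= step * delta
        emb[w2] += step * delta
    for w in emb:                          # regularise toward original vectors
        emb[w] += reg * (originals[w] - emb[w])
    return emb
```

For debiasing one would repel pro-stereotypical pairs such as (she, housekeeper) and attract anti-stereotypical pairs such as (she, analyst); for overbiasing, the reverse.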
"Lauscher et al. (2020) used an approach inspired by Attract-Repel for debiasing, though with constraints implemented somewhat differently.",
"We use the same pro- and anti-stereotypical wordlists as in dataset balancing.",
"For debiasing, we use the algorithm to increase distance between pro-stereotypical combinations ( she, housekeeper ) and decrease distance between anti-stereotypical combinations ( she, analyst or he, housekeeper ).",
"For overbiasing we do the reverse.",
"As the stopping condition for preprocessing, we constrain the sub-sampling so that it does not substantially change the dataset size, limiting it to removing less than five percent of the original data.",
"For postprocessing we limit the algorithm to a maximum of 5 iterations.",
"We use two common word embedding algorithms: fastText (Bojanowski et al., 2017) and skip-gram word2vec (Mikolov et al., 2013).",
"Word embeddings in fastText are composed from embeddings of both the word and its subwords in the form of character n -grams.",
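The subword composition can be illustrated by enumerating a word's character n-grams the way fastText does (boundary markers `<` and `>`, default n from 3 to 6); this sketch omits fastText's hashing of n-grams into buckets:

```python
def char_ngrams(word, n_min=3, n_max=6):
    # fastText wraps words in boundary markers before extracting n-grams
    token = f"<{word}>"
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(token) - n + 1):
            grams.add(token[i:i + n])
    return grams
```

A fastText word vector is then the sum of the vector for the full word and the vectors for these n-grams, which is why changes to one word's training distribution leak into every word sharing its n-grams.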
"Lauscher and Glavas (2019) suggest that this difference may cause bias to be acquired and encoded differently in fastText and word2vec; we discuss this in more detail in Section 5.",
"Despite recent widespread interest in contextual embeddings (e.g. BERT; Devlin et al., 2019), our experiments use these simpler contextless embeddings because they are widely available in many toolkits and used in many real-world applications.",
"Wordlists used for bias modification and configs for Attract-Repel are included in the codebase.",
"Their design simplifies our experiments, whereas contextual embeddings would add significant complexity.",
"However, we know that bias is still a problem for large contextual embeddings (Zhao et al., 2019, 2020; Gehman et al., 2020; Sheng et al., 2019), so our work remains important.",
"If intrinsic and extrinsic measures do not correlate with simple embeddings, this result is unlikely to be changed by adding more architectural layers and configurable hyperparameters.",
"We use three tasks that appear often in bias literature: Coreference resolution for English, hate speech detection for English, and hate speech detection for Spanish.",
"To make the scenarios as realistic as possible, we use a common, easy-to-implement and high-performing architecture for each task: the end-to-end coreference system of Lee et al. (2017) and the CNN of Kim (2014), which has been used in high-scoring systems in recent hate speech detection shared tasks (Basile et al., 2019).",
"For each task, we feed pretrained embeddings to the model, frozen, and then train the model using the standard hyperparameters published for each model and task.",
"We experiment on both English and Spanish.",
"It is important to take a language with pervasive gender-marking (Spanish) into account, as previous work has shown that grammatical gender-marking has a strong effect on gender bias in embeddings (McCurdy and Serbetci, 2017; Gonen et al., 2019; Zhou et al., 2019).",
"We use Spanish only for hate speech detection, because grammatical gender marking makes a challenge-set-style coreference evaluation trivial to resolve, and thus not a candidate for detecting gender bias.",
"This fact is the premise behind the work of Stanovsky et al. (2019), who use the explicit gender marking in translation to reveal bias.",
"To train embeddings, we use domain-matched data for each downstream task.",
"For coreference we train on Wikipedia data, and for hatespeech detection we train on English or Spanish tweets, consistent with the task.",
"8 Our English Coreference system is trained on OntoNotes (Weischedel et al., 2017) and evaluated on Winobias (Zhao et al., 2018a), a Winograd-schema style challenge set designed to measure gender bias in coreference resolution.",
"English hate speech detection uses the abusive tweets dataset of Founta et al. (2018), and is evaluated on the test set of ten thousand tweets, which we have hand-labelled as targeted-male, targeted-female, or neutral (we release this labelled test set for future work).",
"Spanish hate speech detection uses the data from the shared task of Basile et al. (2019), which contains labels for comments directed at women and directed at migrants.",
"Both WEAT and bias modification methods depend on seed wordlists.",
"9 These wordlists are closely related to each other, and we match them by type of bias, such that we measure WEAT tests for gender bias with embeddings modified via gender bias wordlists (themselves derived from WEAT lists, as detailed below) and WEAT tests for migrant bias with embeddings modified for migrant bias.",
"To generate bias modification wordlists we follow the approach of Lauscher et al. (2020) and use a pretrained set of embeddings (from spacy.io ) to expand the set of WEAT words to their 100 unique nearest neighbours.",
"For all experiments, we take the union of all WEAT terms, expand them, and use this expanded set for both dataset balancing and for Attract-Repel.",
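A sketch of this nearest-neighbour expansion, with a function name of our own choosing and assuming unit-normalised embedding vectors; the paper uses a pretrained spacy.io model, expands to 100 unique neighbours, and manually removes odd terms afterwards:

```python
import numpy as np

def expand_wordlist(seeds, emb, k=100):
    """Expand seed words to the union of their k nearest neighbours
    by cosine similarity.

    emb: dict word -> unit-normalised np.ndarray.
    """
    vocab = [w for w in emb if w not in seeds]
    mat = np.stack([emb[w] for w in vocab])
    expanded = set(seeds)
    for s in seeds:
        sims = mat @ emb[s]                 # cosine similarity (unit vectors)
        top = np.argsort(-sims)[:k]
        expanded.update(vocab[i] for i in top)
    return expanded
```

The expanded set then feeds both dataset balancing (as words to sub-sample on) and Attract-Repel (as words to move).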
"For gender bias in coreference and hate speech, we use terms that contrast male vs. female and career, math, and science vs. family and art.",
"For gender bias and migrant bias in Spanish hate speech, we compare male/female identity or migrant/non-migrant identity with pleasant-unpleasant term expansions.",
"Details of datasets & preprocessing are in Appendix C.",
"WEAT uses wordlists to measure relationships between words in the space, and bias modification depends on identifying words to sub- or supersample (for data balancing) or to adjust (for Attract-Repel).",
"Many other debiasing methods that we did not use (e.g. Bolukbasi et al. (2016)) also use wordlists.",
"All WEAT wordlists are in Appendix B. We make a small substitution of general gender words instead of proper names in WEAT 6, as proper names by design do not appear in our coreference task.",
"Final word sets are 200-400 words, due to significant overlap in nearest neighbors and manual removal of odd terms.",
"We did additionally experiment with using the exact WEAT terms for debiasing, and found the trends to be similar but of smaller magnitude, so we settled on expanded lists as a more realistic scenario.",
"We substantially modified the Spanish WEAT (XWEAT is the name for non-English WEATs) and added entirely new terms.",
"The reason for this is that the original XWEAT was translated from English very literally, which causes two problems.",
"The first problem with XWEAT is that many of the terms do not make sense in a Spanish-speaking community: names included in the original, like Amy, are valid names in Spanish and were thus left untranslated, but they are uncommon and carry upper-class connotations not intended in the original test.",
"Another example is firearms, translated as arma de fuego, which, while technically a correct literal translation, is not commonly used to describe weapons.",
"The second problem with XWEAT is that the nouns on the wordlists for both abstract math and science concepts and abstract art concepts are almost entirely grammatically feminine.",
"For instance, ciencia (science) and geometría (geometry) are grammatically feminine, as are escultura (sculpture) and novela (novel).",
"It is well established that for languages with grammatical gender, words that share a grammatical gender have embeddings that are closer together than words that do not (Gonen et al., 2019; McCurdy and Serbetci, 2017).",
"So, when WEAT in English was translated into XWEAT in Spanish (Glavas et al., 2019), the terms were imbalanced with regard to grammatical gender, which makes the results misleading.",
"We balance the lists, often replacing abstract nouns with corresponding adjectives that can take masculine or feminine form, e.g. científico and científica (scientific, masculine and feminine forms), such that we can use both versions to account for the effect of grammatical gender.",
"Finally, we needed a metric to examine bias against migrants.",
"Metrics for intrinsic bias must be targeted to the type of harm expected in the downstream application, and there is not an out-of-the-box WEAT test for this.",
"So we create a new WEAT test for bias against migrants in Spanish.",
"Following the setup of the tests for racial bias in the original WEAT, which are based on American racial biases in English, we have lists of names associated with migrants vs. non-migrants, and compare them with lists of pleasant and unpleasant terms.",
"The names are based on the work of Salamanca and Pereira (2013), who studied the ranking of names as lower vs. upper class; class status is closely correlated with whether a person is a migrant.",
"The standard term would be armas.",
"We select a subset of names in which the majority in the study agree on the class.",
"Pleasant and unpleasant terms exist in WEAT and XWEAT, but we again modify them to balance grammatical gender.",
"Figure 3 displays data for all tasks, with one scatterplot per combination of experimental variables: an intrinsic metric, an extrinsic metric, and an embedding algorithm.",
"If WEAT metrics are to be broadly usable for bias research, each of these graphs should show a clear, positive correlation.",
"None of them do.",
"There are no trends in correlation between the metrics that hold in all cases regardless of experimental detail, for any of the tasks.",
"We have additionally examined whether there are correlations within one bias modification method (pre or postprocessing) in case a difference in the way these methods modify embeddings causes differences in trends.",
"In most cases this breakout tells the same story.",
"The select cases where positive (and negative) correlations are present are discussed below.",
"Further breakout graphs and combinations are included in Appendix D.",
"Coreference (en): Gender. The coreference task (Figure 3, rows 1-3) does not display a clear correlation in all cases, and yet it has the clearest relationship of all three tasks, with a significant moderate positive correlation for both Predictive Parity (precision) and Equality of Opportunity (recall) for word2vec (columns 3 & 4).",
"The overall trends are muddied by the data for fastText, which does not have a significant correlation under any conditions.",
"Both are expected: that coreference would display the strongest trends, and that fastText would display more unpredictable or weaker trends.",
"The Winobias coreference task is as directly matched to the WEAT tests as it is possible to be since both use common career words to measure bias.",
"So the relationship between the two metrics is clearest here: moving female terms closer to certain career terms most directly helps a system resolve anti-stereotypical coreference chains.",
"However, we still only see a correlation for word2vec, not fastText.",
"fastText may behave less predictably because of its use of subwords; when subwords are used, word representations are more interconnected.",
"Figure 3: Experimental results, showing one scatterplot per experiment.",
"14 We can debias with regard to a specific word, but that word's embedding will still be influenced by all other words that share its character ngrams.",
"It is difficult to predict how changing the composition of a training corpus will affect all words that contain a certain ngram (e.g. ch ) in them.",
"For this reason, fastText may be initially more resistant to encoding biases than word2vec, as was found in Lauscher and Glavas (2019), but may also be more complex to debias.",
"This has implications for extending this work to contextual models, which always use some form of subword unit.",
"Hatespeech (en): Gender. English hatespeech has fewer and more restricted correlations than coreference, as can be seen in Figure 3, rows 4-6.",
"These plots show no relationship at all between intrinsic and extrinsic metrics.",
"When the data is broken out by bias modification method (see Figure 4b in Appendix D), it becomes clear that for recall there is a moderate positive correlation for postprocessing and a moderate negative correlation for preprocessing, which cancel out in the aggregate.",
"This holds for both embedding algorithms, though both positive and negative correlations are stronger for fastText.",
"Precision displays no correlation.",
"Note that the absolute variance in recall is much smaller than for precision, but this is still significant for each embedding algorithm individually and for both grouped together.",
"Of interest for future bias research is that the baseline level of bias (premodification, from raw twitter data) in English hatespeech differs by embedding type, but only for precision.",
"Initial models (with unmodified embeddings) using fastText have 10 additional points of precision for male-targeted hatespeech than for female-targeted.",
"However initial models using word2vec have the opposite bias and have 4 fewer points of precision for male-targeted than female targeted hatespeech.",
"For recall, the two embedding algorithms are equivalent, with 6 fewer points for male-targeted hatespeech.",
"In fact, the recall metric gives an early indication that the relationship we are examining between WEAT and extrinsic bias is unreliable: a spread of quite different WEAT results maps to nearly the same difference in recall.",
"For example, the representation of the word childish is by design made up of the representations for child and ish, but also of all the unigrams, bigrams, and trigrams it contains (c, ch, chi, etc.).",
"Hatespeech (es): Gender and Migrant. For hatespeech in Spanish, we examine two kinds of bias separately: gender bias and bias against migrants (Figure 3, rows 7 and 8).",
"Neither gender bias nor migrant bias show positive correlations in any experimental conditions.",
"Gender bias in our models is, in an absolute sense, never present, since in Spanish, hatespeech targeted against women is easier to identify than hatespeech against other targets (with F1 in the high 80s).",
"But there are no overall trends when this bias is modified to be more or less extreme, and there are no positive correlations in any conditions.",
"There is a moderate negative correlation for precision only when looking at fastText embeddings.",
"Migrant bias similarly has no trends save in very restricted conditions broken out by bias modification type.",
"In contrast to the gender case, hatespeech against migrants is clearly challenging to identify, with much lower F1 in the low 60s.",
"There is a positive correlation between migrant bias and performance gap for recall with preprocessing in fastText only.",
"This fits the expectation that fastText may be more sensitive to preprocessing than postprocessing due to subwords, as discussed above, though in the gender bias case with negative correlation it is equally sensitive to both, so it is hard to draw conclusions.",
"Given the smaller number of datapoints for Spanish (discussed below) this is likely just noise.",
"To confuse the situation further, the only trends in precision are present in word2vec, and are negative correlations.",
"Note that all graphs for Spanish display central clusters, because it was more difficult to get an even spread of bias measures, and because Spanish has fewer data points than English.",
"This is for a number of reasons that compound and underscore the difficulty of expanding supposedly language-agnostic techniques beyond English, even to high resourced languages like Spanish.",
"We have only one WEAT test for each type of bias, since we made our own that carefully balanced grammatical gender, after rectifying the issues with the existing translated versions (see Section 4.3).",
"Bias modification is also more difficult: the richer agreement system in Spanish means that there are more surface forms of what would be one word in English.",
"This is perhaps due to examples in the training data having clearer markers, such as specific anti-female slurs, but is itself an interesting question.",
"In addition, the language model used for nearest-neighbour expansion of wordlists (see Section 4.2) produces predominantly formal-register words from news or scientific articles, due to the less varied makeup of its training data compared to the English model.",
"This makes them less well suited to debiasing twitter data specifically, and there were no readily available models with a more casual register.",
"For bias against migrants, there is the additional challenge that wordlists are predominantly based on proper names, which are much rarer in twitter (which tends to use @ mentions instead) than in other media.",
"The broad result of this research is that changes in WEAT do not correlate with changes in application bias, and therefore that WEAT should not be used to measure progress in debiasing algorithms.",
"We have found that even when we maximally target bias modification of an embedding, we cannot produce a reliable change in bias direction downstream.",
"There was no pattern or correlation between tasks, for the same task in different languages, or even in most cases within one task.",
"And we have chosen one of the simplest possible setups, with fullword embeddings and a single type of bias at a time.",
"Real world scenarios can easily be more complicated and involve multiple types of bias or subword embeddings.",
"Our findings also indicate that additional complexity may muddy the relationship further.",
"For example, fastText behaved less predictably than word2vec across experiments, suggesting that if we were to expand to larger models that are fully reliant on subwords the patterns may become even less clear.",
"The implication of this finding is that an NLP scientist or engineer has limited options when investigating and mitigating bias.",
"They must",
"a) find the specific set of wordlists, embedding algorithms, downstream tasks, and bias modification methods that are together predictive of bias for the given task, language, and model or",
"b) implement full systems to test application bias directly, even if their work focuses on embeddings.",
"This underscores the importance of making good downstream bias measures available, as either approach will require these.",
"More of the datasets that are collected need to be annotated with subgroup demographic and identity information; very few such datasets are available.",
"More research needs to focus on creating good challenge sets to measure application bias.",
"Additional research on broader usage of unsupervised methods (Zhao and Chang, 2020) would also be valuable, though those methods too would benefit from subgroup identity annotation to make their results more interpretable.",
"It is only when more of these things are readily available that we can see the true measure of the efficacy of our debiasing efforts.",
"We do note a limitation of this study in that all downstream tasks are discriminative classification tasks.",
"Bias in classification is more straightforward to measure, with well established metrics, but covers allocational harms (performance disparity), whereas the inclusion of generative models could better cover representational harms (misleading or harmful representations/portrayals) (Blodgett et al., 2020; Crawford, 2017).",
"Concurrent research on causal mediation analysis for bias has shown that the embedding layer in open-domain generation has the strongest effect on gender bias (as compared to other layers of the network) (Vig et al., 2020).",
"Further work could investigate whether generation tasks display the same relationship to intrinsic metrics, or a different one.",
"We have examined the relationship of the intrinsic bias metric WEAT to the extrinsic bias metrics of Equality of Opportunity and Predictive Parity, for multiple tasks and languages, and determined that positive correlations between them exist only in very restricted settings.",
"In many cases there is either negative correlation or none at all.",
"While intrinsic metrics such as WEAT remain good descriptive metrics for computational social science, and for examining bias in human texts, we advise that the NLP community not rely on them for measuring model bias.",
"We instead advise that they focus on careful consideration of downstream applications and the creation of datasets and challenge sets that enable measurement at this stage.",
"We thank Andreas Grivas, Kate McCurdy, Yevgen Matusevych, Elizabeth Nielsen, Ramon Sanabria, Ida Szubert, Sabine Weber, Björn Ross, Agostina Calabrese, and Eddie Ungless for comments on earlier drafts of this paper."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"result",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other"
] |
[
"There is a growing interest in the combined use of NLP and machine learning methods to predict gaze patterns during naturalistic reading.",
"While promising results have been obtained through the use of transformer-based language models, little work has been undertaken to relate the performance of such models to general text characteristics.",
"In this paper we report on experiments with two eye-tracking corpora of naturalistic reading and two language models (BERT and GPT-2).",
"In all experiments, we test effects of a broad spectrum of features for predicting human reading behavior that fall into five categories (syntactic complexity, lexical richness, register-based multiword combinations, readability and psycholinguistic word properties).",
"Our experiments show that both the features included and the architecture of the transformer-based language models play a role in predicting multiple eye-tracking measures during naturalistic reading.",
"We also report the results of experiments aimed at determining the relative importance of features from different groups using SP-LIME.",
"Extensive studies using eye-trackers to observe gaze patterns have shown that humans read sentences efficiently by performing a series of fixations and saccades (for comprehensive overviews, see,",
"e.g.",
"Rayner et al. (2012), Seidenberg (2017), and Brysbaert (2019)).",
"During a fixation, the eyes stay fixed on a word and remain fairly static for 200-250 milliseconds.",
"Saccades are rapid jumps between fixations that typically last 20-40 ms and span 7-9 characters.",
"In addition, when reading, humans do not simply fixate one word at a time: some saccades run in the opposite direction, and some words or word combinations are fixated more than once or skipped altogether.",
"Much of the early work in this area was concerned with the careful construction of sentences to model human reading behavior and understand predictive language processing (Staub, 2015; Kuperberg and Jaeger, 2016).",
"The use of isolated, decontextu-alized sentences in human language processing research has been questioned on ecological validity grounds.",
"With the growing awareness of the importance of capturing naturalistic reading, new corpora of eye movement data over contiguous text segments have emerged.",
"Such corpora serve as a valuable source of data for establishing the basic benchmarks of eye movements in reading and provide an essential testing ground for models of eye movements in reading, such as the E-Z Reader model (Reichle et al., 1998) and the SWIFT model (Engbert et al., 2005).",
"They are also used to evaluate theories of human language processing in psycholinguistics: For example, the predictions of two theories of syntactic processing complexity (dependency locality theory and surprisal) were tested in the Dundee Corpus, which contains the eye-tracking record of 10 participants reading 51,000 words of newspaper text (Demberg and Keller, 2008).",
"Subsequent work has presented accounts where the ability of a language model to predict reading times is a linear function of its perplexity (Goodkind and Bicknell, 2018).",
"More recent work has employed transformer-based language models to directly predict human reading patterns across new datasets of eye-tracking and electroencephalogra-5276 phy during natural reading (Schrimpf et al., 2021; Hollenstein et al., 2021, for more details see the related work section below).",
"While this work has made significant progress, there is limited work aimed at determining the role of general text properties in predicting eye movement patterns in corpora of naturalistic reading.",
"To date, research has addressed this issue only peripherally (Lowder et al., 2018; Snell and Theeuwes, 2020; Hollenstein et al., 2021), examining the role of text features only on the basis of a small number of linguistic features.",
"In this paper, we conduct a systematic investigation of the effects of text properties on eye movement prediction: We determine the extent to which these properties affect the prediction accuracy of two transformer-based language models, BERT and GPT-2.",
"The relationship between these properties and model performance is investigated in two ways:",
"(a) building on the approaches in Lowder et al. (2018) and Hollenstein et al. (2021), by investigating the sensitivity of model predictions to a wide range of text features, and",
"(b) by incorporating text features into the transformer-based language models.",
"With respect to the latter, we examine the effects of the preceding sentence on gaze measurement within the sentence of interest.",
"This was motivated by psycholinguistic literature that has demonstrated spillover effects, where the fixation duration on a word is affected by linguistic features of the preceding context (Pollatsek et al., 2008; Shvartsman et al., 2014, see also Barrett and Hollenstein (2020) for a reference to the utility of information about preceding input).",
"Computational reading models have not addressed linguistic concepts beyond the level of the fixated word much, with a few exceptions,",
"e.g.",
"spillover effects related to previewing the next word n+1 during the current fixation on word n (Engbert et al., 2005).",
"Here we extend the study of spillover effects to the effects of textual features of the preceding sentence.",
"To our knowledge, this is the first systematic attempt to investigate the effects of textual features on the prediction of eye-tracking measures in a corpus of naturalistic reading by considering a large number of features spanning different levels of linguistic analysis.",
"In this section, we provide a brief overview of the available literature that has used transformer-based language models to predict human reading patterns, as well as the literature that has investigated the role of text properties on word predictability during naturalistic reading.",
"Schrimpf et al. (2021) evaluated a broad range of language models on the match of their internal representations to three datasets of human neural activity (fMRI and ECoG) during reading.",
"Their results indicated that transformer-based models perform better than recurrent networks or word-level embedding models.",
"They also found that the models with the best match with human language processing were models with unidirectional attention transformer architectures: specifically the generative pretrained transformer (GPT-2) (Radford et al., 2019), consistently outperformed all other models in both fMRI and ECoG data from sentence-processing tasks.",
"Hollenstein et al. (2021) presented the first study analyzing to what extent transformer language models are able to directly predict human gaze patterns during naturalistic reading.",
"They compare the performance of language-specific and multilingual pretrained and fine-tuned BERT and XLM models to predict reading time measures of eye-tracking datasets in four languages (English, Dutch, German, and Russian).",
"Their results show that both monolingual and multilingual transformer-based models achieve surprisingly high accuracy in predicting a range of eye-tracking features across all four languages.",
"For the English GECO dataset, which is also used in the current study, the BERT and XLM models yielded prediction accuracies (100 mean absolute error (MAE)) ranging between 91.15% (BERT-EN) and 93.89% (XLM-ENDE).",
"To our knowledge, the first study to investigate the role of textual characteristics on word predictability during naturalistic reading is an experimental study conducted by Lowder et al. (2018).",
"This study implemented a large-scale cumulative cloze task to collect word-by-word predictability 5277 data (surprisal and entropy reduction scores) for 40 text passages which were subsequently read by 32 participants while their eye movements were recorded.",
"Lowder et al. (2018) found that surprisal scores were associated with increased reading times in all eye-tracking measures.",
"They also observed a significant effect of text difficulty, measured by FleschKincaid grade level of each paragraph (Kincaid et al., 1975), such that increases in text difficulty were associated with increased reading times.",
"Crucially, their study yielded evidence of interactions between predictability (surprisal scores) and paragraph difficulty.",
"In the abovementioned computational study, Hollenstein et al. (2021) also investigated the influence of textual characteristics (word length, text readability) on model performance.",
"Text readability was measured using Flesch Reading Ease scores (Flesch, 1948).",
"Their results indicated that the models learned to reflect characteristics of human reading, such as sensitivity to word length.",
"They also found that model accuracy was higher in more easily readable sentences.",
"We analyze eye movement data from two eyetracking corpora of natural reading, the Ghent Eye-Tracking Corpus (GECO; (Cop et al., 2017)) and the Provo corpus (Luke and Christianson, 2018).",
"In both corpora the participants read full sentences within longer spans of naturally occurring text at their own speed while their eye movements were recorded.",
"The GECO corpus is large dataset of eye movement of a monolingual and bilingual readers who read a complete novel, Agatha Christie's The Mysterious Affair at Styles'.",
"It contains eye-tracking data from 14 English native speakers and 19 bilingual speakers of Dutch and English, who read parts of the novel in its original English version and another part of its Dutch translation.",
"In the present work, we focus on the analysis of the data from the monolingual English native speakers.",
"These participants read a total of 5031 sentences amounting to a total of 54364 word tokens.",
"The Provo Corpus is a dataset of eye movements of skilled readers reading connected text.",
"It consists of eye movement data from 84 native English-speaking participants from Brigham Young University, who read 55 short passages from a variety of sources, including online news articles, popular science magazines, and public-domain works of fiction.",
"These passages were an average of 50 words long for a total of 2,689 word tokens.",
"The texts from both datasets (GECO and PROVO) were automatically analyzed using CoCoGen (Strbel et al., 2016), a computational tool that implements a sliding window technique to calculate sentence-level measurements that capture the within-text distributions of scores for a given language feature (for current applications of the tool in the context of text classification, see Kerz et al. (2020, 2021)).",
"We extract a total of 107 features that fall into five categories: (1) measures of syntactic complexity (N=16), (2) measures of lexical richness (N=14), (3) register-based n-gram frequency measures (N=25), (4) readability measures (N=14), and (5) psycholinguistic measures (N=38).",
"A concise overview of the features used in this study is provided in Table 5 in the appendix.",
"Tokenization, sentence splitting, part-of-speech tagging, lemmatization and syntactic PCFG parsing were performed using Stanford CoreNLP (Manning et al., 2014).",
"The syntactic complexity measures comprise",
"(i) surface measures that concern the length of production units, such as the mean length of words, clauses and sentences,",
"(ii) measures of the type and incidence of embeddings, such as dependent clauses per T-Unit or verb phrases per sentence or",
"(iii) the frequency of particular types of particular structures, such as the number of complex nominal per clause.",
"These features are implemented based on descriptions in Lu (2010) and using the Tregex tree pattern matching tool (Levy and Andrew, 2006) with syntactic parse trees for extracting specific patterns.",
"Lexical richness measures fall into three distinct sub-types:",
"(i) lexical density, such as the ratio of the number of lexical (as opposed to grammatical) words to the total number of words in a text, 5278",
"(iii) lexical variation, i.e. the range of vocabulary as displayed in language use, captured by text-size corrected type-token ratio and",
"(iii) lexical sophistication, i.e. the proportion of relatively unusual or advanced words in the learner's text, such as the number of New General Service List (Browne et al., 2013).",
"The operationalizations of these measures follow those described in Lu (2012) and Strbel (2014).",
"The register-based n-gram frequency measures are derived from the five register sub-components of the Contemporary Corpus of American English (COCA, (Davies, 2008)): spoken, magazine, fiction, news and academic language 1 .",
"These measures consider both the register-specific frequency rank and count: Norm n,s,r = | C n,s,r | log (cid:104)(cid:81) c | Cn,s,r | freq n,r ( c ) (cid:105) | U n,s | (1) Let A n,s be the list of n-grams ( n [1 , 5] ) appearing within a sentence s , B n,r the list of n-gram appearing in the n-gram frequency list of register r ( r { acad, fic, mag, news, spok } ) and C n,s,r = A n,s B n,r the list of n-grams appearing both in s and the n-gram frequency list of register r .",
"U n,s is defined as the list of unique n-gram in s , and freq n,r ( a ) the frequency of n-gram a according to the n-gram frequency list of register r .",
"The total of 25 measures results from the combination of",
"(a) a reference list' containing the top 100k most frequent n-grams and their frequencies from one of five registers of the COCA corpus and",
"(b) the size of the n-gram ( n [1 , 5] ).",
"The readability measures combine a word familiarity variable defined by prespecified vocabulary resource to estimate semantic difficulty together with a syntactic variable, such as average sentence length.",
"Examples of these measures are the Fry index (Fry, 1968) or the SMOG (McLaugh-lin, 1969).",
"Finally, the psycholinguistic measures capture cognitive aspects of reading not directly addressed by the surface vocabulary and syntax features of traditional formulas.",
"These measures include a word's average age-of-acquisition (Ku-perman et al., 2012) or prevalence, which refers 1 The Contemporary Corpus of American English is the largest genre-balanced corpus of American English, which at the time the measures were derived comprised of 560 million words.",
"to the number of people knowing the word (Brys-baert et al., 2019; Johns et al., 2020).",
"We analyze data from eight word-level reading time measures, which were also investigated in Hollenstein et al. (2021).",
"The measures include general word-level characteristics such as (1) the number of fixations (NFX), i.e. the number of times a subject fixates on a given word w, averaged over all participants, (2) mean fixation duration (MFD), the average fixation duration of all fixations made on w, averaged over all participants and (3) fixation proportion (FXP), the number of subjects that fixated w, divided by the total number of participants.",
"Early processing' measures pertain to the early lexical and syntactic processing and are based on the first time a word is fixated.",
"These features include: (4) first fixation duration (FFD), i.e. the duration of the first fixation on w (in milliseconds), averaged over all subjects and (5) first pass duration (FPD), i.e. the sum of all fixations on w from the first time a subject fixates w to the first time the subject fixates another token.",
"Late processing' measures capture the late syntactic processing and are based on words which were fixated more than once.",
"These measures comprise (6) total fixation duration (TFD), i.e. the sum of the duration of all fixations made on w, averaged over all subjects, (7) number of re-fixations (NRFX), the number of times w is fixated after the first fixation, i.e., the maximum between 0 and the NFIX-1, averaged over all subjects and (8) re-read proportion (RRDP), the number of subjects that fixated w more than once, divided by the total number of subjects.",
"The means, standard deviations and observed ranges for all eye-tracking features are shown in Tables 1 and",
"2. Like in Hollenstein et al. (2021), before being entered into the models, all eye-tracking features were scaled between 0 and 100 so that the loss can be calculated uniformly over all features.",
"Deep neural transformer-based language models create contextualized word representations that are sensitive to the context in which the words appear.",
"These models have yielded significant improvements on a diverse array of NLP tasks, ranging from question answering to coref-erence resolution.",
"We compare two such models in terms of their ability to predict eye-tracking features: Bidirectional Encoder Representations from Transformers' (BERT) (Devlin et al., 2018) and Generative Pre-trained Transformer 2' (GPT-2) (Radford et al., 2019).",
"BERT is an auto-encoder model trained with a dual objective function of predicting masked words and the next sentence.",
"It consists of stacked transformer encoder blocks and uses self-attention, where each token in an input sentence looks at the bidirectional context, i.e. tokens on left and right of the considered token.",
"In contrast, GPT-2 is an autoregressive model consisting of stacked transformer decoder blocks trained with a language modelling objective, where the given sequence of tokens is used to predict the next token.",
"While GPT-2 uses self-attention as well, it employs masking to prevent words from attending to following tokens, hereby processing language fully unidirectionally.",
"BERT is trained on the BooksCorpus (800M words) and Wikipedia (2,500M words), whereas GPT-2 is trained on WebText, an 8-million documents subset of CommonCrawl amounting to 40 GB of text.",
"We chose the BERT base model (cased) because it is most comparable to GPT-2 with respect to number of layers and dimensionality (BERT base model (cased) has 110M trainable parameters, GPT-2 has 117M).",
"We evaluate the eye-tracking predictions of the models both on within-domain text, using an 80/10/10 split of the much larger GECO dataset (representing fiction language), as well as on out-of-domain text using the complete, much smaller PROVO dataset (comprising also online news and popular science magazine language).",
"Furthermore, since overly aggressive fine-tuning may cause catastrophic forgetting (Howard and Ruder, 2018), we perform all experiments both with frozen' language models, where all the layers of the language model are frozen and only the attached neural network layers are trained, and also fully fine-tuned' language models, where the error is back-propagated through the entire architecture and the pretrained weights of the model are updated based on the GECO training set.",
"For all models we explored in this paper, we apply a dropout rate of 0.1 and a l2 regularization of 1 10 4 .",
"We use AdamW as the optimizer and mean squared error as the loss function.",
"We use a fixed learning rate with warmup.",
"During warmup, the learning rates are linearly increased to the peak learning rates and then fixed.",
"For BERT with a frozen' language model, the peak learning rate is 5 10 4 with 5 warmup steps and for GPT-2 with a frozen' language model, it is 0 .",
"001 also with 5 warmup steps.",
"Models with 'fully fine-tuned' language models are trained with two phases.",
"In the first phase, the weights of the lan-5280 guage models are frozen and only regression layers are trained.",
"During this phase, peak learning rates of 3 10 4 for BERT and 0 .",
"001 for GPT-2 are used.",
"For both models, the first phase is performed over 12 epochs with 5 warmup steps.",
"In the second phase, we unfreeze the weights of language models and fine-tune the language models together with the regression layers.",
"During this phase, the BERT-based model is trained with a peak learning rate of 5 10 5 while GPT-2-based model is trained with a peak learning rate of 5 10 4 .",
"The number of warmup steps for training both models in this phase is",
"3. We adopted a two-phase training procedure since preliminary experiments showed that this procedure yields same results as training the entire models from the first epoch, yet it can speed up model convergence.",
"All hyper-parameters are optimized through grid search.",
"To investigate the impact of the text properties listed in Section 3.2 on prediction accuracy, we partitioned the GECO testset into deciles according to each textual property, i.e. each of the 107 features.",
"We then calculated the Pearson correlation coefficients between the decile of a given textual feature and the mean absolute error (MAE) of a given model.",
"We expected to observe higher prediction accuracy (lower MAE) for sentences with higher readability, lower syntactic complexity, lower lexical richness, higher n-gram frequency and less demanding psycholinguistic properties, i.e. lower age-of-acquisition scores and higher prevalence scores.",
"To determine whether eye movement patterns were affected by textual characteristics of the previous sentences (sentence spillover effects), a bidirectional LSTM (BLSTM) model was integrated into the predictive models (Figure 1).",
"This BLSTM model reads 107 dimensional vectors of textual features CM i N , , CM i 1 from N previous sentences 2 as its input, transforms them through 4 BLSTM layers of 512 hidden units each, and outputs a 1024 dimensional vector [ h 4 N | h 41 ] , that is a concatenation of the last hidden states of the 4th BLSTM layer in the forward and backward directions h 4 N , h 41 .",
"A fully connected (FC) layer is added on top of the BLSTM layers to reduce the dimension of BLSTM model output to 256 ( C i ).",
"Meanwhile, another FC layer is added to the pre-trained language model (BERT or GPT-2) in order to reduce its logits to the same dimension ( E i 1 , , E iM ).",
"The reduced BLSTM output is then added to each of the reduced language model logits.",
"Finally, the 256-dimensional joint vectors are fed to a final regression layer to predict human reading behavior.",
"The procedures used to train the hybrid' models with textual characteristics of the previous sentences was identical to those specified above.",
"Grid search yielded the same optimized values for all hyper-parameters, except for the peak learning rate of fully fine-tuned' model with GPT-2 in second training phase, which was 1 10 4 .",
"To assess the relative importance of the feature groups, we employed Submodular Pick Lime (SP-LIME; Ribeiro et al. (2016)), a method to construct a global explanation of a model by aggregating the weights of the linear models.",
"We first construct local explanations using LIME with a linear local explanatory model, exponential kernel function with Hamming distance and a kernel width of = 0 .",
"75 d , where d is the number of feature groups.",
"The global importance score of the SP-LIME for a given feature group j can then be derived by: I j = (cid:113)(cid:80) n i =1 | W ij | , where W ij is the j th coefficient of the fitted linear regression model to explain a data sample x i .",
"We use sentence-level accuracy (100-MAE) and coefficients of determination ( R 2 ) as metrics to evaluate the performance of all models.",
"Table 3 shows the evaluation results for all models averaged over all eye-tracking features.",
"Table 3 shows that both BERT and GPT-2 models predicted the eye-tracking features of both datasets with more than 92% accuracy.",
"The fine-tuned models performed consistently better than the pretrained-only (frozen') models both on the within-domain text (GECO) and on the out-of-domain text (PROVO).",
"This result indicates that the learned representations are general enough to be successfully applied both in the prediction of reading patterns of fiction texts as well as in the prediction of news and popular science texts.",
"The BERT models consistently outperformed the GPT-2 models with a difference in R 2 of as much as 10.54% on the within-domain data (GECO).",
"This result stands in sharp contrast with those reported in Schrimpf et al. (2021) summarised in Section",
"2. In their interpretation of the success of GPT-2 in predicting neural activity during reading, Schrimpf et al. (2021) state that GPT-2 is also arguably the most cognitively plausible of the transformer models (because it uses unidirectional, forward attention).",
"Especially in view of the remarkable margin by which the BERT models outperformed the GPT-2 models here, it appears that arguments that infer cognitive plausibility from prediction success should be viewed with caution (see also Merkx and Frank (2020) for Table 3: Model performance across datasets.",
"Note: fr' = freeze all layers of language model; ft' = the entire model is fine-tuned; + com S-1' = including textual features of previous sentence",
"further intricacies of the issue).",
"The most accurately predicted individual eye-tracking measures were fixation probability (FXP), mean fixation duration (MFD) and first fixation duration (FFD), indicating that prediction accuracy was generally better for early measures than for late measures.",
"A detailed overview of the results for each eyetracking measure across all models and datasets is provided in Table 7 in the appendix.",
"This finding suggests that the accurate prediction of late measures that are assumed to reflect higher order processes such as syntactic and semantic integration, revision, and ambiguity resolution may benefit from the inclusion of contextual information beyond the current sentence.",
"The correlation analyses of the textual features and the mean absolute error revealed that prediction accuracy was affected by the text characteristics of the sentence under consideration.",
"Such effects were found across all eye-tracking met-5282 rics for both BERT and GPT-2 models in both their frozen and fully fine-tuned variants.",
"For reasons of space, we focus our discussion on the predictions of the BERT frozen model of first pass durations on the GECO dataset (additional results for both frozen and fine-tuned BERT models for both first pass duration and total fixation duration are provided in Figure 3 in the appendix).",
"Figure 2 visualizes the impact of all textual features that reached correlation coefficients r > | 0 .",
"2 | along with the feature group they belong to.",
"As is evident in Figure 2 the prediction accuracy of the BERT frozen model was impacted by features from all five feature groups with individual features affecting prediction accuracy in opposite ways.",
"A strong impact ( r > | 0 . 5 | ) was observed for several features of the n-gram feature group: Fixation durations of sentences with higher scores on ngram-frequency features from the news, magazine and spoken registers were predicted more accurately than those with lower scores on these measures.",
"The SMOG readability index, which estimates the years of education a person needs to understand a piece of writing, also has a strong impact: Predicted first pass durations were less accurate in sentences with higher SMOG scores.",
"Several features from the lexical richness, syntactic complexity and readability groups had a moderate impact on prediction accuracy ( | 0 . 3 | < r < | 0 . 5 | ): For example, predictions of fixation durations were less accurate on sentences of with a more clausal embedding (ClausesPerSentence) and greater lexical sophistication (MeanLengthWord, Sophisti-cation.ANC and Sophistication.BNC).",
"A similar effect was also observed for the psycholinguistic age-of-acquisition features (AoA mean, AoA max), where predictions of fixations times were less accurate for later acquired words.",
"Note that the finding that the correlation coefficients of the readability features have opposite signs is due to the fact that these are either defined to quantify ease of reading (e.g. Flesch Kincaid Reading Ease) or reading difficulty (e.g. SMOG index).",
"Turning to the results of the hybrid models with integrated information on textual characteristics of the preceding sentence, we found that highest accuracy ( R 2 = 58 . 36 %) was achieved by the fine-tuned BERT model.",
"This amounts to an increase in performance over a model trained without that information of 1.53%.",
"This result demonstrates that future studies should take textual spillover effects into account.",
"Our best-fitting model outperformed not only the best-performing BERT model in Hollenstein et al. (2021), BERT-BASE-MULTILINGUAL-CASED (Wolf et al., 2019) but also the overall best-performing transformer-based model, XLM-MLM-ENDE-1024 (Lample and Conneau, 2019) tested in that study.",
"This result demonstrates that the claim put forth in Hollenstein et al. (2021) that multilingual models show an advantage over language specific ones and that multilingual models might provide cognitively more plausible representations in predicting reading needs to be viewed with caution.",
"The results of the feature ablation experiments revealed that the main sources of the greater prediction accuracy of the hybrid models was asso-5283 Table 4: Feature ablation of different models on PROVO dataset.",
"ciated with information concerning the syntactic complexity, lexical richness and n-gram frequency of the preceding sentence.",
"An overview of the results is presented in Table 4.",
"We focus here on the results on the out-of-domain testset (PROVO) for which improvements over models without the integrated textual information were more pronounced.",
"As is evident in Table 4, the central role of the three feature groups listed above result was observed across models (BERT vs. GPT-2) and across training procedures (frozen vs. fine-tuning).",
"However, Table 4 also demonstrates clear differences between the models: While the BERT models show greater sensitivity to syntactic complexity, the GPT-2 models mostly benefit from information concerning n-gram frequency.",
"A possible interpretation of this finding is that a unidirectional model like GPT-2 relies more strongly on word sequencing than a bidirectional one.",
"Future research is needed to examine this in more detail so that effects associated with differences in model architecture can be disentangled.",
"In this paper we conducted the first systematic investigation of the role of general text features in predicting human reading behavior using transformer-based language models (BERT & GPT-2).",
"We have shown (1) that model accuracy is systematically linked to sentence-level text features spanning five measurement categories (syn-tax, complexity, lexical richness, register-specific N-gram frequency, readability, and psycholinguistic properties), and (2) that prediction accuracy can be improved by using hybrid models that consider spillover effects from the previous sentence."
] | [
"abstain",
"abstain",
"method",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result"
] |
[
"Training on only perfect Standard English corpora predisposes pre-trained neural networks to discriminate against minorities from nonstandard linguistic backgrounds (e.g., African American Vernacular English, Colloquial Singapore English, etc.).",
"We perturb the inflectional morphology of words to craft plausible and semantically similar adversarial examples that expose these biases in popular NLP models, e.g., BERT and Transformer, and show that adversarially fine-tuning them for a single epoch significantly improves robustness without sacrificing performance on clean data.",
"1 1 Introduction In recent years, Natural Language Processing (NLP) systems have gotten increasingly better at learning complex patterns in language by pretraining large language models like BERT, GPT-2, and CTRL (Devlin et al., 2019; Radford et al., 2019; Keskar et al., 2019), and fine-tuning them on task-specific data to achieve state of the art results has become a norm.",
"However, deep learning models are only as good as the data they are trained on.",
"Existing work on societal bias in NLP primarily focuses on attributes like race and gender (Boluk-basi et al., 2016; May et al., 2019).",
"In contrast, we investigate a uniquely NLP attribute that has been largely ignored: linguistic background.",
"Current NLP models seem to be trained with the implicit assumption that everyone speaks fluent (often U.S.) Standard English, even though two-thirds ( > 700 million) of the English speakers in the world speak it as a second language (L2) (Eber-hard et al., 2019).",
"Even among native speakers, a significant number speak a dialect like African American Vernacular English (AAVE) rather than Standard English (Crystal, 2003).",
"In addition, these 1 Code and adversarially fine-tuned models available at https://github.com/salesforce/morpheus .",
"Therefore, putting these models directly into production without addressing this inherent bias puts them at risk of committing linguistic discrimination by performing poorly for many speech communities (e.g., AAVE and L2 speakers).",
"This could take the form of either failing to understand these speakers (Rickford and King, 2016; Tatman, 2017), or misinterpreting them.",
"For example, the recent mistranslation of a minority speaker's social media post resulted in his wrongful arrest (Hern, 2017).",
"Since L2 (and many L1 dialect) speakers often exhibit variability in their production of inflectional morphology 2 (Lardiere, 1998; Prevost and White, 2000; Haznedar, 2002; White, 2003; Seymour, 2004), we argue that NLP models should be robust to inflectional perturbations in order to minimize their chances of propagating linguistic discrimination.",
"Hence, in this paper, we: 2 Inflections convey tense, quantity, etc.",
"See Appendix A for dialectal examples.",
"Propose MORPHEUS , a method for generating plausible and semantically similar adversaries by perturbing the inflections in the clean examples (Figure 1).",
"In contrast to recent work on adversarial examples in NLP (Belinkov and Bisk, 2018; Ebrahimi et al., 2018; Ribeiro et al., 2018), we exploit morphology to craft our adversaries.",
"Demonstrate its effectiveness on multiple machine comprehension and translation models, including BERT and Transformer (Tables 1 & 2).",
"Show that adversarially fine-tuning the model on an adversarial training set generated via weighted random sampling is sufficient for it to acquire significant robustness, while preserving performance on clean examples (Table 5).",
"To the best of our knowledge, we are the first to investigate the robustness of NLP models to inflectional perturbations and its ethical implications.",
"Fairness in NLP.",
"It is crucial that NLP systems do not amplify and entrench social biases (Hovy and Spruit, 2016).",
"Recent research on fairness has primarily focused on racial and gender biases within distributed word representations (Boluk-basi et al., 2016), coreference resolution (Rudinger et al., 2018), sentence encoders (May et al., 2019), and language models (Bordia and Bowman, 2019).",
"However, we posit that there exists a significant potential for linguistic bias that has yet to be investigated, which is the motivation for our work.",
"Adversarial attacks in NLP.",
"First discovered in computer vision by Szegedy et al. (2014), adversarial examples are data points crafted with the intent of causing a model to output a wrong prediction.",
"In NLP, this could take place at the character, morphological, lexical, syntactic, or semantic level.",
"Jia and Liang (2017) showed that question answering models could be misled into choosing a distractor sentence in the passage that was created by replacing key entities in the correct answer sentence.",
"Belinkov and Bisk (2018) followed by demonstrating the brittleness of neural machine translation systems against character-level perturbations like randomly swapping/replacing characters.",
"However, these attacks are not optimized on the target models, unlike Ebrahimi et al. (2018), which makes use of the target model's gradient to find the character change that maximizes the model's error.",
"Since these attacks tend to disrupt the sentence's semantics, Ribeiro et al. (2018) and Michel et al. (2019) propose searching for adversaries that preserve semantic content.",
"Alzantot et al. (2018) and Jin et al. (2019) explore the use of synonym substitution to create adversarial examples, using word embeddings to find the n nearest words.",
"Eger et al. (2019) take a different approach, arguing that adding visual noise to characters leaves their semantic content undisturbed.",
"Iyyer et al. (2018) propose to create paraphrase adversaries by conditioning their generation on a syntactic template, while Zhang et al. (2019b) swap key entities in the sentences.",
"Zhang et al. (2019a) provide a comprehensive survey of this topic.",
"Adversarial training.",
"In order to ensure our NLP systems are not left vulnerable to powerful attacks, most existing work make use of adversarial training to improve the model's robustness (Good-fellow et al., 2015).",
"This involves augmenting the training data either by adding the adversaries to or replacing the clean examples in the training set.",
"Summary.",
"Existing work in fairness mostly focus on tackling bias against protected attributes like race and gender, while those in adversarial NLP primarily investigate characterand word-level perturbations and seek to improve the models' robustness by retraining them from scratch on the adversarial training set.",
"Our work makes use of perturbations in inflectional morphology to highlight the linguistic bias present in models such as BERT and Transformer, before showing that simply fine-tuning the models for one epoch on the adversarial training set is sufficient to achieve significant robustness while maintaining performance on clean data.",
"Inflectional perturbations inherently preserve the general semantics of a word since the root remains unchanged.",
"In cases where a word's part of speech (POS) is context-dependent (e.g., duck as a verb or a noun), restricting perturbations to the original POS further preserves its original meaning.",
"Additionally, since second language speakers are prone to inflectional errors (Haznedar, 2002; White, 2003), adversarial examples that perturb the inflectional morphology of a sentence should be less perceivable to people who interact heavily with non-native speakers or are themselves non-native speakers.",
"Hence, we present MORPHEUS , our proposed method for crafting inflectional adversaries.",
"it is plausible for English dialect and second language (L2) speakers to produce such sentences.",
"(Top)",
"Models trained on SQuAD 2.0 are more fragile than those trained on SQuAD 1.1, and have a bias towards predicting no answer.",
"Examples are answerable questions and therefore present in both SQuAD 1.1 and 2.0.",
"(Bottom)",
"Perturbing two inflections caused Transformer-big to output a completely irrelevant sentence.",
"In addition, adversarial examples for 1.4% of the test set caused the model to output the source (English) sentences.",
"Problem formulation.",
"Given a target model f and an original input example x for which the ground truth label is y , our goal is to generate the adversarial example x (cid:48) that maximizes f 's loss.",
"Formally, we aim to solve the following problem: x (cid:48) = arg max x c L ( y, f ( x c )) (1) where x c is an adversarial example generated by perturbing x , f ( x ) is the model's prediction, and L ( ) is the model's loss function.",
"In this setting, f is a neural model for solving a specific NLP task.",
"Proposed solution.",
"To solve this problem, we propose MORPHEUS (Algorithm 1), an approach that greedily searches for the inflectional form of each noun, verb, or adjective in x that maximally increases f 's loss (Eq. 1).",
"For each token in x , MORPHEUS calls MAXINFLECTED to find the inflected form that caused the greatest increase in f 's loss.",
"3 Table 1 presents some adversarial examples obtained by running MORPHEUS on state-of-the-art machine reading comprehension and translation models: namely, BERT (Devlin et al., 2019), SpanBERT (Joshi et al., 2019), and Transformer-big (Vaswani et al., 2017; Ott et al., 2018).",
"3 A task-specific evaluation metric may be used instead of the loss in situations where it is unavailable.",
"However, as we discuss later, the choice of metric is important for optimal performance and should be chosen wisely.",
"There are two possible approaches to implementing MAXINFLECTED : one is to modify each token independently from the others in parallel , and the other is to do it sequentially such that the increase in loss is accumulated as we iterate over the tokens.",
"A major advantage of the parallel approach is that it is theoretically possible to speed it up by t times, where t is the number of tokens which are nouns, verbs, or adjectives.",
"However, since current state-of-the-art models rely heavily on contextual representations, the sequential approach is likely to be more effective in finding combinations of inflectional perturbations that cause major increases in loss.",
"We found this to be the case in our preliminary experiments (see Table 6 in Appendix D).",
"Assumptions.",
"MORPHEUS treats the target model as a black box and maximally requires only access to the model's logits to compute the loss.",
"As mentioned, task-specific metrics may be used instead of the loss as long as the surface is not overly flat, like in a step function.",
"Examples of inappropriate metrics are the exact match and F 1 scores for extractive question answering, which tend to be 1 for most candidates but drop drastically for specific ones.",
"This may affect MORPHEUS ' ability to find an adversary that induces absolute model failure.",
"While the black box assumption has the advantage of not requiring access to the target model's gradients and parameters, a limitation is that we need to query the model for each candidate inflec-tion's impact on the loss, as opposed to Ebrahimi et al. (2018)'s approach.",
"However, this is not an issue for inflectional perturbations since each word usually has less than 5 possible inflections.",
"Candidate generation.",
"We make use of lemminflect 4 to generate candidate inflectional forms in the GETINFLECTIONS method, a simple process in which the token is first lemma-tized before being inflected.",
"In our implementation of GETINFLECTIONS , we also allow the user to specify if the candidates should be constrained to the same universal part of speech.",
"Semantic preservation.",
"MORPHEUS constrains its search to inflections belonging to the same universal part of speech.",
"For example, take the word duck.",
"Depending on the context, it may either be a verb or a noun.",
"In the context of the sentence There's a jumping duck, duck is a noun andM ORPHEUS may only choose alternate inflections associated with nouns.",
"This has a higher probability of preserving the sentence's semantics compared to most other approaches, like character/word shuffling or synonym swapping, since the root word and its position in the sentence remains unchanged.",
"Early termination.",
"MORPHEUS selects an inflection if it increases the loss.",
"In order to avoid unnecessary searching, it terminates once it finds an adversarial example that induces model failure.",
"In our case, we define this as a score of 0 on the task's evaluation metric (the higher, the better).",
"Other implementation details.",
"In order to increase overall inflectional variation in the set of adversarial examples, GETINFLECTIONS shuffles the generated list of inflections before returning it (see Figure 4 in Appendix).",
"Doing this has no 4 https://github.com/bjascob/LemmInflect effect on MORPHEUS ' ability to induce misclassifi-cation, but prevents overfitting during adversarial fine-tuning, which we discuss later in Section 6.",
"Additionally, since MORPHEUS greedily perturbs each eligible token in x , it may get stuck in a local maximum for some x values.",
"To mitigate this, we run it again on the reversed version of x if the early termination criterion was not fulfilled during the forward pass.",
"Finally, we use sacremoses 5 for tokenization and NLTK (Bird et al., 2009) for POS tagging.",
"NLP tasks.",
"To evaluate the effectiveness of MORPHEUS at inducing model failure in NLP models, we test it on two popular NLP tasks: question answering (QA) and machine translation (MT).",
"QA involves language understanding (classification), while MT also involves language generation.",
"Both are widely used by consumers of diverse linguistic backgrounds and hence have a high chance of propagating discrimination.",
"Measures.",
"In addition to the raw scores, we also report the relative decrease for easier comparison across models since they perform differently on the clean dataset.",
"Relative decrease ( d r ) is calculated using the following formula: d r = score original score adversarial score original (2) 4.1 Extractive Question Answering Given a question and a passage containing spans corresponding to the correct answer, the model is expected to predict the span corresponding to the answer.",
"Performance for this task is computed using exact match or average F 1 (Rajpurkar et al., 2016).",
"We evaluate the effectiveness of our attack using average F 1 , which is more forgiving (for the target model).",
"From our experiments, the exact match score is usually between 3-9 points lower than the average F 1 score.",
"Question Answering Dataset (SQuAD) comprises over 100,000 questionanswer pairs written by crowdworkers",
"5 https://github.com/alvations/sacremoses",
"based on Wikipedia articles.",
"SQuAD 1.1 guarantees that the passages contain valid answers to the questions posed (Rajpurkar et al., 2016).",
"SQuAD 2.0 increases the task's difficulty by including another 50,000 unanswerable questions, and models are expected to identify when a passage does not contain an answer for the given question (Rajpurkar et al., 2018).",
"Since the test set is not public, we generate adversarial examples from and evaluate the models on the standard dev set.",
"In addition, the answerable questions from SQuAD 2.0 are used in place of SQuAD 1.1 to evaluate models trained on SQuAD 1.1.",
"This allows for easy comparison between the performance of the SQuAD 1.1-fine-tuned models and SQuAD 2.0-fine-tuned ones for answerable questions.",
"We found performance on the answerable questions from SQuAD 2.0 to be comparable to SQuAD 1.1.",
"Models.",
"We evaluate MORPHEUS on Gardner et al. (2018)'s implementation of BiDAF (Seo et al., 2017), a common baseline model for SQuAD 1.1, ELMo-BiDAF (Peters et al., 2018), the transformers implementation (Wolf et al., 2019) of BERT, and SpanBERT, a pre-training method focusing on span prediction that outperforms BERT on multiple extractive QA datasets.",
"From Table 2, we see that models based on contextual embeddings (e.g., ELMo and BERT variants) tend to be more robust than those using fixed word embeddings (GloVe-BiDAF).",
"This difference is likely due to the pre-training process, which gives them greater exposure to a wider variety of contexts in which different inflections occur.",
"Removing the POS constraint further degrades the models' performance by another 10% of the original score, however, this difference is likely due to changes in the semantics and expected output of the examples.",
"BiDAF vs. BERT.",
"Even after accounting for the performance difference on clean data, the BiDAF variants are significantly less robust to inflectional adversaries compared to the BERT variants.",
"This is likely a result of BERT's greater representational power and masked language modeling pre-training procedure.",
"Randomly masking out words during pre-training could have improved the models' robustness to small, local perturbations (like ours).",
"BERT vs. SpanBERT.",
"In the context of question answering, SpanBERT appears to be slightly more robust than vanilla BERT when comparing overall performance on the two SQuAD datasets.",
"However, the difference becomes significant if we look only at the SQuAD 2.0-fine-tuned models' performance on answerable questions (7% differ-ence).",
"This indicates that BERT has a stronger bias towards predicting no answer when it encounters inflectional perturbations compared to SpanBERT.",
"SQuAD 1.1 vs. SQuAD 2.0.",
"The ability to know what you don't know (Rajpurkar et al., 2018) appears to have been obtained at a great cost.",
"The SQuAD 2.0-fine-tuned models are not only generally less robust to inflectional errors than their SQuAD 1.1 equivalents (6.5% difference), but also significantly less adept at handling answerable questions (1218% difference).",
"This discrepancy suggests a stronger bias in SQuAD 2.0 models towards predicting no answer upon receiving sentences containing inflectional errors (see Table 1).",
"Transferability.",
"Next, we investigate the transferability of adversarial examples found by MORPHEUS across different QA models and present some notable results in Table 3.",
"The adversarial examples found for GloVe-BiDAF transfer to a limited extent to other models trained on SQuAD 1.1, however, they have a much greater impact on BERT SQuAD 2 and SpanBERT SQuAD 2 (34x more).",
"We observe a similar pattern for adversarial examples found for SpanBERT SQuAD 1.1 .",
"Of the two, BERT is more brittle in general: the SpanBERT SQuAD 1.1 adversaries have a greater effect on BERT SQuAD 2 's performance on answerable questions than on SpanBERT SQuAD 2 's.",
"Discussion.",
"One possible explanation for the SQuAD 2.0 models' increased fragility is the difference in the tasks they were trained for: SQuAD 1.1 models expect all questions to be answerable and only need to contend with finding the right span, while SQuAD 2.0 models have the added burden of predicting whether a question is answerable.",
"Therefore, in SQuAD 1.1 models, the feature space corresponding to a possible answer ends where the space corresponding to another possible answer begins, and there is room to accommodate slight variations in the input (i.e., larger individual spaces).",
"We believe that in SQuAD 2.0 models, the need to accommodate the unanswerable prediction forces the spaces corresponding to the possible answers to shrink, with unanswerable spaces potentially filling the gaps between them.",
"For SQuAD 2.0 models, this increases the probability of an adversarial example landing in the space corresponding to the unanswerable prediction.",
"This would explain the effectiveness of adversarial fine-tuning in Section 6, which intuitively creates a buffer zone and expands the decision boundaries around each clean example.",
"The diminished effectiveness of the transferred adversaries at inducing model failure is likely due to each model learning slightly different segmentations of the answer space.",
"As a result, different small, local perturbations have different effects on each model.",
"We leave the in-depth investigation of the above phenomena to future work.",
"We now demonstrate MORPHEUS ' ability to craft adversaries for NMT models as well, this time without access to the models' logits.",
"The WMT'14 English-French test set (newstest2014), containing 3,003 sentence pairs, is used for both evaluation and generating adversarial examples.",
"We evaluate our attack on the fairseq implementation of both the Convolutional Seq2Seq (Gehring et al., 2017) and Transformer-big models, and report the BLEU score (Papineni et al., 2002) using fairseq 's implementation (Ott et al., 2019).",
"From our experiments (Table 2), ConvS2S and Transformer-big appear to be extremely brittle even to inflectional perturbations constrained to the same part of speech (5657% decrease).",
"In addition, some adversarial examples caused the models to regenerate the input verbatim instead of a translation: 1.4% of the test set for Transformer-big, 3% for ConvS2S (see Table 9 in the Appendix for some ex-amples).",
"This is likely due to the joint source/target bytepair encoding (Sennrich et al., 2016) used by both NMT systems to tackle rare word translation.",
"We experimented with both BLEU and chrF (Popovic, 2015) as our optimizing criterion 6 and achieved comparable results for both, however, MORPHEUS found more adversarial examples that caused the model to output random sentences about Nicolas Sarkozy when optimizing for chrF.",
"To test our hypothesis that inflectional perturbations are likely to be relatively natural and semantics preserving, we randomly sample 130 adversar-6",
"adversar-6 We use the sacrebleu implementation (Post, 2018).",
"ial examples 7 from each dataset and ask 3 Amazon Mechanical Turk workers to indicate (1) whether the sentences could have been written by a native speaker, L2 speaker, beginner learner 8 , or no human; and (2) the likelihood of the original and adversarial examples sharing the same meaning.",
"To ensure the quality of our results, only Turkers who completed > 10,000 HITs with a 99% acceptance rate could access our task.",
"For comparison, we also report ratings by native U.S. English speakers, who were selected via a demographic survey and fluency test adapted from Hartshorne et al. (2018).",
"Workers were paid a rate of at least $12/hr.",
"9 Table 4 shows that Turkers from our unrestricted sample judged 95% of our adversaries to be plausibly written by a human and 92% generally likely to be semantically equivalent to the original examples 92% of the time, hence validating our hypothesis.",
"Qualitative analysis revealed that is/are am/been changes accounted for 48% of the implausible adversaries.",
"Discussion.",
"We believe that non-native speakers may tend to rate sentences as more human-like for the following reasons: Their exposure to another language as a native speaker leads them to accept sentences that mimic errors made by L2 English speakers who share their first language.",
"Their exposure to the existence of these abovementioned errors may lead them to be more forgiving of other inflectional errors that are uncommon to them; they may deem these errors as 7 Only adversarial examples that degraded the F 1 score by > 50 and the BLEU score by > 15 were considered.",
"8 We define a beginner as one who has just started learning the language, and an L2 speaker to be an experienced speaker.",
"9 Each task was estimated to take 20-25s to be comfortably completed, but they were routinely completed in under 20s.",
"They do not presume mastery of English, and hence may choose to give the higher score when deciding between 2 choices.",
"In this section, we extend the standard adversarial training paradigm (Goodfellow et al., 2015) to make the models robust to inflectional perturbations.",
"Since directly running MORPHEUS on the entire training dataset to generate adversaries would be far too time-consuming, we use the findings from our experiments on the respective dev/test sets (Section 4) to create representative samples of good adversaries.",
"This significantly improves robustness to inflectional perturbations while maintaining similar performance on the clean data.",
"We first present an analysis of the inflectional distributions before elaborating on our method for generating the adversarial training set.",
"Figure 2a illustrates the overall distributional differences in inflection occurrence between the original and adversarial examples found by MORPHEUS for SQuAD 2.0.",
"Note that these distributions are computed based on the Penn Treebank (PTB) POS tags, which are finer-grained than the universal POS (UPOS) tags used to constrain MORPHEUS ' search (Section 4).",
"For example, a UPOSVERB may be actually be a PTBVBD , VBZ , VBG , etc.",
"We can see obvious differences between the global inflectional distributions of the original datasets and the adversaries found by MORPHEUS .",
"The differences are particularly significant for the NN , NNS , and VBG categories.",
"NNS and VBG also happen to be uncommon in the original distribution.",
"Therefore, we conjecture that the models failed (Section 4) because MORPHEUS is able to find the contexts in the training data where these inflections are uncommon.",
"Since there is an obvious distributional difference between the original and adversarial examples, we hypothesize that bringing the training set's inflectional distribution closer to that of the adversarial examples will improve the models' robustness.",
"To create the adversarial training set, we first isolate all the adversarial examples (from the dev/test set) that caused any decrease in F 1 /BLEU score and count the number of times each inflection is used in this adversarial dataset, giving us the inflectional distribution in Figure 2a.",
"Next, we randomly select an inflection for each eligible token in each training example , weighting the selection with this inflectional distribution instead of a uniform one.",
"To avoid introducing unnecessary noise into our training data, only inflections from the same UPOS as the original word are chosen.",
"We do this 4 times per training example, resulting in an adversarial training set with a cleanadversarial ratio of 1 : 4 .",
"This can be done in linear time and is highly scalable .",
"Algorithm 2 in Appendix C details our approach and Figure 2b depicts the training set's inflectional distribution before and after this procedure.",
"Fine-tuning vs. retraining.",
"Existing adversarial training approaches have shown that retraining the model on the augmented training set improves robustness (Belinkov and Bisk, 2018; Eger et al., 2019; Jin et al., 2019).",
"However, this requires substantial compute resources.",
"We show that fine-tuning the pre-trained model for just a single epoch is sufficient to achieve significant robustness to inflectional perturbations yet still maintain good performance on the clean evaluation set (Table 5).",
"SpanBERT.",
"Following Joshi et al. (2019), we fine-tune SpanBERT SQuAD 2 for another 4 epochs on our adversarial training set.",
"Table 5 shows the effectiveness of our approach for SpanBERT SQuAD 2 .",
"After just a single epoch of fine-tuning, SpanBERT SQuAD 2 becomes robust to most of the initial adversarial examples with a < 2 -point drop in performance on the clean dev set.",
"More importantly, running MORPHEUS on the robust model fails to significantly degrade its performance.",
"After 4 epochs, the performance on the clean SQuAD 2.0 dev set is almost equivalent to the original SpanBERT SQuAD 2 's, however this comes at a slight cost: the performance on the answerable questions is slightly lower than before.",
"In fact, if performance on answerable questions is paramount, our results show that fine-tuning on the adversarial training set for 1 epoch would be a better (and more cost effective) decision.",
"Retraining SpanBERT adversarially did not result in better performance.",
"We also found that weighting the random sampling with the adversarial distribution helped to improve the robust model's performance on the answerable questions (refer to Table 7 in Appendix).",
"Transformer-big.",
"Similarly, model robustness improves dramatically (56.25% to 20.20% decrease) after fine-tuning for 1 epoch on the adversarial training set with a 3 BLEU point drop in clean data performance (Table 5).",
"Fine-tuning for a further 3 epochs reduced the difference but made the model less robust to new adversarial examples.",
"We also experimented with using randomly sampled subsets but found that utilizing the entire original training set was necessary for preserving performance on the clean data (see Table 8 in Appendix).",
"Our anonymous reviewers brought up the possibility of using grammatical error correction (GEC) systems as a defense against inflectional adversaries.",
"Although we agree that adding a GEC model before the actual NLU/translation model would likely help, this would not only require an extra modeloften another Transformer (Bryant et al., 2019)and its training data to be maintained, but would also double the resource usage of the combined system at inference time.",
"Consequently, institutions with limited resources may choose to sacrifice the experience of minority users rather than incur the extra maintenance costs.",
"Adversarial fine-tuning only requires the NLU/translation model to be fine-tuned once and consumes no extra resources at inference time.",
"Although we have established our methods' effectiveness at both inducing model failure and robustifying said models, we believe they could be further improved by addressing the following limitations:",
"than that of real L2 speaker errors, which produced some unrealistic adversarial examples.",
"2. Our method of adversarial fine-tuning is analogous to curing the symptom rather than addressing the root cause since it would have to be performed for each domain-specific dataset the model is trained on.",
"In future work, we intend to address these limitations by directly modeling the L2 and dialectal distributions and investigating the possibility of robustifying these models further upstream.",
"Ensuring that NLP technologies are inclusive, in the sense of working for users with diverse linguistic backgrounds (e.g., speakers of World Englishes such as AAVE, as well as L2 speakers), is especially important since natural language user interfaces are becoming increasingly ubiquitous.",
"We take a step in this direction by revealing the existence of linguistic bias in current English NLP modelse.g., BERT and Transformerthrough the use of inflectional adversaries, before using adversarial fine-tuning to significantly reduce it.",
"To find these adversarial examples, we propose MORPHEUS , which crafts plausible and semantically similar adversaries by perturbing an example's inflectional morphology in a constrained fashion, without needing access to the model's gradients.",
"Next, we demonstrate the adversaries' effectiveness using QA and MT, two tasks with direct and wide-ranging applications, before validating their plausibility and semantic content with real humans.",
"Finally, we show that, instead of retraining the model, fine-tuning it on a representative adversarial training set for a single epoch is sufficient to achieve significant robustness to inflectional adversaries while preserving performance on the clean dataset.",
"We also present a method of generating this adversarial training set in linear time by making use of the adversarial examples' inflectional distribution to perform weighted random sampling.",
"We would like to express our gratitude to Lav Varsh-ney, Jason Wu, Akhilesh Gotmare, and our anonymous reviewers for their insightful feedback on our paper, and friends who participated in our pilot studies.",
"Samson is supported by Salesforce and the Singapore Economic Development Board under its Industrial Postgraduate Programme."
] | [
"abstain",
"result",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"objective",
"objective",
"result",
"method",
"other",
"other"
] |
[
"We propose PeTra, a memory-augmented neural network designed to track entities in its memory slots.",
"PeTra is trained using sparse annotation from the GAP pronoun resolution dataset and outperforms a prior memory model on the task while using a simpler architecture.",
"We empirically compare key modeling choices, finding that we can simplify several aspects of the design of the memory module while retaining strong performance.",
"To measure the people tracking capability of memory models, we",
"(a) propose a new diagnostic evaluation based on counting the number of unique entities in text, and",
"(b) conduct a small scale human evaluation to compare evidence of people tracking in the memory logs of PeTra relative to a previous approach.",
"PeTra is highly effective in both evaluations, demonstrating its ability to track people in its memory despite being trained with limited annotation.",
"Understanding text narratives requires maintaining and resolving entity references over arbitrary-length spans.",
"Current approaches for coreference resolution (Clark and Manning, 2016b; Lee et al., 2017, 2018; Wu et al., 2019) scale quadratically (without heuristics) with length of text, and hence are impractical for long narratives.",
"These models are also cognitively implausible, lacking the incrementality of human language processing (Tanenhaus et al., 1995; Keller, 2010).",
"Memory models with finite memory and online/quasi-online entity resolution have linear runtime complexity, offering more scal-ability, cognitive plausibility, and interpretability.",
"Memory models can be viewed as general problem solvers with external memory mimicking a Turing tape (Graves et al., 2014, 2016).",
"Some of the earliest applications of memory networks in language understanding were for question answering, where the external memory simply stored all of the word/sentence embeddings for a document (Sukhbaatar et al., 2015; Kumar et al., 2016).",
"To endow more structure and interpretability to memory, key-value memory networks were introduced by Miller et al. (2016).",
"The key-value architecture has since been used for narrative understanding and other tasks where the memory is intended to learn to track entities while being guided by varying degrees of supervision (Henaff et al., 2017; Liu et al., 2018a,b, 2019a).",
"We propose a new memory model, PeTra, for entity tracking and coreference resolution, inspired by the recent Referential Reader model (Liu et al., 2019a) but substantially simpler.",
"Experiments on the GAP (Webster et al., 2018) pronoun resolution task show that PeTra outperforms the Referential Reader with fewer parameters and simpler architecture.",
"Importantly, while Referential Reader performance degrades with larger memory, PeTra improves with increase in memory capacity (before saturation), which should enable tracking of a larger number of entities.",
"We conduct experiments to assess various memory architecture decisions, such as learning of memory initialization and separation of memory slots into key/value pairs.",
"To test interpretability of memory models' entity tracking, we propose a new diagnostic evaluation based on entity countinga task that the models are not explicitly trained forusing a small amount of annotated data.",
"Additionally, we conduct a small scale human evaluation to assess quality of people tracking based on model memory logs.",
"PeTra substantially outperforms Referential Reader on both measures, indicating better and more interpretable tracking of people.",
"1 1 Code available at https://github.com/ shtoshni92/petra The IG character IG Amelia OW Shepherd CR , portrayed by ...",
"Figure 2 depicts PeTra, which consists of three components: an input encoder that given the tokens generates the token embeddings, a memory module that tracks information about the entities present in the text, and a controller network that acts as an interface between the encoder and the memory.",
"Given a document consisting of a sequence of tokens { w 1 , , w T } , we first pass the document through a fixed pretrained BERT model (Devlin et al., 2019) to extract contextual token embeddings.",
"Next, the BERT-based token embeddings are fed into a single-layer unidirectional Gated Recurrent Unit (GRU) (Cho et al., 2014) running left-to-right to get task-specific token embeddings { h 1 , , h T } .",
"The memory M t consists of N memory cells.",
"The i th memory cell state at time step t consists of a tuple ( m it , u it ) where the vector m it represents the content of the memory cell, and the scalar u it [0 , 1] represents its recency of usage.",
"A high value of u it is intended to mean that the cell is tracking an entity that has been recently mentioned.",
"Initialization Memory cells are initialized to the null tuple, i.e. ( 0 , 0); thus, our memory is parameter-free.",
"This is in contrast with previous entity tracking models such as EntNet (Henaff et al., 2017) and the Referential Reader (Liu et al., 2019a) where memory initialization is learned and the cells are represented with separate key and value vectors.",
"We will later discuss variants of our memory with some of these changes.",
"At each time step t the controller network determines whether token t is part of an entity span and, if so, whether the token is coreferent with any of the entities already being tracked by the memory.",
"Depending on these two variables, there are three possible actions:",
"(i) IGNORE : The token is not part of any entity span, in which case we simply ignore it.",
"(ii) OVERWRITE : The token is part of an entity span but is not already being tracked in the memory.",
"(iii) COREF : The token is part of an entity span and the entity is being tracked in the memory.",
"Therefore, the two ways of updating the memory are OVERWRITE and COREF .",
"There is a strict ordering constraint to the two operations: OVERWRITE precedes COREF , because it is not possible to corefer with a memory cell that is not yet tracking anything.",
"That is, the COREF operation cannot be applied to a previously unwritten memory cell, i.e. one with u it = 0 .",
"Figure 1 illustrates an idealized version of this process.",
"Next we describe in detail the computation of the probabilities of the two operations for each memory cell at each time step t .",
"First, the entity mention probability e t , which reflects the probability that the current token w t is part of an entity mention, is computed by: e t = (MLP 1 ( h t )) (1) where MLP 1 is a multi-layer perceptron and is the logistic function.",
"Overwrite and Coref If the current token w t is part of an entity mention, we need to determine whether it corresponds to an entity being currently tracked by the memory or not.",
"For this we compute the similarity between the token embedding h t and the contents of the memory cells currently tracking entities.",
"For the i th memory cell with memory vector m it 1 the similarity with h t is given by: sim it = MLP 2 ([ h t ; m it 1 ; h t (cid:12) m it 1 ; u it 1 ]) (2) where MLP 2 is a second MLP and (cid:12) is the Hadamard (elementwise) product.",
"The usage scalar u it 1 in the above expression provides a notion of distance between the last mention of the entity in cell i and the potential current mention.",
"The higher the value of u it 1 , the more likely there was a recent mention of the entity being tracked by the cell.",
"Thus u it 1 provides an alternative to distance-based features commonly used in pairwise scores for spans (Lee et al., 2017).",
"Given the entity mention probability e t and similarity score sim it , we define the coref score cs it as: cs it = sim it 1 [ u it 1 = 0] (3) where the second term ensures that the model does not predict coreference with a memory cell that has not been previously used, something not enforced by Liu et al. (2019a).",
"2 Assuming the coref score for a new entity to be 0, 3 we compute the coref probability c it and new entity probability n t as follows: c 1 t ... c Nt n t = e t softmax cs 1 t ... cs Nt 0 (4) Based on the memory usage scalars u it and the new entity probability n t , the overwrite probability for 2 A threshold higher than 0 can also be used to limit coreference to only more recent mentions.",
"each memory cell is determined as follows: o it = n t 1 i =arg min j u jt 1 (5) Thus we pick the cell with the lowest usage scalar u jt 1 to OVERWRITE .",
"In case of a tie, a cell is picked randomly among the ones with the lowest usage scalar.",
"The above operation is non-differentiable, so during training we instead use o it = n t GS (cid:18) 1 u it 1 (cid:19) i (6) where GS ( . ) refers to Gumbel-Softmax (Jang et al., 2017), which makes overwrites differentiable.",
"For each memory cell, the memory vector is updated based on the three possibilities of ignoring the current token, being coreferent with the token, or considering the token to represent a new entity (causing an overwrite): m it = IGNORE (cid:122) (cid:125)(cid:124) (cid:123) (1 ( o it + c it )) m it 1 + OVERWRITE (cid:122) (cid:125)(cid:124) (cid:123) o it h t + c it MLP 3 ([ h t ; m it 1 ]) (cid:124) (cid:123)(cid:122) (cid:125) COREF (7) In this expression, the coreference term takes into account both the previous cell vector m it 1 and the current token representation h t , while the overwrite term is based only on h t .",
"In contrast to a similar memory update equation in the Referential Reader which employs a pair of GRUs and MLPs for each memory cell, our update parameter uses just MLP 3 which is memory cell-agnostic.",
"where (0 , 1) is the decay rate for the usage scalar.",
"Thus the usage scalar u it keeps decaying with time unless the memory is updated via OVERWRITE or COREF in which case the value is increased to reflect the memory cell's recent use.",
"Memory Variants In vanilla PeTra, each memory cell is represented as a single vector and the memory is parameter-free, so the total number of model parameters is independent of memory size.",
"This is a property that is shared with, for example, differentiable neural computers (Graves et al., 2016).",
"On the other hand, recent models for entity tracking, such as the EntNet (Henaff et al., 2017) and the Referential Reader (Liu et al., 2019a), learn memory initialization parameters and separate the memory cell into key-value pairs.",
"To compare these memory cell architectures, we investigate the following two variants of PeTra: 1. PeTra + Learned Initialization : memory cells are initialized at t = 0 to learned parameter vectors.",
"2. PeTra + Fixed Key : a fixed dimensions of each memory cell are initialized with learned parameters and kept fixed throughout the document read, as in EntNet (Henaff et al., 2017).",
"Apart from initialization, the initial cell vectors are also used to break ties for overwrites in Eqs.",
"(5) and (6) when deciding among unused cells (with u it = 0 ).",
"The criterion for breaking the tie is the similarity score computed using Eq.",
"(2).",
"The probability that the tokens w t 1 and w t 2 are coreferential according to, say, cell i of the memory depends on three things:",
"(a) w t 1 is identified as part of an entity mention and is either overwritten to cell i or is part of an earlier coreference chain for an entity tracked by cell i ,",
"(b) Cell i is not overwritten by any other entity mention from t = t 1 + 1 to t = t 2 , and",
"(c) w t 2 is also predicted to be part of an entity mention and is coreferential with cell i .",
"Combining these factors and marginalizing over the cell index results in the following expression for the coreference link probability : PCL ( w t 1 , w t 2 ) = N (cid:88) i =1 ( o it 1 + c it 1 ) t 2 (cid:89) j = t 1 +1 (1 o ij ) c it 2 (9) 2.5 Losses The GAP (Webster et al., 2018) training dataset is small and provides sparse supervision with labels for only two coreference links per instance.",
"In order to compensate for this lack of supervision, we use a heuristic loss L ent over entity mention probabilities in combination with the end task loss L coref for coreference.",
"The two losses are combined with a tunable hyperparameter resulting in the following total loss: L = L coref + L ent .",
"The coreference loss is the binary cross entropy between the ground truth labels for mention pairs",
"and the coreference link probability PCL in Eq.",
"(9).",
"Eq.",
"(9) expects a pair of tokens while the annotations are on pairs of spans, so we compute the loss for all ground truth token pairs: L coref = (cid:88) ( s a ,s b ,y ab ) G (cid:32) (cid:88) w a s a (cid:88) w b s b H ( y ab , PCL ( w a , w b )) (cid:33) where G is the set of annotated span pairs and H ( p, q ) represents the cross entropy of the distribution q relative to distribution p .",
"Apart from the ground truth labels, we use im-plied labels in the coreference loss calculation.",
"For handling multi-token spans, we assume that all tokens following the head token are coreferential with the head token (self-links).",
"We infer more supervision based on knowledge of the setup of the GAP task.",
"Each GAP instance has two candidate names and a pronoun mention with supervision provided for the { name, pronoun } pairs.",
"By design the two names are different, and therefore we use them as a negative coreference pair.",
"Even after the addition of this implied supervision, our coreference loss calculation is restricted to the three mention spans in each training instance; therefore, the running time is O ( T ) for finite-sized mention spans.",
"In contrast, Liu et al. (2019a) compute the above coreference loss for all token pairs (assuming a negative label for all pairs outside of the mentions), which results in a runtime of O ( T 3 ) due to the O ( T 2 ) pairs and O ( T ) computation per pair, and thus will scale poorly to long documents.",
"We use the inductive bias that most tokens do not correspond to entities by imposing a loss on the average of the entity mention probabilities predicted across time steps, after masking out the labeled entity spans.",
"For a training instance where spans s A and s B correspond to the person mentions and span s P is a pronoun, the entity mention loss is L ent = (cid:80) Tt =1 e t m t (cid:80) Tt =1 m t where m t = 0 if w t s A s B s P and m t = 1 otherwise.",
"Each GAP instance has only 3 labeled entity mention spans, but the text typically has other entity mentions that are not labeled.",
"Unlabeled entity mentions will be inhibited by this loss.",
"However, on average there are far more tokens outside entity spans than inside the spans.",
"In experiments without this loss, we observed that the model is susceptible to predicting a high entity probability for all tokens while still performing well on the end task of pronoun resolution.",
"We are interested in tracking people beyond just the entities that are labeled in the GAP task, for which this loss is very helpful.",
"GAP is a gender-balanced pronoun resolution dataset introduced by Webster et al. (2018).",
"Each instance consists of a small snippet of text from Wikipedia, two spans corresponding to candidate names along with a pronoun span, and two binary labels indicating the coreference relationship between the pronoun and the two candidate names.",
"Relative to other popular coreference datasets (Pradhan et al., 2012; Chen et al., 2018), GAP is comparatively small and sparsely annotated.",
"We choose GAP because its small size allows us to do extensive experiments.",
"For the input BERT embeddings, we concatenate either the last four layers of BERTBASE , or layers 1922 of BERTLARGE since those layers have been found to carry the most information related to coreference (Liu et al., 2019b).",
"The BERT embeddings are fed to a 300-dimensional GRU model, which matches the dimensionality of the memory vectors.",
"We vary the number of memory cells N from 2 to 20.",
"The decay rate for the memory usage scalar is 0.98.",
"The MLPs used for predicting the entity probability and similarity score consist of two 300-dimensional ReLU hidden layers.",
"For the Fixed Key variant of PeTra we use 20 dimensions for the learned key vector and the remaining 280 dimensions as the value vector.",
"All models are trained for a maximum of 100 epochs with the Adam optimizer (Kingma and Ba, 2015).",
"The learning rate is initialized to 10 3 and is reduced by half, until a minimum of 10 4 , whenever there is no improvement on the validation performance for the last 5 epochs.",
"Training stops when there is no improvement in validation performance for the last 15 epochs.",
"The temperature of the Gumbel-Softmax distribution used in the OVERWRITE operation is initialized to 1 and halved every 10 epochs.",
"The coreference loss terms in Section 2.5.1 are weighted differently for different coreference links:",
"(a) self-link losses for multi-to-ken spans are given a weight of 1,",
"(b) positive coreference link losses are weighted by 5, and",
"(c) negative coreference link losses are multiplied by 50.",
"To prevent overfitting:",
"(a) we use early stopping based on validation performance, and",
"(b) apply dropout at a rate of 0.5 on the output of the GRU model.",
"Finally, we choose = 0 .",
"1 to weight the entity prediction loss described in Section 2.5.2.",
"One of the goals of this work is to develop memory models that not only do well on the coreference resolution task, but also are interpretable in the sense that the memory cells actually track entities.",
"Hence in addition to reporting the standard metrics on GAP, we consider two other ways to evaluate memory models.",
"As our first task, we propose an auxiliary entity-counting task.",
"We take 100 examples from the GAP validation set and annotate them with the number of unique people mentioned in them.",
"4 We test the models by predicting the number of people from their memory logs as explained in Section 3.5.",
"The motivation behind this exercise is that if a memory model is truly tracking entities, then its memory usage logs should allow us to recover this information.",
"To assess the people tracking performance more holistically, we conduct a human evaluation in which we ask annotators to assess the memory models on people tracking performance, defined",
"as:(a) detecting references to people including pronouns, and",
"(b) maintaining a 1-to-1 correspondence between people and memory cells.",
"For this study, we pick the best run (among 5 runs) of PeTra and the Referential Reader for the 8-cell configuration using BERTBASE (PeTra: 81 F1; Referential Reader: 79 F1).",
"Next we randomly pick 50 documents (without replacement) from the GAP dev set and split those into groups of 10 to get 5 evaluation sets.",
"We shuffle the original 50 documents and follow the same steps to get another 5 evaluation sets.",
"In the end, we have a total of 10 evaluation sets with 10 documents each, where each unique document belongs to exactly 2 evaluation sets.",
"4 In the GAP dataset, the only relevant entities are people.",
"the models on their people tracking performance (detailed instructions in Appendix A.3).",
"For each document the annotators are presented memory logs of the two models (ordered randomly) and asked whether they prefer the first model, prefer the second model, or have no preference (neutral).",
"GAP Given a pronoun span s P and two candidate name spans s A & s B , we have to predict binary labels for potential coreference links between ( s A , s P ) and ( s B , s P ).",
"Thus, for a pair of entity spans, say s A and s P , we predict the coreference link probability as: PCL ( s A , s P ) = max w A s A ,w P s PPCL ( w A , w P ) where PCL ( w A , w P ) is calculated using the procedure described in Section 2.4 5 .",
"The final binary prediction is made by comparing the probability against a threshold.",
"Counting unique people For the test of unique people counting, we discretize the overwrite operation, which corresponds to new entities, against a threshold and sum over all tokens and all memory cells to predict the count as follows: # unique people = T (cid:88) t =1 N (cid:88) i =1 1 [ o it ] 3.6 Evaluation Metrics For GAP we evaluate models using F-score.",
"6 First, we pick a threshold from the set { 0.01, 0.02, , 5 The computation of this probability includes the mention detection steps required byWebster et al. (2018).",
"1.00 } which maximizes the validation F-score.",
"This threshold is then used to evaluate performance on the GAP test set.",
"For the interpretability task of counting unique people, we choose a threshold that minimizes the absolute difference between ground truth count and predicted count summed over the 100 annotated examples.",
"We select the best threshold from the set { 0.01, 0.02, , 1.00 } .",
"The metric is then the number of errors corresponding to the best threshold.",
"7 3.7 Baselines The Referential Reader (Liu et al., 2019a) is the most relevant baseline in the literature, and the most similar to PeTra.",
"The numbers reported by Liu et al. (2019a) are obtained by a version of the model using BERTBASE , with only two memory cells.",
"To compare against PeTra for other configurations, we retrain the Referential Reader using the code made available by the authors.",
"8 We also report the results of Joshi et al. (2019) and Wu et al. (2019), although these numbers are not comparable since both of them train on the much larger OntoNotes corpus and just test on GAP.",
"We train all the memory models, including the Referential Reader, with memory size varying from { 2, 4, , 20 } memory cells for both BERTBASE and BERTLARGE , with each configuration being trained 5 times.",
"Figure 3 shows the performance of the 7 Note that the error we report is therefore a best-case result.",
"We are not proposing a way of counting unique people in new test data, but rather using this task for analysis.",
"8 https://github.com/liufly/refreader",
"models on the GAP validation set as a function of memory size.",
"The Referential Reader outperforms PeTra (and its memory variants) when using a small number of memory cells, but its performance starts degrading after 4 and 6 memory cells for BERTBASE and BERTLARGE respectively.",
"PeTra and its memory variants, in contrast, keep improving with increased memory size (before saturation at a higher number of cells) and outperform the best Referential Reader performance for all memory sizes 6 cells.",
"With larger numbers of memory cells, we see a higher variance, but the curves for PeTra and its memory variants are still consistently higher than those of the Referential Reader.",
"Among different memory variants of PeTra, when using BERTBASE the performances are comparable with no clear advantage for any particular choice.",
"For BERTLARGE , however, vanilla PeTra has a clear edge for almost all memory sizes, suggesting the limited utility of initialization.",
"The results show that PeTra works well without learning vectors for initializing the key or memory cell contents.",
"Rather, we can remove the key/value distinction and simply initialize all memory cells with the zero vector.",
"To evaluate on the GAP test set, we pick the memory size corresponding to the best validation performance for all memory models.",
"Table 1 shows that the trends from validation hold true for test as well, with PeTra outperforming the Referential Reader and the other memory variants of PeTra.",
"Figure 4 shows the results for the proposed interpretability task of counting unique people.",
"For both BERTBASE and BERTLARGE , PeTra achieves the lowest error count.",
"Interestingly, from Figure 4b we can see that for 14 memory cells, the other memory variants of PeTra perform worse than the Referential Reader while being better at the GAP validation task (see Figure 3b).",
"This shows that a better performing model is not necessarily better at tracking people.",
"To test the relationship between the GAP task and the proposed interpretability task, we compute the correlation between the GAP F-score and the negative count of unique people for each model separately.",
"9 Table 2 shows the Spearman's correlation between these measures.",
"For all models we see a positive correlation, indicating that a dip in coreference performance corresponds to an increase in error on counting unique people.",
"The correlations for PeTra are especially high, again suggesting it's greater interpretability.",
"(a) A successful run of PeTra with 4 memory cells.",
"The model accurately links all the mentions of Amelia to the same memory cell while also detecting other people in the discourse.",
"(b) Memory log of PeTra with 8 memory cells.",
"The model correctly links she and Julie but fails at linking the three Bethenny mentions, and also fails at detecting Jason.",
"Table 3 summarizes the results of the human evaluation for people tracking.",
"The annotators prefer PeTra in 74% cases while the Referential Reader for only 8% instances (see Appendix A.4 for visualizations comparing the two).",
"Thus, PeTra easily outperforms the Referential Reader on this task even though they are quite close on the GAP evaluation.",
"The annotators agree on 68% of the documents, disagree between PeTra and Neutral for 24% of the documents, and disagree between PeTra and the Referential Reader for the remaining 8% documents.",
"For more details, see Appendix A.2.",
"We visualize two runs of PeTra with different configurations in Figure 5. For both instances the model gets the right pronoun resolution, but clearly in Figure 5b the model fails at correctly tracking repeated mentions of Bethenny.",
"We believe these errors happen because",
"(a) GAP supervision is limited to pronoun-proper name pairs, so the model is never explicitly supervised to link proper names, and",
"(b) there is a lack of span-level features, which hurts the model when a name is split across multiple tokens.",
"There are several strands of related work, including prior work in developing neural models with external memory as well as variants that focus on modeling entities and entity relations, and neural models for coreference resolution.",
"Memory-augmented models.",
"Neural network architectures with external memory include memory networks (Weston et al., 2015; Sukhbaatar et al., 2015), neural Turing machines (Graves et al., 2014), and differentiable neural computers (Graves et al., 2016).",
"This paper focuses on models with inductive biases that produce particular structures in the memory, specifically those related to entities.",
"Models for tracking and relating entities.",
"A number of existing models have targeted entity tracking and coreference links for a variety of tasks.",
"EntNet (Henaff et al., 2017) aims to track entities via a memory model.",
"EntityNLM (Ji et al., 2017) represents entities dynamically within a neural language model.",
"Hoang et al. (2018) augment a reading comprehension model to track entities, incorporating a set of auxiliary losses to encourage capturing of reference relations in the text.",
"Dhingra et al. (2018) introduce a modified GRU layer designed to aggregate information across coreferent mentions.",
"Memory models for NLP tasks.",
"Memory models have been applied to several other NLP tasks in addition to coreference resolution, including targeted aspect-based sentiment analysis (Liu et al., 2018b), machine translation (Maruf and Haffari, 2018), narrative modeling (Liu et al., 2018a), and dialog state tracking (Perez and Liu, 2017).",
"Our study of architectural choices for memory may also be relevant to models for these tasks.",
"Neural models for coreference resolution.",
"Several neural models have been developed for coreference resolution, most of them focused on modeling pairwise interactions among mentions or spans in a document (Wiseman et al., 2015; Clark and Manning, 2016a; Lee et al., 2017, 2018).",
"These models use heuristics to avoid computing scores for all possible span pairs in a document, an operation which is quadratic in the document length T assuming a maximum span length.",
"Memory models for coreference resolution, including our model, differ by seeking to store information about entities in memory cells and then modeling the relationship between a token and a memory cell.",
"This reduces computation from O ( T 2 ) to O ( T N ) , where N is the number of memory cells, allowing memory models to be applied to longer texts by using the global entity information.",
"Past work (Wiseman et al., 2016) have used global features, but in conjunction with other features to score span pairs.",
"Referential Reader.",
"Most closely related to the present work is the Referential Reader (Liu et al., 2019a), which uses a memory model to perform coreference resolution incrementally.",
"We signifi-cantly simplify this model to accomplish the same goal with far fewer parameters.",
"We propose a new memory model for entity tracking, which is trained using sparse coreference resolution supervision.",
"The proposed model outperforms a previous approach with far fewer parameters and a simpler architecture.",
"We propose a new diagnostic evaluation and conduct a human evaluation to test the interpretability of the model, and find that our model again does better on this evaluation.",
"In future work, we plan to extend this work to longer documents such as the recently released dataset of Bamman et al. (2019).",
"This material is based upon work supported by the National Science Foundation under Award Nos. 1941178 and 1941160.",
"We thank the ACL reviewers, Sam Wiseman, and Mrinmaya Sachan for their valuable feedback.",
"We thank Fei Liu and Jacob Eisenstein for answering questions regarding the Referential Reader.",
"Finally, we want to thank all the annotators at TTIC who participated in the human evaluation study."
] | [
"objective",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"We explore multitask models for neural translation of speech, augmenting them in order to reflect two intuitive notions.",
"First, we introduce a model where the second task decoder receives information from the decoder of the first task, since higher-level intermediate representations should provide useful information.",
"Second, we apply regularization that encourages transitivity and invertibility .",
"We show that the application of these notions on jointly trained models improves performance on the tasks of low-resource speech transcription and translation.",
"It also leads to better performance when using attention information for word discovery over unsegmented input.",
"Recent efforts in endangered language documentation focus on collecting spoken language resources, accompanied by spoken translations in a high resource language to make the resource interpretable (Bird et al., 2014a).",
"For example, the BULB project (Adda et al., 2016) used the LIG-Aikuma mobile app (Bird et al., 2014b; Blachon et al., 2016) to collect parallel speech corpora between three Bantu languages and French.",
"Since it's common for speakers of endangered languages to speak one or more additional languages, collection of such a resource is a realistic goal.",
"Speech can be interpreted either by transcription in the original language or translation to an-other language.",
"Since the size of the data is extremely small, multitask models that jointly train a model for both tasks can take advantage of both signals.",
"Our contribution lies in improving the sequence-to-sequence multitask learning paradigm, by drawing on two intuitive notions: that higher-level representations are more useful than lower-level representations, and that translation should be both transitive and invertible.",
"Higher-level intermediate representations , such as transcriptions, should in principle carry information useful for an end task like speech translation.",
"A typical multitask setup (Weiss et al., 2017) shares information at the level of encoded frames, but intuitively, a human translating speech must work from a higher level of representation, at least at the level of phonemes if not syntax or semantics.",
"Thus, we present a novel architecture for tied multitask learning with sequence-to-sequence models, in which the decoder of the second task receives information not only from the encoder, but also from the decoder of the first task.",
"In addition, transitivity and invertibility are two properties that should hold when mapping between levels of representation or across languages.",
"We demonstrate how these two notions can be implemented through regularization of the attention matrices, and how they lead to further improved performance.",
"We evaluate our models in three experiment settings: low-resource speech transcription and translation, word discovery on unsegmented input, and high-resource text translation.",
"Our high-resource experiments are performed on English, French, and German.",
"Our low-resource speech experiments cover a wider range of linguistic diversity: Spanish-English, Mboshi-French, and Ainu-English.",
"In the speech transcription and translation tasks, our proposed model leads to improved performance against all baselines as well as previous multitask architectures.",
"We observe improvements of up to 5% character error rate in the transcription task, and up to 2 .",
"8% character-level BLEU in the translation task.",
"However, we didn't observe similar improvements in the text translation experiments.",
"Finally, on the word discovery task, we improve upon previous work by about 3% F-score on both tokens and types.",
"Our models are based on a sequence-to-sequence model with attention (Bahdanau et al., 2015).",
"In general, this type of model is composed of three parts: a recurrent encoder, the attention, and a recurrent decoder (see Figure 1a).",
"The encoder transforms an input sequence of words or feature frames x_1, ..., x_N into a sequence of input states h_1, ..., h_N: h_n = enc(h_{n-1}, x_n).",
"The attention transforms the input states into a sequence of context vectors via a matrix of attention weights α: c_m = Σ_n α_{mn} h_n.",
"Finally, the decoder computes a sequence of output states from which a probability distribution over output words can be computed.",
"s_m = dec(s_{m-1}, c_m, y_{m-1}), P(y_m) = softmax(s_m).",
"In a standard encoder-decoder multitask model (Figure 1b) (Dong et al., 2015; Weiss et al., 2017), we jointly model two output sequences using a shared encoder, but separate attentions and decoders: c^1_m = Σ_n α^1_{mn} h_n, s^1_m = dec^1(s^1_{m-1}, c^1_m, y^1_{m-1}), P(y^1_m) = softmax(s^1_m), and c^2_m = Σ_n α^2_{mn} h_n, s^2_m = dec^2(s^2_{m-1}, c^2_m, y^2_{m-1}), P(y^2_m) = softmax(s^2_m).",
"We can also arrange the decoders in a cascade (Figure 1c), in which the second decoder attends only to the output states of the first decoder: c^2_m = Σ_{m'} α^12_{mm'} s^1_{m'}, s^2_m = dec^2(s^2_{m-1}, c^2_m, y^2_{m-1}), P(y^2_m) = softmax(s^2_m).",
"1 For simplicity, we have assumed only a single layer for both the encoder and decoder.",
"It is possible to use multiple stacked RNNs; typically, the output of the encoder and decoder ( c m and P ( y m ), respectively) would be computed from the top layer only.",
"Tu et al. (2017) use exactly this architecture to train on bitext by setting the second output sequence to be equal to the input sequence ( y 2 i = x i ).",
"In our proposed triangle model (Figure 1d), the first decoder is as above, but the second decoder has two attentions, one for the input states of the encoder and one for the output states of the first decoder: c^2_m = [Σ_{m'} α^12_{mm'} s^1_{m'} ; Σ_n α^2_{mn} h_n], s^2_m = dec^2(s^2_{m-1}, c^2_m, y^2_{m-1}), P(y^2_m) = softmax(s^2_m).",
"Note that the context vectors resulting from the two attentions are concatenated, not added.",
"For compactness, we will write X for the matrix whose rows are the x n , and similarly H , C , and so on.",
"We also write A for the matrix of attention weights: [A]_{ij} = α_{ij}.",
"Let θ be the parameters of our model, which we train on sentence triples (X, Y^1, Y^2).",
"Define the score of a sentence triple to be a log-linear interpolation of the two decoders' probabilities: score(Y^1, Y^2 | X; θ) = λ log P(Y^1 | X; θ) + (1 − λ) log P(Y^2 | X, Y^1; θ),",
"where λ is a parameter that controls the importance of each sub-task.",
"In all our experiments, we set λ to 0 .",
"5.",
"We then train the model to maximize L(θ) = Σ score(Y^1, Y^2 | X; θ), where the summation is over all sentence triples in the training data.",
"We can optionally add a regularization term to the objective function, in order to encourage our attention mechanisms to conform to two intuitive principles of machine translation: transitivity and invertibility .",
"Transitivity attention regularizer To a first approximation, the translation relation should be transitive (Wang et al., 2006; Levinboim and Chiang, 2015): If source word x_i aligns to target word",
"y^1_j and y^1_j aligns to target word y^2_k, then x_i should also probably align to y^2_k.",
"To encourage the model to preserve this relationship, we add the following transitivity regularizer to the loss function of the triangle models with a small weight λ_trans = 0 .",
"2: L_trans = score(Y^1, Y^2) − λ_trans ||A^12 A^1 − A^2||_2^2 .",
"Invertibility attention regularizer The translation relation also ought to be roughly invertible (Levinboim et al., 2015): if, in the reconstruction version of the cascade model, source word x_i aligns to target word y^1_j, then it stands to reason that y^1_j is likely to align to x_i.",
"So, whereas Tu et al. (2017) let the attentions of the translator and the reconstructor be unrelated, we try adding the following invertibility regularizer to encourage the attentions to each be the inverse of the other, again with a weight λ_inv = 0 .",
"2: L_inv = score(Y^1, Y^2) − λ_inv ||A^1 A^12 − I||_2^2 .",
"3.3 Decoding Since we have two decoders, we now need to employ a two-phase beam search, following Tu et al. (2017): 1. The first decoder produces, through standard beam search, a set of triples each consisting of a candidate transcription Y 1 , a score P ( Y 1 ), and a hidden state sequence S .",
"2. For each transcription candidate from the first decoder, the second decoder now produces (Table 1, statistics on our speech datasets: Ainu-English, 1 speaker, 2,668 segments, 2.5 hours; Mboshi-French, 3 speakers, 5,131 segments, 4.4 hours; Spanish-English, 240 speakers, 17,394 segments, 20 hours)",
"through beam search a set of candidate translations Y 2 , each with a score P ( Y 2 ).",
"3. We then output the combination that yields the highest total score( Y 1 , Y 2 ).",
"All our models are implemented in DyNet (Neubig et al., 2017).",
"2 We use a dropout of 0.2, and train using Adam with initial learning rate of 0 .",
"0002 for a maximum of 500 epochs.",
"For testing, we select the model with the best performance on dev.",
"At inference time, we use a beam size of 4 for each decoder (due to GPU memory constraints), and the beam scores include length normalization (Wu et al., 2016) with a weight of 0.8, which Nguyen and Chiang (2017) found to work well for low-resource NMT.",
"We focus on speech transcription and translation of endangered languages, using three different corpora.",
"2 Our code is available at: https://bitbucket.org/ antonis/dynet-multitask-models .",
"The corpora cover three different language directions: Spanish (es) to English (en), Ainu (ai) to English, and Mboshi (mb) to French (fr).",
"Spanish is, of course, not an endangered language, but the availability of the CALLHOME Spanish Speech dataset (LDC2014T23) with English translations (Post et al., 2013) makes it a convenient language to work with, as has been done in almost all previous work in this area.",
"It consists of telephone conversations between relatives (about 20 total hours of audio) with more than 240 speakers.",
"We use the original train-dev-test split, with the training set comprised of 80 conversations and dev and test of 20 conversations each.",
"Hokkaido Ainu is the sole surviving member of the Ainu language family and is generally considered a language isolate.",
"As of 2007, only ten native speakers were alive.",
"The Glossed Audio Corpus of Ainu Folklore provides 10 narratives with audio (about 2.5 hours of audio) and translations in Japanese and English.",
"3 Since there does not exist a standard train-dev-test split, we employ a cross validation scheme for evaluation purposes.",
"In each fold, one of the 10 narratives becomes the test set, with the previous one (mod 10) becoming the dev set, and the remaining 8 narratives becoming the training set.",
"The models for each of the 10 folds are trained and tested separately.",
"On average, for each fold, we train on about 2000 utterances; the dev and test sets consist of about 270 utterances.",
"We report results on the concatenation of all folds.",
"The Ainu text is split into characters, except for the equals ( = ) and underscore ( ) characters, which are used as phonological or structural markers and are thus merged with the following character.",
"4 Mboshi (Bantu C25 in the Guthrie classification) is a language spoken in Congo-Brazzaville, without standard orthography.",
"We use a corpus (Godard et al., 2017) of 5517 parallel utterances (about 4.4 hours of audio) collected from three native speakers.",
"The corpus provides non-standard grapheme transcriptions (close to the language phonology) produced by linguists, as well as French translations.",
"We sampled 100 segments from the training set to be our dev set, and used the original dev set (514 sentences) as our test set.",
"We employ a 3-layer speech encoding scheme similar to that of Duong et al. (2016).",
"The first bidirectional layer receives the audio sequence in the form of 39-dimensional Perceptual Linear Predictive (PLP) features (Hermansky, 1990) computed over overlapping 25ms-wide windows every 10ms.",
"The second and third layers consist of LSTMs with hidden state sizes of 128 and 512 respectively.",
"Each layer encodes every second output of the previous layer.",
"Thus, the sequence is downsampled by a factor of 4, decreasing the computation load for the attention mechanism and the decoders.",
"4 The data preprocessing scripts are released with the rest of our code.",
"In the speech experiments, the decoders output the sequences at the grapheme level, so the output embedding size is set to 64.",
"We found that this simpler speech encoder works well for our extremely small datasets.",
"Applying our models to larger datasets with many more speakers would most likely require a more sophisticated speech encoder, such as the one used by Weiss et al. (2017).",
"In Table 2, we present results on three small datasets that demonstrate the efficacy of our models.",
"We compare our proposed models against three baselines and one skyline.",
"The first baseline is a traditional pivot approach (line 1), where the ASR output, a sequence of characters, is the input to a character-based NMT system (trained on gold transcriptions).",
"The skyline model (line 2) is the same NMT system, but tested on gold transcriptions instead of ASR output.",
"The second baseline is translation directly from source speech to target text (line 3).",
"The last baseline is the standard multitask model (line 4), which is similar to the model of Weiss et al. (2017).",
"The cascade model (line 5) outperforms the baselines on the translation task, while only falling behind the multitask model in the transcription task.",
"On all three datasets, the triangle model (lines 6, 7) outperforms all baselines, including the standard multitask model.",
"On Ainu-English, we even obtain translations that are comparable to the skyline model, which is tested on gold Ainu transcriptions.",
"Comparing the performance of all models across the three datasets, there are two notable trends that verify common intuitions regarding the speech transcription and translation tasks.",
"First, an increase in the number of speakers hurts the performance of the speech transcription tasks.",
"The character error rates for Ainu are smaller than the CER in Mboshi, which in turn are smaller than the CER in CALLHOME.",
"Second, the character-level BLEU scores increase as the amount of training data increases, with our smallest dataset (Ainu) having the lowest BLEU scores, and the largest dataset (CALLHOME) having the highest BLEU scores.",
"This is expected, as more training data means that the translation decoder learns a more informed character-level language model for the target language.",
"Note that Weiss et al. (2017) report much higher BLEU scores on CALLHOME: our model underperforms theirs by almost 9 word-level BLEU points.",
"However, their model has significantly more parameters and is trained on 10 times more data than ours.",
"Such an amount of data would never be available in our endangered languages scenario.",
"When calculated on the word-level, all our models' BLEU scores are between 3 and 7 points for the extremely low resource datasets (Mboshi-French and Ainu-English), and between 7 and 10 for CALLHOME.",
"Clearly, the size of the training data in our experiments is not enough for producing high quality speech translations, but we plan to investigate the performance of our proposed models on larger datasets as part of our future work.",
"To evaluate the effect of using the combined score from both decoders at decoding time, we evaluated the triangle models using only the 1-best output from the speech model (lines 8, 9).",
"One would expect that this would favor speech at the expense of translation.",
"In transcription accuracy, we indeed observed improvements across the board.",
"In translation accuracy, we observed a surprisingly large drop on Mboshi-French, but surprisingly little effect on the other language pairs; in fact, BLEU scores tended to go up slightly, but not significantly.",
"Finally, Figure 2 visualizes the attention matrices for one utterance from the baseline multitask model and our proposed triangle model.",
"It is clear that our intuition was correct: the translation decoder receives most of its context from the transcription decoder, as indicated by the higher attention weights of A 12 .",
"Ideally, the area under the red squares (gold alignments) would account for 100% of the attention mass of A 12 .",
"In our triangle model, the total mass under the red squares is 34%, whereas the multitask model's correct attentions amount to only 21% of the attention mass. 5 Word Discovery Although the above results show that our model gives large performance improvements, in absolute terms, its performance on such low-resource tasks leaves a lot of room for future improvement.",
"A possible more realistic application of our methods is word discovery, that is, finding word boundaries in unsegmented phonetic transcriptions.",
"After training an attentional encoder-decoder model between Mboshi unsegmented phonetic",
"sequences and French word sequences, the attention weights can be thought of as soft alignments, which allow us to project the French word boundaries onto Mboshi.",
"Although we could in principle perform word discovery directly on speech, we leave this for future work, and only explore singletask and reconstruction models.",
"We use the same Mboshi-French corpus as in Section 4, but with the original training set of 4617 utterances and the dev set of 514 utterances.",
"Our parallel data consist of the unsegmented phonetic Mboshi transcriptions, along with the word-level French translations.",
"We first replicate the model of Boito et al. (2017), with a single-layer bidirectional encoder and single layer decoder, using an embedding and hidden size of 12 for the base model, and an embedding and hidden state size of 64 for the reverse model.",
"In our own models, we set the embedding size to 32 for Mboshi characters, 64 for French words, and the hidden state size to 64.",
"We smooth the attention weights A using the method of Duong et al. (2016) with a temperature T = 10 for the softmax computation of the attention mechanism.",
"Following Boito et al. (2017), we train models both on the base Mboshi-to-French direction, as well as the reverse (French-to-Mboshi) direction, with and without this smoothing operation.",
"We further smooth the computed soft alignments of all models so that α_{mn} = (α_{m,n−1} + α_{mn} + α_{m,n+1}) / 3 as a post-processing step.",
"From the single-task models we extract the A 1 attention matrices.",
"We also train reconstruction models on both directions, with and without the invertibility regularizer, extracting both A 1 and A 12 matrices.",
"The two matrices are then combined so that A = A^1 + (A^12)^T .",
"Evaluation is done both at the token and the type level, by computing precision, recall, and F-score over the discovered segmentation, with the best results shown in Table 3. We reimplemented the base (Mboshi-French) and reverse (French-Mboshi) models from Boito et al. (2017), and the performance of the base model was comparable to the one reported.",
"Table 3 (with smoothing) reports token- and type-level precision, recall, and F-score: Boito et al. 2017 (reported), base 5.85/6.82/6.30 tokens and 6.76/15.00/9.32 types, reverse 21.44/16.49/18.64 and 27.23/15.02/19.36; Boito et al. 2017 (reimplementation), base 6.87/6.33/6.59 and 6.17/13.02/8.37, reverse 7.58/8.16/7.86 and 9.22/11.97/10.42; our single-task, base 7.99/7.57/7.78 and 7.59/16.41/10.38, reverse 11.31/11.82/11.56 and 9.29/14.75/11.40; reconstruction + 0 . However, we were unable to",
"reproduce the significant gains that were reported when using the reverse model ( italicized in Table 3).",
"Also, our version of both the base and reverse singletask models performed better than our reimplementation of the baseline.",
"Furthermore, we found that we were able to obtain even better performance at the type level by combining the attention matrices of a reconstruction model trained with the invertibility regularizer.",
"Boito et al. (2017) reported that combining the attention matrices of a base and a reverse model significantly reduced performance, but they trained the two models separately.",
"In contrast, we obtain the base ( A 1 ) and the reverse attention matrices ( A 12 ) from a model that trains them jointly, while also tying them together through the invertibility regularizer.",
"Using the regularizer is key to the improvements; in fact, we did not observe any improvements when we trained the reconstruction models without the regularizer.",
"For evaluating our models on text translation, we use the Europarl corpus which provides parallel sentences across several European languages.",
"We extracted 1,450,890 three-way parallel sentences on English, French, and German.",
"The concatenation of the newstest 20112013 sets (8,017 sentences) is our dev set, and our test set is the concatenation of the newstest 2014 and 2015 sets (6,003 sentences).",
"We test all architectures on the six possible translation directions between English (en), French (fr) and German (de).",
"All the sequences are represented by subword units with byte-pair encoding (BPE) (Sennrich et al., 2016) trained on each language with 32000 operations.",
"On all experiments, the encoder and the decoder(s) have 2 layers of LSTM units with hidden state size and attention size of 1024, and embedding size of 1024.",
"For this high resource scenario, we only train for a maximum of 40 epochs.",
"The accuracy of all the models on all six language pair directions is shown in Table 4. In all cases, the best models are the baseline single-task or simple multitask models.",
"There are some instances, such as English-German, where the reconstruction or the triangle models are not statistically significantly different from the best model.",
"The reason for this, we believe, is that in the case of text translation between such linguistically close languages, the lower-level representations (the output of the encoder) provide as much information as the higher-level ones, without the search errors that are introduced during inference.",
"A notable outcome of this experiment is that we do not observe the significant improvements with the reconstruction models that Tu et al. (2017) observed.",
"A few possible differences between our experiment and theirs are: our models are BPE-based, theirs are word-based; we use Adam for optimization, they use Adadelta; our model has slightly fewer parameters than theirs; we test on less typologically different language pairs than theirs. Table 4: BLEU scores for each model and translation direction s→t (en→fr, en→de, fr→en, fr→de, de→en, de→fr): singletask 20.92, 12.69, 20.96, 11.24, 16.10, 15.29; multitask s→{x,t} 20.54, 12.79, 20.01, 11.18, 16.31, 15.07; cascade s→x→t 15.93, 11.31, 16.58, 7.60, 13.46, 13.24; cascade s→t→x 20.34, 12.27, 19.17, 11.09, 15.24, 14.78; reconstruction 20.19, 12.44, 20.63, 10.88, 15.66, 13.44; reconstruction + L_inv 20.72, 12.64, 20.11, 10.46, 15.43, 12.64; triangle s→x→t 20.39, 12.70, 17.93, 10.17, 14.94, 14.07; triangle s→x→t + L_trans 20.52, 12.64, 18.34, 10.42, 15.22, 14.37; triangle s→t→x 20.38, 12.40, 18.50, 10.22, 15.62, 14.77; triangle s→t→x + L_trans 20.64, 12.42, 19.20, 10.21, 15.87, 14.89.",
"However, we also observe that in most cases our proposed regularizers lead to increased performance.",
"The invertibility regularizer aids the reconstruction models in achieving slightly higher BLEU scores in 3 out of the 6 cases.",
"The transitivity regularizer is even more effective: in 9 out of the 12 source-target language combinations, the triangle models achieve higher performance when trained using the regularizer.",
"Some of them are statistically significant improvements, as in the case where French is the source, English is the intermediate target language, and German is the final target.",
"The speech translation problem has traditionally been approached by using the output of an ASR system as input to an MT system.",
"For example, Ney (1999) and Matusov et al. (2005) use ASR output lattices as input to translation models, integrating speech recognition uncertainty into the translation model.",
"Recent work has focused more on modelling speech translation without explicit access to transcriptions.",
"Duong et al. (2016) introduced a sequence-to-sequence model for speech translation without transcriptions but only evaluated on alignment, while Anastasopoulos et al. (2016) presented an unsupervised alignment method for speech-to-translation alignment.",
"Bansal et al. (2017) used an unsupervised term discovery system (Jansen et al., 2010) to cluster recurring audio segments into pseudowords and translate speech using a bag-of-words model.",
"Berard et al. (2016) translated synthesized speech data using a model similar to the Listen Attend and Spell model (Chan et al., 2016).",
"A larger-scale study (Berard et al., 2018) used an end-to-end neural system for translating audio books between French and English.",
"In a different line of work, Boito et al. (2017) used the attentions of a sequence-to-sequence model for word discovery.",
"Multitask learning (Caruana, 1998) has found extensive use across several machine learning and NLP fields.",
"For example, Luong et al. (2016) and Eriguchi et al. (2017) jointly learn to parse and translate; Kim et al. (2017) combine CTC- and attention-based models using multitask models for speech transcription; Dong et al. (2015) use multitask learning for multiple language translation.",
"Toshniwal et al. (2017) apply multitask learning to neural speech recognition in a less traditional fashion: the lower-level outputs of the speech encoder are used for fine-grained auxiliary tasks such as predicting HMM states or phonemes, while the final output of the encoder is passed to a character-level decoder.",
"Our work is most similar to the work of Weiss et al. (2017).",
"They used sequence-to-sequence models to transcribe Spanish speech and translate it in English, by jointly training the two tasks in a multitask scenario where the decoders share the encoder.",
"In contrast to our work, they use a large corpus for training the model on roughly 163 hours of data, using the Spanish Fisher and CALLHOME conversational speech corpora.",
"The parameter number of their model is significantly larger than ours, as they use 8 encoder layers, and 4 layers for each decoder.",
"This allows their model to adequately learn from such a large amount of data and deal well with speaker variation.",
"However, training such a large model on endangered language datasets would be infeasible.",
"Our model also bears similarities to the architecture of the model proposed by Tu et al. (2017).",
"They report significant gains in Chinese-English translation by adding an additional reconstruction decoder that attends on the last states of the translation decoder, mainly inspired by auto-encoders.",
"We presented a novel architecture for multitask learning that provides the second task with higher-level representations produced from the first task decoder.",
"Our model outperforms both the singletask models as well as traditional multitask architectures.",
"Evaluating on extremely low-resource settings, our model improves on both speech transcription and translation.",
"By augmenting our models with regularizers that implement transitivity and invertibility, we obtain further improvements on all low-resource tasks.",
"These results will hopefully lead to new tools for endangered language documentation.",
"Projects like BULB aim to collect about 100 hours of audio with translations, but it may be impractical to transcribe this much audio for many languages.",
"For future work, we aim to extend these methods to settings where we don't necessarily have sentence triples, but where some audio is only transcribed and some audio is only translated.",
"Acknowledgements This work was generously supported by NSF Award 1464553.",
"We are grateful to the anonymous reviewers for their useful comments."
] | [
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"method",
"abstain",
"objective",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"result",
"result",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain"
] |
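The transitivity and invertibility attention regularizers in the row above reduce to two squared-Frobenius-norm penalties on the attention matrices. The sketch below is a minimal pure-Python illustration of those penalties; the function names and toy matrices are ours, not from the paper, and a real implementation would apply these to the model's soft attention weights inside the training objective.

```python
def matmul(A, B):
    # naive matrix product for small attention matrices (lists of rows)
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def frob2(A, B):
    # squared Frobenius norm of (A - B)
    return sum((x - y) ** 2 for ra, rb in zip(A, B) for x, y in zip(ra, rb))

def transitivity_penalty(A1, A12, A2):
    """||A^12 A^1 - A^2||_2^2: the source-to-output attention composed
    through the first decoder should match the direct attention A^2."""
    return frob2(matmul(A12, A1), A2)

def invertibility_penalty(A1, A12):
    """||A^1 A^12 - I||_2^2: the translator and reconstructor attentions
    should roughly invert each other."""
    n = len(A1)
    identity = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    return frob2(matmul(A1, A12), identity)
```

In the setup described above, each penalty would be scaled by a small weight (λ_trans = λ_inv = 0.2 in the text) and subtracted from the interpolated log-likelihood score being maximized.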
[
"The existence of universal models to describe the syntax of languages has been debated for decades.",
"The availability of resources such as the Universal Dependencies treebanks and the World Atlas of Language Structures makes it possible to study the plausibility of universal grammar from the perspective of dependency parsing.",
"Our work investigates the use of high-level language descriptions in the form of typological features for multilingual dependency parsing.",
"Our experiments on multilingual parsing for 40 languages show that typological information can indeed guide parsers to share information between similar languages beyond simple language identification.",
"Human languages may share some syntactic features, but differ on others.",
"For example, some languages tend to place the subject before the verb (e.g., English) whereas others favour the reverse order (e.g., Arabic), and some do not exhibit a clear preference (e.g., Polish).",
"These features can be viewed as the parameters of a language's syntax (Greenberg, 1963; Chomsky, 1995).",
"When training a multilingual parser, it could be interesting to explicitly represent these parameters, and to integrate them into the parsing model.",
"If a successful strategy to do so were found, then a parser could be trained simultaneously on several languages whose syntactic parameters have been explicitly represented.",
"Such a parser could then use a single model to parse texts in any language with known syntactic parameters.",
"In theory, if we had at our disposal a set of parameters that completely describes the syntax of languages as well as treebanks that explore the whole space of parameters and their values, then such a universal parser could be designed.",
"To make such a program realistic, though, several issues have to be addressed.",
"In this paper, we propose to study the feasibility of learning such a multilingual parser by addressing some of these issues.",
"The first one is the choice of syntactic parameters that will be used (Naseem et al., 2012; Tackstrom et al., 2013; Zhang and Barzilay, 2015).",
"In our work, we approximate these parameters by extracting syntactic information from the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013).",
"A language is represented by a vector containing the values it selects in the WALS.",
"This vector plays the role of the parameters mentioned above.",
"The second issue is the design of a unified scheme for representing syntax.",
"Our natural choice is the Universal Dependencies (UD) initiative.",
"UD specifically proposes a set of universal dependency relations, part-of-speech tags and morphological features (Nivre et al., 2016).",
"The UD treebanks are available for many languages, annotated according to common guidelines.",
"The third issue is the lexicon.",
"UD proposes a common language for describing languages' morpho-syntax, but we do not have at our disposal a universal lexicon to which we can map the lexical units of different languages.",
"The solution adopted in this work is to resort to delexicalised parsing (Zeman and Resnik, 2008).",
"This technique consists in ignoring the lexicon when training a parser.",
"Such impoverishment of the data leads to less accurate parsers, but offers a simple solution to the lexicon issue.",
"Using an alternative solution for representing words in different languages, such as multilingual word embeddings, would have introduced into our experimental setting some biases that are difficult to assess, and would have prevented us from measuring the precise influence of the typological features on the behaviour of the parser (footnote 1: https://wals.info/; footnote 2: http://universaldependencies.org).",
"The fourth issue concerns the parser, which must be language independent and produce syntactic trees based on combinations of parameter values and sentential configurations.",
"We use a transition-based parser with a multi-layer perceptron classifier (Chen and Manning, 2014), responsible for proposing how parameter values match observable patterns in the data.",
"Our research hypotheses are:",
"(a) features derived from the WALS enable cross-lingual sharing in multilingual parsing, and",
"(b) these features do more than act as mere language identifiers.",
"Our main contributions are to reassess the utility of the WALS as a source of typological features for parsed languages, to evaluate their benefit in a controlled multilingual setting with full supervision, and to perform a set of analyses to better understand how they interact with the parser model.",
"In addition to multilingual parsing, our method is suitable for zero-shot learning for under-resourced languages (Ammar et al., 2016; Guo et al., 2015).",
"After discussing related work (Sec. 2), we describe UD (Sec. 3), the WALS (Sec. 4) and our parser (Sec. 5).",
"The experimental setup (Sec. 6) precedes our results (Sec. 7), analyses (Sec. 8) and conclusions (Sec. 9).",
"Our work is at the intersection of three trends in the multilingual dependency parsing literature.",
"The first is transfer parsing , when a parser is trained on a language (or a collection of languages) and tested on another one.",
"The second is delexicalised parsing , which aims at abstracting away from the lexicon in order to neutralise genre, domain and topic biases which are heavily marked in the treebanks' vocabulary.",
"The third trend is the use of handcrafted typological resources, such as the WALS, in multilingual NLP methods.",
"Transfer parsing is often a suitable solution when dealing with low-resource languages (McDonald et al., 2011).",
"Projected transfer relies on parallel corpora in which one of the languages does not have labelled training data to learn a parser, but the other does.",
"One commonly employed solution is to use word alignments to project parsed sentences from one side onto the low-resource side of the parallel text, using heuristics (Hwa et al., 2005) or partial annotations (Lacroix et al., 2016).",
"Agic et al. (2016) parse the resource-rich languages in a multi-parallel corpus, proposing a projection method to obtain POS tags and dependency trees for low-resource languages from multiple-language word alignments.",
"The parsing model for the target language can also be obtained in an unsupervised fashion, by optimising a function that combines the likelihood of parallel data and the likelihood of the transferred model on non-annotated data in the low-resource language (Ma and Xia, 2014).",
"Instead of assuming the availability of parallel corpora, direct transfer approaches capitalize on language similarities.",
"For instance, Lynn et al. (2014) build a parser for Irish by first training a delexicalised parser on another language, and then applying it to Irish.",
"Surprisingly, they found that Indonesian was the language providing the best parsing results for Irish, even though the two languages do not belong to the same family, because long-distance dependencies are better represented in Indonesian than in the other languages tested.",
"Low-resource languages may have some (insufficient) amount of training material available.",
"One can employ bilingual parsing, concatenating training corpora in two languages, to verify if there is an improvement in the results compared to a monolingual parser (Vilares et al., 2015).",
"Direct transfer and bilingual parsing methods are close to the present article, since we also concatenate training corpora.",
"However, in our case, we combine treebanks from many more sources (around 40 languages) and include typological features.",
"The combination of corpora in multiple languages for parser training is facilitated by the recent advent of multilingual standards and resources, in particular in Universal Dependencies for dependency syntax (Nivre et al., 2016).",
"This initiative enables the annotation of POS, morphology and syntactic dependencies for all languages with the same guidelines and label sets.",
"The availability of such corpora favours the development of cross-lingual methods (Tiedemann, 2015).",
"Multilingual parsing research is also encouraged by initiatives such as the CoNLL 2017 and 2018 shared tasks, on highly multilingual dependency parsing from raw text (Zeman et al., 2017, 2018).",
"Delexicalised parsers ignore the word forms and lemmas when analysing a sentence, usually relying on more abstract features such as word classes and POS tags.",
"The use of delexicalised parsers is especially relevant when learning multilingual parsers, since languages generally share only a limited amount of lexical units.",
"The approach proposed by Zeman and Resnik (2008) consists in adapting a parser for a new related language using either parallel corpora or delexicalised parsing.",
"This method can be used to quickly construct a parser if the source and target languages are sufficiently related.",
"McDonald et al. (2011) show that delexicalised parsers can be directly transferred between languages, yielding significantly higher accuracy than unsupervised parsers.",
"Moreover, typological features such as those present in the WALS provide information about the structure of languages (Dryer and Haspelmath, 2013).",
"These could be useful to guide multilingual parsers, informing them about the model parameters that can be shared among languages with similar characteristics.",
"Naseem et al. (2012) and Zhang and Barzilay (2015) use word-order features available for all their languages, while Ponti et al. (2018) used features they judged relevant in many categories (not only word order).",
"The parameters proposed in the WALS are not the only way to represent properties of languages.",
"Methods based on language embeddings (Ostling and Tiedemann, 2017; Bjerva et al., 2019) also constitute interesting language representations.",
"Tackstrom et al. (2013) use a multilingual delexicalised transfer method, showing how selective parameter sharing, based on typological features and language family membership, can be incorporated in a discriminative graph-based dependency parser.",
"They select the typological features based on those used by Naseem et al. (2012), removing two features not considered useful.",
"The work closest to ours experimented with concatenating treebanks to train a multilingual parser (Ammar et al., 2016).",
"The authors use an S-LSTM transition-based parser similar to ours (although we do not include recurrent representations) trained on a set of lexicalised features that include multilingual word embeddings, Brown clusters, and fine-grained POS tags, whereas we only use coarse-grained POS and morphological features in a delexicalised setting.",
"They include a one-hot language-ID vector, a set of six word-order features from the WALS (Naseem et al., 2012), or the whole WALS vectors.",
"We use the former two, plus a set of 22 selected features from the WALS.",
"They perform experiments on seven high-resource languages, while we report results on a larger set of 40 languages.",
"Although Ammar et al. (2016) showed that, in a lexicalised setting, treebank concatenation could perform on par with monolingual parsers, the origins and limits of these improvements are not clear.",
"We explore directions for assessing the benefits of typological features in a delexicalised parser.",
"A major issue in multilingual parsing is the consistency of annotation across languages, since most corpora are annotated using different guidelines and tagsets.",
"Universal Dependencies (UD) is an initiative whose goal is to create cross-linguistically consistent treebanks, facilitating cross-lingual analyses for language and parsing studies.",
"At the time of writing (version 2.3), 129 treebanks in 76 languages are available.",
"We use the UD v2.0 release for training and development, and the CoNLL 2017 shared task test sets for evaluation.",
"For training and development, 64 UD treebanks in 45 languages are available.",
"These treebanks vary in size: some are very small (e.g., 529 words for Kazakh), whereas others can be rather large (e.g., 1,842,867 words for Czech).",
"Test corpora contain at least 10,000 words per language and are available for 49 languages.",
"We learn delexicalised parsers from the UD treebanks using universal parts of speech (UPOS) and morphological features (FEAT) as input, and predicting labelled dependency trees which include language-specific extensions (e.g., acl:relcl).",
"Morphological features are present in almost all treebanks, but exhibit high variability.",
"Therefore, we choose to keep only the 16 most frequent features (e.g., Number, Case, VerbForm), which appear in at least 28 languages.",
"Furthermore, morphology is represented as a list of key=value pairs, which we split so that each pair is considered separately, yielding a fixed set of 16 morphological features per word.",
"The World Atlas of Language Structures (WALS) is a database of structural (phonological, grammatical and lexical) properties of languages gathered by 55 authors from descriptive materials such as reference grammars.",
"We have used this resource to associate with every language of the UD corpora a set of features describing the properties relevant for syntactic parsing.",
"The WALS describes 2,676 languages with a set of 192 features, organized into 11 feature categories (e.g., Phonology, Word Order).",
"It can therefore be represented as a matrix W of 2,676 rows and 192 columns, in which cell W ( l, f ) gives the value of feature f for language l , and each row W ( l ) is the feature vector of a language l .",
"This matrix has been pruned and completed to match our experimental setup.",
"First, we have kept only the rows corresponding to the 49 languages of our test corpora.",
"In addition, four UD languages do not appear in the WALS and have been left aside: Old Church Slavonic (cu), Gothic (got), Ancient Greek (grc), and Latin (la).",
"As a result, we obtain a reduced version of W containing 45 rows.",
"We experimented with two language representations obtained from the WALS.",
"The first one, henceforth WN , is based on the work of Naseem et al. (2012).",
"They selected the six Word Order features available for all their 17 target languages, identified by the codes 81A, 85A, 86A, 87A, 88A, and 89A.",
"These features cover phenomena such as verb-object and adjective-noun order, and have been widely discussed in the literature (Tackstrom et al., 2013; Zhang and Barzilay, 2015; Ammar et al., 2016).",
"The resulting matrix has 45 rows (languages) and 6 columns (features).",
"However, the WALS seen as a matrix is sparse, as some features are unspecified for some languages.",
"Therefore, we chose to keep only languages for which at most half of this vector is unspecified, resulting in the removal of 5 more languages: Galician (gl), Upper Sorbian (hsb), Kazakh (kk), Slovak (sk), and Uyghur (ug).",
"All our experiments are carried out on this set of 40 languages.",
"The second language representation proposed in this work, henceforth W 80 , is a relaxed version of WN .",
"Since the WALS is sparse, we include in W 80 all features specified for at least 80% of our 40 languages.",
"Furthermore, in addition to features from the Word Order family, we also include features from the Simple Clauses family.",
"This results in a matrix of 40 rows and 22 columns, corresponding to 3 features from the Simple Clauses family (101A, 112A, 116A) and 19 from the Word Order family (the WALS features relevant for this paper are described in Appendix A).",
"Table 1 (MID values of typological language genera compared to Random): WN: Romance 0.33, Germanic 1.33, Slavic 0.67, Random 2.41; W80: Romance 4.13, Germanic 4.47, Slavic 4.19, Random 10.15.",
"The final matrices WN and W 80 obtained after feature selection are not complete: they contain respectively 4 and 35 unspecified values, which were filled automatically.",
"Each matrix W (shorthand for WN and W80) offers a straightforward way to compare languages l1 and l2 using the Hamming distance between their vectors W(l1) and W(l2), noted d(l1, l2).",
"To fill in the missing values, we have selected, for every language l1 containing unspecified feature values ('?'), the corresponding value from its closest fully specified language l2, that is, l2 = argmin_{l_i : '?' not in W(l_i)} d(l1, l_i).",
"The WN and W80 matrices only provide partial descriptions of languages, heavily biased towards parsing and ignoring other aspects (e.g., phonology).",
"Nevertheless, it is tempting to compare how they relate languages that belong to the same typological genus.",
"In order to do so, we have concentrated on three genera present in our set of 40 languages: Romance (6 languages), Germanic (6 languages) and Slavic (7 languages), and computed how close the vectors of these languages are.",
"We define the mean internal distance (MID) of a language set L = {l_1, ..., l_n} as the average of the distances between every pair of distinct languages in L: MID(L) = (1 / (n^2 - n)) * sum_{(l_i, l_j) in L x L, i != j} d(l_i, l_j).",
"We have computed the MID of each language genus and compared it with the MID of randomly chosen sets of 6 languages (the number of languages in the Romance and Germanic genera).",
"The results in Table 1 clearly indicate that WALS vectors capture language genus similarities.",
"This shows that the vectors extracted from the WALS can measure language proximity.",
"It could be interesting, for example, to reproduce the results of Rabinovich et al. (2017) on reconstructing phylogenetic trees of languages from the WALS features.",
"The parser used in our experiments is an arc-eager transition-based parser (Nivre, 2008), trained with a dynamic oracle (Goldberg and Nivre, 2012).",
"The prediction of transitions is performed with a multilayer perceptron (MLP), as in Chen and Manning (2014).",
"The MLP consists of an input layer, one hidden layer, and an output layer.",
"Two sets of delexicalised features have been defined for the prediction: BASIC and EXTENDED.",
"BASIC is a standard set composed of 9 POS features, 7 syntactic features, 32 morphological features, and a distance feature (the distance between the head and the dependent).",
"EXTENDED adds to BASIC new features that correspond to the WALS vectors WN (6 features) and W80 (22 features), and/or the language ID of the sentence's language (1 feature).",
"Each feature, including ID, is associated with a zero-initialized learnable embedding of size 3.",
"The input layer of the MLP corresponds to the concatenation of the embeddings of the different features, with dimensions varying from 396 to 465 depending on the configuration (with or without the language vectors WN and W80, or a language identifier ID).",
"The output layer has 263 neurons, corresponding to the number of transitions that the parser can predict.",
"The hidden layer has 1,000 units, the dropout rate used during training is equal to 0.4, the number of epochs is equal to 10, the activation function is a ReLU, the loss function is negative softmax log likelihood, and the learning algorithm is AMSgrad, using default parameters from Dynet (Neubig et al., 2017).",
"At every step of the parsing process, the parser predicts an action to perform, which may yield the creation of a new dependency between two words of the sentence.",
"The prediction of the actions is based on the values of the features fed to the MLP.",
"In BASIC mode, these features describe different aspects of the head and the dependent, as well as their neighbourhood.",
"For example, if the head is a verb and the dependent is a noun located before the verb, a subject dependency has high probability in languages that prefer subject-verb ordering (SV).",
"(Footnote 7: corpora, configuration files and WALS vectors are available at: http://pageperso.lis-lab.fr/carlos. )",
"In EXTENDED mode, the information of whether the language is SV is made explicit.",
"The MLP therefore has the possibility of combining a sentential configuration (e.g., a noun before a verb) with a language configuration (e.g., the language is SV) when predicting an action.",
"All languages that share a common feature in W will therefore be able to perform the same prediction for sentential configurations that are specific to this common feature (e.g., the noun preceding the verb and the language being SV).",
"6 Experimental Settings. Corpora: our experiments were performed on the CoNLL 2017 shared task data (Zeman et al., 2017), using gold tokenisation and ignoring contractions (i.e., ranges).",
"We evaluate our models individually on each of the 40 languages for which we have a W ( l ) vector (section 4), using the original CoNLL 2017 shared task test sets.",
"The test corpora for each language are simply the concatenation of all test treebanks for that language.",
"Training and development are performed on multilingual corpora (henceforth TRAIN-ML and DEV-ML) derived from the training and development treebanks of 37 UD languages.",
"The UD training and development corpora have different sizes for different languages, ranging from 529 words for Kazakh (kk) to 1,842,867 for Czech (cs).",
"Thus, simply concatenating all corpora to constitute TRAIN-ML and DEV-ML would overrepresent certain languages and possibly bias the parser towards them.",
"This is why we have decided to balance TRAIN-ML and DEV-ML across languages.",
"First, all available training and development corpora of the 37 languages have been concatenated.",
"From this large corpus, we build two new intermediate corpora, PRE-TRAIN-ML and PRE-DEV-ML, with each sentence having a 90% chance of belonging to PRE-TRAIN-ML and a 10% chance of belonging to PRE-DEV-ML.",
"(Footnote 9: Our parser cannot predict non-projective trees, systematically generating a wrong parse for them at test time.",
"The average non-projectivity rate of the test corpora is equal to 1%, with a standard deviation of 1% among the 40 languages.",
"We ran some tests with the pseudo-projective tree transformation (Nivre and Nilsson, 2005), but it had a negligible impact on the results, so we have decided to keep the original projective algorithm.)",
"(Footnote 10: Three languages among our 40 target languages have neither training nor development data: bxr, kmr, sme.)",
"Second, we build TRAIN-ML (respectively DEV-ML) by randomly selecting sentences from PRE-TRAIN-ML (resp. PRE-DEV-ML) until the number of tokens exceeds 20,000 (resp. 2,000 for DEV-ML) per language.",
"At the end, we shuffle the selected sentences to obtain the final training and development corpora TRAIN-ML and DEV-ML.",
"Using this procedure, the same sentence can appear several times in a corpus.",
"Nonetheless, this method guarantees a balanced representation of every language in TRAIN-ML and DEV-ML.",
"Metrics The quality of the predicted trees is assessed with a standard measure for dependency parsing: labelled attachment score (LAS).",
"We report LAS per language, as well as MACRO-LAS, which is the macro-average of LAS over all languages that have a training set.",
"This measure is therefore independent of the size of the test corpus of each language, and is not biased towards over-represented languages in the test sets.",
"Training Configurations: our experiments on several ⟨training corpus, language vector⟩ pairs are designated by the following codes. L: Monolingual corpus.",
"The training corpus of a language l consists of the sentences of l in TRAIN-ML.",
"Thirty-seven BASIC delexicalised parsers have been trained, one per language.",
"This configuration corresponds to the standard one in parsing experiments: training and testing on the same language.",
"Multilingual corpus (referred to below as the multilingual configuration).",
"A BASIC parser is trained on the whole TRAIN-ML corpus, with no indication of the input's language.",
"The parsing model is delexicalised, so the corpus contains only POS tags (gold), morphological features (gold) and syntactic relations (to be learned).",
"ID: Multilingual corpus + language ID.",
"An EXTENDED parser is trained on the TRAIN-ML corpus using as extra feature the identifier of the language attached to each word.",
"(Footnote 12: using the CoNLL 2017 shared task evaluation script.)",
"(Footnote 13: The decision to use gold POS tags and morphological features may seem unrealistic.",
"This article is the first step of a process in which we intend to predict the POS tags and the morphological features in the same fashion.)",
"WN , W 80 : Multilingual corpus + WALS.",
"Two EXTENDED parsers are trained on the TRAIN-ML corpus, with WN (resp. W 80 ) vectors derived from the WALS attached to each word.",
"The detail of the LAS obtained for every language, as well as the macro-averaged LAS (MACRO), is displayed in Table 2.",
"We comment below on the results for L, and compare the results of meaningful pairs of experiments, summarised in Table 3.",
"L: The results obtained in the L experiment show important variation in performance across languages.",
"LAS ranges from 46.78 for Turkish to 81.44 for Italian.",
"Table 3 (differences between configurations X and Y: average (X - Y), standard deviation, and minimum and maximum with corresponding languages): L vs. multilingual: 5.68, 3.32, -0.51 (es), 14.37 (zh); W80 vs. multilingual: 4.00, 2.58, 0.59 (hi), 13.10 (ru); W80 vs. WN: 1.24, 1.45, -1.64 (tr), 5.80 (ru); ID vs. multilingual: 3.27, 3.00, -0.06 (nl), 13.48 (zh); L vs. ID: 2.41, 1.99, -0.91 (es), 7.65 (ko); W80 vs. ID: 0.73, 1.54, -4.07 (zh), 3.12 (el).",
"A detailed investigation for the reasons of such a variability is beyond the scope of this paper.",
"Let us just mention a few hypotheses.",
"Some are language specific, such as the balance between morphological and syntactic marking of linguistic constructions (i.e., morphologically rich languages are probably favoured in our setting, since the morphological analysis is given as input to the parser).",
"Others are genre specific: the corpora for different languages pertain to different genres.",
"Although delexicalisation neutralises some genre biases (some genres feature moderate lexical variability, which can ease parsing), genres can also influence syntax, through sentence length (longer sentences are generally harder to parse) or the ratio of error-prone constructions, such as ambiguous prepositional phrases and coordination.",
"Finally, annotation quality is heterogeneous across languages, potentially explaining the variability in LAS.",
"L vs. the multilingual configuration: an expected drop in performance is observed when switching from L to the multilingual configuration.",
"The MACRO-LAS loses 5.68 points.",
"The main hypothesis to explain such a drop is the noise introduced when mixing different languages.",
"This noise takes the form of contradictory information seen by the parser during training.",
"For example, the sentential configurations associated with a subject dependency in SV and VS languages are very different, yet the parser is unaware of this distinction and will see contradictory examples.",
"The magnitude of the LAS drop varies across languages.",
"In the case of Spanish, switching from L to the multilingual configuration even increases LAS (+0.51 points).",
"We do not have a conclusive explanation for this result.",
"The intuitive explanation is that the multilingual training corpus behaves as a (noisy) language which, on average, is closer to Spanish than it is to Chinese (whose performance drops by 14.37 points).",
"This is a consequence of the fact that, on average, the languages composing the multilingual corpus are closer to Spanish than they are to Chinese.",
"Multilingual vs. W80: this is our first major result: when adding W80 to the parser, the MACRO-LAS increases by 4 points compared to the plain multilingual configuration.",
"LAS increases for all languages.",
"There are two interpretations of this result.",
"The optimistic one is that W80 helps decrease the noise introduced by mixing languages, by explaining some apparently contradictory information in the data through the linguistic features encoded in the WALS.",
"The pessimistic interpretation is that the WALS vectors are merely an arbitrary encoding of the languages.",
"In this case, the parser's MLP would be associating sentential configurations to specific languages, thus learning different models for different languages.",
"Figuring out what the model is actually learning is not an easy task.",
"We propose in section 8 some clues to answer this question.",
"Moreover, for the 3 languages without training data, there is no clear tendency for performance to increase or decrease when using the WALS vectors.",
"More experiments are required to study the performances when the language is not in the training corpus.",
"WN vs. W80: when added to the multilingual configuration, the vectors WN and W80 do not have the same impact on performance.",
"Adding W80 yields an increase of 4 points, while adding WN increases performance by only 2.77 points.",
"The parser is therefore able to take advantage of a richer description of languages when learning the model.",
"This result indicates that the disappointing parsing results reported by Ammar et al. (2016), who adopted the WN vector, are probably due to the fact that the features extracted from the WALS were not rich enough to explain differences between languages that are important for a parser.",
"Multilingual vs. ID: adding the ID vector to the multilingual configuration yields an improvement of 3.28 MACRO points.",
"This increase was expected since, in this setting, sentential configurations are associated with a language ID, which helps decrease the noise in the data.",
"L vs ID : One could expect that ID would reach the result obtained by L since in both configurations the same amount of data is available and languages are unambiguously identified.",
"This is not the case: the performance of ID is 2.41 points behind L .",
"The difference in performances is due to the MLP architecture (in particular the size of the hidden layer), which is the same for ID and for each of the L models.",
"Each language is described with more parameters in an L model than it is in the ID model.",
"ID vs. W80: this is our second major result: adding W80 to the multilingual configuration yields better results than adding ID.",
"This result indicates that, in our setting, it is more interesting to describe a language as a vector of typological features, which allows the model to identify features common to several languages, than to describe it by an arbitrary code.",
"As mentioned above, such a conclusion is valid for models of a fixed size only, which is the case here.",
"It could be the case that, when increasing the number of parameters of the models, ID gets better results than W 80 .",
"We also ran a series of experiments combining ID and W80, not reported here in detail.",
"We observed a slight improvement (MACRO = 67.86) when adding ID to the input of the parser.",
"This effect indicates that the information contained in the ID and W80 vectors is complementary, and that the parser has the opportunity to rely on both of them.",
"Figuring out exactly how the parser uses this information is a complex issue that we address in the following section.",
"As already conjectured, one hypothesis for explaining the behaviour of the parser in the presence of W is that it uses the additional features to identify a language, not to better generalise on the syntactic phenomena that the features address.",
"Table 4 shows the accuracy of a logistic regression classifier trained to predict the language ID based on either the input features of the parser's MLP or the activations after the hidden layer, for the plain multilingual, WN, and W80 configurations.",
"The table shows that indeed, WALS features, especially W 80 , greatly improve the capability of the language classifier, suggesting that the parser can use language identity in its predictions.",
"The fact that this information is still available just before the decision layer means that it can be used for predicting parsing actions.",
"Another interesting analysis consists in comparing the distribution of activations for two languages.",
"In the following, the activations are measured at the hidden layer before the ReLU nonlinearity, and are assumed to follow normal distributions at the neuron level.",
"We compute the Jensen-Shannon Divergence (JSD) between the activations of a given neuron for a pair of languages.",
"Table 5 shows the mean, maximum and minimum neuron-level JSD between cherry-picked language pairs.",
"Table 4 (language identification accuracy for a logistic regression classifier trained on the input features or on the activations after the hidden layer): multilingual, input: 0.432; WN, input: 0.678; W80, input: 0.954; multilingual, hidden: 0.436; WN, hidden: 0.682; W80, hidden: 0.956.",
"We selected three language pairs with increasing distance.",
"Dutch and German (nl-de) belong to the same typological genus and have identical W 80 vectors.",
"Portuguese and French (pt-fr) also belong to the same genus but their vectors differ in six features (e.g. 101A pronominal subject, 143E postverbal negation).",
"On the other extreme, Russian Buriat and Irish (brx-ga) have very different W 80 vectors, with only two shared values out of 22.",
"For nl-de, the average difference between the activation distributions in W 80 (0.854) is lower than in (0.86), suggesting that W 80 helps leveraging the similarity between those languages, which is also confirmed by an increase in LAS (Table 2).",
"For pt-fr, however, the addition of W 80 results in an increase in the average distance between the activation distributions (0.912) when compared to (0.878).",
"Analogously, this difference also increases by a larger margin (from 0.89 to 1.16) for the most distant pair bxr-ga.",
"Overall, these observations indicate that W 80 reinforces parameter sharing between similar languages and increases contrast between dissimilar ones.",
"As an example, Figure 1 shows that the distributions for the neuron with highest JSD are very similar for nl-de while they are different for bxr-ga.",
"This paper has studied how high-level typological language descriptions coming from the WALS can guide a multilingual parser to learn cross-language generalisations.",
"Two interpretations of what the parser is doing in the light of such information have been opposed.",
"In the first (optimistic) one, the parser uses the high-level descriptions to cluster coherent observable patterns across languages.",
"In the second (pessimistic) one, the parser uses the high-level descriptions given as input to figure out the identity of the language and uses this ID to trigger parts of the model that are language specific.",
"Our results and parsing model analyses hint that, although it is difficult to draw definitive conclusions, the model indeed uses information in the WALS vectors as language identifiers, but some extra gain is observed, favouring the cross-lingual sharing hypothesis.",
"As future work, we plan to study the influence of typological features on each dependency type.",
"Whereas a delexicalised parser offers a simple experimental setup, it impacts parsing performance.",
"Thus, we would like to use multilingual word embeddings to make lexical information accessible to the parser, making it more realistic.",
"The results in section 8 suggest that the parser struggles between two behaviours.",
"One way to intervene would be to penalise the parser when it correctly identifies the language, using adversarial learning (Ganin et al., 2016).",
"Our experiments on the three languages with no training corpus are not conclusive on the usefulness of the WALS vector in zero-shot setting, and we plan to make more tests in this setting."
] | [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"Pinecone Systems [email protected]",
"BERT based ranking models have achieved superior performance on various information retrieval tasks.",
"However, the large number of parameters and complex self-attention operations come at a significant latency overhead.",
"To remedy this, recent works propose late-interaction architectures, which allow pre-computation of intermediate document representations, thus reducing latency.",
"Nonetheless, having solved the immediate latency issue, these methods now introduce storage costs and network fetching latency, which limit their adoption in real-life production systems.",
"In this work, we propose the Succinct Document Representation (SDR) scheme that computes highly compressed intermediate document representations, mitigating the stor-age/network issue.",
"Our approach first reduces the dimension of token representations by encoding them using a novel autoencoder architecture that uses the document's textual content in both the encoding and decoding phases.",
"After this token encoding step, we further reduce the size of the document representations using modern quantization techniques.",
"Evaluation on MSMARCO's passage re-reranking task show that compared to existing approaches using compressed document representations, our method is highly efficient, achieving 4x11.6x higher compression rates for the same ranking quality.",
"Similarly, on the TREC CAR dataset, we achieve 7.7x higher compression rate for the same ranking quality.",
"Information retrieval (IR) systems traditionally comprise of two stages: retrieval and ranking.",
"Given a user query, the role of the retrieval stage is to quickly retrieve a set of candidate documents Both authors contributed equally to the paper.",
"from a (very large) search index.",
"Retrieval is typically fast but not accurate enough; in order to improve the quality of the end result for the user, the candidate documents are re-ranked using a more accurate but computationally expensive algorithm.",
"Neural approaches have achieved the state of the art ranking performance in IR applications (Yates et al., 2021).",
"Transformer networks such as BERT (Devlin et al., 2019) consistently show better ranking effectiveness at the cost of a higher computational cost and latency (Nogueira and Cho, 2019).",
"To rank k documents, the ranker is called k times with an input of the form (query, document), where the query is the same, but the document is different.",
"Several works (MacAvaney et al., 2020; Gao et al., 2020b; Chen et al., 2020; Cao et al., 2020; Nie et al., 2020; Gao et al., 2020b; Khattab and Zaharia, 2020) have proposed to modify BERT-based rankers in a way that allows part of the model to compute query and document representations separately, and then produce the final score using a low-complexity interaction block; we denote these models as late-6624 interaction rankers.",
"Such approaches pre-compute document representations to improve latency significantly.",
"Next, at runtime the model computes the query representation (once), retrieves the precomputed document representations, and is only required to run a low-complexity interaction block k times to produce the final ranking score.",
"Precomputing document representations has shown to significantly reduce latency and at the same time retain comparable scores to BERT models (Gao et al., 2020b).",
"However, this does not account for additional storage and/or network fetching latency costs.",
"The representations typically consist of the contextual token embeddings in a transformer model, which consume orders of magnitude more storage than storing the entire corpus search index (cf. 5.1).",
"In this work, we propose Succinct Document Representation (SDR), a general scheme for compressing document representations.",
"It enables late-interaction rankers to be efficient in both latency and storage, while maintaining high ranking quality.",
"SDR is suitable for any ranking scheme that uses contextual embeddings, and achieves extreme compression ratios (2-3 orders of magnitude) with little to no impact on retrieval accuracy.",
"SDR consists of two major components: (1) embedding dimension reduction using an autoencoder with side information and (2) distribution-optimized quantization of the reduced-dimension vectors.",
"In SDR, the autoencoder consists of two subnetworks: an encoder that reduces the vector's dimensions and a decoder that reconstructs the compressed vector.",
"The encoder's output dimension represents the tradeoff between reconstruction fi-delity and storage requirements.",
"To improve the compression-reliability tradeoff, we leverage static token embeddings, which are available since the ranker has access to the document text (as it needs to render it to the user), and are computationally cheap to obtain.",
"We feed these embeddings to both the encoder and decoder as side information , allowing the autoencoder to focus more on storing just the context of a token, and less on its original meaning that is available in the static embeddings.",
"Ablation tests verify that adding the static vectors significantly improves the compression rates for the same ranking accuracy.",
"Since data storage is measured in bits rather than floating-point numbers, SDR uses quantization techniques to reduce storage size further.",
"Given that it is hard to evaluate the amount of information in each of the encoder's output dimensions, we perform a randomized Hadamard transform on the vectors, resulting in (1) evenly spread information across all coordinates and (2) transformed vectors that follow a Gaussian-like distribution.",
"We utilize known quantization techniques to represent these vectors using a small number of bits, controlling for the amount of quantization distortion.",
"Existing late-interaction schemes either ignore the storage overhead, or consider basic compression techniques, such as a simple (1 layer) autoencoder and float16 quantization.",
"However, this is insufficient to reach reasonable storage size (MacA-vaney et al., 2020); furthermore, this results in an increased fetching latency.",
"For the MSMARCO dataset, we used a distilled model with a reduced vector width (Hofsttter et al., 2020a) as the initial pre-trained weights for the late-interaction model.",
"On top of this, we used a non-linear autoencoder consisting of 2 dense layers followed by float16 quantization, a natural extension of MacAvaney et al. (2020).",
"This baseline achieves compression rates of 30x with no noticeable reduction in retrieval accuracy (measured with the official MRR@10 metric).",
"In comparison with this strong baseline, our SDR scheme achieves an additional compression rate of between 4x to 11.6x with the same ranking quality, reducing document representation size to the same order of magnitude as the retrieved text itself.",
"In Figure 1 we include a high-level presentation of the baseline, a variant of our method with float16 quantization, and our full method.",
"For the TREC CAR dataset, for which we do not have a reduced-width baseline, we used a BERT model as the pre-trained weights for the late-interaction model.",
"The baseline with 2 dense layers and float16 quantization achieves a 30x compression rates with a slight reduction in accuracy.",
"The SDR scheme reaches the same quality while improving compression rate by another 7.7x.",
"We propose the Succinct Document Representation (SDR) scheme for compressing the document representations required for fast Transformer-based rankers.",
"The scheme is based on a specialized autoencoder architecture and subsequent quantization.",
"For the MSMARCO passage retrieval task, SDR shows compression ratios of 121x with no noticeable decrease in ranking performance.",
"Compared to existing approaches for producing compressed representations, our method attains better compression rates (between 4x and 11.6x) for the same ranking quality.",
"Similar results are demonstrated on the TREC CAR dataset.",
"We provide a thorough analysis of the SDR system, showing that the contribution of each of the components to the compression-ranking effectiveness is significant.",
"Late-interaction models.",
"The idea of running several transformer layers for the document and the query independently, and then combining them in the last transformer layers, was developed concurrently by multiple teams: PreTTR (MacAvaney et al., 2020), EARL (Gao et al., 2020a), DC-BERT (Nie et al., 2020), DiPair (Chen et al., 2020), and the Deformer (Cao et al., 2020).",
"These works show that only a few layers where the query and document interact are sufficient to achieve results close to the performance of a full BERT ranker at a fraction of the runtime cost.",
"For each document, the contextual token vectors are stored in a cache and retrieved during the document ranking phase.",
"This impacts both storage cost as well as latency cost of fetching these vectors during the ranking phase.",
"MORES (Gao et al., 2020b), extends late-interaction models, where in the last interaction layers only the query attends to the document (and not vice-versa).",
"As document are typically much longer, this results in additional performance improvements with similar storage requirements.",
"ColBERT (Khattab and Zaharia, 2020) is another variant that runs all transformer layers independently for the query and the document, and the interaction between the final vectors is done through a sum-of-max operator.",
"A similar work, the Transformer-Kernel (TK) (Hofsttter et al., 2020b), has an interaction block based on a low-complexity kernel operation.",
"Both ColBERT and TK result in models with lower runtime latency at the expense of a drop in ranking quality.",
"However, the storage requirements for both approaches are still significant.",
"Some of the works above acknowledge the issue of storing the precomputed document representations and proposed partial solutions.",
"In ColBERT (Khattab and Zaharia, 2020), the authors proposed to reduce the dimension of the final token embedding using a linear layer.",
"However, even moderate compression ratios caused a large drop in ranking quality.",
"In the PreTTR model (MacAvaney et al., 2020), it was proposed to address the storage cost by using a standard auto-encoder architecture and the float16 format instead of float32.",
"Again, the ranking quality drops even with moderate compression ratios (they measured up to 12x).",
"Several other works (Guu et al., 2020; Karpukhin et al., 2020; Xiong et al., 2021; Qu et al., 2020; Lu et al., 2020) proposed representing the queries and documents as vectors (as opposed to a vector per token), and using dot product as the interaction block.",
"While this ranker architecture approach is simple (and can also be used for the retrieval step via an approximate nearest neighbor search such as FAISS (Johnson et al., 2017), ScaNN (Guo et al., 2020) or the Pinecone managed service 2 ), the overall ranking quality is generally lower compared to methods that employ a query-document cross-attention interaction.",
"For that reason these methods are used mainly for first-stage retrieval, followed by a reranking step.",
"Compressed embeddings.",
"Our work reduces storage requirements by reducing the number of bits per floating-point value.",
"Quantization gained attention and success in reducing the size of neural network parameters (Gupta et al., 2015; Essam et al., 2017; Wang et al., 2018; Wu et al., 2018) and distributed learning communication costs (Suresh et al., 2017; Alistarh et al., 2017; Konecn`y and Richtrik, 2018; Vargaftik et al., 2021, 2022).",
"Specifically, compressing word embeddings has been studied as an independent goal.",
"May et al. (2019) studied the effect of quantized word embeddings on downstream applications and proposed a metric for quantifying this effect with simple linear models that operate on the word embeddings directly.",
"As our work is concerned with compressing contextual embeddings, these methods do not apply since the set of possible embeddings values is not bounded by the vocabulary size.",
"Nevertheless, as in (May et al., 2019), we also observe that simple quantization schemes are quite effective.",
"Our work uses recent advances in this area to further reduce storage requirements for document representation, which, to the best of our knowledge, were not previously attempted in this context.",
"Our work is based on the late-interaction architecture (MacAvaney et al., 2020; Gao et al., 2020b; Chen et al., 2020; Cao et al., 2020; Nie et al., 2020), which separates BERT into L independent layers for the documents and the queries, and T L interleaving layers, where T is the total number of layers in the original model, e.g., 12 for BERT-Base.",
"Naively storing all documents embeddings consumes a huge amount of storage with a total of m h 4 bytes per document, where m is the average number of tokens per document and h is the model hidden size (384 for the distilled version and 768 for the BERT version).",
"For MSMARCO, with 8.8M documents and m = 76 .",
"9 , it leads to a high storage cost of over a terabyte, which is not affordable except in large production systems.",
"Our compression scheme for the document representations consists of two sequential steps,",
"(i) dimensionality reduction and",
"(ii) block-wise quantization, described in 3.1 and 3.2 respectively.",
"To compress document representations, we reduce the dimensionality of token representations (i.e., the output of BERT's L -th layer) using an autoencoder.",
"Standard autoencoder architectures typically consist of a neural network split into an encoder and a decoder: the encoder projects the input vector into a lower-dimension vector, which is then reconstructed back using the decoder.",
"Our architecture, AESI, extends the standard autoencoder by using the document's text as side information to both the encoder and decoder.",
"Such an approach is possible since, no matter how the document scores are computed, re-ranking systems have access to the document's text in order to render it back to the user.",
"In the rest of this section, we add the precise details of the AESI architecture.",
"Side Information.",
"In line with our observation that the ranker has access to the document's raw text, we propose utilizing the token embedding information, which is computed by the embedding layer used in BERT's architecture.",
"The token embeddings encode rich semantic information about the token itself; however, they do not fully capture the context in which they occur; hence, we refer to them as static embeddings .",
"For example, through token embeddings, we cannot disambiguate between the different meanings of the token bank , which can refer to either a geographical location (e.g., river bank) or a financial institution, depending on the context.",
"Static embeddings are key for upper BERT layers, which learn the contextual representation of tokens via the self-attention mechanism.",
"We use the static embeddings as side information to both the encoder and decoder parts of the autoencoder.",
"This allows the model to focus on encoding the distilled context , and less on the token information since it is already provided to the decoder directly.",
"AESI Approach.",
"For a token whose representation we wish to compress, our approach proceeds as follows.",
"We take the L -th layer's output contextual representation of the token together with its static embedding and feed both inputs to the autoencoder.",
"The information to be compressed (and reconstructed) is the contextual embedding, and the side-information, which aids in the compression task, is the static embedding.",
"The decoder takes the encoder output, along with the static embedding, and attempts to reconstruct the contextual embedding.",
"Figure 2 shows the AESI architecture.",
"AESI approach has two parameters that are determined empirically.",
"First, the L -th transformer layer of the contextual representation provided as input, which has a direct impact on latency 3 .",
"Second, the size of the encoder's output directly impacts the compression rate and thus storage costs.",
"Encoding starts by concatenating the input vector (i.e., the output of layer L , the vector we compress) and the static token embedding (i.e., the output of BERT's embedding layer), and then passes the concatenated vector through an encoder network, which outputs a c -dimensional encoded vector .",
"Decoding starts by concatenating the encoded vector with the static token embedding, then passes the concatenated vector through a decoder layer, which reconstructs the input vector.",
"Specifically, we use a two-layer dense network for both the encoder and the decoder, which can be written using the following formula: e = E ( v, u ) := W e 2 (cid:0) gelu (cid:0) W e 1 ( v ; u ) (cid:1)(cid:1) (1) v (cid:48) = D ( e, u ) := W d 2 (cid:0) gelu (cid:0) W d 1 ( e ; u ) (cid:1)(cid:1) (2) where v R h is the contextualized token em-3 A ranker has to compute layers L + 1 onward online.",
"bedding (the output of the L -th layer), u R h is the static token embedding (the output of the embedding layer, which is the input to BERT's layer 0 and includes token position embeddings and type embeddings), and u ; v means concatenation of these vectors.",
"W e 1 R i 2 h , W e 2 R c i , W d 1 R i ( c + h ) , W d 2 R h i are trainable parameters.",
"h is the dimension of token embeddings (e.g., 384), i is the intermediate autoencoder size, and c is the dimension of the projected (encoded) vector.",
"gelu( ) is an non-linear activation function (Hendrycks and Gimpel, 2016).",
"Additional autoencoder variations are explored in 5.3.",
"Storing the compressed contextual representations in a naive way consumes 32 bits (float32) per coordinate per token, which is still costly.",
"To further reduce storage overhead, we propose to apply a quantization technique, which uses a predetermined B bits per coordinate.",
"However, different coordinates and different tokens have different importance and possibly also different scales, so using the same number of bits and same quantization threshold for all of them increases the quantization error.",
"To remedy this issue, we follow an approach similar to EDEN quantization (Vargaftik et al., 2022), which uses a randomized Hadamard transform prior to quantization.",
"Loosely speaking, this shuffles the information across all coordinates.",
"Furthermore, each of the coordinates is guaranteed to follow Gaussian-like distribution, for which quantization boundaries can be computed optimally.",
"For the sake of brevity, the full description of the quantization algorithm is deferred to Appendix A. Efficiently applying the Hadamard transform requires the size of the input to be a power of two.",
"In addition, the input dimension should be large enough (specifically, larger than the output of AESI) so that information can be shuffled effectively.",
"Therefore, we concatenate the AESI vectors of all tokens from a single document, then segment it to a larger block size (we use 128), padding the last block with zeros when necessary.",
"The padding slightly increases space requirements and is considered when evaluating the compression efficiency.",
"In this section we describe the datasets used to evaluate the competing approaches for ranking documents given a query.",
"Next, we describe the baseline and the different configurations of SDR with emphasis on how we measure the compression ratio.",
"To evaluate the effectiveness of our proposed approach (SDR) and the competing baseline, we consider two information retrieval datasets, each with different characteristics.",
"MSMARCO passage re-ranking In this task (Nguyen et al., 2016), we are given a query and a list of 1,000 passages (retrieved via BM25), and the task is to rerank the passages according to their relevance to the query.",
"The corpus consists of 8.8M passages, downloaded from the web.",
"We consider two query sets: (1) MSMARCO-DEV , the development set for the MSMARCO passage reranking task, which consists of 6,980 queries.",
"On average, each query has a single relevant passage, and other passages are not annotated.",
"The models are measured using the mean reciprocal rank metric (MRR@10).",
"(2) TREC 2019 DL Track .",
"Here we consider the test queries from TREC 2019 DL Track passage reranking dataset.",
"Unlike MSMARCO-DEV, there are multiple passages annotated for each query with graded relevance labels (instead of binary labels), allowing us to use the more informative nDCG@10 metric.",
"Due to the excessive annotation overhead, this dataset consists of just 200 queries, so results are noisier compared to MSMARCO-DEV.",
"TREC Complex Answer Retrieval (CAR) is a dataset (Dietz et al., 2017) curated from Wikipedia.",
"It maps from article and section titles to relevant paragraphs.",
"Following Nogueira and Cho (2019), 6628 we use the automatic by-article annotations variant, which considers all paragraphs within the same article as relevant.",
"The dataset consists of 30M passages, making storage requirements a more significant challenge compared to the MSMARCO task.",
"The test query set consists of 2,254 queries with an average of 2.74 positive passages per query.",
"We use the MAP@1K official metric.",
"For both datasets, in addition to the quality metrics, we also measure the Compression Ratio (CR) as the amount of storage required to store the token embeddings when compared to the baseline model.",
"E.g., CR = 10 implies storage size that is one tenth of the baseline vectors.",
"Our algorithm is based on the late-interaction architecture (MacAvaney et al., 2020; Gao et al., 2020a; Nie et al., 2020; Chen et al., 2020; Cao et al., 2020).",
"We created a model based on this architecture, which we name BERTSPLIT , consisting of 10 layers that are computed independently for the query and the document with an additional two late-interaction layers that are executed jointly.",
"For MSMARCO, we initialized the model from reduced width pre-trained weights 4 and fine-tuned it using knowledge distillation from an ensemble of BERT-Large, BERT-Base, and ALBERT-Large (Hofsttter et al., 2020b) on the MSMARCO small training dataset, which consists of almost 40M tuples of query, a relevant document, and an irrelevant document.",
"For CAR, the model is initialized from pre-trained BERT-base model and trained on 50M samples curated by Nogueira and Cho (2019).",
"We trained autoencoder variants on a random subset of 500k documents to reduce training time.",
"We incorporate the quantization overhead into the computation of the compression ratios , including meta-data and the overhead of padding (cf. Appendix A).",
"In the following sections, we denote the SDR variants as AESI-{c}-{B}b where { c } is replaced with the width of the encoded vector and { B } is replaced with the number of bits in the quantization scheme.",
"When discussing AESI with no quantization, we simply write AESI-{c}.",
"To measure end to end latency, we configured an OpenSearch 5 cluster in AWS.",
"We used default pro-duction configurations, with 3 r6g.large datanode machines; disk space was set to 0.5TB.",
"For ranking, we used a single g4dn.xlarge machine, featuring a single T4 GPU instance.",
"This makes the cost of these two components similar.",
"In this section, we present the end to end latency results ( 5.1), show compression ratios and quality tradeoff of the SDR scheme ( 5.2).",
"We then examine how the proposed autoencoder ( 5.3) compares with other baselines and present additional measurements ( 5.4).",
"Table 1 (top) shows the latency benefits of SDR on the MSMARCO dataset, assuming document embeddings are stored in the OpenSearch retrieval system and 1k documents are retrieved per query.",
"The Distilbert model (full interaction architecture) has the highest quality and smallest index size (since it is only executed online).",
"However, ranking latency is prohibitively expensive.",
"As a baseline, we use a late interaction model, a two-layer autoencoder with code dimension 24 and float16 quantization, denoted Late+AE-24.",
"For this baseline, the ranking latency is significantly reduced at a cost in terms of quality.",
"However, the document representation 5 https://aws.amazon.com/ opensearch-service/ .",
"is large, causing retrieval and overall latency to increase to 0.7 and 1.22 seconds, respectively.",
"SDR, with a dimension of 16 and 6-bits quantization, reaches the same quality as the baseline while striking a better balance between retrieval and ranking latency, reaching overall latency of 1.1 seconds.",
"The index size is also significantly reduced compared to the baseline compression algorithm.",
"We also consider variants of the algorithms where the documents are pre-tokenized, and the tokenization output is retrieved instead of computing at runtime (marked as +tok in the table).",
"This further improves the ranking latency at the expense of a slight increase in index size.",
"Note that the baseline does not use the raw text and therefore does not benefit from precomputed tokens.",
"Table 1 (bottom) shows the latency results on the CAR dataset.",
"Here too, the BERT baseline has the highest ranking quality, at the cost of prohibitive latency.",
"The late interaction variants we consider have the same configuration as in the MSMARCO case, where the baseline uses 24 features (with float16 quantization) and SDR uses 16 features (with 6 bits EDEN quantization).",
"Unlike in the MSMARCO case, the quality (i.e., MAP@1k score) of these two options is not similar.",
"This makes SDR better than the baseline in latency, index size, as well as quality (by a large margin of over 14%).",
"In Appendix D we explore additional configurations and show that the baseline with 52 features reaches the same quality as SDR-16-6b.",
"However, we do not measure end-to-end latency for this case due to the excessive storage size and indexing time.",
"Note that using 52 features for the baseline is expected to have a negative impact on retrieval latency, making the benefits of SDR even more pronounced.",
"Table 2 shows the results on the MSMARCO query sets for SDR and its compression ratio against storing contextual token embeddings uncompressed.",
"In terms of compression ratio, it can be seen that AESI allows us to massively reduce storage requirements both with and without quantization.",
"AESI -16-6b reduces storage requirements by 121x, while at the same time showing no significant ranking performance drop.",
"Using AESI-16-6b, a document's embedding can be stored with only 947 bytes and the entire MSMARCO collection can Quant.bits( B )",
"be stored within 8.6GB.",
"There are several advantages of fitting the entire collection's representation into the main memory of the hosting machine, allowing for fast access, further fine-tuning, etc.",
"If further compression rates are required, AESI-8-5b uses just 5 bytes per token, reaching a compression rate of 277x and 487 bytes per document on average.",
"At this level of compression, the entire MSMARCO corpus fits in 3.8GB.",
"The MRR@10 drop is noticeable (0.0119) but still quite low.",
"Finally, for TREC19-DL, the impact of compressing token embeddings is less evident.",
"Only in the most extreme cases such as AESI-4-4b we see a significant drop in nDCG@10 performance.",
"These results demonstrate that the performance drop is very small, showing the effectiveness of our method.",
"To better understand the impact of the autoencoder, we present MRR@10 results as a function of autoencoder dimensions (i.e., number of floats stored per token) and with the different autoencoder configurations.",
"In addition to the 2-layer AESI architecture we described in 3.1 ( AESI-2L ), we consider the following variations: 6630 0.20 0.22 0.24 0.26 0.28 0.30 0.32 0.34 0.36 0.38 0 4 8 12 16 20 24 28 32 Auto Encoder Dimension (width) MRR @ 10 AE1L AE2L AESIDEC2L AESI1L AESI2L Figure 3: MRR@10 was measured on the MSMARCO-DEV-25 dataset as a function of autoencoder dimensions.",
"AutoEncoder with 2 Layers (AE-2L).",
"Standard 2-layer autoencoder with gelu activation.",
"This is the same as AESI, only without the side information.",
"AutoEncoder with 1 Layer (AE-1L).",
"Standard autoencoder with a single dense layer in the encoder and decoder.",
"AESI with 1 Layer (AESI-1L).",
"AESI with a single dense encoder and decoder layer.",
"DECoder-only AESI (AESI-DEC-2L).",
"Provides side information to the decoder but not the encoder.",
"To reduce measurement overhead, we ran the experiment only over the MSMARCO dataset.",
"In addition, we took only the top 25 BERTSPLIT passages for each query, denoted MSMARCO-DEV -25, which has a negligible impact on the results.",
"Figure 3 shows the results for the different autoencoder configurations.",
"Providing the side information to the autoencoder proves to be very effective in reducing storage costs, especially when the encoded vector size is small.",
"A 2-layer encoder/decoder model, as expected, is more effective than a single-layer model.",
"The gap is especially large when using side information, showing that the interaction between the encoded vector and the static token embeddings is highly nonlinear.",
"Finally, providing the static embeddings only to the decoder is slightly inferior to providing it also to the encoder.",
"the quantization technique we use to several other techniques, including Deterministic Rounding (Gersho",
"and Gray, 1992), Stochastic Rounding (Connolly et al., 2021), and Subtractive Dithering (Roberts, 1962; Gray and Stockham, 1993).",
"Due to lack of space, the results appear in Appendix B. We found that a randomized Hadamard transform improves quality (assuming similar bit rate), especially in the low-bits regime.",
"Using a quantization technique fit-ted to the Gaussian distribution of post randomized Hadamard transform data further improve quality, making the EDEN quantization superior to other quantization techniques in our case.",
"Our scheme uses a fixed number of bits per coordinate, which is essential for performance.",
"However, variable-rate compression can further reduce storage.",
"We used rate-distortion theory (from the information theory field) to upper bound the benefits of such techniques by 11%, which does not seem to justify the added system complexity (cf. Appendix B).",
"Intrinsic Evaluation of AESI-Encoded Vectors To better understand the impact of side information, we measure the error rate between an input vector and its reconstructed vector (i.e., after encoding and decoding).",
"As expected, in practically all cases, adding the side information reduces error rate compared to a 2-layer autoencoder (AE-2L) with the same code dimension.",
"In IR, the document frequency of a token is known to be negatively correlated with the token's importance.",
"We found that the error rate for AE-2L decreases with frequency, while the error rate for AESI increases with frequency.",
"This shows that the AESI scheme can better focus on tokens that are important for ranking.",
"A possible explanation for this phenomena is that the static embeddings for infrequent tokens are more informative (i.e., more helpful as side information) compared to static embeddings for frequent tokens (e.g., the').",
"We also found AESI excels more in compressing nouns, verbs, and adjectives, while AE-2L excels more in compressing punctuation, determiners, and ad-positions.",
"Again, this demonstrate that the static embeddings is most helpful in encoding tokens that are crucial for ranking.",
"The details of this evaluation are provided in Appendix C. 6 Conclusion In this paper, we proposed a system called SDR to solve the storage cost and latency overhead of existing late-interaction transformer based models for passage re-ranking.",
"The SDR scheme uses a novel 6631 autoencoder architecture that uses static token embeddings as side information to improve encoding quality.",
"In addition, we explored different quantization techniques and showed that the recently proposed EDEN performs well in our use case and presented extensive experimentation.",
"Overall, the SDR scheme reduces pre-computed document representation size by 4x11.6x compared to a baseline that uses existing approaches.",
"In future work, we plan to continue investigating means to reduce pre-computed document representation size.We believe that additional analysis of BERT's vector and their interaction with the context would be fundamental in such an advancement."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"abstain",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"objective",
"other",
"method",
"other",
"objective",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"This work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint.",
"It introduces a novel method that encourages the inclusion of key terms from the original document into the summary: key terms are masked out of the original document and must be filled in by a coverage model using the current generated summary.",
"A novel unsupervised training procedure leverages this coverage model along with a fluency model to generate and score summaries.",
"When tested on popular news summarization datasets, the method outperforms previous unsupervised methods by more than 2 R-1 points, and approaches results of competitive supervised methods.",
"Our model attains higher levels of abstraction with copied passages roughly two times shorter than prior work, and learns to compress and merge sentences without supervision.",
"Summarization, or the task of condensing a doc-ument's main points into a shorter document, is important for many text domains, such as headlines for news and abstracts for research papers.",
"This paper presents a novel unsupervised abstractive summarization method that generates summaries directly from source documents, without the aid of example summaries.",
"This approach simultaneously optimizes for the following important properties of a good summary: coverage of the keywords of the document, fluency of generated language, and brevity of generated summaries.",
"Original Document : Chilean President announced Wednesday that his country, which has been paralyzed by protests over the last two weeks, will no longer host two major international summits .",
"[...] The President has now canceled the hosting of the economic APEC fo-rum and COP25 environmental summit , which were both due to take place later this year .",
"[...] Masked Document : announced Wednesday that his country, which has been by over the last two weeks, will no longer two major international .",
"[...] The has now the of the and , which were both due to take place later this .",
"[...] Summary Loop [10 word constraint] : Pinera cancelled the APEC summit at Santiago.",
"Summary Loop [24 word constraint] : Pinera said Chileans have been canceled the hosting of the APEC summit, which was scheduled to take place in November.",
"Coverage score: 0.33 Summary Loop [45 word constraint] : Sebastian Pinera announced Wednesday that his country will not hold the APEC summit, which was scheduled to take place in Santiago.",
"Pinera said that Chileans had been paralyzed by protests over the last two weeks.",
"Coverage score: 0.39 Figure 1: Motivating example.",
"One of the main contributions of this work is a novel method of inducing good coverage of important concepts from the original article.",
"The coverage model we propose takes as input the original document with keywords masked out (see Figure 1).",
"It uses the current best automatically generated summary to try to uncover the missing keywords.",
"The more informative the current summary is, the more successful the coverage model is at guessing the blanked out keywords from the original document.",
"A resulting coverage score is fed back into the training process of the summarization model with the objective of producing summaries with high coverage.",
"A second contribution is our unsupervised training procedure for summarization, the Summary Loop , which leverages the coverage model as well as a simple fluency model to generate and score summaries.",
"During training, the procedure is conditioned on a desired summary length, forcing the Summarizer model to adapt to a length budget.",
"Figure 1 shows Summary Loop summaries obtained for the same document under three different length budgets.",
"A third contribution is a set of specialized techniques employed during training to guide the model away from pathological behavior.",
"These guard rails include a method for reducing repetition, for encouraging the model to complete sentences, and to avoid frame filling patterns.",
"The models trained through the Summary Loop outperform all prior unsupervised summarization methods by at least 2 ROUGE-1 points on common news summarization datasets (CNN/DM and Newsroom), and achieve within a few points of state-of-the-art supervised algorithms, without ever being exposed to any summaries.",
"In addition, summaries generated by our method use 50 % more summarization techniques (compression, merging, etc.) than prior automatic work and achieve higher levels of abstraction, reducing by almost half the gap between human-generated summaries and automatic summaries in terms of length of copied spans.",
"Supervised Abstractive Summarization.",
"Sequence-to-sequence (seq2seq) (Sutskever et al., 2014) models trained using teacher-forcing are the most common approach to abstractive summarization (Nallapati et al., 2016).",
"A common architecture is the Pointer-Generator (See et al., 2017).",
"Performance can further be improved by constraining the attention (Gehrmann et al., 2018; Gui et al., 2019; Wang et al., 2019) and using pretrained Transformer-based language models (Lewis et al., 2019; Chi et al., 2019; Edunov et al., 2019).",
"Through architectural changes, the training procedure remains constant: using a large corpus of document-summary pairs, the model is trained to reproduce target summaries.",
"Unsupervised Summarization.",
"Most unsupervised summarization work is extractive: sentences deemed relevant are pulled out of the original document and stitched into a summary, based on a heuristic for a sentence's relevance (Mihalcea and Tarau, 2004; Barrios et al., 2015; West et al., 2019).",
"Nikolov and Hahnloser (2019)'s abstractive approach is partially unsupervised, not requiring parallel data, but only a group of documents and a group of summaries.",
"In contrast, our work does not require any summaries, and is trained using only documents.",
"Radford et al. (2019) summarize documents using a language model (GPT2) in a Zero-shot learning setting.",
"The model reads the document followed by a special token TL/DR, and is tasked with continuing the document with a summary.",
"Our work is an extension of this work: we initialize our Summarizer model with a GPT2 and specialize it with a second unsupervised method.",
"Summarization and Q&A.",
"Eyal et al. (2019) and Arumae and Liu (2018) turn reference summaries into fill-in-the-blank (FIB) questions, either as an evaluation metric or to train an extractive summarization model.",
"In this work, we directly generate FIB questions on the document being summarized, bypassing the need for a reference summary.",
"Scialom et al. (2019)'s work stays closer to a Q&A scenario, and uses a Question Generation module to generate actual questions about the document, answered by a Squad-based (Rajpurkar et al., 2018) model using the generated summary.",
"We refrain from using actual questions because question generation remains a challenge, and it is unclear how many questions should be generated to assess the quality of a summary.",
"RL in Summarization.",
"Paulus et al. (2018) introduced Reinforcement Learning (RL) to neural summarization methods by optimizing for ROUGE scores, leading to unreadable summaries.",
"Since then, Reinforcement Learning has been used to select sentences with high ROUGE potential (Chen and Bansal, 2018), or optimize modified versions of ROUGE that account for readability (Pasunuru and Bansal, 2018).",
"In all cases, the reward being computed relies on a reference summary, making the methods supervised.",
"We craft a reward that does not require a target summary allowing our training process to remain unsupervised.",
"For this work, the definition of a summary",
"A summary is a brief, fluent text that",
"Brevity, fluency and coverage are the three pillars of a good summary.",
"Under a length constraint, a good quality summary should contain as much information about the original document as possible while retaining fluent and coherent English.",
"Subsection 3.1 lays out the steps in the Summary Loop.",
"Subsections 3.23.5 specify how each component is represented by a neural network.",
"Section 4 shows how to train a summarizer model using this architecture in an unsupervised manner.",
"1 3.1 Summary Loop Steps Numbers in Figure 2 correspond to the following steps:",
"1. Summarizer receives a document D and length-constraint L, and produces a summary S fulfilling the length constraint.",
"2. Using a Masking Procedure, D is modified into a masked document M, where important words have been replaced with blanks.",
"3. Coverage receives S and M, and uses them to fill in each blank in M with a word, producing F. F and D are compared, and the resulting fill-in accuracy is called the Coverage Score.",
"4. Fluency receives S, and gives a Fluency Score based on its assessment of the quality of the Summary's writing.",
"5. The Fluency Score is added to the Coverage Score (as a weighed sum) into a Summary Score for the (D, S) pair.",
"6. Reinforcement Learning is used to train the Summarizer to produce summaries with high Summary Score.",
"The Summary Loop does not rely on the use of a target/reference/human-written summary, but only the summaries produced by the Summarizer model.",
"The process can therefore be iterated upon without supervision from Summarization datasets.",
"We use a Generative Transformer (Radford et al., 2019) as the model architecture of the summarizer.",
"We make this choice for two reasons.",
"First, Generative Transformers can produce text one word at a time, allowing the system to produce abstractive 1 The code, model checkpoints and other resources are available at https://github.com/CannyLab/ summary_loop .",
"summaries.",
"Second, we use the pretrained Generative Transformer to initialize the Summarizer.",
"Practically, the Summarizer first reads through the entire document, followed by a special START token, signaling summarization.",
"The Summarizer produces a probability distribution over words in its vocabulary, and a word is picked from the distribution and fed back as an input into the model.",
"This procedure is repeated and halts either when the summary reaches a length constraint, or when the Summarizer produces a special END token.",
"See Appendix C for the model size and initialization used to train the summarization paper.",
"The Masking Procedure decides on a set of keywords that are important elements in the document that should be recoverable using a summary.",
"The keywords are replaced with blanks, indirectly indicating which information should be present in the summary.",
"We use a tf-idf-based approach to decide on the set of masked keywords, as it is both simple and has been shown to represent word relevance to a document (Ramos, 2003).",
"Masking procedure implementation details are presented in Section A of the Appendix.",
"We select the k words with highest tf-idf score for the document to serve as the masked words.",
"The k parameter represents a balance: if too many words are masked, the filling-in becomes impos-Finetuned-BERT C h il e w ill no t ho s t t he e c ono m i c APEC and t he COP 25 , t w o ... <SEP> < MASK > < MASK > announ c ed W edne s da y t ha t h i s c oun t r y , w h i c h ha s been < MASK > b y < MASK > o v e r t he l a s t t w o w ee ks , ...",
"sible, but if too few are masked, the Summarizer model will not be encouraged to include sufficient content in its summary.",
"Varying the value of k (10,12,15,20) yielded only small discernible difference in the Summarizers produced, and we use k = 15 in all our final experiments.",
"The masking procedure can be adapted to a specific domain.",
"For instance, if summarizing fi-nancial documents, the masking procedure could systematically mask all numbers, encouraging the Summarizer model to add numbers to its summary.",
"The Coverage Model receives a computationally generated summary and the masked document and attempts to fill in each blank word.",
"The task of filling in blanks is similar to masked language modeling (MLM), used to pretrain BERT-like (Devlin et al., 2019) models.",
"In MLM, some of the words are replaced with a special MASK token, and the model must use other information (unmasked words) to fill in the masked words.",
"Because of the similarity to our task, we use a BERT-based neural network as the architecture for the coverage model.",
"However, the coverage task differs from MLM in two ways.",
"First, we modify the masking procedure: instead of masking a random percentage of the words (often 15% for BERT), we mask all appearances of the keywords selected by the masking procedure described in Section 3.3.",
"Second, the input to the coverage model is a concatenation of the unmasked summary, a separator token and the masked document.",
"The model can leverage unmasked information available in the summary to fill in the masked document.",
"The Coverage Model is illustrated in Figure",
"3. 3.4.1 Computing a Coverage Score Using the masking procedure, we obtain M = f ( D ) , the masked document.",
"The coverage model produces the filled document F = g ( M, S ) .",
"Raw coverage score is the fraction of correctly filled in words in F. Let D i , F i and M i correspond to the i th word in their respective document, IM the set indices of words that have been masked.",
"Then: RawCov( D, S ) = (cid:107) i IM if D i = F i (cid:107) (cid:107) IM (cid:107) (1) The model can use information in the unmasked (visible) words of M to predict the masked words.",
"For instance, if the word Chile is visible, then Santiago would be a well-informed guess near the word capital, which might not be masked out.",
"This is undesirable, because coverage should account for what information the model can learn from the summary S, not what it can guess from the unmasked portion of D. To counteract this problem, we modify the raw coverage score by computing how much information the model can guess without the summary present, using an empty string summary: F = g ( M, ) .",
"We then normalize a summary's coverage by subtracting the empty string coverage from the raw coverage, leaving only filled-in words answerable using S, as shown in Equation",
"2. NormCov( D, S ) = RawCov( D, S ) RawCov( D, ) (2) In a nutshell, raw coverage score answers the question: What fraction of blanked words can be correctly filled in with this summary? and normalized coverage score answers: What is the increase in the fraction of blanks that can be correctly filled in with this summary, compared to having no sum-mary?",
"In the rest of this paper, Coverage Score refers to Normalized Coverage Score.",
"We train the Coverage Model once, and its weights are then fixed during the training of the Summarizer.",
"In order to train the Coverage Model, we need pairs of documents (D) and summaries (S).",
"However, we operate under the assumption that we do not have access to summaries (to keep the procedure unsupervised).",
"In order to remove this dependency, we use the first 50 words of the unmasked Summary Dataset Summary Length Raw Coverage Norm.",
"document ( D [: 50] ) as a proxy for document summaries.",
"The Coverage Model is initialized with a trained BERT model (Devlin et al., 2019), and trained using ( D, D [: 50]) pairs on the coverage task.",
"Because BERT is already trained on the similar MLM task, the Coverage model is able to leverage knowledge accrued by BERT.",
"The Coverage Model converges after roughly 5 hours of training on a Titan X GPU.",
"We present properties of the raw and normalized coverage through the analysis of existing human-written summary datasets.",
"We focus our analysis on three datasets in the news domain: (1) a headline dataset obtained from common US news web-sites (Laban and Hearst, 2017), (2) the Newsroom dataset (Grusky et al., 2018), and (3) the CNN/DM dataset (Nallapati et al., 2016).",
"For each dataset, we take document/summary pairs and obtain raw and normalized coverage score through our Coverage model, reported in Table",
"1. First, longer summaries obtain higher coverage scores: a CNN/DM summary with an average of 45 words can be used to fill in 73% of the blanks correctly, compared to 48% for a 9 word headline.",
"Across datasets, the correlation between summary length and raw coverage score is 0.56, confirming that longer summaries contain more information, according to coverage.",
"Second, we simulate the first k words 2 of the document as a summary.",
"We use k = 10 , 24 , 46 to match average word length in the three datasets.",
"For two of the three values (10 and 46), the coverage of human-written summaries is higher than the first-k word counterpart.",
"This is remarkable: even though the summary is farther away lexically (i.e., 2 We choose the first k words due to the similarity to Lede 3 (first 3 sentences), a common baseline in news.",
"A model solely trained to optimize coverage has no incentive to write in good English, use punctuation, determinants or pronouns, as these are not words removed by the masking procedure.",
"The objective of a Fluency Model is to judge the writing quality of the summary, independent of its coverage.",
"Given the right corpus, we argue that a language model's probability can be modified into a Fluency Score.",
"Therefore, we adapt a language model into the Fluency Model.",
"We choose the generative Transformer (Radford et al., 2019) architecture for our Fluency model, as it can be trained into a powerful language model.",
"Just as with the Summarizer, by using a standardized architecture and model size, we can make use of pretrained models.",
"However, it is important for Fluency to fine tune the language model on the target domain, so that the Summarizer is rewarded for generating text similar to target content.",
"To produce a uniform Fluency Score, we linearly scale the language model's log-probability of a given summary ( LM ( S ) ) between an ideal value LP low and a maximum value LP high : Fluency( S ) = 1 LM ( S ) LP low LP high LP low (3) This ensures that the Fluency( S ) is usually in the range [0 , 1] .",
"LP low and LP high are picked specifi-cally for a particular language model, and ensure that the log-probability magnitudes of a specific language model do not affect the overall scores.",
"The final Summary Score is a weighed sum of the Coverage and Fluency Scores:",
", are hyperparameters giving relative importance to Coverage and Fluency.",
"We set = 5 , = 1 in all our experiments.",
"Model choice, size, and initialization are summarized in Figure A1.",
"We first outline the training procedure and then detail several guard-rail mechanisms used during",
"training to prevent the Summarizer from learning pathological writing strategies.",
"Figure A2 presents training plots of a Summary Loop model and interpretation of the different learning phases.",
"We use Reinforcement Learning to train the Summarizer component (agent), such that it achieves high summary score (reward).",
"Note that the Coverage and Fluency models are frozen, and their weights are not trained.",
"We make this choice as allowing Fluency and Coverage models to evolve could enable the models to coordinate and cheat.",
"We use the Self-critical sequence training (SCST) method (Rennie et al., 2017), as it has been shown to perform well on similar text generation tasks optimizing BLEU for image captioning or ROUGE scores in summarization.",
"In SCST, the Summarizer is used to produce two summaries of document D : a greedy summary S , using a decoding strategy that always picks the most likely next word, and a sampled summary S s , picking the next word in the summary by sampling from the word distribution.",
"Summaries are scored using the Summary Loop: R = SummaryScore ( D, S ) R s = SummaryScore ( D, S s ) Then we minimize the following loss: L = ( R R s ) N (cid:88) i =0 log p ( w si | w s 1 , ..., w si 1 , D ) Where p ( w si | ... ) represent the probability of the i th word conditioned on previously generated word, according to the model.",
"Intuitively, if R s > R , minimizing L maximizes the likelihood of the sampled sequence which is desired because it outperformed the greedy summary and increases expected reward of the model.",
"During training, the Summarizer model learns pathological summarization strategies.",
"We build training guard rails to detect the pathological behavior and penalize the model during training.",
"A guard rail has a binary effect: if a pathology is detected in a summary, its Summary Score is reduced by a penalty amount .",
"We use = 2 for all experiments.",
"We found three training guard rails to be useful: No-repetition, Finish-your-sentence, and No-frame-filling.",
"A common problem in neural text generation is repetition of text.",
"Based on the observation that 3-grams seldom repeat in common summarization datasets, the No-repetition training guard rail raises a penalty on a summary when it contains any repeated 3-gram.",
"When generating a summary, the model can either produce the END token, or generate a number of words up to the length constraint.",
"We observe that if the model does not produce the END token, it often generates partial sentences, which is undesirable.",
"Because we want to encourage the model to generate an END token, the Finish-your-sentence raises a penalty if a summary has no END token.",
"During training, the model sometimes learns to overly rely on sentence patterns that achieves high reward as a one size fits all summary.",
"In one example the model learns to produce summaries solely of the form: X talks with Y about the Z.",
"The model uses this frame, filling in the X, Y and Z slots with relevant keywords and entities to achieve a small but positive coverage.",
"This form of frame-filling is undesirable, as the model often produces inaccurate information to fit the entities to the pattern.",
"We implement a guard rail to penalize the model when frame-filling patterns are observed.",
"During training, we keep track of the last 100 summaries produced by the model.",
"We then aggregate the frequency of words for each word position in the 100 summaries.",
"If any word appears more than 50% of the time at a specific word position, we raise the No-frame-filling penalty.",
"In the example given above, the word talks appeared in the second word position in more than 50% of the summaries, as well as the word about in the fifth position.",
"These rule-based training guard rails are simple and effective.",
"In our finalized trained models, very few summaries exhibit penalized behavior: 2% for no-repetition, 5% for finish-your-sentence, and 2.5% for no-frame-filling.",
"We present results for Summary Loop models trained in the news domain under three different length constraints: 10, 24, and 46 words, matching the distributions of the Headline, Newsroom",
"(Grusky et al., 2018) and CNN/DM (Nallapati et al., 2016) datasets.",
"We compare our summaries using the standard ROUGE metric, and by analyzing summaries for the errors made, the technique used and the level of abstraction.",
"Finally, we show the Summary Loop can be complemented with supervision, reducing the amount of data needed to achieve comparable ROUGE results.",
"Table 2 and Table 3 present ROUGE results on the CNN/DM and Newsroom datasets respectively.",
"In both cases, Summary Loop outperforms other unsupervised methods, and is competitive with supervised methods despite not being exposed to any example summaries.",
"On CNN/DM, Summary Loop performs in between the Pointer Generator and Bottom Up architecture in terms of ROUGE-1.",
"On the Newsroom, Summary Loop is within 0.6 ROUGE-1 points of the Pointer-Generator with Coverage and surpasses it by 2 ROUGE-L points.",
"Recent breakthroughs in pretrained Transformer models have shown that using larger models in Summarization can lead to large improvements.",
"For instance, a large version of the PEGASUS model (Zhang et al., 2019a) outperforms the base version by 2.3 ROUGE-1 points.",
"Because Summary Loop experiments were performed using base models, we expect that using larger Transformer models could lead to similar gains.",
"Table 2 confirms that human-written summaries obtain amongst the highest Fluency and Coverage scores.",
"Human-written summaries are only outperformed by Summary Loop summaries, and the Lede-3 baseline.",
"However, the Summary Loop summaries are obtained by directly optimizing for Fluency and Coverage, and Lede-3 baseline summaries achieve their higher Coverage at the expense of being much longer (i.e. 84 words on average compared to 58 in human-written summaries).",
"We perform a manual analysis of 200 randomly-selected summaries on the test set of CNN/DM from the Pointer-Generator with Coverage (PGC), Bottom-Up (BU) and the unsupervised Summary Loop (SL).",
"We annotated each summary with two types of errors: Inaccurate (information in summary contradicts document), Ungrammatical (one sentence or more is not properly constructed), and Error Made PGC BU SL Inaccurate (%) 11 31 24 Ungrammatical (%) 7 15 18 Technique Used (Success/Total) PGC (S/T) BU (S/T) SL (S/T) Sent.",
"four summarization techniques: Sentence Compression (summary sentence is a document sentence with words removed), Sentence Merging (2 or more document sentences are merged into a summary sentence), Novel Sentence (original sentence in the summary), and Entity Manipulation (a named entity is modified or simplified, e.g. changing a full name to a last name).",
"We present Summary Loop examples illustrating each error and technique in Figures A3 A8.",
"The analysis was performed by the first author of the paper, labeling article/summary pairs without knowledge of model origin.",
"A summary can manifest any number of summarization Techniques, or none.",
"Labeling is binary: if a summary exhibits more than one or instances of a Technique, it receives a 1, otherwise it receives a 0.",
"Results of the analysis are summarized in Table",
"4. SL uses significantly more summarization techniques (425) than PGC (148) and BU (287) summaries.",
"Beyond raw counts, SL is more successful at applying summarization techniques (59% success) than BU (50% success), but less successful than PGC (72%).",
"Note however that PGC takes lit-tle risk: 19% of the summaries go beyond sentence compression, and 39% are extractive, using none of the summarization techniques.",
"All methods generating summaries one word at a time have potential for abstraction.",
"In Figure 4 we analyze human and system written summaries for abstraction level.",
"We measure a summary's level of abstraction by looking at the length of spans Figure 4: Histogram and average copied span lengths for abstractive summaries.",
"copied from the document.",
"Summary Loop is the most abstractive automated method, although less so than human written summaries.",
"SL cuts nearly in half the length of copied spans compared to other automated methods.",
"If summaries are available, we show that they can complement the unsupervised Summary Loop.",
"We run supervised experiments on CNN/DM using a generative Transformer architecture and varying the initialization.",
"We compare initializing with (1) random weights, (2) the original GPT2 weights, and (3) the Summary Loop weights of target length 45.",
"We train each model with teacher forcing, comparing using the entire CNN/DM training set to just 10% of it.",
"The results are summarized in Table",
"5. First, initializing with the Summary Loop leads to higher ROUGE score both in the 10% and full dataset setting.",
"As expected, results improve when using the entirety of the data, and the Summary Loop initialized model trained with the entirety of CNN/DM obtains a ROUGE-1 F1-score of 41.0, within the confidence interval of the supervised Bottom Up (Gehrmann et al., 2018) architecture.",
"This is a strong result as the Transformer we use is a generic language model, and is not specialized for summarization.",
"Second, initializing with Summary Loop and training with 10% of CNN/DM yields comparable ROUGE scores to initializing with GPT2 and using the entire CNN/DM, showing that Summary Loop can be useful when fewer summaries are available.",
"Customizing summaries.",
"In Figure 1, we illustrate the effect of the length constraint by summarizing the same document under three different length constraints.",
"Each model adapts to its word budget.",
"However, length is only one way to customize summaries.",
"One might want to summarize based on point of view, chronology, theme, etc.",
"Fluency vs. Grammaticality.",
"By choosing to represent the validity of summaries with a Language model, we encourage fluent summaries (i.e., with likely sequences of words) but not necessarily grammatical ones.",
"Extending the scoring to include grammaticality, either by using a parsing model, or leveraging the Corpus of Linguistic Acceptability (Warstadt et al., 2019) could prove useful.",
"Summarization in the wild.",
"Because our method is unsupervised, it can be applied to new domains and languages.",
"In this work, we bene-fited from pretrained BERT and GPT2 models in English, which do not yet exist publicly for other languages.",
"Once they become available in other languages, the Summary Loop can be ported over.",
"Abstraction dangers.",
"Recent work around measuring factuality in generated text, using Natural Language Inference (Guo et al., 2018) or rule-based fact extraction (Zhang et al., 2019b) becomes increasingly important with summaries that are more abstractive.",
"This work can be naturally included into the Summary Loop, with a fact-checker model generating an accuracy score.",
"In this work we present a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint.",
"When tested on common news summarization datasets, our method significantly outperforms previous unsupervised methods, and gets within the range of competitive supervised methods.",
"Our models attain levels of abstraction closer to human-written summaries, although with more abstraction, more potential for factual inaccuracies arise.",
"We would like to thank Forrest Huang, David Chan, Roshan Rao, Katie Stasaski and the ACL reviewers for their helpful comments.",
"This work was supported by the first author's internship at Bloomberg, and a Bloomberg Data Science grant.",
"We also gratefully acknowledge support received from an Amazon Web Services Machine Learning Research Award and an NVIDIA Corporation GPU grant."
] | [
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"The prosperity of Massive Open Online Courses (MOOCs) provides fodder for many NLP and AI research for education applications, e.g., course concept extraction, prerequisite relation discovery, etc.",
"However, the publicly available datasets of MOOC are limited in size with few types of data, which hinders advanced models and novel attempts in related topics.",
"Therefore, we present MOOCCube, a large-scale data repository of over 700 MOOC courses, 100k concepts, 8 million student behaviors with an external resource.",
"Moreover, we conduct a prerequisite discovery task as an example application to show the potential of MOOCCube in facilitating relevant research.",
"The data repository is now available at http: //moocdata.cn/data/MOOCCube .",
"Massive open online courses (MOOCs) boom swiftly in recent years and have provided convenient education for over 100 million users worldwide (Shah, 2019).",
"As a multi-media, large-scale online interactive system, MOOC is an excellent platform for advanced application research (Vol-ery and Lord, 2000).",
"Since MOOC is committed to helping students learn implicit knowledge concepts from diverse courses, many efforts from NLP and AI raise topics to build novel applications for assistance.",
"From extracting course concepts and their prerequisite relations (Pan et al., 2017b; Roy et al., 2019; Li et al., 2019) to analyzing student behaviors (Zhang et al., 2019; Feng et al., 2019), MOOC-related topics, tasks, and methods snowball in recent years.",
"Despite the plentiful research interests, the resource from real MOOCs is still impoverished.",
"Most of the publicly available datasets are designed for a specific task or method, e.g., Zhang et",
"al.(2019) build a MOOC enrollment dataset for course recommendation and (Yu et al., 2019) is only for course concept expansion, which merely contains a subset of MOOC elements.",
"Consequently, they are not feasible enough to support ideas that demand more types of information.",
"Moreover, these datasets only contain a small size of specific entities or relation instances, e.g., prerequisite relation of TutorialBank (Fabbri et al., 2018) only has 794 cases, making it insufficient for advanced models (such as graph neural networks).",
"Therefore, we present MOOCCube, a data repository that integrates courses, concepts, student behaviors, relationships, and external resources.",
"Compared with existing education-related datasets, MOOCCube maintains the following advantages: Large-scale : MOOCCube contains over 700 MOOC courses, 38k videos, 200k students, and 100k concepts with 300k relation instances, which provide sufficient resources for models that require large-scale data.",
"High-coverage : Obtained from real MOOC websites and external resources, the courses, concepts, and student behaviors in MOOCCube have profuse attributes and relationships, offering comprehensive information for various related tasks.",
"As shown in Figure 1, a data cell of MOOCCube is in terms of concepts, courses, and students, which represents a learning fact, i.e., a student s learns concept k in course c .",
"Through different queries, MOOCCube can provide various combinations of these data cells to support existing research.",
"In this paper, we first introduce the data collection process and then give an insight into the characteristics of MOOCCube by analyzing its statistics in different aspects.",
"We also conduct a typical NLP application task on MOOCCube and discuss the future directions on the usage of our datasets.",
"Our contribution is in two folds:",
"a) an investigation of NLP and AI application research in online education, especially in MOOCs;",
"b) a large-scale data repository of MOOCs, which organizes data in three dimensions: student behaviors, courses, and knowledge concepts.",
"Figure 1 gives an overview of MOOCCube, which models various facts of MOOCs in three main dimensions: courses , concepts and students .",
"Due to the rich relationships among these entities, we organize the data into a form of a knowledge base for convenient storage and query.",
"Through specific queries, MOOCCube can support diverse related applications, e.g., we can build a dataset for dropout prediction tasks by collecting a stu-dent's all behaviors in a certain course, and build a concept extraction dataset with all concepts in all courses.",
"In subsequent sections, we introduce how to obtain and process the abundant data from Xue-tangX 1 , one of the largest MOOC website in China, while considering the issue of privacy protection.",
"Courses are the foundation of MOOCs and consist of a series of pre-recorded videos .",
"Regarding each course as an entity, we extract the synopsis, video list, teacher , and the organization , offering this course as its attributes.",
"As shown in Figure 1, We obtain each video's subtitle and save the order of videos for further knowledge discovery in MOOCs.",
"Notably, we also record the description of the teacher and the organization from Wikidata 2 as an external resource.",
"Course concepts refer to the knowledge concepts taught in the course videos.",
"For each video, we extract 10 most representative course concepts from subtitles (Pan et al., 2017b).",
"We also record the concept description from Wikidata and search top 10 related papers for each concept via AMiner 3 (Tang et al., 2008) as external resource.",
"Moreover, as many NLP types of research are interested in discovering semantic relationships among concepts, we further build a novel concept taxonomy with prerequisite chains as a concept graph (Gordon et al., 2016).",
"Concept Taxonomy .",
"A solid concept taxonomy is favorable for further research in course content (Gordon et al., 2017).",
"However, existing taxonomies like ConceptNet (Liu and Singh, 2004) or Wiki Taxonomy (Ponzetto and Strube, 2007) cannot be directly applied to course concepts because course concepts are mostly academic terms and the non-academic categories greatly interfere with the quality of taxonomy.",
"Thus, we select a cross-lingual term taxonomy from CNCTST 4 as a basis and lead manual annotation to build a serviceable course concept taxonomy for MOOCCube.",
"Prerequisite Chain .",
"Prerequisite relation is de-fined as: If concept A can help understanding concept B, then there is a prerequisite relation from A to B (Gordon et al., 2016).",
"Prerequisite relation has received much attention in recent years (Pan et al., 2017a; Fabbri et al., 2018; Li et al., 2019) and has a direct help for teaching applications.",
"To build prerequisite chains, we first reduce the amount of candidate concept pairs by utilizing taxonomy information (Liang et al., 2015) and video dependency (Roy et al., 2019), and then lead 3 https://aminer.org 4 http://www.cnctst.cn/ manual annotation.",
"Student behavior data not only supports relevant research (such as course recommendation (Zhang et al., 2019), video navigation (Zhang et al., 2017), dropout prediction (Feng et al., 2019)), but also indicates the relationships between courses and concepts (Liang et al., 2015).",
"To meet different needs, we preserve the enrollment records and video watch logs of over 190,000 users from 2017 to 2019.",
"Note that video watch logs record student behavior in detail, e.g., click a certain sentence, jump back to a video point, etc.",
"Considering the data quality and privacy, we first remove the users with less than two video watching records and then anonymize the user names into UserIDs.",
"We further shuffled these IDs and relinked them to the most popular names 5 .",
"We lead data processing and annotations, including 1) process the extracted course videos into subtitles; 2) process the related papers into Json files; 3) the annotation of course/video dependency; 4) large-scale annotation of concept taxonomy and prerequisite relations.",
"All the annotations are provided by students in corresponding domains with strict quality controls 6 .",
"In this section, we analyze various aspects of MOOCCube to provide a deeper understanding of the dataset.",
"Comparison with similar datasets .",
"Table 1 shows statistics of MOOCCube and other AI-In-Education datasets, including KDDCup2015 (Pre-dicting dropout in MOOCs) (Cup, 2015), hierarchical MOOC recommendation (HMR) (Zhang et al., 2019), prerequisite relation learning(PRL) (Pan et al., 2017a), TutorialBank (Fabbri et al., 2018) and LectureBank (Li et al., 2019).",
"The comparison is conducted in two aspects: Data Size .",
"MOOCCube contains the largest data size, especially the course concept graph.",
"For example, the number of prerequisite concept pairs 5 Published by Social Security Administration, https: //www.ssa.gov/ 6 Some annotation and quality control details are in Appendix.",
"exceeds the existing datasets by almost 100 times, and hereafter supports the attempts of advanced models such as neural networks on related tasks.",
"Data Dimension .",
"Existing datasets are clearly divided into two categories: datasets centered on user behavior, such as HMR, they only contain very little course content information; datasets centered on course content, such as LectureBank, they focus on the concepts in the education material instead.",
"MOOCCube organically combines these types of data in the MOOC environment so that researchers can analyze specific learning behavior.",
"Concept Graph .",
"Figure 2 shows the concept distribution over different categories.",
"Overall, we divide the concepts into 24 domains.",
"There are signifi-cantly more concepts in engineering courses than in natural sciences or social sciences, while the number of sub-fields is the opposite.",
"Since there are more than 1,500 valid concepts in each field, the concept information in MOOCCube is abundant.",
"Moreover, the statistic of prerequisite concept pairs in Table 1 indicates its rarity: only 6% of concept pairs maintain a solid prerequisite relation, which explains its scarcity in existing datasets.",
"Student Behavior .",
"Figure",
"3(a) shows the course distribution of enrolled users, which substantially fits a normal distribution.",
"Despite a few courses with rare students, 451 courses are enrolled by over 100 users.",
"Figure",
"3(b) presents a user view of the data, indicating more than 70% of users possess over ten videos watching records.",
"These statistical results give an insight into abundant interaction between MOOCCube students, courses, and videos.",
"Such a wealth of data enables MOOCCube to support multiple tasks such as course recommendation (Zhang et al., 2019), concept mining (Yu et al., 2019), etc.",
"In this section, we conduct an important and typical task, prerequisite relation discovery as an example application of MOOCCube by utilizing different types of data from it.",
"As introduced in Section 2.3, prerequisite relation indicates what should a student learn at first.",
"Since existing efforts have attempted to discover such relationships among concepts from different types of information, we reproduce the following methods on MOOCCube and present some basic new models.",
"MOOC-LR and MOOC-XG learn such relations from the course video list and the abstracts of Wikipedia (Pan et al., 2017b), we select Logic Re-Dataset Course Video Concept Prerequisite Taxonomy Student Enrollment Video Watching External Resource KDDCup2015 39 112,448 200,904 1,319,032 HMR 1,302 82,535 458,454 PRL 20 1,356 573 3,504 Corpus TutorialBank 200 794 200 Corpus, Paper LectureBank 60 208 921 1,221 Corpus, Paper, Blog MOOCCube 706 38,181 106,056 17,686 3,152 199,199 682,753 4,874,298 Corpus, Paper Course , Video , Concept , Student are the sum of respective entities.",
"Prerequisite is the number of relation instances, Taxonomy is the number of finest taxonomy categories, and Enrollment and Video Watching are the records of behavior.",
"PREREQ employs a network to detect such relationships from course and video dependency (Roy et al., 2019).",
"Here we present an improved version PREREQ-S by introducing stu-dents' video watch order to enhance the video dependency network, i.e., we sort the watched videos of each student by time and utilize these sequences for replacing the video sequences in the original paper.",
"PCNN and PRNN .",
"We present two simple DNN models, which first encode the embeddings (Cao et al., 2017) of the concept pairs and then train an MLP to classify the prerequisite ones.",
"Result Analysis .",
"Overall, PREREQs perform best in F1-score, while student behavior is beneficial to P R F1-Score MOOC-LR 0.667 0.479 0.565 MOOC-XG 0.607 0.507 0.552 PREREQ 0.606 0.755 0.672 PREREQ-S 0.651 0.730 0.688 PCNN 0.629 0.636 0.630 PRNN 0.681 0.668 0.659 Table 2: Results of prerequisite discovery.",
"the precision of this model (PREREQ-S improves the precision to 0.651).",
"We argue that the diverse information provided by MOOCCube helps to discover such relationships.",
"Meanwhile, two simple DNN models perform competitive results in this task, which indicates that the existing methods are indeed limited by the amount of data (Most advanced models cannot be trained on small datasets).",
"In this section, we introduce the research of NLP in education, especially in MOOCs, as well as several publicly available related datasets.",
"Existing research in MOOCs uses courses and students as the main resource, which can be divided into two categories according to the research object: one focuses on the content of the courses, such as the course concept extraction (Pan et al., 2017b), prerequisite relation discovery (Pan et al., 2017a), and course concept expansion (Yu et al., 2019); the other focuses on the learning behavior of students, such as the prediction of dropouts (Feng et al., 2019), course recommendations (Zhang et al., 2019; Cao et al., 2019), etc.",
"Due to the different tasks, researchers have to repeat the work to build their datasets, which arouses the original motivation of MOOCCube.",
"In addition, some researchers also try to obtain education information from other resources, e.g., ACL Anthology (Radev et al., 2013), TutorialBank (Fabbri et al., 2018), and LectureBank (Li et al., 2019).",
"They collected concepts and relationships from papers and lectures and also built diverse datasets.",
"Though they are also limited in data scale, these beneficial attempts guide the construction of MOOCCube.",
"We present MOOCCube, a multi-dimensional data repository containing courses, concepts, and student activities from real MOOC websites.",
"Obtaining large-scale data in all dimensions, MOOCCube can support new models and diverse NLP applications in MOOCs.",
"We also conduct prerequisite relation extraction as an example application, and experimental results show the potential of such a repository.",
"Promising future directions include: 1) utilize more types of data from MOOCCube to facilitate existing topics; 2) employ advanced models in existing tasks; 3) more innovative NLP application tasks in online education domain.",
"Zhiyuan Liu is supported by the National KeyRe-search and Development Program of China(No. 2018YFB1004503), and others are supported by NSFC key project (U1736204, 61533018), a grant from Beijing Academy of Artificial Intelligence (BAAI2019ZD0502), a grant from the Insititute for Guo Qiang, Tsinghua University, THUNUS NExT Co-Lab, the Center for Massive Online Education of Tsinghua Univerisity, and XuetangX."
] | [
"abstain",
"abstain",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"result",
"abstain",
"other"
] |
[
"Even though BERT has achieved successful performance improvements in various supervised learning tasks, BERT is still limited by repetitive inferences on unsupervised tasks for the computation of contextual language representations.",
"To resolve this limitation, we propose a novel deep bidirectional language model called a T ransformer-based T ext A utoencoder ( T-TA ).",
"The T-TA computes contextual language representations without repetition and displays the benefits of a deep bidirectional architecture, such as that of BERT.",
"In computation time experiments in a CPU environment, the proposed T-TA performs over six times faster than the BERT-like model on a reranking task and twelve times faster on a semantic similarity task.",
"Furthermore, the T-TA shows competitive or even better accuracies than those of BERT on the above tasks.",
"Code is available at https://github.com/joongbo/tta.",
"A language model is an essential component of many natural language processing (NLP) applications ranging from automatic speech recognition (ASR) (Chan et al., 2016; Panayotov et al., 2015) to neural machine translation (NMT) (Sutskever et al., 2014; Sennrich et al., 2016; Vaswani et al., 2017).",
"Recently, the Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) and its variations have led to significant improvements in learning natural language representation and have achieved state-of-the-art performances on various downstream tasks such as the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019) and question answering (Rajpurkar et al., 2016).",
"BERT continues to succeed in various unsupervised tasks, such as the N -best list reranking for ASR and NMT (Shin et al., 2019; Salazar et al., 2019), con-firming that deep bidirectional language models are useful in unsupervised applications as well.",
"However, concerning its applications to unsupervised learning tasks, BERT is significantly inefficient at computing language representations at the inference stage (Salazar et al., 2019).",
"During training, BERT adopts the masked language modeling (MLM) objective, which is to predict the original word of the explicitly masked word from the input sequence.",
"Following the MLM objective, each contextual word representation should be computed by a two-step process: masking a word in the input and feeding the result to BERT.",
"During the inference stage, this process is repeated n times to obtain the representations of all the words within a text sequence (Wang and Cho, 2019; Shin et al., 2019; Salazar et al., 2019), resulting in a computational complexity of O ( n 3 ) 1 in terms of the number of words n .",
"Hence, it is necessary to reduce the computational complexity when applying the model to situations where the inference time is critical, e.g. , mobile environments and real-time systems (Sanh et al., 2019; Lan et al., 2019).",
"Considering this limitation of BERT, we submit a new research question: Can we construct a deep bidirectional language model with a minimal inference time while maintaining the accuracy of BERT?",
"In this paper, in response to the above question, we propose a novel bidirectional language model named the T ransformer-based T ext A utoencoder ( T-TA ), which has a reduced computational complexity of O ( n 2 ) when applying the model to unsupervised applications.",
"The proposed model is trained with a new learning objective named language autoencoding (LAE).",
"The LAE objective, which allows the target labels to be the same as the text input, is to predict every token in the input sequence simultaneously without merely copying 1 A complexity of O ( n 2 ) is derived from the per-layer complexity of the Transformer (Vaswani et al., 2017).",
"the input to the output.",
"To learn the proposed objective, we devise both a diagonal masking operation and an input isolation mechanism inside the T-TA based on the Transformer encoder (Vaswani et al., 2017).",
"These components enable the proposed T-TA to compute contextualized language representations at once while maintaining the benefits of the deep bidirectional architecture of BERT.",
"We conduct a series of experiments on two unsupervised tasks: N -best list reranking and unsupervised semantic textual similarity.",
"First, by conducting runtime experiments in a CPU environment, we show that the proposed T-TA is 6 .",
"35 times faster than the BERT-like model in the reranking task and 12 .",
"7 times faster in the unsupervised semantic textual similarity task.",
"Second, despite its faster inference time, the T-TA achieves competitive performances relative to BERT on reranking tasks.",
"Furthermore, the T-TA outperforms BERT by up to 8 points in Pearson's r on unsupervised semantic textual similarity tasks.",
"When referring to an autoencoder for language modeling, sequence-to-sequence learning approaches have been commonly used.",
"These approaches encode a given sentence into a compressed vector representation, followed by a decoder that reconstructs the original sentence from the sentence-level representation (Sutskever et al., 2014; Cho et al., 2014; Dai and Le, 2015).",
"To the best of our knowledge, however, none of these approaches consider an autoencoder that encodes word-level representations (such as BERT) without an autoregressive decoding process.",
"Many studies have been performed on neural network-based language models for word-level representations.",
"Distributed word representations were proposed and attracted considerable interest, as they were considered to be fundamental building blocks for NLP tasks (Rumelhart et al., 1986; Bengio et al., 2003; Mikolov et al., 2013b).",
"Subsequently, researchers explored contextualized representations of text where each word has a different representation depending on the context (Peters et al., 2018; Radford et al., 2018).",
"Most recently, a Transformer-based deep bidirectional model was proposed and applied to various supervised-learning tasks with remarkable success (Devlin et al., 2019).",
"For unsupervised tasks, researchers have adopted recently developed language-representation models and investigated their effectiveness; a typical example is the N -best list reranking for ASR and NMT tasks.",
"In particular, studies have integrated left-to-right and right-to-left language models (Arisoy et al., 2015; Chen et al., 2017; Peris and Casacu-berta, 2015) to outperform conventional unidirectional language models (Mikolov et al., 2010; Sun-dermeyer et al., 2012) in these tasks.",
"Furthermore, BERT-based approaches have been explored and have achieved significant performance improvements on these tasks because bidirectional language models yield the pseudo-log-likelihood of a given sentence, and this score is useful in ranking the n -best hypotheses (Wang and Cho, 2019; Shin et al., 2019; Salazar et al., 2019).",
"Another line of research involves reducing the computation time and memory consumption of BERT.",
"Lan et al. (2019) proposed parameter-reduction techniques, factorized embedding parameterization and cross-layer parameter sharing and reported 18 times fewer parameters and a 1 .",
"7 -fold increase in the training time.",
"Similarly, Sanh et al. (2019) presented a method to pretrain a smaller model that can be fine-tuned for downstream tasks and achieved 1 .",
"4 times fewer parameters with a 1 .",
"6 fold increase in the inference time.",
"However, none of these studies developed methods that directly revise the BERT architecture to reduce the computational complexity during the inference stage.",
"In a conventional language modeling task, the i th token x i is predicted using its preceding context x <i = [ x 1 , . . . , x i 1 ] ; throughout this paper, this objective is known as causal language modeling (CLM) following (Conneau and Lample, 2019).",
"As shown in Figure 1a, we can obtain (left-to-right) contextualized language representations HC = [ HC 1 , . . . , HC n ] after feeding the input sequence to the CLM-trained language model only once, where HC i = h C ( x <i ) is the hidden representation of the i -th token.",
"This paper takes this unidirectional language model (uniLM) as our speed baseline.",
"However, contextualized language representations obtained from the uniLM are insuf-ficient to accurately encode a given text because future contexts cannot be leveraged to understand the current tokens during the inference stage.",
"Recently, BERT (Devlin et al., 2019) was designed to enable the full contextualization",
"of language representations by using the MLM objective, in which some tokens from the input sequence are randomly masked; the objective is to predict the original tokens at the masked positions using only their context.",
"As in Figure 1b, we can obtain a contextualized representation of the i -th token HM i = h M ( M i ( x )) by masking the token in the input sequence and feeding it to the MLM-trained model, where M i ( x ) = [ x 1 , . . . , x i 1 , [MASK] , x i +1 , . . . , x n ] signifies an external masking operation.",
"This paper takes this bidirectional language model (biLM) as our performance baseline.",
"However, this mask-and-predict approach should be repeated n times to obtain all the language representations HM = [ HM 1 , . . . , HM n ] because learning occurs only at the masked position during the MLM training stage.",
"Although the resulting language representations are robust and accurate, as a consequence of this repetition, the model is significantly inefficient when applied to unsupervised tasks such as N -best list reranking (Wang and Cho, 2019; Shin et al., 2019; Salazar et al., 2019).",
"In this paper, we propose a new learning objective named language autoencoding (LAE) for obtaining fully contextualized language representations without repetition.",
"The LAE objective, with which the output is the same as the input, is to predict every token in a text sequence simultaneously without merely copying the input to the output.",
"For the proposed task, a language model should reproduce the whole input at once while avoiding overfitting; otherwise, the model outputs only the representation copied from the input representation without learning any statistics of the language.",
"To this end, the flow of information from the i -th input to the i -th output should be blocked inside the model shown in Figure 1c.",
"From the LAE objective, we can obtain fully contextualized language representations HL = [ HL 1 , . . . , HL n ] all at once, where HL i = h L ( x \\ i ) and x \\ i = [ x 1 , . . . , x i 1 , x i +1 , . . . , x n ] .",
"The method for blocking the flow of information is described in the next section.",
"In this section, we introduce the novel architecture of the proposed T-TA shown in Figure 2.",
"As indicated by its name, the T-TA architecture is based on the Transformer encoder (Vaswani et al., 2017).",
"To learn the proposed LAE objective, we develop both a diagonal masking operation and an input isolation mechanism inside the T-TA.",
"Both developments are designed to enable the language model to predict all tokens simultaneously while maintaining the deep bidirectional property (see the descriptions in the following subsections).",
"For brevity, we refer to the original paper on the Transformer encoder (Vaswani et al., 2017) for other details regarding the standard functions, such as the multihead attention and scaled dot-product attention mechanisms, layer normalization, and the position-wise fully connected feed-forward network.",
"As shown in Figure 3, a diagonal masking operation is implemented inside the scaled dot-product attention mechanism to be self-unknown during the inference stage.",
"This operation prevents information from flowing to the same position in the next layer by masking out the diagonal values in the input of the softmax function.",
"Specifically, the output vector at each position is the weighted sum of the value V at other positions, where the attention weights come from the query Q and the key K .",
"The diagonal mask becomes meaningless when we use it together with a residual connection or utilize it within the multilayer architecture.",
"To retain the self-unknown functional, we can remove the residual connection and adopt a single-layer architecture.",
"However, it is essential to utilize a deep architecture to understand the intricate patterns of natural language.",
"To this end, we further develop the architecture described in the next section.",
"We now propose an input isolation mechanism to ensure that the residual connection and the multilayer architecture are compatible with the abovementioned diagonal masking operation.",
"In the input isolation mechanism, the key and value inputs ( K and V , respectively) of all encoding layers are isolated from the network flow and are fixed to the sum of the token embeddings and the position embeddings.",
"Hence, only the query inputs ( Q ) are updated across the layers during the inference stage by referring to the fixed output of the embedding layer.",
"Additionally, we input the position embeddings to the Q of the very first encoding layer, thereby making the self-attention mechanism effective.",
"Otherwise, the attention weights will be the same at all positions, and thus, the first self-attention mechanism will function as a simple average of all the input representations (except the self position).",
"Finally, we apply the residual connection only to the query to completely maintain unawareness.",
"The dashed arrows in Figure 2 show the proposed input isolation mechanism inside the T-TA.",
"By using diagonal masking and input isolation in conjunction, the T-TA can have multiple encoder layers, enabling the T-TA to obtain high-quality contextual language representations after feeding a sequence into the model only once.",
"Heretofore, we have introduced the new learning objective named LAE, and the novel deep bidirectional language model named T-TA.",
"We will verify the architecture of the proposed T-TA in Section 4.3.1 and compare our model with the recently proposed strong baseline BERT in Section 4.3.2.",
"Here, we discuss how diagonal masking with input isolation preserves the self-unknown property in detail.",
"As shown in Figure 2, we have two input embeddings, namely, token embeddings X = [ X 1 , . . . , X n ] T R n d and position embeddings P = [ P 1 , . . . , P n ] T R n d , where d is an embedding dimension.",
"From the input isolation mechanism, the key and value K = V = X + P have the information of the input tokens and are fixed in all layers, but the query Q l is updated across the layers during the inference stage starting from the position embeddings Q 1 = P in the first layer.",
"Let us consider the l -th encoding layer's query input Q l and its output H l = Q l +1 : H l = SMSAN ( Q l , K , V ) = g ( Norm ( Add ( Q l , f ( Q l , K , V )))) , (1) where SMSAN ( ) is the self-masked self-attention network, namely, the encoding layer of the T-TA, g ( x ) = Norm ( Add ( x, FeedForward ( x ))) signifies two upper subboxes of the encoding layer in Figure 2, and f ( ) is the (multihead) diagonal-masked self-attention (DMSA) mechanism.",
"As illustrated in Figure 3, the DMSA module computes Z l as follows: Z l = f ( Q l , K , V ) = DMSA ( Q l , K , V ) = SoftMax ( DiagMask ( Q l KT / d )) V .",
"In the DMSA module, the i -th element of Z l = [ Z l 1 , . . . , Z ln ] T is always computed by a weighted average of the fixed V while discarding the information of the i -th token X i in V i .",
"Specifically, Z li is the weighted average of V with the attention weight vector s li , i.e. , Z li = s li V , where s li = [ s l 1 , . . . , s li 1 , 0 , s li +1 , . . . , s ln ] R 1 n .",
"Here, we note that the DMSA mechanism is related only to the self-unknown property since no token representations are referred to each other in subsequent transformations from Z l to H l .",
"Therefore, we can guarantee that the i -th element of the query representation in any layer, Q li , never encounters the corresponding token representation starting from Q 1 i = P i .",
"Consequently, the T-TA preserves the self-unknown property during the inference stage while maintaining the residual connection and multilayer architecture.",
"There are several differences between the strong baseline BERT (Devlin et al., 2019) and the proposed T-TA, while both models learn deep bidirectional language representations.",
"While BERT uses an external masking operation in the input, the T-TA has an internal masking operation in the model, as we intend.",
"Additionally, while BERT is based on a denoising autoencoder, the T-TA is based on an autoencoder.",
"With this novel approach, the T-TA does not need mask-and-predict repetition during the computing of contextual language representations.",
"Consequently, we reduce the computational complexity from O ( n 3 ) with the BERT to O ( n 2 ) with the T-TA in applications to unsupervised learning tasks.",
"As in the T-TA, feeding an intact input (without masking) into BERT is also possible.",
"However, we argue that this process will significantly diminish the model performance in unsupervised applications since the MLM objective does not consider intact tokens much.",
"In the next section, we include experiments that reveal the model performance with intact inputs (described in Tables 1, 3, and 4).",
"For further reference, we also suggest a previous study that reported the same opinion (Salazar et al., 2019).",
"To evaluate the proposed method, we conduct a series of experiments.",
"We first evaluate the contextual language representations obtained from the T-TA on N -best list reranking tasks.",
"We then apply our method to unsupervised semantic textual similarity (STS) tasks.",
"The following sections will demonstrate that the proposed model is much faster than BERT during the inference stage (Section 5.2) while showing competitive or even better accuracies than those of BERT on reranking tasks (Sec-tion 5.3) and STS tasks (Section 5.4).",
"The main purpose of this paper is to compare the proposed T-TA with a biLM trained with the MLM objective.",
"For a fair comparison, each model has the same number of parameters based on the Transformer as follows: | L | = 3 self-attention layers with d = 512 input and output dimensions, h = 8 attention heads, and d f = 2048 hidden units for the position-wise feed-forward layers.",
"We use a Gaussian error linear unit ( gelu ) activation function (Hendrycks and Gimpel, 2016) rather than the standard rectified linear unit ( relu ) following Ope-nAI GPT (Radford et al., 2018) and BERT (Devlin et al., 2019).",
"In our experiments, we set the position embeddings to be trainable following BERT (Devlin et al., 2019) rather than a fixed sinusoid (Vaswani et al., 2017) with supported sequence lengths up to 128 tokens.",
"We use WordPiece embeddings (Wu et al., 2016) with a vocabulary of approximately | V | (cid:39) 30 , 000 tokens.",
"The weights of the embedding layer and the last softmax layer of the Transformer are shared.",
"For the speed baseline, we also implement a uniLM that has the same number of parameters as the T-TA and biLM.",
"For training, we create a training instance consisting of a single sentence with [BOS] and [EOS] tokens at the beginning and end of each sentence, respectively.",
"We use 64 sentences as the training batch and train the language models over 1 M steps for ASR and 2 M steps for NMT.",
"We train the language models with Adam (Kingma and Ba, 2014) with an initial learning rate of 1 e 4 and coefficients of 1 = 0 .",
"9 of 2 = 0 .",
"999 ; the learning rate is set to warm up over the first 50 k steps, and the learning rate exhibits linear decay.",
"We use a dropout probability of 0.1 on all layers.",
"Our implementation is based on Google's official code for BERT 2 .",
"To train the language models that we implement, we use an English Wikipedia dump (approximately 13 GB in size) containing approximately 120 M sentences.",
"The trained models are used for reranking in NMT and unsupervised STS tasks.",
"For the ASR reranking task, we use additional in-domain training data, namely, 4.0 GB of normalized text data from the official LibriSpeech corpus containing approximately 40 M sentences.",
"We first measure the runtime of each language model to compute the contextual language representation HL R n d of a given text sequence.",
"In the unsupervised STS tasks, we directly use HL for the analysis.",
"In the case of the reranking task, further computation is required: we compute Softmax ( HLET ) to obtain the likelihood of each token, where E R | V | d is the weight parameter of the softmax layer.",
"Therefore, the computational complexity of the reranking task is larger than that of the STS task.",
"To measure the runtime, we use an Intel(R) Core(TM) i7-6850K CPU (3.60 GHz) and the Ten-sorFlow 1.12.0 library with Python 3.6.8 on Ubuntu 16.04.06 LTS.",
"In each experiment, we measure the runtime 50 times and average the results.",
"Figure 4 shows that the T-TA exhibits faster runtimes than the biLM, and the gap between the T-TA and biLM increases as the sentence becomes longer.",
"To facilitate a numerical comparison, we set the standard number of words to 20 , which is approxi-2 https://github.com/google-research/bert Figure 4: Average runtimes of each model according to the number of words on STS and reranking tasks, subscripted as sts and rrk , respectively.",
"mately the average number of words in a contemporary English sentence (DuBay, 2006).",
"In this setup, in the STS tasks, the T-TA takes approximately 9 .",
"85 ms, while the biLM takes approximately 125 ms; hence, the T-TA is 12 .",
"7 times faster than the biLM.",
"In the reranking task, the T-TA is 6 .",
"35 times faster than the biLM (which is still significant); this reduction occurs because the repetition of the biLM is related only to computing HL rather than Softmax ( HLET ) .",
"For the visual clarity of Figure 4, we omit the runtime results of the uniLM, which is as fast as the T-TA (see Appendix B.1).",
"With such a fast inference time, we next demonstrate that the T-TA is as accurate as BERT.",
"To evaluate the language models, we conduct experiments on the unsupervised task of reranking the N -best list.",
"In these experiments, we apply each language model to rerank the 50 best candidate sentences, which are obtained in advance using each sequence-to-sequence model on ASR and NMT.",
"The ASR and NMT models we implement are detailed in Appendices A.1 and A.2.",
"We rescore the sentences by linearly interpolating two scores from a sequence-to-sequence model and each language model as follows: score = (1 ) score s 2 s + score lm , where score s 2 s is the score from the sequence-to-sequence model, score lm is the score from the language model calculated by the sum (or mean) of the log-likelihood of each token, and the interpolation weight is set to a value that leads to the best performance in the development set.",
"One of the strong baseline language models, the pretrained BERT-base-uncased model (De-vlin et al., 2019), is used for reranking tasks.",
"We also include the reranking results from the traditional count-based 5 -gram language models trained on each dataset using the KenLM library (Heafield, 2011).",
"We note that the T-TA and biLM (including BERT) assign the pseudo-log-likelihood to the score of a given sentence, whereas the uniLM assigns the log-likelihood.",
"Because the reranking task is based on the relative scores of the n -best hypotheses, the fact that the bidirectional models yields the pseudo-log-likelihood of a given sentence does not impact this task (Wang and Cho, 2019; Shin et al., 2019; Salazar et al., 2019).",
"For reranking in ASR, we use prepared N -best lists obtained from dev and test sets using Seq2Seq ASR , which we train on the LibriSpeech ASR corpus.",
"Additionally, we use the N -best lists obtained from (Shin et al., 2019) to confirm the robustness of the language models in a testing environment.",
"Table 1 shows the word error rates (WERs) for each method after reranking.",
"The interpolation weights are 0.3 or 0.4 in all N -best lists for ASR.",
"First, we confirm that the bidirectional models trained with the LAE (T-TA) and MLM (biLM) objectives consistently outperform the uniLM trained with the CLM objective.",
"The performance gains from reranking are much lower in the better base system Seq2Seq ASR , and it is evidently challenging to rerank the N -best list using a language model if the speech recognition model performs well enough.",
"Interestingly, the T-TA is competitive with (or even better than) the biLM; this may result from the gap between the training and testing of the biLM: the biLM predicts multiple masks at a time when training but predicts only one mask at a time when testing.",
"Moreover, the 3-layer T-TA is better than the 12-layer BERT-base, showing that in-domain data are critical to language model applications.",
"Finally, we note that feeding an intact input to BERT (the corresponding model is denoted as w/ BERT \\ M in Table 1) causes the model to underper-form relative to the other models, demonstrating that the mask-and-predict approach is necessary for effective reranking.",
"To compare the reranking performances in another domain, NMT, we again prepare N -best lists using Seq2Seq NMT 3 from the WMT13 German-to-English (De En) and French-to-English (Fr En) test sets.",
"Table 2 shows the bilingual evaluation understudy (BLEU) scores for each method after reranking.",
"Each interpolation weight becomes a value that shows the best performance on each test set with each method in NMT.",
"The interpolation weights are 0.4 or 0.5 in the N -best lists for NMT.",
"We confirm again that the bidirectional models trained with the LAE and MLM objectives perform better than the uniLM trained with the CLM objective.",
"Additionally, the Fr En translation has less effect on the reranking than the De En translation because the base NMT system for Fr En is better than that for De En.",
"The 12 -layer BERT model appears much better than the other models at reranking on NMT; hence, the N -best hypotheses of the NMT model seem to be more indistinguish-3 The Seq2Seq models for De En and Fr En are trained independently using the t2t library (Vaswani et al., 2018).",
"able than those of the ASR model from a language modeling perspective.",
"All the reranking results on the ASR and NMT tasks demonstrate that the proposed T-TA performs both efficiently (similar to the uniLM) and effectively (similar to the biLM).",
"In addition to the reranking task, we apply the language models to an STS task, that is, measuring the similarity between the meaning of sentence pairs.",
"We use the STS Benchmark (STS-B) (Cer et al., 2017) and Sentences Involving Compositional Knowledge (SICK) (Marelli et al., 2014) datasets, both of which have a set of sentence pairs with corresponding similarity scores.",
"The evaluation metric of STS is Pearson's r between the predicted similarity scores and the reference scores of the given sentence pairs.",
"In this section, we address the unsupervised STS task to examine the inherent ability of each language model to obtain contextual language representations, and we mainly compare the language models that are trained on the English Wikipedia dump.",
"To compute the similarity score of a given sentence pair, we use the cosine similarity of two sentence representations, where each representation is obtained by averaging each language model's contextual representations.",
"Specifically, the contextual representations of a given sentence are the outputs of the final encoding layer of each model, denoted as context in Tables 3 and",
"4. For comparison, we use noncontextual representations, which are obtained from the outputs of the embedding layer, denoted as embed in Tables 3 and",
"4. As a strong baseline for unsupervised STS tasks, we also include the 12-layer BERT model (Devlin Method STS-B-dev STS-B-test context embed context embed BERT 64.78 -54.22 BERT \\ M 59.17 60.07 47.91 48.19 BERT [CLS] 29.16 17.18 uniLM 56.25 63.87 39.57 55.00 uniLM [EOS] 40.75 38.30 biLM 59.99 -50.76 biLM \\ M 53.20 58.80 36.51 49.08 T-TA 71.88 54.75 62.27 44.74 GloVe -52.4 -40.6 Word2Vec -70.0 -56.5 Table 3: Pearson's r 100 results on the STS-B dataset.",
"et al., 2019), and we employ BERT in the mask-and-predict approach for computing the contextual representations of each sentence.",
"Note that we use the most straightforward approach for the unsupervised STS task to focus on comparing token-level language representations.",
"The STS-B dataset has 5749/1500/1379 sentence pairs with train/dev/test splits and corresponding scores ranging from 0 to",
"5. We test the language models on the STS-B-dev and STS-B-test sets using the simplest approach on the unsupervised STS task.",
"As additional baselines, we include the results of GloVe (Pennington et al., 2014) and Word2Vec (Mikolov et al., 2013a) from the official sites of STS Benchmark 4 .",
"Table 3 shows our T-TA trained with the LAE objective best captures the semantics of a sentence over the Transformer-based language models.",
"Remarkably, our 3-layer T-TA trained on a relatively small dataset outperforms the 12-layer BERT trained on a larger dataset (Wikipedia + BookCor-pus).",
"Furthermore, the embedding representations are trained better by the CLM objective than by the other language modeling objectives; we suppose that the uniLM depends strongly on the embedding layer due to its unidirectional context constraint.",
"Since the uniLM encodes all contexts in the last token, [EOS] , we also use the last representation as the sentence representation; however, this approach does not outperform the average sentence 4 http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark Method SICK-test context embed BERT 64.31 BERT \\ M 61.18 64.63 uniLM 54.20 65.69 biLM 58.98 biLM \\ M 53.79 62.67 T-TA 69.49 60.77 Table 4: Pearson's r 100 results on the SICK dataset.",
"representation.",
"Similarly, BERT has a special token, [CLS] , which is trained for the next sentence prediction objective; thus, we also use the [CLS] token to see how this model learns the sentence representation, but it significantly underperforms the other models.",
"We further evaluate the language models on the SICK dataset, which consists of 4934/4906 sentence pairs with training/testing splits and scores ranging from 1 to",
"5. The results are in Table 4, from which we obtain the same observations as those reported for STS-B.",
"All results on unsupervised STS tasks demonstrate that the T-TA learns textual semantics best using the token-level LAE objective.",
"In this work, we propose a novel deep bidirectional language model, namely, the T-TA, to eliminate the computational overload of applying BERT to unsupervised applications.",
"Experimental results on N -best list reranking and unsupervised STS tasks demonstrate that the proposed T-TA is significantly faster than the BERT-like approach, and its encoding ability is competitive with (or even better than) that of BERT.",
"K. Jung is with ASRI, Seoul National University, Korea.",
"This work was supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No.10073144) and by the NRF grant funded by the Korean government (MSIT) (NRF2016M3C4A7952587)."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"other",
"other"
] |
[
"Michael Collins 2 , Kristina Toutanova",
"In this paper we study yes/no questions that are naturally occurring meaning that they are generated in unprompted and unconstrained settings.",
"We build a reading comprehension dataset, BoolQ, of such questions, and show that they are unexpectedly challenging.",
"They often query for complex, non-factoid information, and require difficult entailment-like inference to solve.",
"We also explore the effectiveness of a range of transfer learning baselines.",
"We find that transferring from entailment data is more effective than transferring from paraphrase or extractive QA data, and that it, surprisingly, continues to be very beneficial even when starting from massive pre-trained language models such as BERT.",
"Our best method trains BERT on MultiNLI and then re-trains it on our train set.",
"It achieves 80.4% accuracy compared to 90% accuracy of human annotators (and 62% majority-baseline), leaving a significant gap for future work.",
"Understanding what facts can be inferred to be true or false from text is an essential part of natural language understanding.",
"In many cases, these inferences can go well beyond what is immediately stated in the text.",
"For example, a simple sentence like Hanna Huyskova won the gold medal for Belarus in freestyle skiing. implies that (1) Belarus is a country, (2) Hanna Huyskova is an athlete, (3) Belarus won at least one Olympic event, (4) the USA did not win the freestyle skiing event, and so on.",
"To test a model's ability to make these kinds of inferences, previous work in natural language in-1 Work completed while interning at Google.",
"2 Also affiliated with Columbia University, work done at Google.",
"Q : Has the UK been hit by a hurricane?",
"P : The Great Storm of 1987 was a violent extratropical cyclone which caused casualties in England, France and the Channel Islands ...",
"A : Yes.",
"[An example event is given.] Q : Does France have a Prime Minister and a President?",
"P : ...",
"The extent to which those decisions lie with the Prime Minister or President depends upon ...",
"A : Yes.",
"[Both are mentioned, so it can be inferred both exist.] Q : Have the San Jose Sharks won a Stanley Cup?",
"P : ...",
"The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016 ...",
"A : No.",
"[They were in the finals once, and lost.] Figure 1: Example yes/no questions from the BoolQ dataset.",
"ference (NLI) proposed the task of labeling candidate statements as being entailed or contradicted by a given passage.",
"However, in practice, generating candidate statements that test for complex inferential abilities is challenging.",
"For instance, evidence suggests (Gururangan et al., 2018; Jia and Liang, 2017; McCoy et al., 2019) that simply asking human annotators to write candidate statements will result in examples that typically only require surface-level reasoning.",
"In this paper we propose an alternative: we test models on their ability to answer naturally occurring yes/no questions.",
"That is, questions that were authored by people who were not prompted to write particular kinds of questions, including even being required to write yes/no questions, and who did not know the answer to the question they were asking.",
"Figure 1 contains some examples from our dataset.",
"We find such questions often query for non-factoid information, and that human annotators need to apply a wide range of inferential abilities when answering them.",
"As a result, they can be used to construct highly inferential reading comprehension datasets that have the added benefit of being directly related to the practical end-task of answering user yes/no questions.",
"Yes/No questions do appear as a subset of some existing datasets (Reddy et al., 2018; Choi et al., 2018; Yang et al., 2018).",
"However, these datasets are primarily intended to test other aspects of question answering (QA), such as conversational QA or multi-step reasoning, and do not contain naturally occurring questions.",
"We follow the data collection method used by Natural Questions (NQ) (Kwiatkowski et al., 2019) to gather 16,000 naturally occurring yes/no questions into a dataset we call BoolQ (for Boolean Questions).",
"Each question is paired with a paragraph from Wikipedia that an independent annotator has marked as containing the answer.",
"The task is then to take a question and passage as input, and to return yes or no as output.",
"Figure 1 contains some examples, and Appendix A.1 contains additional randomly selected examples.",
"Following recent work (Wang et al., 2018), we focus on using transfer learning to establish baselines for our dataset.",
"Yes/No QA is closely related to many other NLP tasks, including other forms of question answering, entailment, and paraphrasing.",
"Therefore, it is not clear what the best data sources to transfer from are, or if it will be sufficient to just transfer from powerful pre-trained language models such as BERT (Devlin et al., 2018) or ELMo (Peters et al., 2018).",
"We experiment with state-of-the-art unsupervised approaches, using existing entailment datasets, three methods of leveraging extractive QA data, and using a few other supervised datasets.",
"We found that transferring from MultiNLI, and the unsupervised pre-training in BERT, gave us the best results.",
"Notably, we found these approaches are surprisingly complementary and can be combined to achieve a large gain in performance.",
"Overall, our best model reaches 80.43% accuracy, compared to 62.31% for the majority baseline and 90% human accuracy.",
"In light of the fact that BERT on its own has achieved human-like performance on several NLP tasks, this demonstrates the high degree of difficulty of our dataset.",
"We present our data and code at https://goo.gl/boolq .",
"Yes/No questions make up a subset of the reading comprehension datasets CoQA (Reddy et al., 2018), QuAC (Choi et al., 2018), and HotPotQA (Yang et al., 2018), and are present in the ShARC (Saeidi et al., 2018) dataset.",
"These datasets were built to challenge models to understand conversational QA (for CoQA, ShARC and QuAC) or multi-step reasoning (for HotPotQA), which complicates our goal of using yes/no questions to test inferential abilities.",
"Of the four, QuAC is the only one where the question authors were not allowed to view the text being used to answer their questions, making it the best candidate to contain naturally occurring questions.",
"However, QuAC still heavily prompts users, including limiting their questions to be about pre-selected Wikipedia articles, and is highly class imbalanced with 80% yes answers.",
"The MS Marco dataset (Nguyen et al., 2016), which contains questions with free-form text answers, also includes some yes/no questions.",
"We experiment with heuristically identifying them in Section 4, but this process can be noisy and the quality of the resulting annotations is unknown.",
"We also found the resulting dataset is class imbalanced, with 80% yes answers.",
"Yes/No QA has been used in other contexts, such as the templated bAbI stories (Weston et al.) or some Visual QA datasets (Antol et al., 2015; Wu et al., 2017).",
"We focus on answering yes/no questions using natural language text.",
"Question answering for reading comprehension in general has seen a great deal of recent work (Rajpurkar et al., 2016; Joshi et al., 2017), and there have been many recent attempts to construct QA datasets that require advanced reasoning abilities (Yang et al., 2018; Welbl et al., 2018; Mihaylov et al., 2018; Zellers et al., 2018; Zhang et al., 2018).",
"However, these attempts typically involve engineering data to be more difficult by, for example, explicitly prompting users to write multi-step questions (Yang et al., 2018; Mihaylov et al., 2018), or filtering out easy questions (Zellers et al., 2018).",
"This risks resulting in models that do not have obvious end-use applications since they are optimized to perform in an artificial setting.",
"In this paper, we show that yes/no questions have the benefit of being very challenging even when they are gathered from natural sources.",
"Natural language inference is also a well studied area of research, particularly on the MultiNLI (Williams et al., 2018) and SNLI (Bowman et al., 2015) datasets.",
"Other sources of entailment data include the PASCAL RTE challenges (Bentivogli et al., 2009, 2011) or SciTail (Khot et al., 2018).",
"We note that, although SciTail, RTE-6 and RTE-7 did not use crowd workers to generate candidate statements, they still use sources (multiple choices questions or document summaries) that were written by humans with knowledge of the premise text.",
"Using naturally occurring yes/no questions ensures even greater independence between the questions and premise text, and ties our dataset to a clear end-task.",
"BoolQ also requires detecting entailment in paragraphs instead of sentence pairs.",
"Transfer learning for entailment has been studied in GLUE (Wang et al., 2018) and SentEval (Conneau and Kiela, 2018).",
"Unsupervised pre-training in general has recently shown excellent results on many datasets, including entailment data (Peters et al., 2018; Devlin et al., 2018; Radford et al., 2018).",
"Converting short-answer or multiple choice questions into entailment examples, as we do when experimenting with transfer learning, has been proposed in several prior works (Demszky et al., 2018; Poliak et al., 2018; Khot et al., 2018).",
"In this paper we found some evidence suggesting that these approaches are less effective than using crowd-sourced entailment examples when it comes to transferring to natural yes/no questions.",
"Contemporaneously with our work, Phang et al. (2018) showed that pre-training on supervised tasks could be beneficial even when using pre-trained language models, especially for a textual entailment task.",
"Our work confirms these results for yes/no question answering.",
"This work builds upon the Natural Questions (NQ) (Kwiatkowski et al., 2019), which contains some natural yes/no questions.",
"However, there are too few (about 1% of the corpus) to make yes/no QA a very important aspect of that task.",
"In this paper, we gather a large number of additional yes/no questions in order to construct a dedicated yes/no QA dataset.",
"An example in our dataset consists of a question, a paragraph from a Wikipedia article, the title of the article, and an answer, which is either yes",
"or no.",
"We include the article title since it can potentially help resolve ambiguities (e.g., coreferent phrases) in the passage, although none of the models presented in this paper make use of it.",
"We gather data using the pipeline from NQ (Kwiatkowski et al., 2019), but with an additional filtering step to focus on yes/no questions.",
"We summarize the complete pipeline here, but refer to their paper for a more detailed description.",
"Questions are gathered from anonymized, aggregated queries to the Google search engine.",
"Queries that are likely to be yes/no questions are heuristically identified: we found that selecting sufficiently long queries whose first word is in a manually constructed set of indicator words (footnote 3) was effective.",
"Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.",
"Annotators label question/article pairs in a three-step process.",
"First, they decide if the question is good , meaning it is comprehensible, unambiguous, and requesting factual information.",
"This judgment is made before the annotator sees the Wikipedia page.",
"Next, for good questions, annotators find a passage within the document that contains enough information to answer the question.",
"Annotators can mark questions as not answerable if the Wikipedia article does not contain the requested information.",
"Finally, annotators mark whether the question's answer is yes or no.",
"Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.",
"Note that, unlike in NQ, we only use questions that were marked as having a yes/no answer, and pair each question with the selected passage instead of the entire document.",
"This helps reduce ambiguity (e.g., avoiding cases where the document supplies conflicting answers in different paragraphs), and keeps the input small enough so that existing entailment models can easily be applied to our dataset.",
"(Footnote 3: the full set is {did, do, does, is, are, was, were, have, has, can, could, will, would}.)",
"We combine the questions gathered from this pipeline with an additional 3k questions with yes/no answers from the NQ training set to reach a total of 16k questions.",
"We split these questions into a 3.2k dev set, 3.2k test set, and 9.4k train set, ensuring questions from NQ are always in the train set.",
"Yes answers are slightly more common (62.31% in the train set).",
"The queries are typically short (average length 8.9 tokens) with longer passages (average length 108 tokens).",
"In the following section we analyze our dataset to better understand the nature of the questions, the annotation quality, and the kinds of reasoning abilities required to answer them.",
"First, in order to assess annotation quality, three of the authors labelled 110 randomly chosen examples.",
"If there was a disagreement, the authors conferred and selected a single answer by mutual agreement.",
"We call the resulting labels gold-standard labels.",
"On the 110 selected examples, the answer annotations reached 90% accuracy compared to the gold-standard labels.",
"Of the cases where the answer annotation differed from the gold-standard, six were ambiguous or debatable cases, and five were errors where the annotator misunderstood the passage.",
"Since the agreement was sufficiently high, we elected to use singly-annotated examples in the training/dev/test sets in order to be able to gather a larger dataset.",
"Part of the value of this dataset is that it contains questions that people genuinely want to answer.",
"To explore this further, we manually define a set of topics that questions can be about.",
"An author categorized 200 questions into these topics.",
"The results can be found in the upper half of Table 1.",
"Questions were often about entertainment media (including TV, movies, and music), along with other popular topics like sports.",
"However, there are still a good portion of questions asking for more general factual knowledge, including ones about historical events or the natural world.",
"We also broke the questions into categories based on what kind of information they were requesting, shown in the lower half of Table 1.",
"Roughly one-sixth of the questions are about whether anything with a particular property exists (Existence), another sixth are about whether a particular event occurred (Event Occurrence), and another sixth ask whether an object is known by a particular name, or belongs to a particular category (Definitional).",
"The questions that do not fall into these three categories were split between requesting facts about a specific entity and requesting more general factual information.",
"We do find a correlation between the nature of the question and the likelihood of a yes answer.",
"However, this correlation is too weak to help outperform the majority baseline because, even if the topic or type is known, it is never best to guess the minority class.",
"We also found that question-only models perform very poorly on this task (see Section 5.3), which helps confirm that the questions cannot be answered without reading the passage.",
"Finally, we categorize the kinds of inference required to answer the questions in BoolQ (footnote 4).",
"The definitions and results are shown in Table 2.",
"Less than 40% of the examples can be solved by detecting paraphrases.",
"Instead, many questions require making additional inferences (cate-gories Factual Reasoning, By Example, and Other Inference) to connect what is stated in the passage to the question.",
"There is also a significant class of questions (categories Implicit and Missing Mention) that require a subtler kind of inference based on how the passage is written.",
"Why do natural yes/no questions require inference so often?",
"We hypothesize that there are several factors.",
"First, we notice factoid questions that ask about simple properties of entities, such as Was Obama born in 1962?, are rare.",
"We suspect this is because people will almost always prefer to phrase such questions as short-answer questions (e.g., When was Obama born?).",
"(Footnote 4: the dataset has been updated since we carried out this analysis, so these numbers might be slightly out-of-date.)",
"Thus, there is a natural filtering effect where people tend to use yes/no questions exactly when they want more complex kinds of information.",
"Second, both the passages and questions rarely include negation.",
"As a result, detecting a no answer typically requires understanding that a positive assertion in the text excludes, or makes unlikely, a positive assertion in the question.",
"This requires reasoning that goes beyond paraphrasing (see the Other-Inference or Implicit exam-ples).",
"We also think it was important that annotators only had to answer questions, rather than generate them.",
"For example, imagine trying to construct questions that fall into the categories of Missing Mention or Implicit.",
"While possible, it would require a great deal of thought and creativity.",
"On the other hand, detecting when a yes/no question can be answered using these strategies seems much easier and more intuitive.",
"Thus, having annotators answer pre-existing questions opens the door to building datasets that contain more inference and have higher quality labels.",
"Models on this dataset need to predict an output class given two pieces of input text, which is a well studied paradigm (Wang et al., 2018).",
"We find training models on our train set alone to be relatively ineffective.",
"Our best model reaches 69.6% accuracy, only 8% better than the majority baseline.",
"Therefore, we follow the recent trend in NLP of using transfer learning.",
"In particular, we experiment with pre-training models on related tasks that have larger datasets, and then fine-tuning them on our training data.",
"We list the sources we consider for pre-training below.",
"Entailment: We consider two entailment datasets, MultiNLI (Williams et al., 2018) and SNLI (Bowman et al., 2015).",
"We choose these datasets since they are widely-used and large enough to use for pre-training.",
"We also experiment with ablating classes from MultiNLI.",
"During fine-tuning we use the probability the model assigns to the entailment class as the probability of predicting a yes answer.",
"Multiple-Choice QA: We use a multiple choice reading comprehension dataset, RACE (Lai et al., 2017), which contains stories or short essays paired with questions built to test the reader's comprehension of the text.",
"Following what was done in SciTail (Khot et al., 2018), we convert questions and answer-options to statements by either substituting the answer-option for the blanks in fill-in-the-blank questions, or appending a separator token and the answer-option to the question.",
"During training, we have models independently assign a score to each statement, and then apply the softmax operator between all statements per each question to get statement probabilities.",
"We use the negative log probability of the correct statement as a loss function.",
"To fine-tune on BoolQ, we apply the sigmoid operator to the score of the question given its passage to get the probability of a yes answer.",
"Extractive QA: We consider several methods of leveraging extractive QA datasets, where the model must answer questions by selecting text from a relevant passage.",
"Preliminary experiments found that simply transferring the lower-level weights of extractive QA models was ineffective, so we instead consider three methods of constructing entailment-like data from extractive QA data.",
"First, we use the QNLI task from GLUE (Wang et al., 2018), where the model must determine if a sentence from SQuAD 1.1 (Rajpurkar et al., 2016) contains the answer to an input question or not.",
"Following previous work (Hu et al., 2018), we also try building entailment-like training data from SQuAD 2.0 (Rajpurkar et al., 2018).",
"We concatenate questions with either the correct answer, or with the incorrect distractor answer candidate provided by the dataset, and train the model to classify which is which given the question's supporting text.",
"Finally, we also experiment with leveraging the long-answer portion of NQ, where models must select a paragraph containing the answer to a question from a document.",
"Following our method for Multiple-Choice QA, we train a model to assign a score to (question, paragraph) pairs, apply the softmax operator on paragraphs from the same document to get a probability distribution over the paragraphs, and train the model on the negative log probability of selecting an answer-containing paragraph.",
"We only train on questions that were marked as having an answer, and select an answer-containing paragraph and up to 15 randomly chosen non-answer-containing paragraphs for each question.",
"On BoolQ, we compute the probability of a yes answer by applying the sigmoid operator to the score the model gives to the input question and passage.",
"Paraphrasing: We use the Quora Question Paraphrasing ( QQP ) dataset, which consists of pairs of questions labelled as being paraphrases or not (footnote 5).",
"Paraphrasing is related to entailment since we expect, at least in some cases, passages will contain a paraphrase of the question.",
"Heuristic Yes/No: We attempt to heuristically construct a corpus of yes/no questions from the MS Marco corpus (Nguyen et al., 2016).",
"MS Marco has free-form answers paired with snippets of related web documents.",
"We search for answers starting with yes or no, and then pair the corresponding questions with snippets marked as being related to the question.",
"We call this task Y/N MS Marco; in total we gather 38k examples, 80% of which are yes answers. (Footnote 5: data.quora.com/First-Quora-Dataset-Release-Question-Pairs)",
"Unsupervised: It is well known that unsupervised pre-training using language-modeling objectives (Peters et al., 2018; Devlin et al., 2018; Radford et al., 2018) can improve performance on many tasks.",
"We experiment with these methods by using the pre-trained models from ELMo , BERT , and OpenAI's Generative Pre-trained Transformer ( OpenAI GPT ) (see Section 5.2).",
"First, we experiment with using a linear classifier on our task.",
"In general, we found features such as word overlap or TF-IDF statistics were not sufficient to achieve better than the majority-class baseline accuracy (62.17% on the dev set).",
"We did find there was a correlation between the number of times question words occurred in the passage and the answer being yes, but the correlation was not strong enough to build an effective classifier.",
"Yes is the most common answer even among questions with zero shared words between the question and passage (with a 51% majority), and more common in other cases.",
"For our experiments that do not use unsupervised pre-training (except the use of pre-trained word vectors), we use a standard recurrent model with attention.",
"Our experiments using unsupervised pre-training use the models provided by the authors.",
"In more detail: Our Recurrent model follows a standard recurrent plus attention architecture for text-pair classification (Wang et al., 2018).",
"It embeds the premise/hypothesis text using fasttext word vectors (Mikolov et al., 2018) and learned character vectors, applies a shared bidirectional LSTM to both parts, applies co-attention (Parikh et al., 2016) to share information between the two parts, applies another bi-LSTM to both parts, pools the result, and uses the pooled representation to predict the final class.",
"See Appendix A.2 for details.",
"Our Recurrent + ELMo model uses the language model from Peters et al. (2018) to provide contextualized embeddings to the baseline model outlined above, as recommended by the authors.",
"Our OpenAI GPT model fine-tunes the 12 layer 768 dimensional uni-directional transformer from Radford et al. (2018), which has been pre-trained as a language model on the Books corpus (Zhu et al., 2015).",
"Our BERTL model fine-tunes the 24 layer 1024 dimensional transformer from Devlin et al. (2018), which has been trained on next-sentence-selection and masked language modelling on the Book Corpus and Wikipedia.",
"We fine-tune the BERTL and the OpenAI GPT models using the optimizers recommended by the authors, but found it important to tune the optimization parameters to achieve the best results.",
"We use a batch size of 24, a learning rate of 1e-5, and 5 training epochs for BERT, and a learning rate of 6.25e-5, a batch size of 6, a language-modeling auxiliary loss weight of 0.5, and 3 training epochs for OpenAI GPT.",
"Following the recommendation of Gururangan et al. (2018), we first experiment with models that are only allowed to observe the question or the passage.",
"The pre-trained BERTL model reached 64.48% dev set accuracy using just the question and 66.74% using just the passage.",
"Given that the majority baseline is 62.17%, this suggests there is little signal in the question by itself, but that some language patterns in the passage correlate with the answer.",
"Possibly, passages that present more straightforward factual information (like Wikipedia introduction paragraphs) correlate with yes answers.",
"The results of our transfer learning methods are shown in Table",
"3. All results are averaged over five runs.",
"For models pre-trained on supervised datasets, both the pre-training and the fine-tuning stages were repeated.",
"For unsupervised pretraining, we use the pre-trained models provided by the authors, but continue to average over five runs of fine-tuning.",
"QA Results: We were unable to transfer from RACE or SQuAD 2.0.",
"For RACE, the problem might be domain mismatch.",
"In RACE the passages are stories, and the questions often query for passage-specific information such as the author's intent or the state of a particular entity from the passage, instead of general knowledge.",
"We would expect SQuAD 2.0 to be a better match for BoolQ since it is also Wikipedia-based, but it is possible that detecting its adversarially constructed unanswerable questions does not transfer well to yes/no QA.",
"We got better results using QNLI, and even better results using NQ.",
"This shows the task of selecting text relevant to a question is partially transferable to yes/no QA, although we are only able to gain a few points over the baseline.",
"Entailment Results: The MultiNLI dataset outperformed all other supervised methods by a large margin.",
"Remarkably, this approach is only a few points behind BERT despite using orders of magnitude less training data and a much more light-weight model, showing high-quality pre-training data can help compensate for these deficiencies.",
"Our ablation results show that removing the neutral class from MultiNLI hurt transfer slightly, and removing either of the other classes was very harmful, suggesting the neutral examples had limited value.",
"SNLI transferred better than other datasets, but worse than MultiNLI.",
"We suspect this is due to limitations of the photo-caption domain it was constructed from.",
"We saw only limited transfer from Y/N MS Marco.",
"Although Y/N MS Marco is a yes/no QA dataset, its small size and class imbalance likely contributed to its limited effectiveness.",
"The web snippets it uses as passages also present a large domain shift from the Wikipedia passages in BoolQ.",
"Unsupervised Results: Following results on other datasets (Wang et al., 2018), we found BERTL to be the most effective unsupervised method, surpassing all other methods of pre-training.",
"It even outperformed the supervised transfer from MultiNLI.",
"We also experiment with combining these approaches using a two-step pre-training regime.",
"In particular, we fine-tune the pre-trained BERTL on MultiNLI, and then fine-tune the resulting model again on the BoolQ train set.",
"We found decreasing the number of training epochs to 3 resulted in a slight improvement when using the model pre-trained on MultiNLI.",
"We show the test set results for this model, and some other pre-training variations, in Table 4.",
"For these results we train five versions of each model using different training seeds, and show the model that had the best dev-set performance.",
"Given how extensively the BERTL model has been pre-trained, and how successful it has been across many NLP tasks, the additional gain of 3.5 points due to using MultiNLI is remarkable.",
"This suggests MultiNLI contains signal orthogonal to what is found in BERT's unsupervised objectives.",
"In Figure 2, we graph model accuracy as more of the training data is used for fine-tuning, both with and without initially pre-training on MultiNLI.",
"Pre-training on MultiNLI gives at least a 5-6 point gain, and nearly a 10 point gain for BERTL when only using 1000 examples.",
"For small numbers of examples, the recurrent model with MultiNLI pre-training actually outperforms BERTL .",
"A surprising result from our work is that the datasets that more closely resemble the format of BoolQ, meaning they contain questions and multi-sentence passages, such as SQuAD 2.0, RACE, or",
"Y/N MS Marco, were not very useful for transfer.",
"The entailment datasets were stronger despite consisting of sentence pairs.",
"This suggests that adapting from sentence-pair input to question/passage input was not a large obstacle to achieving transfer.",
"Preliminary work found attempting to convert the yes/no questions in BoolQ into declarative statements did not improve transfer from MultiNLI, which supports this hypothesis.",
"The success of MultiNLI might also be surprising given recent concerns about the generalization abilities of models trained on it (Glockner et al., 2018), particularly related to annotation artifacts caused by using crowd workers to write the hypothesis statements (Gururangan et al., 2018).",
"We have shown that, despite these weaknesses, it can still be an important starting point for models being used on natural data.",
"We hypothesize that a key advantage of MultiNLI is that it contains examples of contradictions.",
"The other sources of transfer we consider, including the next-sentence-selection objective in BERT, are closer to providing examples of entailed text vs. neutral/unrelated text.",
"Indeed, we found that our two step transfer procedure only reaches 78.43% dev set accuracy if we remove the contradiction class from MultiNLI, regressing its performance close to the level of BERTL when just using unsupervised pre-training.",
"Note that it is possible to pre-train a model on several of the suggested datasets, either in succession or in a multi-task setup.",
"We leave these experiments to future work.",
"Our results also suggest pre-training on MultiNLI would be helpful for other corpora that contain yes/no questions.",
"We have introduced BoolQ, a new reading comprehension dataset of naturally occurring yes/no questions.",
"We have shown these questions are challenging and require a wide range of inference abilities to solve.",
"We have also studied how transfer learning performs on this task, and found crowd-sourced entailment datasets can be leveraged to boost performance even on top of language model pre-training.",
"Future work could include building a document-level version of this task, which would increase its difficulty and its correspondence to an end-user application."
] | [
"abstain",
"method",
"result",
"abstain",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"result",
"method",
"objective",
"other",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"result",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"objective",
"result",
"abstain"
] |
[
"Building general reading comprehension systems, capable of solving multiple datasets at the same time, is a recent aspirational goal in the research community.",
"Prior work has focused on model architectures or generalization to held out datasets, and largely passed over the particulars of the multi-task learning set up.",
"We show that a simple dynamic sampling strategy, selecting instances for training proportional to the multi-task model's current performance on a dataset relative to its singletask performance, gives substantive gains over prior multi-task sampling strategies, mitigating the catastrophic forgetting that is common in multi-task learning.",
"We also demonstrate that allowing instances of different tasks to be interleaved as much as possible between each epoch and batch has a clear benefit in multitask performance over forcing task homogeneity at the epoch or batch level.",
"Our final model shows greatly increased performance over the best model on ORB, a recently-released multitask reading comprehension benchmark.",
"Building multi-task reading comprehension systems has received significant attention and been a focus of active research (Talmor and Berant, 2019; Xu et al., 2019).",
"These approaches mostly focus on model architecture improvements or generalizability to new tasks or domains.",
"While these contributions are important, it is also important to explore the optimal way to structure training; as we will show, training on instances from diverse datasets (tasks) means that unlike in a single-task setting, ample instances from each task distribution must be represented during training to properly capture that diversity.",
"We explore two fundamental aspects of structuring multi-task training: how many instances are sampled from each task per epoch, and how those instances are organized within the epoch.",
"We investigate the importance of this structuring by training a multi-task model on the 8 datasets from ORB (Dua et al., 2019b), a recent multi-task reading comprehension benchmark.",
"We first explore the sampling distribution over datasets at each epoch: how many instances from each dataset should be used to train.",
"Prior work has typically either made this a uniform distribution over datasets (implicitly favoring smaller datasets), a distribution proportional to the sizes of the datasets (implicitly favoring larger datasets), or some combination of the two.",
"Because these sampling strategies favor some datasets over others, they can lead to catastrophic forgetting in the non-favored datasets.",
"We introduce a dynamic sampling strategy that selects instances from a dataset with probability proportional to the gap between its current performance on some metric (like EM or F1 score) and measured single-task performance of the same model on that dataset.",
"By adjusting the sampling distribution over the course of training according to what the model is learning, this method is able to mitigate the catastrophic forgetting that is observed with other sampling strategies.",
"Next we explore the impact of within-epoch scheduling strategies: once a set of instances has been selected for training, how should they be ordered and batched together?",
"We explore three different strategies: partitioning, homogeneous batches, and heterogeneous batches.",
"We observe a steady increase in performance as instances from different datasets become more and more interleaved within an epoch (less partitioned) and batches are more heterogeneous.",
"This suggests that more variety in batches aids convergence when performing gradient descent steps as opposed to steps using homogeneous batches which only update the model with respect to one task at a time.",
"Partitioning also yields poorer performance since it does not allow the model to see the least recent tasks later in the epoch which leads to catastrophic forgetting on those tasks.",
"We empirically evaluate these various training strategies on ORB, a recent multi-task reading comprehension benchmark: we take the previous best published model and retrain it using dynamic sampling and heterogeneous batches, leading to a performance increase averaging about 12 points EM and 9 points F1 per task.",
"While we only evaluate on reading comprehension, the methods we present are quite general and can be applied to any multitask learning setting.",
"We explore two main dimensions along which the instances are ordered in multi-task learning: (1) instance sampling from each dataset to get a collection of examples to use for an epoch; and (2) within-epoch scheduling of those instances, determining how they should be ordered and batched.",
"The key consideration for these various strategies is avoiding a phenomenon similar to catastrophic for-getting (Carpenter and Grossberg, 1988), where performance on a specific dataset in an unbalanced training set can drop dramatically when training moves on from that dataset.",
"We investigate the following four alternatives for determining how many instances to draw from each dataset for each epoch:",
"Uniform The simplest way is to uniformly sample instances for each task (Caruana, 1997), which results in an approximately equal number of instances from each dataset per epoch.",
"In practice, this means randomly sampling the same number of training instances from each dataset at each epoch, which will likely be a small subset of all the training instances, as the number of instances in constrained by the smallest dataset.",
"Large datasets will be proportionally underrepresented here.",
"By Size Alternatively, unbalanced datasets can be dealt with by sampling from each task in proportion to their training set size (e.g. Sanh et al., 2019).",
"However, this approach can result in underfitting small-sized tasks and overfitting large-sized tasks if the ratio between size differences is too extreme.",
"1 github.com/mrqa/MRQA-Shared-Task-2019",
"of training epochs and has instances sampled by training set size for the second half.",
"Dynamic The prior two methods use a fixed sampling distribution for every epoch of training.",
"We introduce a new, dynamic sampling strategy that aims to focus training on places where it is most needed.",
"For this sampling strategy, we first compute single-task validation metrics for the model that we are training.",
"For each task, we calculate the gap between current multi-task performance and the respective single-task performance and normalize these metric differentials to create a probability distribution.",
"Then, for every epoch after the first (where we use sampling by size), we sample instances by task from this distribution.",
"If performance on a dataset is far from single-task performance, it will get sampled heavily, while datasets that have reached or exceeded single-task performance will get sampled little if at all.",
"2 We also experimented with modifying the metric used to calculate the differential.",
"We tested using the",
"1) validation loss differential,",
"2) validation EM differential,",
"3) validation F1 differential, and",
"4) the sum of the validation EM and F1 differentials (EM+F1 differential).",
"Amongst these, the validation loss for each dataset reaches the singletask loss far quicker than others.",
"This is likely due to the phenomenon that neural networks can overfit to specific loss functions while still benefitting in terms of accuracy (Guo et al.,",
"2017).This explains why the gap in accuracy metrics can be so wide while the loss gap closed within 1 or 2 epochs.",
"Because of this behavior, the loss differentials were all nearly identical in the first few epochs and behavior became very similar to uniform sampling.",
"We finally decided to use EM+F1 differential as this yielded nominally better performance than EM or F1 differential and significantly better performance than loss differential.",
"We explore several different methods for scheduling and batching the instances within an epoch after the set of instances has been sampled:",
"Partitioned This scheduling strategy partitions the instances in the epoch by task.",
"In other words, the model will never see an instance from a new dataset until all the instances from the current 2 Sharma and Ravindran (2017) use a related technique in reinforcement learning, though the setup is different.",
"dataset are exhausted.",
"It seems intuitive that this strategy would exacerbate catastrophic forgetting on the tasks it saw least recently, especially when there are a large number of tasks.",
"We include this method simply for completeness.",
"Homogeneous Batches This scheduling strategy does not force instances into partitions based on the dataset.",
"Instead, instances from each dataset are batched together, then the batches are shuffled.",
"Heterogeneous Batches This scheduling strategy shuffles all selected instances for the epoch, then batches them together.",
"Each batch could have instances from many different datasets.",
"Uniform Batches This scheduling strategy is used by the baseline model for the MRQA shared task (Fisch et al., 2019) as well as for the best prior result on ORB.",
"This method places one instance per dataset in each batch (forced heterogeneity) until the smallest dataset runs out of instances.",
"This strategy continues with the remaining datasets, until all datasets are exhausted.",
"Setup The eight reading comprehension tasks are from the ORB benchmark (Dua et al., 2019b): DROP (Dua et al., 2019a), DuoRC (Saha et al., 2018), NarrativeQA (Kocisky et al., 2017), NewsQA (Trischler et al., 2017), Quoref (Dasigi et al., 2019), ROPES (Lin et al., 2019), SQuAD (Rajpurkar et al., 2016), and SQuAD",
"2.0 (Rajpurkar et al., 2018).",
"We use the NABERT 3 (Numerically Augmented BERT) model with an additional reasoning type to allow No Answer as an answer to accommodate the SQuAD 2.0 dataset which has about 40,000 No Answer questions.",
"Each training session lasted 30 epochs with 50,000 instances sampled per epoch.",
"Three training ses-sions were conducted per sampling method and the EM and F1 scores shown are averaged over those three sessions.",
"Note that NarrativeQA is evaluated using only ROUGE F1 score.",
"Due to GPU mem-ory constraints, we are limited to a batch size of 4, so we are unable replicate the Uniform Batches configuration of MRQA (requires a batch size of 8 to fit 1 instance from each of the 8 datasets).",
"Sampling Strategies Table 2 shows the effectiveness of the sampling techniques discussed above.",
"Uniform sampling yields a very mediocre performance for 7 datasets but significantly un-derperforms on SQuAD 2.0, which is likely not getting enough representation each epoch for its unique no-answer questions.",
"Sampling by size yields mediocre performances for 7 datasets but un-derperforms on ROPES, which is easily the smallest dataset and therefore gets undersampled.",
"However, performance on Quoref, the second smallest dataset, is still relatively high, which might be explained by its SQuAD-style questions.",
"Exposure to SQuAD, one of the largest datasets, likely benefits performance on Quoref as well.",
"Interestingly, uniform sampling followed by size sampling slightly alleviates the problems from the individual sampling methods but also slightly underforms 3 https://github.com/raylin1000/drop bert Method Average Quoref ROPES DuoRC NarrQA SQuAD SQuAD2 DROP NewsQA EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 SingleTask -53.0 58.6 67.5 72.1 23.3 30.8 -50.3 57.5 73.5 66.0 69.6 57.1 54.4 35.3 49.8 Uniform 49.2 55.8 56.9 61.5 69.7 74.3 23.4 32.1 -53.1 69.3 78.0 38.1 42.9 51.8 54.4 35.0 49.9 By Size 50.0 56.3 53.7 57.7 62.7 68.1 23.3 31.6 -52.4 65.8 74.1 58.1 63.0 52.0 54.5 34.6 49.1 Uni Size 49.7 56.5 55.8 60.0 68.8 73.8 23.2 32.0 -53.0 52.0 63.7 63.4 67.4 49.7 52.2 35.0 49.8 Dynamic 51.7 58.1 56.3 60.4 65.1 71.9 23.1 31.5 -52.9 66.3 74.7 63.2 67.7 53.8 56.3 34.5 49.2 Table 2: Effect of using different instance sampling strategies with heterogeneous batch scheduling Method Average Quoref ROPES DuoRC NarrQA SQuAD SQuAD2 DROP NewsQA EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 Partition 46.1 53.2 50.7 55.3 58.1 65.4 22.1 30.7 -50.9 67.0 76.6 36.5 41.6 55.3 58.2 32.0 47.4 Homo 48.8 54.7 53.3 56.8 61.5 66.6 21.6 29.6 -49.9 63.7 71.7 56.0 60.6 51.8 54.1 33.5 48.2 Hetero 51.7 58.1 56.3 60.4 65.1 71.9 23.1 31.5 -52.9 66.3 74.7 63.2 67.7 53.8 56.3 34.5 49.2 Table 3: Effect of using different epoch scheduling strategies with dynamic sampling Method Average Quoref ROPES DuoRC NarrQA SQuAD SQuAD2 DROP NewsQA EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 ORB 34.4 42.1 35.0 44.7 31.1 37.3 25.4 34.1 -36.6 67.3 77.7 32.8 38.0 20.2 23.6 29.2 44.6 Dynamic 47.6 54.5 59.4 63.9 36.5 44.8 23.0 31.5 -52.0 66.3 74.7 61.2 65.7 51.9 54.2 34.7 49.1 Table 4: Results on ORB test sets.",
"on DROP.",
"Finally, dynamic sampling achieves the highest average performance and fully cures both problems mentioned above since each epoch, the sampling distribution can be adjusted based on which datasets perform poorly.",
"The previous sampling methods have static sampling distributions, so these adjustments are impossible.",
"Scheduling Strategies Table 3 show that heterogeneous batches during sampling leads to the best multi-task performance, and performance steadily decreases as instance grouping becomes more and more homogenized with respect to the dataset.",
"ORB Evaluation Finally, Table 4 shows that our model trained with dynamic sampling and heterogeneous batches significantly outperforms the previous ORB state-of-the-art NABERT baseline model (submitted on 11/12/2019 on the leader-board site 4 ).",
"Our goal was to investigate which instance sampling method and epoch scheduling strategy gives optimal performance in a multi-task reading comprehension setting.",
"The results suggest that dynamic samplingsampling instances from each task based on their respective metric differentials is a fruitful direction to explore for improving performance.",
"We also show that interleaving instances from different tasks within each epoch and forming heterogeneous batches is crucial for optimizing multi-task performance.",
"It is also worth noting that for the DuoRC, NarrativeQA, SQuAD, and Quoref datasets there are cases where the multi-task model outperforms the single-task model.",
"This suggests that for specific cases, we observe an effect similar to data augmentation (like exposure to SQuAD benefitting QuoREF performance as mentioned above) but this needs to be explored further.",
"We hope that future work experiments further with dynamic sampling such as by modifying the metric (e.g., using BLEU or ROUGE score if applicable) and/or modifying other values like number of instances per epoch based on performance metrics (not only does this effectively change learning rate, but it would also allow the model to update the sampling distribution more or less frequently).",
"This work was supported in part by funding from Allen Institute of Artificial Intelligence, in part by Amazon, and in part by the National Science Foundation (NSF) grant #CNS-1730158."
] | [
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"result",
"abstain",
"objective",
"method",
"other"
] |
[
"The development of neural networks and pretraining techniques has spawned many sentence-level tagging systems that achieved superior performance on typical benchmarks.",
"However, a relatively less discussed topic is what if more context information is introduced into current top-scoring tagging systems.",
"Although several existing works have attempted to shift tagging systems from sentence-level to document-level, there is still no consensus conclusion about when and why it works, which limits the applicability of the larger-context approach in tagging tasks.",
"In this paper, instead of pursuing a state-of-the-art tagging system by architectural exploration, we focus on investigating when and why the larger-context training, as a general strategy, can work.",
"To this end, we conduct a thorough comparative study on four proposed aggregators for context information collecting and present an attribute-aided evaluation method to interpret the improvement brought by larger-context training.",
"Experimentally, we set up a testbed based on four tagging tasks and thirteen datasets.",
"Hopefully, our preliminary observations can deepen the understanding of larger-context training and enlighten more follow-up works on the use of contextual information.",
"The rapid development of deep neural models has shown impressive performances on sequence tagging tasks that aim to assign labels to each token of an input sequence (Sang and De Meulder, 2003; Lample et al., 2016; Ma and Hovy, 2016).",
"More recently, the use of unsupervised pre-trained models (Akbik et al., 2018, 2019; Peters et al., 2018; Devlin et al., 2018) (especially contextualized version) has driven state-of-the-art performance to a Corresponding author new level.",
"Among these works, researchers frequently choose the boundary with the granularity of sentences for tagging tasks (i.e., sentence-level tagging ) (Huang et al., 2015; Chiu and Nichols, 2015; Ma and Hovy, 2016; Lample et al., 2016).",
"Undoubtedly, as a transient, sentence-level setting enables us to develop numerous successful tagging systems, nevertheless the task itself should have not be defined as sentence-level but for simplifying the learning process for machine learning models.",
"Naturally, it would be interesting to see what if larger-context information (e.g., taking information of neighbor sentences into account) is introduced to modern top-scoring systems, which have shown superior performance under the sentence-level setting.",
"A small number of works have made seminal exploration in this direction, in which part of works show significant improvement of larger-context (Luo et al., 2020; Xu et al., 2019) while others don't (Hu et al., 2020, 2019; Luo et al., 2018).",
"Therefore, it's still unclear when and why larger-context training is beneficial for tagging tasks.",
"In this paper, we try to figure it out by asking the following three research questions: Q1 : How do different integration ways of larger-context information influence the system's performance?",
"The rapid development of neural networks provides us with diverse flavors of neural components to aggregate larger-context information, which, for example, can be structured as a sequential topology by recurrent neural networks (Ma and Hovy, 2016; Lample et al., 2016) (RNNs) or graph topology by graph neural networks (Kipf and Welling, 2016; Schlichtkrull et al., 2018).",
"Understanding the discrepancies of these aggregators can help us reach a more generalized conclusion about the effectiveness of larger-context training.",
"To this end, we study larger-context aggregators with three different structural priors (defined in Sec. 3.2) and comprehensively evaluate their efficacy.",
"Q2 : Can the larger-context training easily play to its strengths with the help of recently arising contextualized pre-trained models (Akbik et al., 2018, 2019; Peters et al., 2018; Devlin et al., 2018) (e.g. BERT)?",
"The contextual modeling power of these pre-trained methods makes it worth looking at its effect on larger-context training.",
"In this work, we take BERT as a case study and assess its effectiveness quantitatively and qualitatively.",
"Q3 : If improvements could be observed, where does the gain come and how do different characteristics of datasets affect the amount of gain?",
"Instead of simply figuring out whether larger-context training could work, we also try to interpret its gains.",
"Specifically, we propose to use fine-grained evaluation to explain where the improvement comes from and why different datasets exhibit discrepant gains.",
"Overall, the first two questions aim to explore when larger-context training can work while the third question addresses why .",
"Experimentally, we try to answer these questions by conducting a comprehensive analysis, which involves four tagging tasks and thirteen datasets.",
"Our main observations are summarized in Sec. 8.",
"1 Furthermore, we show, with the help of these observations, it's easier to adapt larger-context training to modern top-performing tagging systems with significant gains.",
"We brief our contributions below: 1) We try to bridge the gap by asking three research questions, between the increasing top-performing sentence-level tagging systems and in-sufficient understanding of larger-context training, encouraging future research to explore more larger-context tagging systems.",
"2) We systematically investigate four aggregators for larger-context and present an attribute-aided evaluation methodology to interpret the relative advantages of them, and why they can work (Sec. 3.2).",
"3) Based on some of our observations, we adapt larger-context training to five modern top-scoring systems in the NER task and observe that all larger-context enhanced models can achieve significant improvement (Sec. 6).",
"Encouragingly , with the help of larger-context training, the performance of Akbik et al. (2018) on the WB (OntoNotes5.0-WB) dataset can be improved by a 10.78 F 1 score.",
"We first explicate the definition of tagging task and then describe several popular datasets as well as typical methods of this task.",
"Sequence tagging aims to assign one of the pre-defined labels to each token in a sequence.",
"In this paper, we consider four types of concrete tasks: Named Entity Recognition (NER), Chinese Word Segmentation (CWS), Part-of-Speech (POS) tagging, and Chunking.",
"The datasets used in our paper are naturally ordered without random shuffling according to the paper that constructed these datasets, except for WNUT-2016 dataset.",
"Named Entity Recognition (NER) We consider two well-established benchmarks: CoNLL-2003 ( CN03 ) and OntoNotes 5.0.",
"OntoNotes 5.0 is collected from six different genres: broadcast conversation ( BC ), broadcast news ( BN ), magazine ( MZ ), newswire ( NW ), telephone conversation ( TC ), and web data ( WB ).",
"Since each domain of OntoNotes 5.0 has its nature, we follow previous works (Dur-rett and Klein, 2014; Chiu and Nichols, 2016; Ghaddar and Langlais, 2018) that utilize different domains of this dataset, which also paves the way for our fine-grained analysis.",
"Chinese Word Segmentation (CWS) We use four mainstream datasets from SIGHAN2005 and SIGHAN2008, in which CITYU is traditional Chinese, while PKU , NCC , and SXU are simplified ones.",
"Chunking (Chunk) CoNLL-2000 ( CN00 ) is a benchmark dataset for text chunking.",
"Part-of-Speech (POS) We use the Penn Treebank ( PTB ) III dataset for POS tagging.",
"2 2.3 Neural Tagging Models Despite the emergence of a bunch of architectural explorations (Ma and Hovy, 2016; Lample et al., 2016; Yang et al., 2018; Peters et al., 2018; Akbik et al., 2018; Devlin et al., 2018) for sequence tagging, two general frameworks can be summarized:",
"(i) cEnc-wEnc-CRF consists of the word-level encoder, sentence-level encoder, and CRF 2 It's hard to cover all datasets for all tasks.",
"layer (Lafferty et al., 2001);",
"(ii) ContPre-MLP is composed of a contextualized pre-trained layer, followed by an MLP or CRF layer.",
"In this paper, we take both frameworks as study objects for our three research questions first, 3 and instantiate them as two specific models: CNN-LSTM-CRF (Ma and Hovy, 2016) and BERT-MLP (Devlin et al., 2018).",
"Let S = s 1 , , s k represent a sequence of sentences, where sentence s i contains n i words: s i = w i, 1 , , w i,n i .",
"Sentence-level tagging models predict the label for each word w i,t sentence-wisely (within a given sentence s i ).",
"CNN-LSTM-CRF , for example, first converts each word w i,t s i into a vector by different word-level encoders wEnc ( ) : w i,t = wEnc( w i,t ) = Lookup( w i,t ) CNN( w i,t ) , (1) where denotes the concatenation operation, Lookup( w i,t ) can be pre-trained by context-free (e.g., GloVe) or context-dependent (e.g., BERT) word representations.",
"And then the concatenated representation of them will be fed into sentence encoder sEnc( ) (e.g., LSTM layer) to derive a contextualized representation for each word.",
"where the lower case s of LSTM ( s ) represents a sentence-level LSTM.",
"Finally, a CRF layer will be used to predict the label for each word.",
"Instead of predicting entity tags sentence-wisely, more contextual information of neighbor sentences can be introduced in diverse ways.",
"Following, we elaborate on how to extend sentence-level tagging to a larger-context setting.",
"The high-level idea is to introduce more contextual information into word-or sentence-level encoder defined in Eq.",
"1 and Eq.",
"2.",
"Here, we propose four larger-context aggregators, whose architectures are illustrated in Fig. 1.",
"Bag-of-Word Aggregator ( bow ) calculates a fused representation r for a sequence of sentences.",
"where BOW( ) is a function that computes the average of all word representations of input sentences.",
"Afterward, r , as additional information, will be injected into the word encoder.",
"More precisely, the word-level encoder and sentence-level encoder can be re-written below: w bowi,t = GloVe( w i,t ) CNN( w i,t ) r , (4) h bowi,t = LSTM ( S ) ( w bowi,t , h bowi,t 1 , ) , (5) where the upper case S of LSTM ( S ) denotes the larger-context encoder that utilizes an LSTM deal with a sequence of sentences ( S = s 1 , , s k ) (instead of solely one sentence).",
"Sequential Aggregator ( seq ) first concatenates all sentences s i S and then encode it with a larger-context encoder LSTM ( S ) .",
"Formally, seq aggregator can be represented as: h seqi,t = LSTM ( S ) ( w seqi,t , h seqi,t 1 , ) , (6) where w seqi,t is defined as Eq.",
"1, and the Lookup( w i,t ) is GloVe.",
"Then, a CRF decoder is utilized to predict the tags for each word.",
"Graph Aggregator ( graph ) incorporates nonlocal bias into tagging models.",
"Each word w i is conceptualized as a node.",
"For edge connections, we define the following types of edges between pairs of nodes (i.e. w i and w j ) to encode various structural information in the context graph:",
"i) if | i j | = 1 ;",
"ii) if w i = w j .",
"In practice, the graph aggregator first collects contextual information over a sequence of sentences, and generate the word representation: G = GraphNN( V , E, ) , (7) where V = { w 1 , 1 , , w 1 ,n 1 , , w k,n k } and w i can be obtained as defined in Eq.",
"1.",
"Additionally, G = { g 1 , 1 , , g 1 ,n 1 , , g k,n k } stores aggregated contextual information for each word.",
"We instantiate GraphNN( ) as graph convolutional neural networks (Kipf and Welling, 2016).",
"Contextualized Sequential Aggregator ( cPre-seq ) is an extension of seq aggregator by using contextualized pre-trained models, such as BERT (Devlin et al., 2018), Flair (Akbik et al., 2018), and ELMo (Peters et al., 2018), as a word encoder.",
"Here, cPre-seq is instantiated as BERT to get the word representation, then followed by a larger-context encoder LSTM ( S ) .",
"We make the length of larger-context for the cPre-seq aggregator within 512 .",
"cPre-seq can be formalized as: h cPrei,t = LSTM ( S ) (BERT( w i,t ) , h cPrei,t 1 , ) .",
"The experiment in this section is designed to answer the first two research questions: Q1 and Q2 (Sec. 1).",
"Specifically, we investigate whether larger-context training can achieve improvement and how different structures of aggregator, contextualized pre-trained models influence it.",
"Settings and Hyper-parameters We adopt CNN-LSTM-CRF as a prototype and augment it with larger-context information by four categories of aggregators: bow , seq , graph , and cPre-seq .",
"We use Word2Vec (Mikolov et al., 2013) (trained on simplified Chinese Wikipedia dump) as non-contextualized embeddings for CWS task, and GloVe (Pennington et al., 2014) for NER, Chunk, and POS tasks.",
"The window size (the number of sentence) k of larger-context aggregators will be explored with a range of k = { 1 , 2 , 3 , 4 , 5 , 6 , 10 } for seq , bow , and cPre-seq .",
"We chose the best performance that the larger-context aggregator achieved with window size k (cid:54) = 1 as the final performance of a larger-context aggregator.",
"4 We use the result from the model with the best validation set performance, terminating training when the performance on development is not improved in 20 epochs.",
"For the POS task, we adopt dataset-level accuracy as evaluated metric while for other tasks, we use a corpus-level F 1 -score (Sang and De Meulder, 2003) to evaluate.",
"Tab.",
"1 illustrates the relative improvement results of four larger-context training ( k > 1 ) relative to the sentence-level tagging ( k = 1 ).",
"To examine whether the larger-context aggregation method has a significant improvement over the sentence-level tagging, we used significant test with Wilcoxon Signed-RankTest (Wilcoxon et al., 1970) at p = 0 .",
"05 level.",
"Results are shown in Tab.",
"1 (the last col-umn).",
"We find that improvements brought by four larger-context aggregators are statistically significant ( p < 0 . 05 ), suggesting that the introduction of larger-context can significantly improve the performance of sentence-level models.",
"Results We detail main observations in Tab.",
"1: 1) For most of the datasets, introducing larger-context information will bring gains regardless of the ways how to introduce it (e.g. bow or graph ), indicating the efficacy of larger contextual information.",
"Impressively, the performance on dataset WB is significantly improved by 7.26 F1 score with the cPre-seq aggregator ( p = 5 . 1 10 3 < 0 . 05 ).",
"2) Overall, comparing with bow and graph aggregators, seq aggregator has achieved larger improvement by average, which can be further enhanced by introducing contextualized pre-trained models (e.g. BERT).",
"3) Incorporating larger-context information with some aggregators also can lead to performance drop on some datasets (e.g, using graph aggrega-4 The settings of window size k are listed in the appendix.",
"tor on dataset MZ lead to 0.16 performance drop), which suggests the importance of a better match between datasets and aggregators.",
"To answer the research question Q2 ( Can the larger-context approach easily play to its strengths with the help of recently arising contextualized pre-trained models? ), we elaborate on how cPre-seq and seq aggregators influence the performance.",
"Results Fig. 2 illustrates the relative improvement achieved by two larger-context methods: seq (blue bar) and cPre-seq (red bar) on four different tagging tasks.",
"We observe that: 1) In general, aggregators equipped with BERT can not guarantee a better improvement, which is dataset-dependent.",
"2) Task-wisely, cPre-seq can improve performance on all datasets on NER, Chunk, and POS tasks.",
"By contrast, seq is beneficial to all datasets on CWS task.",
"It could be attributed to the difference in language and characteristics of the task.",
"Specifically, for most non-CWS task datasets, cPre-seq (7 out of 9 datasets) performs better than seq ( p < 0 . 05 ).",
"Experiments in this section are designed for the research questions Q3 , interpreting where the gains of a larger-context approach come and why different datasets exhibit diverse improvements.",
"To achieve this goal, we use the concept of interpretable evaluation (Fu et al., 2020a) that allows us perform fine-grained evaluation of one or multiple systems.",
"The first step of interpretable evaluation is attribute definition.",
"The high-level idea is, given one attribute, the test set of each tagging task will be partitioned into several interpretable buckets based on it.",
"And F 1 score (accuracy for POS) will be calculated bucket-wisely.",
"Next, we will explicate the general attributes we defined in this paper.",
"We first detail some notations to facilitate definitions of our attributes.",
"We define x as a token and a bold form x as a span, which occurs in a test sentence X = sent( x ) .",
"We additionally define two functions oov( ) that counts the number out of training set words, and ent( ) that tallies the number of entity words.",
"Based on these notations, we introduce some feature functions that can compute different attributes for each span or token.",
"Following, we will give the attribute definition of the NER.",
"Training set-independent Attributes eLen ( x ) = | x | : entity span length sLen ( x ) = | sent( x ) | : sentence length eDen ( x ) = | ent( sent ( x )) | / sLen ( x ) : entity density dOov ( x ) = | oov(sent( x )) | / sLen ( x ) : OOV density Training set-dependent Attributes eFre ( x ) = Fre( x ) : entity frequency eCon ( x ) = Con( x ) : label consistency of entity where Fre( x ) calculates the frequency of input x in the training set.",
"Con(x) quantifies how consistently a given span is labeled with a particular label, and can be formulated as:",
"Con(x) = |{ε | lab(ε) = lab(x), str(ε) = str(x), ε ∈ E_tr}| / |{ε | str(ε) = str(x), ε ∈ E_tr}|, (10) where E_tr denotes the entities in the training set, lab(·) denotes the label of an input span, and str(·) represents its surface string.",
"Similarly, we can extend the above two attributes to token-level, therefore obtaining tFre ( x ) and tCon ( x ) .",
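The frequency and label-consistency attributes can be sketched as follows (a toy illustration under the definitions above; the training set is made up):

```python
# Sketch of entity frequency Fre(x) and label consistency Con(x):
# Con(x) = (# training entities with x's surface string AND x's label)
#        / (# training entities with x's surface string).

def fre(span, train_entities):
    """Frequency of the surface string `span` among training entities."""
    return sum(1 for s, _ in train_entities if s == span)

def con(span, label, train_entities):
    """Label consistency of `span` w.r.t. `label` in the training set."""
    same_string = [(s, l) for s, l in train_entities if s == span]
    if not same_string:
        return 0.0
    same_label = [(s, l) for s, l in same_string if l == label]
    return len(same_label) / len(same_string)

# Toy training set: (surface string, label) pairs.
train_entities = [("Paris", "LOC"), ("Paris", "LOC"),
                  ("Paris", "PER"), ("London", "LOC")]

print(fre("Paris", train_entities))         # 3
print(con("Paris", "LOC", train_entities))  # 2/3
```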
"Attributes for CWS task can be defined in a similar way.",
"Specifically, the entity (or token) in NER task corresponds to the word (or character) in CWS task.",
"Note that we omit word density for the CWS task since it equals one for any sentence.",
"We break down all test examples into different buckets according to the given attribute.",
"Take the entity length (eLen) attribute of the NER task as an example: first, we calculate each test sample's entity length attribute value.",
"Then, we divide the test entities into N buckets (N = 4 by default) such that the numbers of test samples in all attribute intervals (buckets) are equal, and calculate the performance for the entities falling into each bucket.",
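The equal-size bucketing step can be sketched as follows (a minimal sketch; the sample tuples and the `score_fn` used here are hypothetical stand-ins for test spans and the bucket-wise metric):

```python
# Sketch of interpretable-evaluation bucketing: sort test samples by an
# attribute value, split them into N (near-)equal-size buckets, and
# score each bucket separately.

def make_buckets(samples, attr_fn, n_buckets=4):
    """Partition samples into n_buckets of (near-)equal size by attr_fn."""
    ordered = sorted(samples, key=attr_fn)
    size, rem = divmod(len(ordered), n_buckets)
    buckets, start = [], 0
    for i in range(n_buckets):
        end = start + size + (1 if i < rem else 0)
        buckets.append(ordered[start:end])
        start = end
    return buckets

def bucket_scores(buckets, score_fn):
    """Compute a metric (e.g., F1 or accuracy) per bucket."""
    return [score_fn(b) for b in buckets]

# Toy samples: (entity_length, correct?) pairs, bucketed by entity length.
samples = [(1, True), (1, True), (2, False), (2, True),
           (3, True), (4, False), (5, True), (6, False)]
buckets = make_buckets(samples, attr_fn=lambda s: s[0], n_buckets=4)
acc = lambda b: sum(c for _, c in b) / len(b)
print([len(b) for b in buckets])      # [2, 2, 2, 2]
print(bucket_scores(buckets, acc))    # [1.0, 0.5, 0.5, 0.5]
```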
"To investigate where the gains of larger-context training come from, we conduct a fine-grained evaluation with the evaluation attributes defined in Sec. 5.1.",
"We use the cPre-seq larger-context aggregation method as the base model.",
"Fig. 3 shows the relative improvement of the cPre-seq larger-context aggregation method in NER ( 7 datasets) and CWS tasks ( 4 datasets).",
"The relative improvement is the performance of cPre-seq larger-context tagging minus that of sentence-level tagging.",
"1) Test spans with lower label consistency can benefit much more from the larger-context training.",
"As shown in Fig. 3 (a,b,i,j), test spans with lower label consistency (NER: eCon,tCon=S/XS , CWS: wCon,cCon=S/XS ) can achieve higher relative improvement using the larger-context training, which holds for both NER and CWS tasks.",
"2) The NER task achieves more gains on lower- and higher-frequency test spans, while the CWS task obtains more gains on lower-frequency test spans.",
"As shown in Fig. 3 (c,d,k,l), in the NER task, test spans with higher or lower frequency (NER: eFre=XS/XL; tFre=XS/XL) achieve larger improvements with the help of more contextual sentences, while for the CWS task, only test spans with lower frequency achieve more gains.",
"3) Test spans of the NER task with lower entity density obtain larger improvement from larger-context training.",
"In terms of entity density, shown in Fig. 3",
"(e), an evaluation attribute specific to the NER task, larger-context training does not handle test spans with high entity density well (NER: eDen=XL/L), while doing well on test spans with low entity density (NER: eDen=XS/S).",
"4) Larger-context training achieves more gains on short entities in the NER task and on long words in the CWS task.",
"As shown in Fig. 3 (f,m), the dark blue boxes appear for short entities (eLen=XS/S) in the NER task and for long words (wLen=XL/L) in the CWS task.",
"5) Both NER and CWS tasks achieve more gains on spans with higher OOV density.",
"For the OOV density shown in Fig. 3 (h,o), test spans with higher OOV density (NER, CWS: dOov=L/XL) achieve more gains",
"from larger-context training, which holds for both NER and CWS tasks.",
"Different datasets (e.g. CN03 ) may match different information aggregators (e.g. cPre-seq ).",
"Figuring out how different datasets influence the choices of aggregators is a challenging task.",
"We try to approach this goal by",
"(i) designing diverse measures that can characterize a given dataset from different perspectives,",
"(ii) analyzing the correlation between different dataset properties and the improvements brought by different aggregators.",
"Dataset-level Measure: Given a dataset E and an attribute p as defined in Sec. 5.1, the dataset-level measure can be defined as: Φ_p(E) = (1 / |E_te|) · Σ_{ε ∈ E_te} p(ε), (12) where E_te ⊆ E is a test set that contains entities/tokens in the NER task or words/characters in the CWS task.",
"p(·) is a function (as defined in Sec. 5.1) that computes the attribute value for a given span.",
"For example, sLen ( CN03 ) represents the average sentence length of CN03 's test set.",
"Correlation Measure: Statistically, we define a variable ρ to quantify the correlation between a dataset-level attribute and the relative improvement of an aggregator: ρ = Spearman(Φ_p, f_y), where Spearman denotes Spearman's rank correlation coefficient (Mukaka, 2012).",
"Φ_p represents the dataset-level attribute values on all datasets with respect to attribute p (e.g., eLen), while f_y denotes the relative improvements of larger-context training on the corresponding datasets for a given aggregator y (e.g., cPre-seq).",
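The dataset-level measure and the correlation analysis can be sketched as follows (a toy illustration; the per-dataset attribute averages and gains below are made up, and Spearman's coefficient is implemented in plain Python rather than via a library):

```python
# Sketch: average an attribute over a test set, then rank-correlate the
# per-dataset averages with per-dataset improvements (Spearman's rho).

def dataset_measure(test_spans, attr_fn):
    """Phi_p(E): mean attribute value over the test set."""
    return sum(attr_fn(x) for x in test_spans) / len(test_spans)

def _ranks(values):
    """1-based ranks, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rank correlation (Pearson correlation of ranks)."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Toy: attribute averages vs. improvements over 4 hypothetical datasets.
phi = [0.9, 0.7, 0.5, 0.3]    # e.g., label consistency per dataset
gain = [0.2, 0.5, 0.8, 1.1]   # larger-context improvement per dataset
print(spearman(phi, gain))    # -1.0 (perfect negative rank correlation)
```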
"Results: Tab. 2 displays (using spider charts) the measure Φ_p of seven datasets with respect to diverse attributes, along with the correlation measure ρ in the NER task.",
"Based on these correlations between the dataset-level measure (w.r.t. a certain attribute, e.g., eCon) and the gains from larger-context training (w.r.t. an aggregator, e.g., seq), all of which passed the significance test (p < 0.05), we observe: (1) the cPre-seq aggregator is negatively correlated with eCon, tCon, eFre, and eDen, with large correlation values.",
"Therefore, the cPre-seq aggregator is more appropriate for the WB, TC, BC, and NW datasets, since these four datasets have lower values of Φ_p with respect to the attributes eCon (TC, WB), tCon (TC, WB), eFre (NW, TC), and eDen (BC, WB, TC).",
"Additionally, since the cPre-seq aggregator obtains the highest positive correlation with dOov, and dOov(CN03) as well as dOov(BC) achieve the highest values, the cPre-seq aggregator is also suitable for CN03 and BC.",
"(2) The seq aggregator is negatively correlated with eCon, tCon, and eDen.",
"Therefore, the seq aggregator is better at dealing with the datasets WB, TC, and BC, since these datasets have lower Φ_p values on at least one of these attributes (eCon, tCon, and eDen).",
"Takeaways: We can conduct a similar analysis for bow and graph aggregators.",
"Due to space limitations, we detail them in the appendix and highlight the suitable NER datasets for each aggregator as follows.",
"(1) bow : WB , TC , NW , MZ , BC .",
"(2) graph : WB , TC , BN , CN03 .",
"(3) seq : WB , TC , BC .",
"(4) cPre-seq : CN03 , WB , TC , BC , NW .",
"Beyond the above quantitative and qualitative analysis of our instantiated typical tagging models (Sec.2.3), we are also curious about how well modern top-scoring tagging systems perform when equipped with larger-context training.",
"To this end, we choose the NER task as a case study and first re-implement existing top-performing models for different NER datasets separately, and then adapt the larger-context approach to them based on the seq or cPre-seq aggregator, which have shown superior performance in our analysis above.",
"The five chosen models are among the most recently proposed.",
"Among these five models, regarding Akbik et al. (2018), we use cPre-seq aggregator for the larger-context training, since this model originally relies on a contextualized pre-trained layer.",
"Besides, from the above analysis in Sec. 5.4, we know the suitable datasets for the cPre-seq aggregator: CN03, WB, TC, BC, and NW.",
"Regarding the other four models, we use the seq aggregator for the larger-context training and the matched datasets are: WB , TC , and BC .",
"Results: Tab. 3 shows the relative improvement of larger-context training on five modern top-scoring models in the NER task.",
"We observe that the larger-context training has achieved consistent gains on all chosen datasets, which holds for both seq and cPre-seq aggregators.",
"Notably, larger-context training achieves a sharp improvement on WB, which holds for all five top-scoring models.",
"For example, with the help of larger-context training, the performance improves significantly using Akbik et al. (2018), and by 7.18 F1 score using Luo et al. (2020).",
"This suggests that modern top-scoring NER systems can also benefit from larger-context training.",
"Our work touches on the following research topics for tagging tasks.",
"Sentence-level Tagging Existing works have achieved impressive performance at sentence-level tagging by extensive structural explorations with different types of neural components.",
"Regarding sentence encoders, recurrent neural nets (Huang et al., 2015; Chiu and Nichols, 2015; Ma and Hovy, 2016; Lample et al., 2016; Li et al., 2019; Lin et al., 2020) and convolutional neural nets (Strubell et al., 2017; Yang et al., 2018; Chen et al., 2019; Fu et al., 2020a) were widely used, while transformers were also studied to obtain sentential representations (Yan et al., 2019; Yu et al., 2020).",
"We originally aimed to select more (10) systems but suffered from reproducibility problems (Pineau et al., 2020), even after contacting the first authors.",
"Unlike most works that view NER as a sequence labeling task, some recent works consider it a span classification task (Li et al., 2019; Jiang et al., 2019; Mengge et al., 2020; Ouchi et al., 2020).",
"To capture morphological information, some previous works introduced character- or subword-aware encoders with unsupervised pre-trained knowledge (Peters et al., 2018; Akbik et al., 2018; Devlin et al., 2018; Akbik et al., 2019; Yang et al., 2019; Lan et al., 2019).",
"Document-level Tagging: Document-level tagging introduces more contextual features to improve tagging performance.",
"Some early works introduced non-local information (Finkel et al., 2005; Krishnan and Manning, 2006) to enhance traditional machine learning methods (e.g., CRF (Lafferty et al., 2001)) and achieved impressive results.",
"Qian et al. (2018); Wadden et al. (2019) built graph representation based on the broad dependencies between words and sentences.",
"Luo et al. (2020) proposed to use a memory network to record the document-aware information.",
"Besides, document-level features were introduced in different domains to alleviate label inconsistency problems, such as news NER (Hu et al., 2020, 2019), chemical NER (Luo et al., 2018), disease NER (Xu et al., 2019), and Chinese patents (Li and Xue, 2014, 2016).",
"Compared with these works, instead of proposing a novel model, we focus on investigating when and why the larger-context training, as a general strategy, can work.",
"Interpretability and Robustness of Sequence Labeling Systems: Recently, there is a popular trend that aims to",
"(i) perform glass-box analyses of sequence labeling systems (Fu et al., 2020b; Agarwal et al., 2020), understanding their generalization ability and quantifying their robustness (Fu et al., 2020c),",
"(ii) perform interpretable evaluation of them (Fu et al., 2020a), making it possible to know what a system is good or bad at and where one system outperforms another,",
"(iii) perform reliable analyses (Ye et al., 2021) for test sets with fewer samples.",
"Our work is based on the technique of interpretable evaluation, which provides a convenient way for us to diagnose different systems.",
"We summarize the main observations from our experiments and try to provide preliminary answers to our proposed research questions:",
"(i) How do different integration ways of larger-context information influence the system's performance?",
"Overall, introducing larger-context information brings gains regardless of how it is introduced (e.g., seq, graph).",
"Particularly, larger-context training with seq aggregator can achieve better performance at lower training cost compared with graph and bow aggregators (Sec. 4.1).",
"(ii) Can the larger-context training easily play to its strengths with the help of contextualized pre-trained models?",
"Yes for all datasets on NER, Chunk, and POS tasks.",
"By contrast, for CWS tasks, the aggregator without BERT (e.g., seq ) can achieve better improvement (Sec. 4.2).",
"(iii) Where do the gains of larger-context training come from?",
"And how do different characteristics of datasets affect the amount of gain?",
"Although the source of gains is dataset- and aggregator-dependent, a relatively consistent observation is that text spans with lower label consistency and higher OOV density benefit substantially from larger-context training (Sec. 5.3).",
"Regarding different datasets, diverse aggregators are recommended in Sec. 5.4.",
"We would like to thank the anonymous reviewers for their valuable comments.",
"This work was supported by China National Key R & D Program (No.2018YFC0831105)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"result",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"The advent of large pre-trained language models has given rise to rapid progress in the field of Natural Language Processing (NLP).",
"While the performance of these models on standard benchmarks has scaled with size, compression techniques such as knowledge distillation have been key in making them practical.",
"We present MATE-KD, a novel text-based adversarial training algorithm which improves the performance of knowledge distillation.",
"MATE-KD first trains a masked language model-based generator to perturb text by maximizing the divergence between teacher and student logits.",
"Then using knowledge distillation a student is trained on both the original and the perturbed training samples.",
"We evaluate our algorithm, using BERT-based models, on the GLUE benchmark and demonstrate that MATE-KD outperforms competitive adversarial learning and data augmentation baselines.",
"On the GLUE test set, our 6-layer RoBERTa-based model outperforms BERT-Large.",
"Transformers (Vaswani et al., 2017) and transformer-based Pre-trained Language Models (PLMs) (Devlin et al., 2019) are ubiquitous in applications of NLP.",
"They are highly parallelizable and their performance scales well with an increase in model parameters and data.",
"Increasing model parameters depends on the availability of computational resources and PLMs are typically trained on unlabeled data which is cheaper to obtain.",
"Recently, the trillion parameter mark has been breached for PLMs (Fedus et al., 2021) amid serious environmental concerns (Strubell et al., 2019).",
"However, without a change in our current training paradigm, training larger models may be unavoidable (Li et al., 2020).",
"(Equal contribution. Work done during an internship at Huawei Noah's Ark Lab.)",
"In order to deploy these models for practical applications, such as virtual personal assistants, recommendation systems, and e-commerce platforms, model compression is necessary.",
"Knowledge Distillation (KD) (Bucilua et al., 2006; Hinton et al., 2015) is a simple, yet powerful knowledge transfer algorithm which is used for neural model compression (Jiao et al., 2019; Sanh et al., 2019), ensembling (Hinton et al., 2015) and multi-task learning (Clark et al., 2019).",
"In NLP, KD for compression has received renewed interest in the last few years.",
"It is one of the most widely researched algorithms for the compression of transformer-based PLMs (Rogers et al., 2020).",
"One key feature which makes KD attractive is that it only requires access to the teacher's output or logits and not the weights themselves.",
"Therefore, if a trillion parameter model resides on the cloud, an API level access to the teacher's output is sufficient for KD.",
"Consequently, the algorithm is architecture agnostic, i.e., it can work for any deep learning model and the student can be a different model from the teacher.",
"Recent works on KD for transfer learning with PLMs extend the algorithm in two main directions.",
"The first is towards model distillation (Sun et al., 2019; Wang et al., 2020; Jiao et al., 2019) i.e. distilling the intermediate weights such as the attention weights or the intermediate layer output of transformers.",
"The second direction is towards curriculum-based or progressive KD (Sun et al., 2020; Mirzadeh et al., 2019; Jafari et al., 2021) where the student learns one layer at a time or from an intermediary teacher, known as a teacher assistant.",
"While these works have shown accuracy gains over standard KD, they have come at the cost of architectural assumptions, least of them a common architecture between student and teacher, and greater access to teacher parameters and intermediate outputs.",
"Another issue is that the decision to distill one teacher layer and to skip another is arbitrary.",
"Still, the teacher typically demonstrates better generalization.",
"We are interested in KD for model compression and study the use of adversarial training (Goodfellow et al., 2014) to improve student accuracy using just the logits of the teacher, as in standard KD.",
"Specifically, our work makes the following contributions: We present a text-based adversarial algorithm, MATE-KD, which increases the accuracy of the student model using KD.",
"Our algorithm only requires access to the teacher's logits and thus keeps the teacher and student architecture independent.",
"We evaluate our algorithm on the GLUE (Wang et al., 2018) benchmark and demonstrate improvement over competitive baselines.",
"On the GLUE test set, we achieve a score of 80.9, which is higher than BERT-Large.",
"We also demonstrate improvement on out-of-domain (OOD) evaluation.",
"We can summarize the knowledge distillation loss, L, as follows:",
"L_CE = H_CE(y, S(X)); L_KD = T^2 · D_KL( σ(z_t(X) / T), σ(z_s(X) / T) ); L = (1 − λ) L_CE + λ L_KD, (1)",
"where H_CE represents the cross entropy between the true label y and the student network prediction S(X) for a given input X, D_KL is the KL divergence between the teacher and student predictions softened using the temperature parameter T, z(X) is the network output before the softmax layer (logits), and σ(·) indicates the softmax function.",
"The term λ in the above equation is a hyper-parameter which controls the relative contribution of the cross-entropy and KD losses.",
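The standard KD loss described above can be sketched numerically as follows (a pure-Python toy illustration, not the authors' code; a real implementation would typically use PyTorch tensors, and the example logits are made up):

```python
# Sketch of the standard KD loss:
# L = (1 - alpha) * CE(y, student) + alpha * T^2 * KL(teacher_T || student_T),
# where both distributions are softened with temperature T.
import math

def softmax(logits, T=1.0):
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(y, probs):
    """CE against a one-hot label with index y."""
    return -math.log(probs[y])

def kl_div(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def kd_loss(y, teacher_logits, student_logits, T=2.0, alpha=0.5):
    ce = cross_entropy(y, softmax(student_logits))
    kd = (T ** 2) * kl_div(softmax(teacher_logits, T),
                           softmax(student_logits, T))
    return (1 - alpha) * ce + alpha * kd

loss = kd_loss(y=0, teacher_logits=[3.0, 1.0, 0.5],
               student_logits=[2.0, 1.5, 0.5])
print(loss)  # a positive scalar; 0 only when student matches both targets
```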
"Patient KD (Sun et al., 2019) introduces an additional loss to KD which distills the intermediate layer information onto the student network.",
"Due to a difference in the number of student and teacher layers they propose either skipping alternate layers or distilling only the last few layers.",
"TinyBERT (Jiao et al., 2019) applies embedding distillation and intermediate layer distillation which includes hidden state distillation and attention weight distillation.",
"Although it achieves strong results on the GLUE benchmark, this approach is infeasible for very large teachers.",
"MiniLM (Wang et al., 2020) proposed an interesting alternative whereby they distill the key, query and value matrices of the final layer of the teacher.",
"Adversarial examples are small perturbations to training samples indistinguishable to humans but enough to produce misclassifications by a trained neural network.",
"Goodfellow et al. (2014) showed that adding these examples to the training set can make a neural network model robust to perturbations.",
"Miyato et al. (2016) adapt adversarial training to text classification and improve performance on a few supervised and semi-supervised text classification tasks.",
"In NLP, adversarial training has surprisingly been shown to improve generalization as well (Cheng et al., 2019; Zhu et al., 2019).",
"Cheng et al. (2019) study machine translation and propose making the model robust to both source and target perturbations, generated by swapping the embedding of a word with that of its synonym.",
"They model small perturbations by considering word swaps which cause the smallest increase in the loss gradient.",
"They achieve a higher BLEU score on Chinese-English and English-German translation compared to the baseline.",
"Zhu et al. (2019) propose a novel adversarial training algorithm, FreeLB, to make gradient-based adversarial training efficient by updating both embedding perturbations and model parameters simultaneously during the backward pass of training.",
"They show improvements on multiple language models on the GLUE benchmark.",
"Embedding perturbations are attractive because they produce stronger adversaries (Zhu et al., 2019) and keep the system end-to-end differentiable as the embeddings are continuous.",
"The salient features of adversarial training for NLP are",
"a) a minimax formulation where adversarial examples are generated to maximize a loss function and the model is trained to minimize the loss function and",
"b) a way of keeping the perturbations small such as a norm-bound on the gradient (Zhu et al., 2019) or replacing words by their synonyms (Cheng et al., 2019).",
"If these algorithms are adapted to KD one key challenge is the embedding mismatch between the teacher and student.",
"Even if the embedding size is the same, the student embedding needs to be frozen to match the teacher embedding and freezing embeddings typically leads to lower performance.",
"If we adapt adversarial training to KD, one key advantage is that access to the teacher distribution relaxes the requirement of generating label preserving perturbations.",
"These considerations have prompted us to design an adversarial algorithm where we perturb the actual text instead of the embedding.",
"Rashid et al. (2020) also propose a text-based adversarial algorithm for the problem of zero-shot KD (where the teacher's training data is unavailable), but their generator, instead of perturbing text, generates new samples and requires additional losses and pre-training to work well.",
"One of the first works on BERT compression (Tang et al., 2019) used KD and proposed data augmentation using heuristics such as part-of-speech guided word replacement.",
"They demonstrated improvement on three GLUE tasks.",
"One limitation of this approach is that the heuristics are task specific.",
"Jiao et al. (2019) present an ablation study in their work whereby they demonstrate a strong contribution of data augmentation to their KD algorithm performance.",
"They augment the data by randomly selecting a few words of a training sentence and replacing them with words with the closest embedding under cosine distance.",
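The embedding-based replacement described here can be sketched as follows (a toy illustration of the idea, not TinyBERT's actual code; the 2-d embeddings are made up):

```python
# Sketch of embedding-based augmentation: replace a word with its
# nearest neighbour under cosine similarity in an embedding table.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_word(word, embeddings):
    """Most cosine-similar word in the table, excluding the word itself."""
    target = embeddings[word]
    best, best_sim = None, -2.0
    for w, vec in embeddings.items():
        if w == word:
            continue
        sim = cosine(target, vec)
        if sim > best_sim:
            best, best_sim = w, sim
    return best

embeddings = {          # toy 2-d embeddings for illustration only
    "good":  [0.9, 0.1],
    "great": [0.85, 0.15],
    "bad":   [-0.9, 0.2],
}
print(nearest_word("good", embeddings))  # great
```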
"Our adversarial learning algorithm can be interpreted as a data augmentation algorithm, but instead of a heuristic approach we propose a principled end-to-end differentiable augmentation method based on adversarial learning.",
"Khashabi et al. (2020) presented a data augmentation technique for question answering whereby they took seed questions and asked humans to perturb only a few tokens to generate new ones.",
"The human annotators could modify the label if needed.",
"They demonstrated improved generalization and robustness with the augmented data.",
"We will demonstrate that our algorithm is built on similar principles but does not require humans in the loop.",
"Instead of human annotators to modify the labels we use the teacher.",
"We propose an algorithm that involves co-training and deploy an adversarial text generator while training a student network using KD.",
"Figure 1 gives an illustration of our architecture.",
"We can frame our technique in a minimax regime such that, in the maximization step of each iteration, we feed the generator a training sample with a few of the tokens replaced by masks.",
"We fix the rest of the sentence and replace the masked tokens with the generator output to construct a pseudo training sample X (cid:48) .",
"This pseudo sample is fed to both the teacher and the student models and the generator is trained to maximize the divergence between the teacher and the student.",
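The construction of the pseudo sample X' can be sketched as follows (a minimal sketch; `toy_generator` is a hypothetical stand-in for the generator G, and the tokens are made up):

```python
# Sketch of pseudo-sample construction: keep unmasked tokens fixed and
# substitute each [MASK] with the generator's prediction at that position.

def fill_masks(masked_tokens, generator_predict, mask_token="[MASK]"):
    out = []
    for i, tok in enumerate(masked_tokens):
        if tok == mask_token:
            out.append(generator_predict(masked_tokens, i))
        else:
            out.append(tok)  # the rest of the sentence is kept fixed
    return out

# Toy "generator": predicts a fixed word per masked position.
def toy_generator(tokens, position):
    return {1: "film", 3: "quite"}.get(position, "the")

masked = ["the", "[MASK]", "was", "[MASK]", "good"]
print(fill_masks(masked, toy_generator))
# ['the', 'film', 'was', 'quite', 'good']
```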
"We present an example of the masked generation process in Figure 2.",
"The student is trained during the minimization step.",
"The generator is trained to generate pseudo samples by maximizing the following loss function:",
"max_φ L_G(φ) = D_KL( T( G_φ(X^m) ), S( G_φ(X^m) ) ), (2)",
"where D_KL is the KL divergence, G_φ(·) is the text generator network with parameters φ, T(·) and S(·) are the teacher and student networks respectively, and X^m is a randomly masked version of the input X = [x_1, x_2, ..., x_n] with n tokens.",
"Here, unif(0, 1) represents the uniform distribution, and the Mask(·) function masks the tokens of inputs sampled from the data distribution D with probability p.",
"The term p can be treated as a hyper-parameter in our technique.",
"In summary, for each training sample, we randomly mask some tokens according to samples drawn from the uniform distribution and the threshold value p.",
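The random masking step can be sketched as follows (a minimal sketch; the tokens are made up, and `random.random()` plays the role of unif(0, 1)):

```python
# Sketch of the masking step: each token is independently replaced by
# [MASK] with probability p (the paper uses p = 0.3).
import random

def mask_tokens(tokens, p=0.3, mask_token="[MASK]", seed=None):
    rng = random.Random(seed)
    return [mask_token if rng.random() < p else t for t in tokens]

tokens = ["the", "movie", "was", "surprisingly", "good"]
masked = mask_tokens(tokens, p=0.3, seed=0)
print(masked)  # some tokens replaced by [MASK], the rest kept as-is
```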
"Then in the forward pass, the masked sample, X m , is fed to the generator to obtain the output pseudo text based on the generator predictions of the mask tokens.",
"The generator needs to output a one-hot representation, but using an argmax inside the generator would lead to non-differentiability.",
"Instead, we apply the Gumbel-Softmax (Jang et al., 2016), which is an approximation to sampling from the argmax.",
"Using the straight-through estimator (Bengio et al., 2013), we can still apply argmax in the forward pass and obtain the text X' from the network outputs: X' = G_φ(X^m), with the forward pass computed as argmax( Gumbel( z(X^m) ) ), (4) where Gumbel(z_i) = exp( (log(z_i) + g_i) / τ ) / Σ_{j=1}^{K} exp( (log(z_j) + g_j) / τ ), (5) with g_i ~ Gumbel(0, 1), and z(·) returns the logits produced by the generator for a given input.",
"τ is the temperature in Equation 5.",
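The Gumbel-Softmax with a straight-through argmax can be sketched as follows (a pure-Python toy illustration; a real implementation would use `torch.nn.functional.gumbel_softmax(logits, tau, hard=True)`, and here the Gumbel noise is added directly to the logits rather than to log-probabilities):

```python
# Sketch of Gumbel-Softmax sampling with a straight-through argmax
# in the forward pass.
import math, random

def gumbel_softmax(logits, tau=1.0, rng=random):
    # g_i ~ Gumbel(0, 1) via -log(-log(u)), u ~ unif(0, 1)
    g = [-math.log(-math.log(rng.random())) for _ in logits]
    scores = [(z + gi) / tau for z, gi in zip(logits, g)]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]      # soft sample (differentiable)

def straight_through(soft):
    """Forward pass: one-hot argmax; backward would use the soft probs."""
    k = max(range(len(soft)), key=lambda i: soft[i])
    return [1.0 if i == k else 0.0 for i in range(len(soft))]

logits = [2.0, 0.5, 0.1]  # generator logits over a toy 3-word vocabulary
soft = gumbel_softmax(logits, tau=1.0)
hard = straight_through(soft)
print(sum(hard))  # 1.0 -- a one-hot token choice in the forward pass
```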
"In the minimization step, the student network is trained to minimize the gap between the teacher and student predictions and to match the hard labels from the training data by minimizing the following loss:",
"min_θ L_MATE-KD(θ) = (1/3) L_CE(θ) + (1/3) L_KD(θ) + (1/3) L_ADV(θ). (7)",
"In Equation 7, the terms L_KD and L_CE are the same as in Equation 1; L_KD(θ) and L_ADV(θ) are used to match the student with the teacher, and L_CE(θ) is used for the student to follow the ground-truth labels y.",
"Bear in mind that our L_MATE-KD(θ) loss differs from the regular KD loss in two aspects: first, it has the additional adversarial loss L_ADV, which minimizes the gap between the predictions of the student and the teacher on the masked adversarial text samples X' generated in the maximization step; second, we no longer have the weight term λ from KD (i.e., we consider equal weights for the three loss terms in L_MATE-KD).",
"The rationale behind generating partially masked adversarial texts instead of generating adversarial texts from scratch (that is equivalent to masking the input of the text generator entirely) is three-fold:",
"1. Partial masking generates more realistic sentences than generating them from scratch when the generator is trained only to increase teacher-student divergence.",
"We present a few generated sentences in Section 4.6.",
"2. Generating text from scratch increases the chance of generating OOD data.",
"Feeding OOD data to the KD algorithm leads to matching the teacher and student functions across input domains that the teacher is not trained on.",
"3. By masking and changing only a few tokens of the original text, we constrain the amount of perturbation as is required for adversarial training.",
"In our MATE-KD technique, we can tweak p to control the divergence from the data distribution and find the sweet spot which gives rise to the maximum improvement for KD.",
"We also present an ablation on the effect of this parameter on downstream performance in section 4.5.",
"We evaluated MATE-KD on all nine datasets of the General Language Understanding Evaluation (GLUE) (Wang et al., 2018) benchmark which include classification and regression.",
"These datasets can be broadly divided into 3 families of problems.",
"Single-sentence tasks, which include linguistic acceptability (CoLA) and sentiment analysis (SST-2).",
"Similarity and paraphrasing tasks, which include paraphrasing (MRPC and QQP) and a regression task (STS-B).",
"Inference tasks, which include Natural Language Inference (MNLI, WNLI, RTE) and Question Answering (QNLI).",
"We evaluate our algorithm on two different setups.",
"In the first, the teacher model is RoBERTa-Large (Liu et al., 2019) and the student is initialized with the weights of DistilRoBERTa (Sanh et al., 2019).",
"RoBERTa LARGE consists of 24 layers with a hidden dimension of 1024 and 16 attention heads and a total of 355 million parameters.",
"We use the pre-trained model from Huggingface (Wolf et al., 2019).",
"The student consists of 6 layers, 768 hidden dimension, 8 attention heads and 82 million parameters.",
"Both models have a vocabulary size of 50,265, extracted using the Byte Pair Encoding (BPE) (Sennrich et al., 2016) tokenization method.",
"In our second setup, the teacher model is BERT-Base (Devlin et al., 2019) and the student model is initialized with the weights of DistilBERT, which consists of 6 layers with a hidden dimension of 768 and 8 attention heads.",
"The pre-trained models are taken from the authors' release.",
"The teacher and the student have 110M and 66M parameters respectively, with a vocabulary size of 30,522 extracted using BPE.",
"Hyper-parameters We fine-tuned the RoBERTa student model and picked the best checkpoint that gave the highest score on the dev set of GLUE.",
"These hyper-parameters were fixed for the GLUE test submissions as well as the BERT experiments.",
"We used the AdamW (Loshchilov and Hutter, 2017) optimizer with the default values.",
"In addition, we used a linear decay learning rate scheduler with no warmup steps.",
"We set the masking probability p to be 0.3.",
"Additionally, we set the value n G to 10 and n S to 100.",
"The learning rate, number of epochs, and other hyper-parameters are presented in Table 8 of Appendix A.",
"Hardware Details: We trained all models using a single NVIDIA V100 GPU.",
"We used mixed-precision training (Micikevicius et al., 2018) to expedite the training procedure.",
"All experiments were run using the PyTorch 1 framework.",
"Table 1 presents the results of MATE-KD on the GLUE dev set.",
"Even though the datasets have different evaluation metrics, we present the average of all scores as well, which is used to rank the submissions to GLUE.",
"Our first baseline is the fine-tuned DistilRoBERTa and then we compare with KD, FreeLB, FreeLB plus KD, and TinyBERT (Jiao et al., 2019) data augmentation plus KD.",
"We observe that FreeLB (Zhu et al., 2019) significantly improves the fine-tuned student by around 1.2 points on average.",
"However, when we apply both FreeLB + KD, we do not see any further improvement whereas applying KD alone improves 1 https://pytorch.org/ Method CoLA SST-2 MRPC STS-B QQP MNLI QNLI RTE Score RoBERTa Large (teacher) 68.1 96.4 91.9 92.3 91.5 90.2 94.6 86.3 85.28 DistilRoBERTa (student) 56.6 92.7 89.5 87.2 90.8 84.1 91.3 65.7 78.78 Student + FreeLB 58.1 93.1 90.1 88.8 90.9 84.0 91.0 67.8 80.01 Student + FreeLB + KD 58.1 93.2 90.5 88.6 91.2 83.7 90.8 68.2 80.06 Student + KD 60.9 92.5 90.2 89.0 91.6 84.1 91.3 71.1 80.77 Student + TinyBERT Aug + KD 61.3 93.3 90.4 88.6 91.7 84.4 91.6 72.5 81.12 Student + MATE-KD (Ours) 65.9 94.1 91.9 90.4 91.9 85.8 92.5 75.0 82.64 Table 1: Dev Set results using DistilRoBERTa as the student on the GLUE benchmark.",
"the score by about 2 points.",
"This is so because FreeLB relies on the model (student) output rather than the teacher output to generate adversarial perturbation and therefore cannot benefit from KD.",
"As previously discussed, FreeLB relies on embedding perturbation and in order to generate the teacher output on the perturbed student, both the embeddings need to be tied together, which is infeasible due to the size and training requirements.",
"We also compared against the data augmentation algorithm of TinyBERT.",
"We ran their code to generate the augmented data offline.",
"Although they augment the data about 20 times depending on the GLUE task, we observed poor results if we use all this data to fine-tune with KD.",
"We only generated 1x augmented data and saw an average improvement of 0.35 score over KD.",
"MATE-KD achieves the best result among the student models on all GLUE tasks and achieves an average improvement of 1.87 over just KD.",
"We also generated the same number of adversarial samples as the training data.",
"We present the results on the test set of GLUE on Table",
"2. We list the number of parameters for each model.",
"The results of BERTBASE , BERTLARGE (Devlin et al., 2019), TinyBERT and MobileBERT (Sun et al., 2020) are taken from the GLUE leaderboard 2 .",
"The KD models have RoBERTa Large , fine-tuned without ensembling as the teacher.",
"TinyBERT and MobileBERT are the current state-of-the-art 6 layer transformer models on the GLUE leaderboard.",
"We include them in this comparison although their teacher is BERTBASE as opposed to RoBERTa Large .",
"We make the case that one reason we can train with a larger and more powerful teacher is that we only require the logits of the teacher while training.",
"Most of the works in the literature proposing intermediate layer distillation (Jiao et al., 2019; Sun et al., 2020, 2019) are trained 2 https://gluebenchmark.com/leaderboard on 12 layer BERT teachers.",
"As PLMs get bigger in size, feasible approaches to KD will involve algorithms which rely on only minimal access to teachers.",
"We apply a standard trick to boost the performance of STS-B and RTE, i.e., we initialize these models with the trained checkpoint of MNLI (Liu et al., 2019).",
"This was not done for the dev results.",
"The WNLI score is the same for all the models and although, not displayed on the table, is part of the average score.",
"We make a few observations from this table.",
"Firstly, using KD a student with a powerful teacher can overcome a significant difference in parameters between competitive models.",
"Secondly, our algorithm significantly improves KD with an average 2 point increase on the unseen GLUE testset.",
"Our model is able to achieve state-of-the-art results for a 6 layer transformer model on the GLUE leaderboard.",
"We also evaluate our algorithm using BERTBASE as teacher and DistilBERT as student on GLUE benchmark.",
"WNLI results are the same for all and they are used to calculate the average.",
"We compare against the teacher, student, and KD plus TinyBERT augmentation.",
"Here, remarkably MATE-KD can beat the teacher performance on average.",
"On the two largest datasets in GLUE, QQP and MNLI, we beat and match the teacher performance respectively.",
"We observe that MATE-KD outperforms its competitors when both the teacher is twice the size and four times the size of the student.",
"This may be because the algorithm generates adversarial examples based on the teacher's distribution.",
"A well designed adversarial algorithm can help us probe parts of the teacher's distribution not spanned by the training data leading to better generalization.",
"It has been shown that strong NLU models tend to learn spurious surface level patterns from the dataset (Poliak et al., 2018; Gururangan et al., 2018) and may perform poorly on carefully constructed OOD datasets.",
"In Table 4 we present the evaluation of MATE-KD (RoBERTa-based) trained on MNLI and QQP on the HANS (McCoy et al., 2019) and the PAWS (Zhang et al., 2019) evaluation sets respectively.",
"We use the same model checkpoint as the one presented in Table 1 and compare against DistilRoBERTa.",
"We observe that MATE-KD improves the baseline performance on both evaluation datasets.",
"The performance increase on HANS is larger.",
"We can conclude that the algorithm improvements are not due to learning spurious correlations and biases in the dataset.",
"Table 5 presents the contribution of the generator and adversarial learning to MATE-KD.",
"We first present the result of MATE-KD on all the GLUE datasets (except WNLI) and compare against the effect of removing the adversarial training and then the generator altogether.",
"When we remove the adversarial training, we essentially remove the maximization step and do not train the generator.",
"The generator in this setting is a pre-trained masked language model.",
"In the minimization step, we still generate pseudo samples and apply all losses.",
"The setting where we remove the generator is akin to a simple KD.",
"We observe that the generator improves KD by an average of 1.3 and the adversarial training increases the score further by 0.6.",
"Our algorithm does not require the loss interpolation weight of KD but instead relies on one additional parameter, , which is the probability of masking a given token.",
"We present the effect of changing in Table 7 on MNLI and RTE dev set results fixing all other hyper-parameters.",
"We selected MNLI and RTE because they are part of Natural Language Inference, which is one of the hardest tasks on GLUE.",
"Moreover, in the RoBERTa experiments we see the largest drop in student scores for these two datasets.",
"We can observe that for MNLI the best result is for 30% followed by 20% and for RTE the best choice is 40% followed by 30%.",
"This corresponds to the heuristic based data augmentation works where they typically modify tokens with a 30% to 40% probability.",
"We set this parameter to 30% for all the experiments and did not tune this for each dataset or each architecture.",
"We present a few selected samples that our generator produced during training for the SST-2 dataset on table 6.",
"SST-2 is a binary sentiment analysis dataset.",
"The data consist of movie reviews and is both at the phrase and sentence level.",
"We observe that we only modify a few tokens in the generated text.",
"However, one of three things happens if the text is semantically plausible.",
"Either the generated sentence keeps the same sentiment as in Examples 2 and 3, or it changes the sentiment as in Examples 1 and 4 or the text has ambiguous sentiment as in Example 5.",
"We can use all of these for training since we do not rely on the original label but obtain the teacher's output.",
"We have presented MATE-KD, a novel text-based adversarial training algorithm which improves the student model in KD by generating adversarial examples while accessing the logits of the teacher",
"only.",
"This approach is architecture agnostic and can be easily adapted to other applications of KD such as model ensembling and multi-task learning.",
"We demonstrate the need for an adversarial training algorithm for KD based on text rather than embedding perturbation.",
"Moreover, we demonstrate the importance of masking for our algorithm.",
"One key theme that we have presented in this work is that as PLMs inevitably increase in size and number of parameters, techniques that rely on access to the various layers and intermediate parameters of the teacher will be more difficult to train.",
"In contrast, algorithms which are well-motivated and require minimal access to the teacher may learn from more powerful teachers and would be more useful.",
"An example of such an algorithm is the KD algorithm itself.",
"Future work will consider",
"a) using label information and a measure of semantic quality to filter the generated sentences",
"b) exploring the application of our algorithm to continuous data such as speech and images and",
"c) exploring other applications of KD.",
"We thank MindSpore 3 , a new deep learning computing framework, for the partial support of this work",
"3 https://www.mindspore.cn/",
"Our research primarily deals with deploying high quality NLP applications to a wide audience around the globe.",
"We contend that these technologies can simplify many of our mundane tasks and free up our time to pursue more pleasurable work."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain"
] |
[
"Static word embeddings that represent words by a single vector cannot capture the variability of word meaning in different linguistic and extralinguistic contexts.",
"Building on prior work on contextualized and dynamic word embeddings, we introduce dynamic contextualized word embeddings that represent words as a function of both linguistic and extralinguistic context.",
"Based on a pretrained language model (PLM), dynamic contextualized word embeddings model time and social space jointly, which makes them attractive for a range of NLP tasks involving semantic variability.",
"We highlight potential application scenarios by means of qualitative and quantitative analyses on four English datasets.",
"Over the last decade, word embeddings have revolutionized the field of NLP.",
"Traditional methods such as LSA (Deerwester et al., 1990), word2vec (Mikolov et al., 2013a,b), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017) compute static word embeddings, i.e., they represent words as a single vector.",
"From a theoretical standpoint, this way of modeling lexical semantics is problematic since it ignores the variability of word meaning in different linguistic contexts (e.g., polysemy) as well as different extralinguistic contexts (e.g., temporal and social variation).",
"The first shortcoming was addressed by the introduction of contextualized word embeddings that represent words as vectors varying across linguistic contexts.",
"This allows them to capture more complex characteristics of word meaning, including polysemy.",
"Contextualized word embeddings are widely used in NLP, constituting the semantic backbone of pretrained language models (PLMs) such as ELMo (Peters et al., 2018a), BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019), XLNet ( k ) ij e ( k ) ij e ( k ) d Figure 1: Dynamic contextualized word embeddings.",
"(Yang et al., 2019), ELECTRA (Clark et al., 2020), and T5 (Raffel et al., 2020).",
"A concurrent line of work focused on the second shortcoming of static word embeddings, resulting in various types of dynamic word embeddings.",
"Dynamic word embeddings represent words as vectors varying across extralinguistic contexts, in particular time (e.g., Rudolph and Blei, 2018) and social space (e.g., Zeng et al., 2018).",
"In this paper, we introduce dynamic contextualized word embeddings that combine the strengths of contextualized word embeddings with the flex-ibility of dynamic word embeddings.",
"Dynamic contextualized word embeddings mark a departure from existing contextualized word embeddings (which are not dynamic) as well as existing dynamic word embeddings (which are not contextu-alized).",
"Furthermore, as opposed to all existing dynamic word embedding types, they represent time and social space jointly.",
"While our general framework for training dynamic contextualized word embeddings is model-agnostic (Figure 1), we present a version using a PLM (BERT) as the contextualizer, which allows for an easy integration within existing architectures.",
"Dynamic contextualized word embeddings can serve as an analytical tool (e.g., to track the emergence and spread of semantic changes in online communities) or be employed for downstream tasks (e.g., to build temporally and socially aware text classification models), making them beneficial for various areas in NLP that face semantic variability.",
"We illustrate application scenarios by performing exploratory experiments on English data from ArXiv, Ciao, Reddit, and YELP.",
"Contributions.",
"We introduce dynamic contextualized word embeddings that represent words as a function of both linguistic and extralinguistic context.",
"Based on a PLM, dynamic contextualized word embeddings model time and social space jointly, which makes them attractive for a range of NLP tasks.",
"We showcase potential applications by means of qualitative and quantitative analyses.",
"1 2 Related Work 2.1 Contextualized Word Embeddings The distinction between the non-contextualized core meaning of a word and the senses that are realized in specific linguistic contexts lies at the heart of lexical-semantic scholarship (Geeraerts, 2010), going back to at least Paul (1880).",
"In NLP, this is reflected by contextualized word embeddings that map type-level representations to token-level representations as a function of the linguistic context (McCann et al., 2017).",
"As part of PLMs (Peters et al., 2018a; Devlin et al., 2019; Radford et al., 2019; Yang et al., 2019; Clark et al., 2020; Raffel et al., 2020), contextualized word embeddings have led to substantial performance gains on a variety of tasks compared to static word embeddings that only have type-level representations (Deerwester et al., 1990; Mikolov et al., 2013a,b; Pennington et al., 2014; Bojanowski et al., 2017).",
"Since their introduction, several studies have analyzed the linguistic properties of contextualized word embeddings (Peters et al., 2018b; Goldberg, 2019; Hewitt and Manning, 2019; Jawahar et al., 2019; Lin et al., 2019; Liu et al., 2019; Tenney et al., 2019; Edmiston, 2020; Ettinger, 2020; Hof-1 We make our code publicly available at https:// github.com/valentinhofmann/dcwe . mann et al., 2020; Rogers et al., 2020).",
"Regarding lexical semantics, this line of research has shown that contextualized word embeddings are more context-specific in the upper layers of a contextualizer (Ethayarajh, 2019; Mickus et al., 2020; Vulic et al., 2020) and represent different word senses as separated clusters (Peters et al., 2018a; Coenen et al., 2019; Wiedemann et al., 2019).",
"The meaning of a word can also vary across extralinguistic contexts such as time (Bybee, 2015; Koch, 2016) and social space (Robinson, 2010, 2012; Geeraerts, 2018).",
"To capture these phenomena, various types of dynamic word embeddings have been proposed: diachronic word embeddings for temporal semantic change (Bamler and Mandt, 2017; Rosenfeld and Erk, 2018; Rudolph and Blei, 2018; Yao et al., 2018; Gong et al., 2020) and personalized word embeddings for social semantic variation (Zeng et al., 2017, 2018; Oba et al., 2019; Welch et al., 2020a,b; Yao et al., 2020).",
"Other studies have demonstrated that performance on a diverse set of tasks can be increased by including temporal (Jaidka et al., 2018; Lukes and Sgaard, 2018) and social information (Amir et al., 2016; Hamilton et al., 2016a; Yang et al., 2016; Yang and Eisenstein, 2017; Hazarika et al., 2018; Mishra et al., 2018; del Tredici et al., 2019b; Li and Goldwasser, 2019; Mishra et al., 2019).",
"The relevance of dynamic (specifically diachronic) word embeddings is also reflected by the emergence of lexical semantic change detection as an established task in NLP (Kutuzov et al., 2018; Schlechtweg et al., 2018; Tahmasebi et al., 2018; Dubossarsky et al., 2019; Schlechtweg et al., 2019; Asgari et al., 2020; Pomsl and Lyapin, 2020; Prazak et al., 2020; Schlechtweg and Schulte im Walde, 2020; Schlechtweg et al., 2020).",
"Besides dynamic word embeddings, many studies on lexical semantic change detection use methods based on static word embeddings (Kim et al., 2014; Kulkarni et al., 2015), e.g., the alignment of static word embedding spaces (Hamilton et al., 2016b).",
"However, such approaches come at the cost of modeling disadvantages (Bamler and Mandt, 2017).",
"Sociolinguistics has shown that temporal and social variation in language are tightly interwoven: innovations such as a new word sense in the case of lexical semantics spread through the language community along social ties (Milroy, 1980, 1992; Labov, 2001; Pierrehumbert, 2012).",
"However, most proposed dynamic word embedding types cannot capture more than one dimension of variation.",
"Recently, a few studies have taken first steps in this direction by using genre information within a Bayesian model of semantic change (Frermann and Lapata, 2016; Perrone et al., 2019) and including social variables in training diachronic word embeddings (Jawahar and Seddah, 2019).",
"In addition, to capture the full range of lexical-semantic variability, dynamic word embeddings should also be contextualized.",
"Crucially, while contextualized word embeddings have been used to investigate semantic change (Giulianelli, 2019; Hu et al., 2019; Giulianelli et al., 2020; Kutuzov and Giulianelli, 2020; Martinc et al., 2020a,b), the word embeddings employed in these studies are not dynamic, i.e., they represent a word in a specific linguistic context by the same contextualized word embedding independent of extralinguistic context or are fit to different time periods as separate models.",
"2 3 Model 3.1 Model Overview Given a sequence of words X = (cid:2) x (1) , . . . , x ( K ) (cid:3) and corresponding non-contextualized embeddings E = (cid:2) e (1) , . . . , e ( K ) (cid:3) , contextualizing language models compute the contextualized embedding of a particular word x ( k ) , h ( k ) , as a function c of its non-contextualized embedding, e ( k ) , and the non-contextualized embeddings of words in the left context X ( <k ) and the right context X ( >k ) , 3 h ( k ) = c (cid:16) e ( k ) , E ( <k ) , E ( >k ) (cid:17) .",
"Crucially, while h ( k ) is a token-level representation, e ( k ) is a type-level representation and is modeled as a simple embedding look-up.",
"Here, in order to take the variability of word meaning in different extralinguistic contexts into account, we depart from this practice and model e ( k ) as a function d that depends not only on the identity of x ( k ) but also on the social context s i and the temporal context t j in which the sequence X occurred, e ( k ) ij = d (cid:16) x ( k ) , s i , t j (cid:17) .",
"2 It is interesting to notice that contextualized word embeddings so far have performed worse than non-contextualized word embeddings on the task of lexical semantic change detection (Kaiser et al., 2020; Schlechtweg et al., 2020).",
"3 Some contextualizing language models such as GPT-2 (Radford et al., 2019) only operate on X ( <k ) .",
"Dynamic contextualized word embeddings are hence computed in two stages: words are first mapped to dynamic type-level representations by d and then to contextualized token-level representations by c (Figures 1 and 2).",
"This two-stage structure follows work in cognitive science and linguistics that indicates that extralinguistic information is processed before linguistic information by human speakers (Hay et al., 2006).",
"Since many words in the core vocabulary are semantically stable across social and temporal contexts, we place a Gaussian prior on e ( k ) ij , e ( k ) ij N (cid:16) e ( k ) , 1 a I (cid:17) , (3) where e ( k ) denotes a non-dynamic representation of x ( k ) .",
"where o ( k ) ij denotes the vector offset from x ( k ) 's non-dynamic embedding e ( k ) , which is stable across social and temporal contexts, to its dynamic embedding e ( k ) ij , which is specific to s i and t j .",
"The distribution of o ( k ) ij then follows a Gaussian with o ( k ) ij N (cid:0) 0 , 1 a I (cid:1) .",
"We enforce Equation 5 by including a regularization term in the objective function (Section 3.4).",
"We leverage a PLM for the function c , specifically BERT (Devlin et al., 2019).",
"Denoting with E ij the sequence of dynamic embeddings corresponding to X in s i and t j , the dynamic version of Equation 1 becomes h ( k ) ij = BERT (cid:16) e ( k ) ij , E ( <k ) ij , E ( >k ) ij (cid:17) .",
"We also use BERT, specifically its pretrained input embeddings, to initialize the non-dynamic embeddings e ( k ) , which are summed with the vector ( k )",
"offsets o ij (Equation 4) and fed into BERT.",
"Using a PLM for c has the advantage of making it easy to employ dynamic contextualized word embeddings for downstream tasks by adding a task-specific layer on top of the PLM.",
"We model the vector offset o ( k ) ij as a function of the word x ( k ) , which we represent by its non-dynamic embedding e ( k ) , as well as the social context s i , which we represent by a time-specific embedding s ij .",
"We use BERT's pretrained input embeddings for e ( k ) .",
"4 We combine these representations in a time-specific feed-forward network, o ( k ) ij = FFN j (cid:16) e ( k ) (cid:107) s ij (cid:17) , (7) where (cid:107) denotes concatenation.",
"To compute the social embedding s ij , we follow common practice in the computational social sciences and represent the social community as a graph G = ( S , E ) , where S is the set of social units s i , and E is the set of edges between them (Section 4).",
"We use a time-specific graph attention network (GAT) as proposed by Velickovic et al. (2018) to encode G , 5 s ij = GAT j ( s i , G ) .",
"To model the temporal drift of the dynamic embeddings e ( k ) ij , we follow previous work on dynamic word embeddings (Bamler and Mandt, 2017; Rudolph and Blei, 2018) and impose a random walk prior over o ( k ) ij , o ( k ) ij N (cid:16) o ( k ) ij (cid:48) , 1 w I (cid:17) , (9) 4 We also tried to learn separate embeddings in the dynamic component, but this led to worse performance.",
"with j (cid:48) = j 1 .",
"This type of Gaussian process is known as Ornstein-Uhlenbeck process (Uhlen-beck and Ornstein, 1930) and is commonly used to model time series (Roberts et al., 2013).",
"The random walk prior enforces that the dynamic embeddings e ( k ) ij change smoothly over time.",
"The combination with BERT makes dynamic contextualized word embeddings easily applicable to different tasks by adding a task-specific layer on top of the contextualizing component.",
"For training the model, the overall loss is L total = L task + L prior a + L prior w , (10) where L task is the task-specific loss, and L prior a and L prior w are the regularization terms that impose the anchoring and random walk priors on the type-level offset vectors, L prior a = a KK (cid:88) k =1 (cid:107) o ( k ) ij (cid:107) 22 (11) L prior w = w KK (cid:88) k =1 (cid:107) o ( k ) ij o ( k ) ij (cid:48) (cid:107) 22 .",
"It is common practice to set a (cid:28) w (Bamler and Mandt, 2017; Rudolph and Blei, 2018).",
"Here, we set a = 10 3 w , which reduces the number of tunable hyperparameters.",
"We place the priors only on frequent words in the vocabulary (Section 5.1), taking into account the observation that the vocabulary core constitutes the best basis for dynamic word embeddings (Hamilton et al., 2016b).",
"We fit dynamic contextualized word embeddings to four datasets with different linguistic, social, and temporal characteristics, which allows us to investigate factors impacting their utility.",
"Each dataset D consists of a set of texts (e.g., reviews) written by a set of social units S (e.g., users) over a sequence of time periods T (e.g., years).",
"Furthermore, the social units are connected by a set of edges E within a social network G .",
"Table 1 provides summary statistics of the four datasets.",
"ArXiv.",
"ArXiv is an open-access distribution ser-vice for scientific articles.",
"Recently, a dataset of all papers published on ArXiv with corresponding metadata was released.",
"6 For this study, we 6 https://www.kaggle.com/ Cornell-University/arxiv Linguistic Social Temporal Dataset |D| Unit | X | Unit |S| |E| d Unit |T | t 1 t |T | ArXiv 972,369 Abstract 118.10 Subject 535 5,165 19.34 3.48 .036 Year 20 [01/]2001 [10/]2020 Ciao 269,807 Review 684.68 User 10,880 129,900 18.20 3.65 .002 Year 12 [05/]2000 [09/]2011 Reddit 915,663 Comment 43.50 Subreddit 5,728 61,796 23.99 4.69 .005 Month 8 09/2019 04/2020 YELP 795,661 Review 151.59 User 5,203 223,254 45.17 2.83 .009 Year 10 [01/]2010 [12/]2019 Table 1: Dataset statistics.",
"use ArXiv's subject classes (e.g., cs.CL ) as social units and extract the abstracts of papers published between 2001 and 2020 for subjects with at least 100 publications in that time.",
"7 To create the network, we measure the overlap in authors between subject classes as the Jaccard similarity of corresponding author sets, resulting in a similarity matrix S .",
"Based on S , we define the adjacency matrix G of G , whose elements are G ij = (cid:6) S ij (cid:7) , (13) i.e., there is an edge between subject classes i and j if the Jaccard similarity of author sets is greater than .",
"We set to 0.01.",
"8 Ciao.",
"Ciao is a product review site on which users can mark explicit trust relations towards other users (e.g., if they find their reviews helpful).",
"A dataset containing reviews covering the time period from 2000 to 2011 has been made publicly available (Tang et al., 2012).",
"9 We use the trust relations to create a directed graph.",
"Since we also perform sentiment analysis on the dataset, we follow Yang and Eisenstein (2017) in converting the five-star rating range into two classes by discarding three-star reviews and treating four/five stars as positive and one/two stars as negative.",
"Reddit.",
"Reddit is a social media platform hosting discussions about a variety of topics.",
"It is divided into smaller communities, so-called subreddits, which have been shown to be highly conducive to linguistic dynamics (del Tredici and Fernandez, 2018; del Tredici et al., 2019a).",
"A full dump of pub-lic Reddit posts is available online.",
"10 We retrieve all comments between September 2019 and April 7 We treat subject class combinations passing the frequency threshold (e.g., cs.CL&cs.AI ) as individual units.",
"2020, which allows us to examine the effects of the rising Covid-19 pandemic on lexical usage patterns.",
"We remove subreddits with fewer than 10,000 comments in the examined time period and sample 20 comments per subreddit and month.",
"For each subreddit, we compute the set of users with at least 10 comments in the examined time period.",
"Based on this, we use the same strategy as for ArXiv to create a network based on user overlap.",
"YELP.",
"Similarly to Ciao, YELP is a product review site on which users can mark explicit friendship relations.",
"A subset of the data has been released online.",
"11 We use the friendship relations to create a directed graph between users.",
"Since we also use the dataset for sentiment analysis, we again discard three-star reviews and convert the five-star rating range into two classes.",
"The fact that the datasets differ in terms of their social and temporal characteristics allows us to examine which factors impact the utility of dynamic contextualized word embeddings.",
"We highlight, e.g., that the datasets differ in the nature of their social units, cover different time periods, and exhibit different levels of temporal granularity.",
"We randomly split all datasets into 70% training, 10% development, and 20% test.",
"We apply stratified sampling to make sure the model sees data from all time points during training.",
"See Appendix A.1 for details about data preprocessing.",
"We fit dynamic contextualized word embeddings to all four datasets, using BERTBASE (uncased) as the contextualizer and masked language modeling as the training objective (Devlin et al., 2019), i.e., we",
"add a language modeling head on top of BERT.",
"12 To estimate the goodness of fit, we measure masked language modeling perplexity and compare against finetuned (non-dynamic) contextualized word embeddings, specifically BERTBASE (uncased).",
"See Appendix A.2 for details about implementation, hyperparameter tuning, and runtime.",
"Dynamic contextualized word embeddings (DCWE) yield fits to the data similar to and (some-times significantly) better than non-dynamic contextualized word embeddings (CWE), which indicates that they successfully combine extralinguistic with linguistic information (Table 2).",
"13 5.2 Ablation Study To examine the relative importance of temporal and social information for dynamic contextualized word embeddings, we perform two experiments in which we ablate social context and time (Figure 3).",
"In social ablation (SA), we train dynamic contextualized word embeddings where the vector offset depends only on word identity and time, not social context, keeping the random walk prior between subsequent time slices.",
"In temporal ablation (TA), we use one social component for all time slices.",
"See Appendix A.3 for details about implementation, hyperparameter tuning, and runtime.",
"Temporal ablation has more severe consequences than social ablation (Table 3).",
"On Ciao, the social component does not yield better fits on the data at all, which might be related to the fact that many users in this dataset only have one review, and that its social network has the lowest density as well as the smallest average node degree out of all considered datasets (Table 1).",
"For a given dataset, we only compute dynamic embeddings for tokens in BERT's input vocabulary that are among the 100,000 most frequent words.",
"For less frequent tokens, we input the non-dynamic BERT embedding.",
"Statistical significance is tested with a Wilcoxon signed-rank test (Wilcoxon, 1945; Dror et al., 2018).",
"Do dynamic contextualized word embeddings indeed capture interpretable dynamics in word meaning?",
"To examine this question qualitatively, we define sim^(k)_ij as the cosine similarity between the non-dynamic embedding of x^(k), e^(k), and the dynamic embedding of x^(k) given social and temporal contexts s_i and t_j, e^(k)_ij: sim^(k)_ij = cos θ^(k)_ij, (14) where θ^(k)_ij is the angle between e^(k) and e^(k)_ij (Figure 1).",
"To find words with a high degree of variability, we compute the standard deviation of sim^(k)_ij over all s_i and t_j in which a given word x^(k) occurs in the data, σ^(k)_sim = σ({sim^(k)_ij | (x^(k), s_i, t_j) ∈ D}), (15) where we take the development set for D.",
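The two measures in Eqs. (14)-(15) can be sketched as follows, with hand-made 3-d vectors standing in for BERT embeddings; `variability` and the toy contexts are illustrative names, not the authors' code:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def variability(e_static, dynamic_embs):
    """Std. dev. of cosine similarities between a word's non-dynamic embedding
    and its dynamic embeddings across the (s_i, t_j) contexts it occurs in."""
    sims = [cosine(e_static, e) for e in dynamic_embs.values()]
    mean = sum(sims) / len(sims)
    return math.sqrt(sum((s - mean) ** 2 for s in sims) / len(sims))

# Hypothetical 3-d embeddings for one word in three (social, temporal) contexts.
e_k = [1.0, 0.0, 0.0]
e_k_ij = {("s0", "t0"): [1.0, 0.1, 0.0],
          ("s1", "t0"): [0.9, 0.2, 0.1],
          ("s0", "t1"): [0.2, 1.0, 0.3]}   # context with a shifted sense: low similarity
sigma = variability(e_k, e_k_ij)
print(round(sigma, 3))
```

Words whose dominant sense varies across contexts (as with the Covid-19 examples below) get large σ^(k)_sim, which is what the ranking exploits.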
"Looking at the top-ranked words according to σ^(k)_sim, we observe that they exhibit pronounced extralinguistically-driven semantic dynamics in the data.",
"In cases where x^(k) is split into several WordPiece tokens by BERT, we follow previous work (Pinter et al., 2020; Sia et al., 2020) and average the subword embeddings.",
"For Reddit, e.g., many of the top-ranked words have experienced a sudden shift in their dominant sense during the Covid-19 pandemic such as isolating and testing (Table 4).",
"Social and temporal contexts in which the sense related to Covid-19 is dominant have smaller values of sim ( k ) ij (i.e., the cosine distance is larger) than the ones in which the more general sense is dominant.",
"Such short-term semantic shifts, which have attracted growing interest in NLP recently (Stewart et al., 2017; del Tredici et al., 2019a; Powell and Sentz, 2020), can result in lasting semantic narrowing if speakers become reluctant to use the word outside of the more specialized sense (Anttila, 1989; Croft, 2000; Robinson, 2012; Bybee, 2015).",
"Thus, the qualitative analysis suggests that the dynamic component indeed captures extralinguistically-driven variability in word meaning.",
"In Sections 5.4 and 5.5, we will demonstrate by means of two example applications how this property can be beneficial in practice.",
"We will now provide a more in-depth analysis of social and temporal dynamics in word meaning to showcase the potential of dynamic contextualized word embeddings as an analytical tool.",
"Specifically, we will analyze how changes in the dominant sense of a word diffuse through the social networks of ArXiv and Reddit.",
"For ArXiv, we will examine the deep learning sense of the word network.",
"For Reddit, we will focus on the medical sense of the word mask.",
"We know that these senses have become more widespread over the last few years (ArXiv) and months (Reddit), but we want to test if dynamic contextualized word embeddings can capture this spread, and if they allow us to gain new insights about the spread of semantic associations through social networks in general.",
"To perform this analysis, let r^(k,k')_ij be the rank of x^(k')'s embedding among the N nearest neighbors of x^(k)'s embedding, given social and temporal contexts s_i and t_j.",
"We then define r̄^(k,k')_ij = N − r^(k,k')_ij + 1 (16) as a semantic similarity score between x^(k) and x^(k').",
"r̄^(k,k')_ij is maximal when x^(k')'s embedding is closest to x^(k)'s embedding.",
"We set r̄^(k,k')_ij = 0 if x^(k') is not among the N nearest neighbors of x^(k).",
"We set N = 100.",
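The rank-based similarity score of Eq. (16) reduces to a few lines; `rank_similarity` and the toy neighbor list are illustrative, and a real implementation would first compute the nearest neighbors over the embeddings:

```python
def rank_similarity(target, neighbors, N=100):
    """Eq. 16: N - rank + 1 if `target` is among the N nearest neighbors
    (rank 1 = closest), else 0. `neighbors` is assumed sorted by distance."""
    try:
        rank = neighbors[:N].index(target) + 1
    except ValueError:
        return 0
    return N - rank + 1

# Toy neighbor list of "network" in some (s_i, t_j) context.
nn = ["learning", "graph", "layer"]
print(rank_similarity("learning", nn))   # closest neighbor: maximal score N = 100
print(rank_similarity("vaccine", nn))    # not among the neighbors: 0
```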
"Using r̄^(k,k')_ij, we measure dynamics in the semantic similarity between network and learning (representing the deep learning sense of network) as well as mask and vaccine (representing the medical sense of mask).",
"For all social and temporal contexts in which network and mask occur, we compute r ( k,k (cid:48) ) ij between their socially and temporally dynamic embeddings on the one hand and time-specific centroids of learning and vaccine averaged over social contexts on the other, employing contextualized versions of the dynamic embeddings.",
"In cases where network or mask occur more than once in a certain social and temporal context, we take the mean of r̄^(k,k')_ij.",
"The dynamics of r̄^(k,k')_ij reflect how the changes in the dominant sense of network and mask spread through the social networks (Figure 4).",
"For network, we see that the deep learning sense was already present in computer science and physics in 2013, where neural networks have been used since the 1980s.",
"It then gradually spread from these two epicenters, with a major intensification after 2016.",
"For mask, we also see a gradual diffusion, with a major intensification after 03/2020.",
"We average the first six layers of the contextualizer since they have been shown to contain the core of lexical and semantic information (Vulić et al., 2020).",
"On what paths do new semantic associations spread through the social network?",
"In complex systems theory, there are two basic types of random motion on networks: random walks, which consist of a series of consecutive random steps, and random flights, where step lengths are drawn from the Lévy distribution (Masuda et al., 2017).",
"To probe whether there is a dominant type of spread for the two examples, we compute for each time slice t j what proportion of nodes that have r ( k,k (cid:48) ) ij > 0 for the first time at t j (i.e., the change in the dominant sense has just arrived) are neighbors of nodes that already had r ( k,k (cid:48) ) ij > 0 before t j .",
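The contact-based test described above can be sketched as follows; the adjacency structure and activation times are hypothetical stand-ins for the social network and for the first time slice at which a node has r̄^(k,k')_ij > 0:

```python
def contact_proportion(adjacency, activated_at):
    """For each time slice, the share of newly activated nodes adjacent to a
    node activated at an earlier slice. High values suggest walk-like spread
    via direct contact; low values suggest flight-like jumps."""
    times = sorted(set(activated_at.values()))
    seen = set()
    out = {}
    for t in times:
        new = {n for n, tn in activated_at.items() if tn == t}
        if seen and new:
            via_contact = sum(1 for n in new if adjacency.get(n, set()) & seen)
            out[t] = via_contact / len(new)
        seen |= new
    return out

# Toy network: "a" adopts the new sense first; "b" is a's neighbor, "d" is isolated.
adjacency = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}, "d": set()}
activated_at = {"a": 0, "b": 1, "d": 1, "c": 2}
print(contact_proportion(adjacency, activated_at))   # {1: 0.5, 2: 1.0}
```

Proportions near 1 across slices would point to random-walk-like diffusion, while low proportions indicate arrivals that bypass existing adopters.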
"This analysis shows that random walks are the dominant type of spread for network, but random flights for mask (Figure 5).",
"Intuitively, it makes sense that a technical concept such as neural networks spreads through the direct contact of collaborating scientists rather than through more distant forms of reception (e.g., the reading of articles).",
"In the case of facial masks, on the other hand, the exogenous factor of the worsening Covid-19 pandemic and the accompanying publicity was a driver of semantic dynamics irrespective of node position.",
"As a second testbed, we apply dynamic contextualized word embeddings on a task for which social and temporal information is known to be important (Yang and Eisenstein, 2017): sentiment analysis.",
"We use the Ciao and YELP datasets and train dynamic contextualized word embeddings by adding a two-layer feed-forward network on top of BERTBASE (uncased) and finetuning it for the task of sentiment classification.",
"We again compare against contextualized word embeddings, specifically BERTBASE (uncased), which is finetuned without the dynamic component.",
"We finetune directly on sentiment analysis without prior finetuning on masked language modeling.",
"See Appendix A.4 for details about implementation, hyperparameter tuning, and runtime.",
"Dynamic contextualized word embeddings achieve slight but significant improvements over the already strong performance of non-dynamic BERT (Table 5).",
"This provides further evidence that infusing social and temporal information on the lexical level can be useful for NLP tasks.",
"We have introduced dynamic contextualized word embeddings that represent words as a function of both linguistic and extralinguistic context.",
"Based on a PLM, specifically BERT, dynamic contextualized word embeddings model time and social space jointly, which makes them advantageous for various areas in NLP.",
"We have trained dynamic contextualized word embeddings on four datasets and shown that they are capable of tracking social and temporal variability in word meaning.",
"Besides serving as an analytical tool, dynamic contextualized word embeddings can also be of benefit for downstream tasks such as sentiment analysis.",
"This work was funded by the European Research Council (#740516) as well as the Engineering and Physical Sciences Research Council (EP/T023333/1).",
"The first author was also supported by the German Academic Scholarship Foundation and the Arts and Humanities Research Council.",
"We thank the anonymous reviewers for their detailed and extremely helpful comments."
] | [
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"In relation extraction for knowledge-based question answering, searching from one entity to another entity via a single relation is called one hop.",
"In related work, an exhaustive search from all one-hop relations, two-hop relations, and so on to the max-hop relations in the knowledge graph is necessary but expensive.",
"Therefore, the number of hops is generally restricted to two or three.",
"In this paper, we propose UHop, an unrestricted-hop framework which relaxes this restriction by use of a transition-based search framework to replace the relation-chain-based search one.",
"We conduct experiments on conventional 1- and 2-hop questions as well as lengthy questions, including datasets such as WebQSP, PathQuestion, and Grid World.",
"Results show that the proposed framework enables the ability to halt, works well with state-of-the-art models, achieves competitive performance without exhaustive searches, and opens the performance gap for long relation paths.",
"A knowledge graph (KG) is a powerful graph structure that encodes knowledge to save and organize it, and to provide users with direct access to this knowledge via various applications, one of which is question answering, or knowledge-based question answering (KBQA).",
"In the knowledge graph, beliefs are commonly represented by triples showing relations between two entities, such as LocatedIn(NewOrleans, Louisiana), where the two entities are nodes and their relation is the edge connecting them in the knowledge graph.",
"Given a natural language question, a KBQA system returns its answer if it is included in the knowledge graph; the process of answering a question can be transformed into a traversal that starts from the question (topic) entity and searches for the appropriate path to the answer entity.",
"In the literature (Yu et al., 2017; Yin et al., 2016; Yih et al., 2015) KBQA is decomposed into topic entity linking, which determines the starting entity corresponding to the question, and relation extraction, which finds the path to the answer node(s).",
"Theoretically, relation extraction finds paths of any length, that is, paths that contain any number of relation links, or hops (between two nodes), as long as it reaches the answer node.",
"In previous work, models consider all relation paths starting from the topic entity (Yu et al., 2017; Yin et al., 2016; Yih et al., 2015); we call these relation-chain-based methods.",
"Two main difficulties for these methods are that processing all relations in a KG is not practical, as the number of relation combinations is nearly infinite, and that the number of candidate paths grows exponentially with the path length, quickly becoming intractable for large knowledge graphs.",
"As a result, current relation-chain-based methods set the maximum length of candidate paths to 1, 2 or 3.",
"However, under this framework we cannot find answer entities for indirect or complicated questions.",
"Most importantly, even given a larger maximum length, it is unrealistic to expect to know in advance the maximum number of hops for real-world applications.",
"Thus even with exhaustive searches, if the answer entity is still too distant or lies outside of the search space, it is not reachable or answerable.",
"In addition, setting a large maximum number of hops necessitates lengthy training instances, which are especially difficult to obtain.",
"In this paper, we propose UHop, an unrestricted-hop relation extraction framework to relax restrictions on candidate path length.",
"We decompose the task of relation extraction in the knowledge graph into two subtasks: knowing where to go, and knowing when to stop (or to halt).",
"That is, single-hop relation extraction and termination decision.",
"Our contribution is threefold: (1) No predefined maximum hop number is required in UHop, as it enables models within the framework to halt; (2) UHop reduces the search space complexity from exponential to polynomial while maintaining comparable results; (3) UHop facilitates the use of different models, including state-of-the-art models.",
"State-of-the-art KBQA methods are in general based on either semantic parsing, or on embedding (Zhou et al., 2018).",
"Semantic parsing methods learn semantic parsers which parse natural language input queries into logical forms, and then use the logical forms to query the KG for answers (Berant et al., 2013; Yih et al., 2015, 2016; Krishnamurthy et al., 2017; Iyyer et al., 2017; Peng et al., 2017; Sorokin and Gurevych, 2018).",
"These systems are effective and provide deep interpretation of the question, but require expensive data annotation, or require training using reinforcement learning.",
"Embedding-based methods first allocate candidates from the knowledge graph, represent these candidates as distributed embedding vectors, and choose or rank these vectors.",
"Here the candidates can be either entities or relations.",
"Some use embedding-based models to predict answers directly (Dong et al., 2015; Bast and Haussmann, 2015; Hao et al., 2017; Zhou et al., 2018; Lukovnikov et al., 2017), whereas others focus on extracting relation paths and require further procedures to select the answer entity (Bordes et al., 2015; Xu et al., 2016; Yin et al., 2016; Yu et al., 2017; Zhang et al., 2018a; Yu et al., 2018; Chen et al., 2018a; Shen et al., 2018).",
"Our work follows the latter methods in focusing on predicting relation paths, but we seek to eliminate the need to assume in advance a maximum number of hops.",
"For the solution, we turn to the field of multihop knowledge based reasoning.",
"Early methods include the Path-Ranking Algorithm and its variants (Lao et al., 2011; Gardner et al., 2013, 2014; Toutanova et al., 2015).",
"The drawback of these methods is that they use random walks independent of the type of input.",
"DeepPath (Xiong et al., 2017) and MINERVA (Das et al., 2017) tackle this issue by framing the multi-hop reasoning problem as a Markov decision process, efficiently searching for paths using reinforcement learning; others propose an algorithm (Yang et al., 2017) for learning logical rules, a variational auto-encoder view of the knowledge graph (Chen et al., 2018b; Zhang et al., 2018b), and reward shaping technique (Lin et al., 2018) for further improvement.",
"The major difference between UHop and these methods is that they do not utilize annotated relations and hence require REINFORCE training (Williams, 1992) for optimization.",
"As some datasets are already annotated with relations and paths, direct learning using an intermediate reward is more reasonable.",
"Hence UHop adopts a novel comparative termination decision module to control the search process of the relation path.",
"The most related approach is the IRN model (Zhou et al., 2018), composed of an input module, a memory-based reasoning module, and an answer module.",
"At each hop, it predicts a relation path using the reasoning module, and also optimizes it using intermediate results.",
"However, UHop has demonstrated the ability to process large-scale knowledge graphs in experiments conducted on Freebase (Bordes et al., 2015).",
"In contrast, IRN consumes memory linearly to the size of the knowledge graph, resulting in a limited workspace, e.g., they use a subset of Freebase in their experiments.",
"Also, IRN still uses a constraint for the number of maximum hops in the experiments, while UHop needs no such limit.",
"Most importantly, as UHop is a framework which facilitates the use of different models, we can expect the performance of UHop to remain competitive with the state of the art over time.",
"With UHop, we aim to handle unrestricted relation hops and to be compatible with existing relation extraction models.",
"UHop breaks down unrestricted-hop relation extraction into two major subtasks: single-hop relation extraction and comparative termination decision.",
"Algorithm 1 illustrates how we perform these two tasks in the UHop framework.",
"Given a question Q and the topic entity e extracted by an existing entity linking method such as S-MART (Yang and Chang, 2015), we first query the knowledge graph for the candidate outbound relations R that are connected to e .",
"From all candidate relations R, we perform single-hop relation extraction to choose one relation and transit to the next entity e'.",
"After the transition (e ← e'), we decide whether to terminate, that is, we determine whether the process should proceed through another iteration to extract the next relation in the relation path.",
"Algorithm 1: Unrestricted-hop relation extraction. e denotes the extracted topic entity, ':' is the concatenation operation, and the termination decision returns True if the framework decides to stop.",
"If the decision to terminate is false, we search the KB again for outbound relations of the new e , after which the search process starts again.",
"Note that starting from the second iteration, candidate relations are concatenated with the previously selected relations to remember the history and consider them as a whole.",
"We continue this loop until the process decides to terminate.",
"The termination decision thus enables UHop to learn when to stop searching for relations to extract: it determines the number of hops needed to reach the correct target entity.",
"Upon termination, UHop returns the extracted relation(s).",
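Algorithm 1's loop can be sketched as a minimal illustration under stated assumptions: `score` stands in for the trained model F, `outbound(e)` stands in for the KG query, and the toy graph and scoring function are invented for the example, not the authors' code:

```python
def uhop(question, topic_entity, score, outbound, max_steps=50):
    """Greedily extend the relation path; halt when no extension outscores
    the path extracted so far (comparative termination decision)."""
    path, entity = [], topic_entity
    best = float("-inf")                     # score of the path extracted so far
    for _ in range(max_steps):               # safety bound; UHop halts via comparison
        candidates = [(path + [r], e2) for r, e2 in outbound(entity)]
        if not candidates:
            break
        new_path, new_entity = max(candidates, key=lambda c: score(question, c[0]))
        new_score = score(question, new_path)
        if new_score <= best:                # nothing beats the previous hop: stop
            break
        path, entity, best = new_path, new_entity, new_score
    return path, entity

# Toy KG and a scoring function that rewards matching a gold path, with a small
# length penalty so a useless extra hop lowers the score and triggers the halt.
kg = {"A": [("r1", "B"), ("r3", "D")], "B": [("r2", "C")], "C": [("r4", "E")]}
gold = ["r1", "r2"]
score = lambda q, p: sum(a == b for a, b in zip(p, gold)) - 0.01 * len(p)
print(uhop("who published ...?", "A", score, lambda e: kg.get(e, [])))
# → (['r1', 'r2'], 'C')
```

Note that the candidate paths at each step concatenate the history with one new relation, mirroring the concatenation step in Algorithm 1.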
"In the UHop framework, the model is trained to favor the correct relation over incorrect relations.",
"That is, to select the correct outbound single-hop relation from the current entity e, the model prefers the correct r over the other relations R∖{r} of e; to terminate at the current entity e, the model favors the correct relation r linked to the current entity e over the outbound relations R from e.",
"To continue the iteration, it proceeds likewise.",
"In UHop, we successfully utilize this preference over relations to train the same model to perform both single-hop relation extraction and termination decision.",
"Figure 1 shows the difference between previous work and our model in the scenario of the multi-hop KBQA task, with a simplified knowledge graph and the question Who published the novel adapted into A Study in Pink? as an example.",
"Single-hop relation extraction can be modeled as pairwise classification of the set of candidate relations.",
"Given a question Q, the candidate relation set R, and a pairwise classification model F, single-hop relation extraction is formulated as r* = argmax_{r ∈ R} F(Q, r). (1)",
"Hinge loss, used for optimization, is defined as L_RE = max(0, −(s_r − s_r̄) + M), (2)",
"where s_r and s_r̄ are the scores of the true relation and a candidate relation respectively.",
"The margin M is an arbitrary value in the range (0, 1]; the goal of the loss function is to maximize the margin between the scores of the correct and the incorrect predictions.",
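The margin-based objective can be sketched as follows; averaging over the negative candidates mirrors the termination loss later in the section and is an assumption here, as is the concrete margin value:

```python
def relation_extraction_loss(s_true, candidate_scores, margin=0.5):
    """Hinge loss pushing the true relation's score above each incorrect
    candidate's score by at least `margin` (a sketch of Eq. 2)."""
    losses = [max(0.0, -(s_true - s_neg) + margin) for s_neg in candidate_scores]
    return sum(losses) / len(losses)

print(relation_extraction_loss(0.9, [0.1, 0.3]))   # well separated: prints 0.0
print(relation_extraction_loss(0.4, [0.5, 0.3], margin=0.5))
```

When the true relation already beats every candidate by the margin, the loss vanishes; otherwise the violating pairs contribute linearly.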
"Note that this relation extraction process and those proposed in related work are compatible, which facilitates the installation of state-of-the-art models in the UHop framework.",
"In the UHop framework, as we hope to easily replace the used model by state-of-the-art models, we make the termination decision using the same model for single-hop relation extraction so that no additional model is needed.",
"Therefore, we propose a progressive method which treats the termination decision as a comparison.",
"That is, the model stops when it cannot extract any relation better than that from its previous hop.",
"What differs here is R: the relations to be compared against r are the concatenations of the extracted relation with each relation starting from the new current entity e; recall that we update e ← e' before stepping into the termination decision.",
"If the score s r is higher than all the compared relations, the searching process terminates; otherwise, it continues.",
"Given a question Q, an extracted relation r from the previous entity, the candidate relation set R from the new current entity e, and the same model F as in single-hop relation extraction, the procedure can be formulated as stop = True if F(Q, r) > F(Q, r̄) for all r̄ ∈ R, and stop = False otherwise. (3)",
"Loss is defined depending on the flag stop .",
"If the process should continue, i.e., stop is false, loss is defined as L_TD = max(0, −(s_r' − s_r) + M), (4) where s_r' is the score of the question paired with the gold relation r' in the next hop and s_r is the score of the question paired with the extracted relation r.",
"In contrast, if the process should terminate, we optimize the model by L_TD = (1/|R|) Σ_{r̄ ∈ R} max(0, −(s_r − s_r̄) + M). (5)",
"The model thus learns to infer that s_r is greater than s_r̄, resulting in the termination of relation extraction.",
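A minimal sketch of the comparative termination decision (Eq. 3) and its two loss cases (Eqs. 4-5); scalar scores stand in for F's outputs, and the sign conventions follow the surrounding text rather than the authors' released code:

```python
def should_stop(score_prev, candidate_scores):
    """Eq. 3: terminate when no extended path outscores the previously
    extracted relation."""
    return all(score_prev > s for s in candidate_scores)

def termination_loss(stop, score_prev, score_next_gold, candidate_scores, margin=0.5):
    """Eqs. 4-5: when continuing, the next-hop gold relation should beat the
    current path; when stopping, the current path should beat every candidate
    extension by the margin."""
    if not stop:   # continue: prefer the next-hop gold relation r'
        return max(0.0, -(score_next_gold - score_prev) + margin)
    # stop: prefer the already-extracted relation over all candidate extensions
    losses = [max(0.0, -(score_prev - s) + margin) for s in candidate_scores]
    return sum(losses) / len(losses)

print(should_stop(0.8, [0.2, 0.5]))   # True: nothing beats the current path
```

Because the same model F scores both cases, no extra termination classifier is needed, which is the point of the comparative design.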
"Since UHop performs inference hop by hop, it is straightforward to focus on different aspects of the question at each hop.",
"For this purpose, we update the question representation for each hop by defining a dynamic question representation generation function G .",
"Given the previously selected relation path P and the original question Q , G generates the new question representation as Q (cid:48) = G ( Q, P ) .",
"Our assumption is that since the current relation has been selected, its related information in the question loses importance when extracting the next relation.",
"Inspired by both supervised attention (Mi et al., 2016; Liu et al., 2016; Kamigaito et al., 2017), which is lacking in our datasets, and the coverage loss design for summarization (See et al., 2017), we de-focus the selected relation by manipulating weights in the question representation.",
"We propose two ways of updating the question representation, taking into account the existence of the attention layer in the model's architecture.",
"For attentive models, we directly utilize the attention weights as part of our dynamic question representation generation function: G(Q, P) = W(Q − attention(Q, P)) + B. (6) For non-attentive models, we apply a linear transformation function as G to the concatenation of the previously selected relation and the question representation to yield the new representation: G(Q, P) = W[Q : P] + B, (7) where W and B are weight matrices to be optimized during training.",
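The non-attentive update of Eq. (7) can be sketched with plain lists standing in for tensors; `W` and `B` here are hypothetical toy parameters rather than learned ones:

```python
def linear(W, x, B):
    """Affine map y = Wx + B over plain Python lists."""
    return [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(W, B)]

def update_question_nonattentive(Q, P, W, B):
    """Eq. 7: new question representation from a linear map over [Q : P]."""
    return linear(W, Q + P, B)

# Toy 2-d question representation and 1-d relation-path representation.
Q, P = [1.0, 2.0], [0.5]
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 1.0]]
B = [0.0, 0.0]
print(update_question_nonattentive(Q, P, W, B))   # → [1.0, 2.5]
```

In training, W and B would be optimized jointly with the rest of the model so that information about the already-selected relation is de-emphasized.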
"In training, we jointly optimize the two subtasks of UHop.",
"For each question and its candidates, the loss function is defined as L = Σ_{i=1}^{H} (L_RE^(i) + L_TD^(i)), (8) where H is the number of hops in the gold relation path, and L_RE^(i) and L_TD^(i) are the losses of the two subtasks at the i-th hop respectively.",
"In this section, we illustrate the performance UHop achieves while reducing the search space, and its relation inference power for multi-hop questions.",
"The performance of state-of-the-art models is listed as an upper bound.",
"For our benchmarking evaluation materials, we selected WebQSP (WQ) (Yih et al., 2016), as it is used in most related work.",
"WebQSP is the annotated version of WebQuestions (Berant et al., 2013), which contains questions that require a 1-or 2-hop relation path to arrive at the answer entity.",
"More specifically, about 40% of the questions require a 2-hop relation to reach the answer.",
"This dataset is based on the Freebase knowledge graph (Bordes et al., 2015).",
"For questions with multiple answers, we use each answer to construct a question-answer pair.",
"Every question is annotated with its inferential relation chain (i.e., a relation), topic entity, and answer entity.",
"The statistics for these two datasets are shown in Table 1.",
"As WQ contains only questions with 1- and 2-hop answers that are still short, we also conduct experiments for path length related analysis on the PathQuestion dataset (Zhou et al., 2018), which includes questions requiring 3-hop answers.",
"To the best of our knowledge, this is the only available general-KB dataset containing 3-hop questions.",
"PathQuestion provides two datasets: PathQuestion (PQ) and PathQuestion-Large (PQL).",
"Both contain 2-hop (PQ2/PQL2) and 3-hop (PQ3/PQL3) questions, and both use a subset of Freebase as their knowledge graph.",
"Note that for both PQ and PQL, questions are generated using templates, paraphrasing, and synonyms.",
"PQL is more challenging than PQ because it utilizes a larger subset of Freebase, and provides fewer training instances.",
"Table 1 shows statistics of these datasets.",
"The above datasets serve to show that the UHop framework yields performance competitive with state-of-the-art KBRE models.",
"Further, we seek to demonstrate that UHop reduces the search space when required reasoning paths are even longer, i.e., longer than 3 hops, and that UHop works for different kinds of relations.",
"For this we use Grid World (Yang et al., 2017), a synthetic dataset whose questions require lengthy relation paths of up to 10 hops to answer.",
"We select it to demonstrate that UHop works for long as well as task-specific relations.",
"In Grid World, the input is the starting node, a sequence of navigation instructions, and a 16-by-16 fully connected grid.",
"The model must follow the instructions to arrive at the destination node.",
"Specifically, the task is to navigate to an answer cell (answer entity) starting from a random cell (topic entity) given a sequence of instructions (questions).",
"The KB consists of triples such as ((4, 1), South, (5, 1)), which indicates that the entity (5, 1) is south of the entity (4, 1); questions are sequences of directions such as (North, NorthEast, South).",
"Samples in Grid World are classified into 4 buckets, [2–4], [4–6], [6–8], and [8–10], according to their reasoning path length.",
"Unlike relations included in general knowledge bases like Freebase, relations in Grid World are the relative directions of two nodes.",
"MetaQA (Zhang et al., 2018b) and sequence QA are two other multi-hop knowledge-based question-answering datasets which we do not use for experiments in this paper.",
"MetaQA is a multihop dataset for end-to-end KBQA based on a movie knowledge graph with 43k entities.",
"However, it is too simple for discussions as it contains only 6 relations and on average the number of the outbound relations for each node is 3.",
"The Complex Sequential QA dataset (Saha et al., 2018) improves on overly simplistic KBQA datasets.",
"Nevertheless, instead of questions requiring multi-hop relation paths, it provides a sequence of questions, each of which requires a single-hop relation to answer, resulting in a different setting.",
"Hence these two datasets are beyond the scope of this paper.",
"We used two state-of-the-art models, HR-BiLSTM (Yu et al., 2017) and ABWIM (Zhang et al., 2018a), as the models for use within the UHop framework.",
"Another state-of-the-art model, MVM (Yu et al., 2018), is not selected here as it requires additional information: the tail entity type.",
"In MVM, to consider each n -th-hop relation, the model searches all related ( n + 1) -th-hop relations to collect enough information; thus further queries are necessary in MVM.",
"This property of MVM would cause UHop to degrade to a relation-chain-based model, which we are trying to avoid.",
"We report the results of these two models working within and independent of the UHop framework to evaluate whether relaxing the constraint on the number of hops has any impact on their performance.",
"For comparison, we select BiCNN as a baseline and list its results.",
"As there is no predefined validation set in WQ, we randomly select 10% of the training data as the validation set.",
"The best parameters for different models and datasets were set empirically.",
"In all cases we used 300-dimensional pretrained GloVe (Pennington et al., 2014) word embeddings and RMSprop optimization.",
"In ABWIM, following the setting of (Zhang et al., 2018a), we respectively chose 1, 3, 5 as kernel sizes and 150 as the number of filters for its three CNN layers.",
"We tune the following hyperparameters with grid search: (1) the hidden size of all LSTMs ([100, 150, 256]); (2) the dropout rate ([0, 0.2, 0.4]); (3) the margin for the hinge loss ([0.1, 0.3, 0.5, 0.7, 1.0]); and (4) the learning rate ([0.01, 0.001, 0.0001]).",
"The experimental results are shown in Table 2.",
"As expected, the performance of models within the UHop framework is comparable to those independent of it, with the additional advantage of the unrestricted number of relation hops and a greatly reduced search space.",
"Table 3 lists the average number of candidates the experimental models consider for each question when working within and independent of UHop.",
"For a dataset based on a KB with an average of n relations connected to each entity, the approximate search space without UHop is n(n−1)^(L−1), where L is the predefined maximum hop number; with UHop, the approximate search space is reduced to n(L+1).",
"The specific number depends on the actual number of outbound relations connected to the entities.",
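The two search-space estimates can be compared numerically; n = 10 is an arbitrary illustrative branching factor, not a statistic from the datasets:

```python
def search_space_without_uhop(n, L):
    """Approximate candidate count for relation-chain search: n*(n-1)^(L-1)."""
    return n * (n - 1) ** (L - 1)

def search_space_with_uhop(n, L):
    """Approximate candidate count under UHop: n*(L+1)."""
    return n * (L + 1)

# Exponential vs. polynomial growth for a KB with n = 10 outbound relations.
for L in (2, 3, 5):
    print(L, search_space_without_uhop(10, L), search_space_with_uhop(10, L))
```

Even at L = 3 the chain-based space is already an order of magnitude larger, and the gap widens exponentially with L.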
"Table 3 shows that UHop reduces the search space by 30% for WQ, which translates to lower processing time, less memory consumption, and sometimes slightly improved performance.",
"Following the original paper (Zhou et al., 2018), PQ and PQL are both partitioned into train-ing/validation/testing sets at a ratio of 8:1:1.",
"In addition to the original PQ/PQL dataset, we merge 1 Note that the original paper reported 85.32, but we failed to reproduce such performance.",
"Hence we report our reproduced performance which is the same model adapted in our proposed framework.",
"PQ2 and PQ3, and then PQL2 and PQL3, to create the mixed datasets PQ+ and PQL+ to evaluate if the model terminates correctly instead of always stopping on the majority of the training data length.",
"Again we adopt HR-BiLSTM and ABWIM in this experiment.",
"In addition, the IRN model 2 proposed together with the PQ/PQL dataset was selected as one of the baselines for comparison.",
"For this dataset containing questions of long relation paths, we also applied the dynamic question representations (DQ) in UHop.",
"Results 3 are shown in Table 4.",
"Both HR-BiLSTM and ABWIM either within or independent of UHop outperform IRN and perform nearly perfectly in all datasets, which confirms that UHop is competitive even with longer relation paths.",
"However, as shown in Table 5, the search space reduction for PQ and PQL is not obvious.",
"We find that the knowledge graph used in PQ/PQL (a subset of Freebase) is much smaller and less complicated than the original Freebase used in WQ, i.e., the outbound degree of nodes is relatively small.",
"Nevertheless, UHop still performs comparably with previous work.",
"This indicates that it also works well in small and simple KBs.",
"As all PQ/PQL questions are multi-hop questions, we used dynamic question representations to better reflect transitions in the relation extraction process.",
"Table 4 shows that updating the question representation dynamically (+DQ) in each iteration benefits relation extraction in most cases.",
"2 https://github.com/zmtkeke/IRN.",
"We consulted the authors of the repository, who stated that this version is not the one in their paper, which they did not release publicly.",
"In the Grid World experiments, we used MINERVA (Das et al., 2017) and Neural LP (Yang et al., 2017) as baselines.",
"As understanding questions is not an issue here, we randomly initialized the word embeddings and optimized them during the training process.",
"We set the learning rate to 0.001, the hidden size to 256, the embedding size to 300, and optimized the model using the RMSprop (Hinton et al., 2014) Algorithm.",
"In this experiment, the search space has gone too large to afford for HR-BiLSTM and ABWIM without the assistance of UHop.",
"The results in Figure 2 show that together with the relation extraction model, UHop perfectly solves",
"this problem.",
"In the first place, compared to Neural LP and MINERVA, UHop benefits from the more powerful natural language understanding models HR BiLSTM and ABWIM equipped with sophisticated LSTM models, whereas Neural LP and MINERVA only use multi-layer neural networks as the policy network.",
"This demonstrates UHop's merit of facilitating the use of novel models.",
"In the second place, Figure 2 shows that error propagation leading to poor performance for long-path questions in Neural LP and MINERVA is mitigated by the relation inference power of UHop: it performs well for all four buckets of questions.",
"Also, as Grid World includes paths of up to 10 hops, conducting experiments purely by relation-chain based models themselves like HR-BiLSTM or ABWIM independent of UHop is not feasible: the number of candidate relations in the exhaustive search space grows exponentially.",
"In Grid World, there are 8 directions (relations), and models are allowed to go back and forth.",
"Hence given the path length k , the approximate search space for the models working independently is 8 k , while for models working within UHop is 8 k .",
"We observe that without UHop, the required search space would preclude experiments even on the set containing the shortest paths (Grid World [24]), much less the longer ones.",
"In this section we further compare the experimental multi-hop KBQA datasets WQ, PQ, and Grid World.",
"Grid World contains questions that require the longest reasoning paths.",
"However, they are synthetic, the relations are simply direction tokens, and the questions are just sequences of direction instructions.",
"Therefore in this paper, it is only used to test the model's ability of making long sequential decisions instead of understanding questions.",
"From experiments we have seen that delicate models like HR-BiLSTM and ABWIM cannot work on it without UHop, and other models such as Neural LP and MINERVA perform worse as they are rewarded only by question.",
"On the other hand, in WQ, questions are written in natural language and can be answered by 1-hop or 2-hop reasoning.",
"However, for real-world questions, 2-hop reasoning is still overly simplistic.",
"For example, although WQ questions such as What is the name of Justin Bieber's brother? are challenging for models, humans can easily answer these with a simple Internet search.",
"Noting this problem, the authors of IRN (Zhou et al., 2018) propose PQ and PQL, for which questions require at least 2-hop at most 3-hop relation paths.",
"However, PQ/PQL also has its limitations.",
"First, the KB used in PQ/PQL is smaller than that in WQ, and its relations are repetitive and show little variety.",
"Figure 3 illustrates the relation distributions.",
"Second, PQ/PQL questions are generated by extracting relation paths and filling templates, which can lead to questions with obvious, learnable patterns.",
"This can be observed by comparing results in Tables 2 and 4.",
"However, repeated relations could also help the model to learn better dynamic question representations with respect to these relations.",
"Table 4 shows that updating question representations dynamically (DQ) does improve PQ/PQL performance.",
"To evaluate if the model halts in the search process, we conducted an experiment using PQL3 as the training/validation set and PQL2 as the testing set.",
"The results are shown in Table 6.",
"Within the UHop framework, both models outperform their original version by more than 7%.",
"However, with zero 2-hop samples, it still overfits on the 3-hop length in training data, resulting in accuracies lower than 50%.",
"The interpretability of UHop, i.e., the possibility to analyze each hop, facilitates the analysis of error distributions.",
"We list the percentage of questions for which UHop fails to extract the correct relations by the number of hops for different Dataset/model 1-hop 2-hop 3-hop RE TD RE TD RE TD WQ 1-hop H 17.46 0 --A 20.95 0.1 --2-hop H 16.15 1.03 1.2 0 -A 18.38 0.86 2.06 0.17 -PQ2 H 0.52 0 0 0 -H* 0 0 0 0 -A 2.09 0 0.52 0 -A* 0 0 0 0 -PQ+ 2-hop H 0 0 0 0 -H* 0 0 0 0 -A 0 0 0.52 0 -A* 0 0 0 0 -3-hop H 0 0 0 0 0.38 0 H* 0 0 0 0 0.58 0 A 0 0 0 0 1.15 0 A* 0 0 0 0 0.77 0 PQ3 H 0 0 0.19 0 0.58 0 H* 0 0 0 0 0.38 0 A 0 0 0 0 0.38 0 A* 0 0 0 0 0.38 0 PQL2 H 5.62 0 3.12 0 -H* 2.5 0 2.5 0 -A 5.0 0 3.75 0 -A* 0 0 2.5 0 -PQL+ 2-hop H 0 0 3.12 0 -H* 0 0 3.75 0 -A 0 0 3.75 0 -A* 0 0 2.5 0 -3-hop H 0.48 0 5.31 0 7.73 0 H* 0 0 2.9 0 8.7 0 A 0 0 3.86 1.93 7.25 0 A* 0 0 2.9 0.97 7.73 0 PQL3 H 0 0 3.86 0 7.73 0 H* 0 0 2.9 0 7.73 0 A 0 0 2.9 0.97 7.25 0 A* 0 0 2.9 0 7.73 0 Train3, Test2 H 1.13 2.95 2.7 53.58 -A 0.25 2.38 7.4 40.03 -Table 7: Distribution of error types under UHop framework (in percentage).",
"datasets.",
"The results of HR BiLSTM and ABWIM within the UHop framework are reported in Table 7.",
"Our observations are offered below.",
"First, whether for 1-hop or 2-hop WQ questions, both models suffer in relation extraction in the first hop, whereas there are fewer errors in the second hop and for the termination decision.",
"Second, for the PQ/PQL datasets, as with the WQ dataset, incorrect relation extraction is the major error, and surprisingly there were no errors for termination decision except for a few on PQL3 with ABWIM.",
"After comparing the 2-hop testing data from PQ2/PQL2 and PQ+/PQL+, we also observe that long questions help the learning of short questions.",
"The model predicts better on 2-hop data when trained on both 2-hop and 3-hop data than when trained on 2-hop data only.",
"Here the improvement in relation extraction in the first hop is the main contributor to this improved performance.",
"In contrast, the performance on 3-hop data suffers when trained on 2-hop data.",
"Third, dynamic question representations (noted by *) significantly benefit the relation extraction (RE) for the first hop.",
"As UHop utilizes the same model for relation selection and termination decision, relieving the attention to the previous relation in the later selection process in the training phase decreases the ambiguity in the earlier selection process in the testing phase.",
"Finally, in the experiments trained on 3-hop and tested on 2-hop, the model does not terminate correctly on more than 40% of the PQL2 data even though the relation extraction for 1-hop and 2-hop are both correct.",
"We conclude that having no samples of the predicted length for training still hurts performance.",
"In addition, there are also a few early terminations after the first relation extraction.",
"Due to the different generation processes with different templates for the 2-hop and 3-hop questions in PQL, learning from one may not apply to the other.",
"In this paper, we propose the UHop framework to allow an unrestricted number of hops in knowledge-based relation extraction and to reduce the search space.",
"Results show that running the same model in the UHop framework achieves comparable results in a reduced search space.",
"Moreover, experiments show UHop works well for lengthy relation extraction and can be applied to small, simple KBs with task-specific relations.",
"UHop even facilitates the use of most state-of-the-art models, and its transition-based design naturally supports the dynamic question representation for better performance.",
"These results attest its strong power for knowledge-based relation extraction.",
"The current framework uses a greedy search for each single hop.",
"We expect in the future that incorporating a beam search may further improve performance.",
"This research is partially supported by Ministry of Science and Technology, Taiwan under Grant no.",
"MOST108-2634-F-002-008-, and the Academia Sinica Thematic Project under Grant no. 233b-1070100."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other"
] |
[
"Knowledge graph (KG) entity typing aims at inferring possible missing entity type instances in KG, which is a very significant but still under-explored subtask of knowledge graph completion.",
"In this paper, we propose a novel approach for KG entity typing which is trained by jointly utilizing local typing knowledge from existing entity type assertions and global triple knowledge from KGs.",
"Specifically, we present two distinct knowledge-driven effective mechanisms of entity type inference.",
"Accordingly, we build two novel embedding models to realize the mechanisms.",
"Afterward, a joint model with them is used to infer missing entity type instances, which favors inferences that agree with both entity type instances and triple knowledge in KGs.",
"Experimental results on two real-world datasets (Freebase and YAGO) demonstrate the effectiveness of our proposed mechanisms and models for improving KG entity typing.",
"The source code and data of this paper can be obtained from: https://github.com/ Adam1679/ConnectE 1 Introduction The past decade has witnessed great thrive in building web-scale knowledge graphs (KGs), such as Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007), Google Knowledge Graph (Dong et al., 2014), which usually consists of a huge amount of triples in the form of ( head entity , relation , tail entity ) (denoted ( e, r, e )).",
"KGs usually suffer from incompleteness and miss important facts, jeopardizing their usefulness in downstream tasks such as question answering (Elsahar et al., 2018), semantic parsing (Berant et al., 2013), relation classification (Zeng et al., 2014).",
"Hence, the task of Equal Contribution.",
"knowledge graph completion (KGC, i.e. completing knowledge graph entries) is extremely significant and attracts wide attention.",
"This paper concentrates on KG entity typing, i.e. inferring missing entity type instances in KGs, which is an important sub-problem of KGC.",
"Entity type instances, each of which is in the formed of ( entity, entity type ) (denoted ( e, t )), are essential entries of KGs and widely used in many NLP tasks such as relation extraction (Zhang et al., 2018; Jain et al., 2018), coreference resolution (Hajishirzi et al., 2013), entity linking (Gupta et al., 2017).",
"Most previous works of KGC focus on inferring missing entities and relationships (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015; Dettmers et al., 2017; Ding et al., 2018; Nathani et al., 2019), paying less attention to entity type prediction.",
"However, KGs also usually suffer from entity types incompleteness.",
"For instance, 10% of entities in FB15k (Bordes et al., 2013), which have the /mu-sic/artist type, miss the /people/person type (Moon et al., 2017).",
"KG entity type incompleteness leads to some type-involved algorithms in KG-driven tasks grossly inefficient or even unavailable.",
"To solve KG entity type incompleteness issue, in this paper we propose a novel embedding methodology to infer missing entity type instances that employs not only local typing knowledge from entity type assertions, as most conventional models do, but also leverages global triple knowledge from KGs.",
"Accordingly, we build two distinct knowledge-driven type inference mechanisms with these two kinds of structural knowledge.",
"Mechanism",
"1. Missing entity types of an entity can be found from other entities that are close to the entity in the embedding space, using local typing knowledge as in Fig. 1 (Mech.1).",
"Mechanism",
"2. Missing entity types of an (head or tail) entity can be inferred from the types of other (tail or head) entities through their relationships, using global triple knowledge as in Fig. 1 (Mech.2).",
"The main idea behind Mech.",
"1 is based on the observation that the learned entities' embeddings by conventional KG embedding methods (Ji et al., 2016; Xie et al., 2016) cluster well according to their types in vector space.",
"For instance, in Fig. 1 (Mech.1), given an entity Barack Obama , it's missing hierarchical type /people/person can be induced by the given hierarchical type of similar entity Donald Trump .",
"In addition, the key motivation behind Mech.",
"2 is that the relationship shall remain unchanged if the entities in a triple fact are replaced with their corresponding hierarchical types.",
"For instance, given a global triple fact ( Barack Obama, born in, Honolulu ), under this assumption, we can induce a new type triple ( /people/person, born in, /location/location ) 1 .",
"Formally, (cid:126) Honolulu (cid:126) Barack Obama = (cid:126) /location/location (cid:126) /people/person (= (cid:126) born in ), which can be used to infer missing entity types, e.g. ( Barack Obama, type=? ) via (cid:126) Barack Obama (cid:126) Honolulu + (cid:126) /location/location = (cid:126) /people/person , as Mech.",
"2 does.",
"Fig. 1 demonstrates a simple illustration of effective mechanisms of entity type inference.",
"Both mechanisms are utilized to build our final composite model.",
"Specifically, we build two embedding models to realize the two mechanisms respectively.",
"First, considering entities and entity types are completely distinct objects, we build two distinct embedding spaces for them, i.e., entity space and entity type space .",
"Accordingly, we encode ( e, t ) entity type instance by projecting the entity from entity space to entity type space with mapping matrix M , hence we have (1): M e (cid:39) t , called E2T .",
"Moreover, we learn the plausibility of ( t e , r, t e ) global type triple by newly generalizing from ( e, r, e ) global 1 For more clarity, we represent it as ( /location/location, born in 1 , /people/person ) in Fig. 1 (Mech.2).",
"triple fact, even though this type triple is not present originally.",
"Following translating assumption (Bor-des et al., 2013), we have (2): t e r (cid:39) t e , called TRT .",
"E2T and TRT are the implementation models of the two mechanisms.",
"Fig. 2 demonstrates a brief illustration of our models.",
"A ranking-based embedding framework is used to train our models.",
"Thereby, entities, entity hierarchical types, and relationships are all embedded into low-dimensional vector spaces, where the composite energy score of both E2T and TRT are computed and utilized to determine the optimal types for ( entity, entity type =?) incomplete assertions.",
"The experimental results on real-world datasets show that our composite model achieves significant and consistent improvement compared to all baselines in entity type prediction and achieves comparable performance in entity type classification.",
"Our contributions are as follows: We propose a novel framework for inferring missing entity type instances in KGs by connecting entity type instances and global triple information and correspondingly present two effective mechanisms.",
"Under these mechanisms, we propose two novel embedding-based models: one for predicting entity types given entities and another one to encode the interactions among entity types and relationships from KGs.",
"A combination of both models are utilized to conduct entity type inference.",
"We conduct empirical experiments on two real-world datasets for entity type inference, which demonstrate our model can successfully take into account global triple information to improve KG entity typing.",
"Entity typing is valuable for many NLP tasks (Yaghoobzadeh et al., 2018), such as knowledge base population (Zhou et al., 2018), question answering (Elsahar et al., 2018), etc.",
"In recent years, researchers attempt to mine fine-grained entity types (Yogatama et al., 2015; Choi et al., 2018; Xu and Barbosa, 2018; Yuan and Downey, 2018) with external text information, such as web search query logs (Pantel et al., 2012), the textual surface patterns (Yao et al., 2013), context representation (Abhishek et al., 2017), Wikipedia (Zhou et al., Table 1: Entity type embedding models.",
"2018).",
"Despite their success, existing methods rely on additional external sources, which might not be feasible for some KGs.",
"To be more universal, Neelakantan et al. (2015) propose two embedding models, i.e. linear model (LM) and projection embedding model (PEM), which can infer missing entity types only with KG itself.",
"Although PEM has more expressive power than LM, however, both of them ignore global triple knowledge, which could also be helpful for encoding entity type assertions via shared entities' embeddings.",
"To address this issue, Moon et al. (2017) propose a state-of-the-art model (ETE) to combine triple knowledge and entity type instances for entity type prediction, and build two entity type embedding methodologies: (1) Synchronous training: treat ( entity, entity type ) assertions as special triple facts that have a unique relationship rdf:type , e.g. ( Barack Obama, rdf:type, person ), and encode all mixed triple facts (original triple data fused with all generated special ones) by conventional entity relation embedding models, such as RESCAL (Nickel et al., 2011), HOLE (Nickel et al., 2016) and TransE (Bordes et al., 2013).",
"(2) Asynchronous training: first learn the entities' embeddings e by conventional entity relation embedding models mentioned above, and then only update entity types' embeddings t for min (cid:107) e t (cid:107) (cid:96) 1 while keeping e fixed, called RESCAL-ET, HOLE-ET, TransE-ET and ETE.",
"Although these approaches expect to explore global triple knowledge for entity type prediction, they still lack of expressive ability due to its simplicity of embeddings.",
"In addition, they irrationally assume both the embeddings of entities and entity types being in the same latent space ( R ).",
"Since entities and entity types are completely distinct objects, it may not be reasonable to represent them in a common semantic space.",
"In this paper, we introduce an enhanced KG entity type embedding model with better expressing and reasoning capability considering both local entity typing information and global triple knowledge in KGs.",
"Note that incorporating more external information (Jin et al., 2018; Neelakantan et al., 2015) is not the main focus in this paper, as we only consider the internal structural information in KGs instead, which correspondingly makes our work much more challenging but also more universal and flexible due to the limited information.",
"Recently, (Lv et al., 2018; Hao et al., 2019) also attempt to embedding structural information in KG.",
"However, the goals and models are very different from ours.",
"They encodes the concepts, not hierarchical types.",
"On the contrary, we focus on the latter not the former.",
"Table 1 summarizes the energy functions and other different settings of entity type embedding models.",
"We consider a KG containing entity type instances of the form ( e, t ) H ( H is the training set consists of lots of ( entity, entity type ) assertions), where e E ( E is the set of all entities) is an entity in the KG with the type t T ( T is the set of all types).",
"For example, e could be Barack Obama and t could be /people/person .",
"As a single entity can have multiple types, entities in KG often miss some of their types.",
"The aim of this work is to infer missing entity type instances in KGs.",
"which learn low-dimensional vector representations ( embeddings ) of atomic symbols (i.e. entities, entity hierarchical types, relationships ).",
"In this framework, we learn two submodels: (1) one for predicting entity types given entities, and (2) another one to encode the interactions among entity types and relationships from KGs.",
"The joint action of both models in prediction allows us to use the connection between triple knowledge and entity type instances to perform KG entity typing.",
"The first model (E2T) of the framework concerns the learning of a function S e 2 t ( e, t ) with local typing knowledge from entity type instances, which is designed to score the similarity of an entity e and a type t .",
"The main ideas behind this model are as follows: (1) Since the learned entity embeddings cluster well when they have the same or similar types, therefore, it is rather intuitive that the entity type embedding represents the projective common concept representation of a cluster of entities, i.e., f proj ( e ) (cid:39) t e , e E .",
"e ( R ) is the embedding of the entity e , t e ( R (cid:96) ) is the embedding of the type t e .",
"The entity type embedding represents common information of their entities, it thus should have fewer variates, i.e., (cid:96) < .",
"(2) Since the entities and entity types are totally distinct objects, we respectively build two embedding space for them, i.e., entity space and entity type space .",
"(3) Inspired by the previous work TranSparse (Ji et al., 2016) projecting entities from entity space to relation space with operation matrix M , which we adapted, replacing relation space with entity type space, we thus define f proj ( e ) = M e ( (cid:39) t e ) .",
"Therefore, this model consists of first projecting entity embedding into entity type space, and then computing a similarity measure between this projection and an entity type embedding.",
"The scoring function of E2T given ( e, t ) is: S e 2 t ( e, t ) = (cid:107) M e t (cid:107) 2 (cid:96) 2 , (1) where M R (cid:96) is a transfer matrix mapping entity embeddings into entity type space.",
"The score is expected to be lower for a golden entity type instance and higher for an incorrect one.",
"Using only entity type instances for training ignores much of relational knowledge that can leverage from triple facts in KGs.",
"In order to connect this relational data with our model, we propose to learn entity type and relationship embeddings from global triple knowledge from KGs.",
"The key motivations behind this model are: (1) As mentioned above, the entities cluster well according to their types.",
"Therefore, we believe that an essential premise of a triple ( head entity, relationship, tail entity ) holds is that its corresponding entity types should first conform to this relationship.",
"Hence, we can build a new entity type triple ( head type, relationship, tail type ) by replacing both head entity and tail entity with their corresponding types: i.e. ( e, r, e ) replace ( t e , r, t e ) .",
"( e, r, e ) D , D is the training set consists of a lot of triples.",
"r R ( R is the set of relationships).",
"t e and t e stand for the hierarchical types of left entity e and right entity e respectively.",
"(2) Since the relationship r remains unchanged in replacement, we build two differentiated embeddings for the i -th relationship r i in two embedding spaces: r (cid:63)i ( R ) in entity space and r i ( R (cid:96) ) in entity type space.",
"(3) Given entity type triple ( t e , r, t e ) , under translation assumption 2 as in (Bordes et al., 2013), we have: t e r (cid:39) t e .",
"Hence, the scoring function is defined as: S trt ( t e , r, t e ) = (cid:107) t e + r t e (cid:107) 2 (cid:96) 2 , (2) where t e , r , t e R (cid:96) .",
"The model returns a lower score if the two entity types is close under this relationship and a higher one otherwise.",
"Fig. 2 shows an illustration of E2T and TRT.",
"Our framework can be used for entity type prediction in the following way.",
"First, for each entity e 2 We chose TransE in this paper, and it is not difficult for other enhanced translation-based methods to model triple knowledge, such as Trans(H, R, D and G) (Wang et al., 2017).",
"that appears in the testing set, a prediction by E2T is performed with: t e = arg min t T S e 2 t ( e, t ) .",
"In addition, a composite score (E2T+TRT) by connecting entity type instances and entity type triples with embedding model, which we call ConnectE 3 , is defined as follows:",
"S e 2 t + trt ( e, t e ) = S e 2 t ( e, t e )+ (1 ) (cid:110) 1 | P | (cid:88) t e PS trt ( t e , r, t e ) + 1 | Q | (cid:88) t e QS trt ( t e , r, t e ) (cid:111)",
"where is a hyperparameter for the trade-off.",
"P = { t e | t e T , ( e, r, e ) D} (i.e. given e is head entity, P is the set of all corresponding tail entities' types.), and Q = { t e | t e T , ( e, r, e ) D} (i.e. given e is tail entity, Q is the set of all corresponding head entities' types.).",
"| P | and | Q | represent the total number of entity types in P and Q respectively.",
"A prediction is performed with: t e = arg min t e T S e 2 t + trt ( e, t e ) .",
"Hence, our final composite model ConnectE-(E2T+TRT) favors predictions that agree with both entity type instances and global triple information in KGs.",
"We use ranking loss algorithm for training ConnectE-(E2T+TRT), in which the parameter set = { E , T , R (cid:63) , R , M } .",
"E , T stand for the collection of all entities' and types' embeddings respectively.",
"( R (cid:63) , R ) denotes the collections of relationships' differentiated embeddings.",
"The ranking objectives are designed to assign lower scores to true facts (including ( e, r, e ) triple facts, ( e, t ) entity type instances and ( t e , r, t e ) type triples) versus any corrupt ones.",
"We build three sub-objective functions, i.e., J 1 , J 2 , J 3 , and implement dynamic optimization strategy, i.e., fix a partial of parameters and only update the rest when minimizing each function.",
"(1) J 1 : We choose TransE (see Bordes et al. (2013)) to model triple facts as S ( e, r, e ) , in which we update the embeddings of entities ( e E ) and the embeddings of relationships 3 We also call it ConnectE-(E2T+TRT), and use ConnectE-(E2T+0) to denote E2T for uniformity in the experiments.",
"( r (cid:63) R (cid:63) ).",
"(2) J 2 : We only update the embeddings of entity types ( t T ) and projecting matrix M , not the entities' embeddings that have been trained in J 1 .",
"(3) J 3 : We only update the embeddings of relationships ( r R ) while keeping the entity types' embeddings fixed.",
"The training is performed using Adagrad (Kingma and Ba, 2014).",
"All embeddings in are initialized with uniform distribution.",
"The procedure, from J 1 , J 2 to J 3 , is iterated for a given number of iterations.",
"We have: J 1 = (cid:88) D (cid:88) D (cid:48) [ 1 + S ( e, r, e ) S ( e (cid:48) , r, e (cid:48) )] + , J 2 = (cid:88) H (cid:88) H (cid:48) [ 2 + S e 2 t ( e, t e ) S e 2 t ( e (cid:48) , t (cid:48) e )] + , J 3 = (cid:88) Z (cid:88) Z (cid:48) [ 3 + S trt ( t e , r, t e ) S trt ( t (cid:48) e , r, t (cid:48) e )] + 1 , 2 , 3 > 0 are margin hyperparameters, and the corrupted datasets are built as follows: D (cid:48) := { ( e (cid:48) , r, e ) | ( e, r, e ) D , e (cid:48) E , e (cid:48) (cid:54) = e } { ( e, r, e (cid:48) ) | ( e, r, e ) D , e (cid:48) E , e (cid:48) (cid:54) = e } , H (cid:48) := { ( e (cid:48) , t e ) | ( e, t e ) H , e (cid:48) E , e (cid:48) (cid:54) = e } { ( e, t (cid:48) e ) | ( e, t e ) H , t (cid:48) e T , t (cid:48) e (cid:54) = t e } , Z (cid:48) := { ( t (cid:48) e , r, t e ) | ( t e , r, t e ) Z , t (cid:48) e T , t (cid:48) e (cid:54) = t e } { ( t e , r, t (cid:48) e ) | ( t e , r, t e ) Z , t (cid:48) e T , t (cid:48) e (cid:54) = t e } D , H are training datasets of triple facts and entity type instances in KG.",
"Z is the training data of type triples, built by replacing entities in D with their corresponding entity types.",
"We conduct the experiments on two real-world datasets ( D ) widely used in KG embedding literature, i.e. FB15k (Bordes et al., 2013) and YAGO43k (Moon et al., 2017), which are subsets of Freebase (Bollacker et al., 2008) and YAGO (Suchanek et al., 2007) respectively.",
"They consist of triples, each of which is formed as ( left entity, relationship, right entity ).",
"We utilize two entity type datasets (H, each entry of which is formed as (entity, entity type)) built by Moon et al. (2017), called FB15kET and YAGO43kET, in which the entity types are mapped to entities from FB15k and YAGO43k respectively.",
"Moreover, we build new type triple datasets (Z, each entry of which is formed as (head type, relationship, tail type)) to train our model.",
"They are built based on D and H .",
"First, for each triple (e, r, ē) ∈ D, we replace the head and the tail with their types according to H.",
"The generated datasets are called FB15kTRT(full) and YAGO43kTRT(full).",
"Second, considering the scalability of the proposed approach for full KGs, we further modify the generation method of type triples, which is the major training bottleneck.",
"We discard newly generated type triples with low frequency (i.e., frequency = 1).",
"After that, the sizes of both FB15kTRT(full) and YAGO43kTRT(full) decrease by about 90%; we call the resulting datasets FB15kTRT(disc.) and YAGO43kTRT(disc.) respectively.",
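A sketch of how the type triples Z might be generated from D and H, including the frequency-based discarding (function and variable names are ours; the authors' exact procedure is published on their GitHub):

```python
from collections import Counter

def build_type_triples(triples, entity_types, min_freq=2):
    """Replace head/tail entities in KG triples with their types to get
    (head type, relation, tail type) triples, then discard triples whose
    frequency falls below min_freq (frequency-1 triples are dropped in
    the 'disc.' dataset variants)."""
    raw = []
    for h, r, t in triples:
        for head_type in entity_types.get(h, []):
            for tail_type in entity_types.get(t, []):
                raw.append((head_type, r, tail_type))
    counts = Counter(raw)
    return sorted({z for z in raw if counts[z] >= min_freq})
```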
"The statistics of the datasets are shown in Table 2.",
"To save space, we put more data processing details (including cleaning H, building Z, etc.) on our GitHub website.",
"This task aims to complete a pair (entity, entity type) when the type is missing, verifying the capability of our model for inferring missing entity type instances.",
"Evaluation Protocol.",
"We focus on entity type prediction determined by Formulas (3) and (4).",
"We use ranking criteria for evaluation.",
"First, for each test pair, we remove the type and replace it with each of the types in T in turn.",
"The function values of these candidate pairs are computed by the respective models and then sorted in ascending order.",
"We can obtain the exact rank of the correct type in the candidates.",
"Finally, we use two metrics for comparison: (1) the mean reciprocal rank (MRR), and (2) the proportion of correct entities ranked in the top 1/3/10 (HITS@1/3/10)(%).",
"Since the evaluation setting of Raw is not as accurate as Filter (Bordes et al., 2013), we only report the experimental results with the latter setting in this paper.",
"Formally, MRR = (1/|C|) Σ_{i=1}^{|C|} 1/rank_i, where C is the set of test pairs and rank_i is the rank position of the true entity type for the i-th pair.",
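Both ranking metrics can be computed from the rank positions alone; a minimal sketch:

```python
def mrr(ranks):
    """Mean reciprocal rank over the rank positions of the true types."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k):
    """Fraction of test pairs whose true type is ranked in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)
```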
"Implementation.",
"The results of entity type prediction are shown in Table 3 , where the results for the baselines are directly taken from original literature (Moon et al., 2017).",
"We do not choose LM and PEM (Neelakantan et al., 2015) as baselines since they do not utilize triple knowledge, so a comparison with them would not be fair.",
"For training our model, we select the learning rate λ ∈ {0.1, 0.05, 0.001}, the margins γ_1, γ_2, γ_3 ∈ {0.5, 1, 2, 5, 10}, the embedding dimension pairs (d, ℓ) ∈ {(100, 50), (150, 75), (200, 100), (250, 125)}, and the weight β ∈ {0.5, 0.65, 0.85, 0.95}.",
"We use negative sampling, and gradient descent with AdaGrad as our optimization approach to improve convergence performance.",
"During the initialization process, each embedding vector of the entities, entity types and relationships is initialized with random numbers drawn from the uniform distribution U(−√(6/(m+n)), √(6/(m+n))), where n ∈ {#Ent, #Type, #Rel} and m ∈ {d, ℓ}.",
"During the whole training process, we normalize the entity embeddings after each epoch.",
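The bound above is the Xavier/Glorot-style uniform initialization; a minimal sketch of the initialization and the per-epoch normalization (function names are ours, not from the paper's code):

```python
import math
import random

def init_embedding(m, n):
    """One embedding vector of dimension m drawn from
    U(-sqrt(6/(m+n)), sqrt(6/(m+n))), where n is the vocabulary size
    (#Ent, #Type or #Rel)."""
    bound = math.sqrt(6.0 / (m + n))
    return [random.uniform(-bound, bound) for _ in range(m)]

def l2_normalize(vec):
    """Rescale an embedding to unit L2 norm (applied to entity
    embeddings after each epoch)."""
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]
```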
"We select the parameters based on MRR on the validation set.",
"The optimal configurations are: {λ = 0.1, γ_1 = γ_2 = γ_3 = 2, d = 200, ℓ = 100, β = 0.85} on FB15k/ET/TRT, and {λ = 0.1, γ_1 = γ_2 = γ_3 = 1, d = 250, ℓ = 125, β = 0.85} on YAGO43k/ET/TRT.",
"We run 800 epochs on both datasets, and the batch size is 4096.",
"Experimental Results.",
"We can see from Table 3 that our ConnectEs outperform all baselines for entity type prediction in terms of all metrics on FB15kET and YAGO43kET.",
"This confirms the capability of ConnectE in modeling local typing and global triple knowledge and in inferring missing entity type instances in KGs.",
"The model ConnectE-(E2T+TRT)(full) achieves the highest scores.",
"Analysis.",
"(1) In E2T, we utilize a mapping matrix M that compresses entity embeddings into the type embedding space, considering that an entity type embedding represents information common to all entities belonging to that type.",
"The type embedding should be in a sharing subspace of entity embeddings.",
"The experimental results of E2T compared with the baselines demonstrate that this assumption would be quite reasonable.",
"(2) In E2T+TRT, we build new type-relation-type data, and then connect them with entity type instances.",
"This approach provides more direct useful information to (weakly) supervise entity type prediction.",
"For example, given a fact that the head entity Barack Obama belongs to type /people/person and the relationship born in, we can make the best guess that the type of the tail entity Honolulu is /location/location.",
"(Table 3 shows the entity type prediction results.)",
"Hence, the addition of type triples in ConnectE-(E2T+TRT) yields superior performance compared to ConnectE-(E2T+0).",
"(3) Concerning the scalability of our approach for large KGs, we utilize FB15kTRT(disc.) and YAGO43kTRT(disc.) for prediction; their training time is reduced by about 90% as the training data size decreases by 90%.",
"Moreover, the results of ConnectE-(E2T+TRT)(disc.) show that it is comparable to the best ConnectE-(E2T+TRT)(full).",
"This task aims to judge whether each entity type instance in the testing data holds or not, which can be viewed as a binary classification problem.",
"Evaluation Protocol.",
"Since there are no explicit negative entity type instances in existing KGs, we create classification datasets by randomly switching the type in entity type pairs from the validation and test sets, yielding equal numbers of positive and negative examples.",
"Inspired by the evaluation metric of triple classification in (Socher et al., 2013), we calculate the scores of all entity type instances based on the model energy function, and rank all instances in the test set by these scores.",
"Those instances with lower scores are considered to be true.",
"We use precision/recall curves to show the performances of all models.",
"Moreover, we also compare the accuracy among different models.",
"We first use the validation set to find the best threshold δ.",
"If the model score S_{e2t+trt}(e, t_e) is below δ, the entity type instance is classified as positive; otherwise it is classified as negative.",
"The final accuracy is based on how many instances are classified correctly.",
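The threshold search just described can be sketched as follows (a hypothetical helper; in the paper the scores come from the energy function, where lower means more plausible):

```python
def best_threshold(scores, labels):
    """Pick the decision threshold maximizing accuracy on a validation
    set. Instances scoring strictly below the threshold are classified
    as positive (lower energy = more plausible)."""
    candidates = sorted(set(scores)) + [max(scores) + 1.0]
    best, best_acc = candidates[0], 0.0
    for delta in candidates:
        acc = sum((s < delta) == bool(y) for s, y in zip(scores, labels)) / len(labels)
        if acc > best_acc:
            best, best_acc = delta, acc
    return best, best_acc
```

The chosen threshold is then applied unchanged to the test set.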
"Implementation.",
"We utilize the source codes and parameter settings of several baselines provided by Moon et al. (2017) for this task.",
"The optimal parameter settings for our proposed models are: {λ = 0.1, γ_1 = γ_2 = γ_3 = 2, d = 200, ℓ = 100, β = 0.85} on FB15kET, and {λ = 0.1, γ_1 = γ_2 = γ_3 = 1, d = 250, ℓ = 125, β = 0.85} on YAGO43kET.",
"On both datasets, we train on all the training data for 800 epochs with a batch size of 4096.",
"After training, we first draw PR curves with dynamic thresholds.",
"We select the best threshold based on accuracy on the validation set, and use it to calculate accuracy on the test set.",
"Experimental Results.",
"We draw the PR-curves for type classification task on both datasets in Fig.3.",
"Note that we only report the results of ConnectE-(E2T+TRT)(disc.) rather than ConnectE-(E2T+TRT)(full), since the former trains much faster and its results are close to the best results of the latter.",
"We can see from Fig. 3 that when the recall rate is between 0.88 and 0.97, the ConnectE-(E2T+TRT)(disc.) model achieves the highest precision rate on FB15kET.",
"In other ranges, our ConnectE-(E2T+TRT)(disc.) model also shows comparable performance.",
"The result is consistent on YAGO43kET.",
"Specifically, ConnectE-(E2T+TRT)(disc.) achieves the best F1 score of 94.66% when recall = 94.27% and precision = 95.05% on FB15kET.",
"Also, ConnectE-(E2T+TRT)(disc.) surpasses other models and gets F1 score of 92.13% when precision = 93.18% and recall = 91.11% on YAGO43kET.",
"This confirms the capability of our models: they not only infer missing types in KGs but also perform well on KG entity type classification.",
"Table 4 shows the accuracy results of entity type classification, from which we can observe that: (1) On FB15kET, ConnectE-(E2T+TRT)(disc.) achieves the best accuracy score (94.49%).",
"Compared to the most closely related model ETE, our model shows a 0.48% absolute performance improvement.",
"On YAGO43kET, ConnectE-(E2T+TRT)(disc.) model outperforms other models as well.",
"The improvement of our model compared to ETE is almost 1.51%.",
"(Figure 3 shows the entity type classification results as precision/recall curves.)",
"(2) Compared to the improvement on YAGO43kET, the advantage ConnectE-(E2T+TRT)(disc.) has over ConnectE-(E2T+0) on FB15kET seems insignificant, which indicates that the type triples in FB15kTRT contribute less to entity type classification than those in YAGO43kTRT.",
"It may be partially caused by the fact that the number of relations in YAGO43k (#Rel=37) is far less than that in FB15k (#Rel=1,345), which could considerably influence the effectiveness of the type-relation-type training set.",
"Due to the rareness of relationships in YAGO43k, each entity usually connects to a large number of other entities through a single relationship, which means that the magnitudes of |P| and |Q| in the composite model scoring function are large.",
"After averaging in ConnectE-(E2T+TRT)(disc.), it could achieve more stable and significant results on YAGO43kET.",
"Table 5 shows examples of entity type prediction by our model on FB15k/ET/TRT, which demonstrate our motivation for Mech. 2: the head type and tail type indeed maintain the relationship between the head entity and the tail entity.",
"Given the entity Peter Berg, TRT can find the HITS@1 type prediction /people/person for it via the existing entity type assertion (New York, /location/location) and the relationship (/loc./loc./people_born_here) between them, i.e., vec(Peter Berg) − vec(New York) + vec(/location/location) = vec(/people/person).",
"In this paper, we described a framework for leveraging global triple knowledge to improve KG entity typing by training not only on ( entity, entity type ) assertions but also using newly generated ( head type, relationship, tail type ) type triples.",
"Specifically, we proposed two novel embedding-based models to encode entity type instances and entity type triples respectively.",
"The connection of both models is utilized to infer missing entity type instances.",
"The empirical experiments demonstrate the effectiveness of our proposed model.",
"Our modeling method is general and should apply to other type-oriented tasks.",
"Next, we plan to use this framework for KG entity type noise detection.",
"The authors would like to thank all anonymous reviewers for their insightful comments.",
"We also want to thank Zhiyuan Liu (Tsinghua University) and Linmei Hu (BUPT) for their useful suggestions and comments on early drafts.",
"This work was supported by the National Natural Science Foundation of China under Grant No.61922085, 61906159, the Sichuan Science and Technology Program under Grant No.2018JY0607, the Fundamental Research Funds for the Central Universities under Grant No.JBK2003008, Fintech Innovation Center, and Financial Intelligence and Financial Engineering Key Laboratory of Sichuan Province."
] | [
"abstain",
"objective",
"method",
"objective",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"method",
"method",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"objective",
"method",
"method",
"other",
"other",
"other"
] |
[
"In this work, we propose a flow-adapter architecture for unsupervised NMT.",
"It leverages normalizing flows to explicitly model the distributions of sentence-level latent representations, which are subsequently used in conjunction with the attention mechanism for the translation task.",
"The primary novelties of our model are:",
"(a) capturing language-specific sentence representations separately for each language using normalizing flows and",
"(b) using a simple transformation of these latent representations for translating from one language to another.",
"This architecture allows for unsupervised training of each language independently.",
"While there is prior work on latent variables for supervised MT, to the best of our knowledge, this is the first work that uses latent variables and normalizing flows for unsupervised MT. We obtain competitive results on several unsupervised MT benchmarks.",
"Recent advances in deep learning have boosted the development of neural machine translation (NMT).",
"Typical NMT models leverage an encoder-decoder framework (Cho et al., 2014; Sutskever et al., 2014).",
"However, NMT models have been shown to be data-hungry, as the number of parallel sentences significantly influences the performance (Zoph et al., 2016).",
"Unfortunately, large-scale bilingual corpora are limited to a relatively small subset of languages (Al-Onaizan et al., 2002).",
"In contrast to bilingual corpora, monolingual corpora are much easier to obtain.",
"Unsupervised NMT, compared with supervised NMT, aims to train a model without parallel data.",
"Some early works (Irvine and Callison-Burch, 2016; Sennrich et al., 2016b; Cheng et al., 2016) used monolingual corpora to boost performance when parallel data is not abundant.",
"Lample et al. (2018a) and Artetxe et al. (2018) explored the possibility of training a model relying only on monolingual corpora.",
"(Figure 1: Inference pipeline of the proposed flow-adapter based model for source-to-target translation. The decoder also uses the attentional input, shown as the gray arrow between the encoder and the decoder.)",
"They both leveraged a shared-encoder architecture in order to generate universal representations, trained with techniques such as initial word-by-word translation through bilingual dictionaries (Lample et al., 2018b; Artetxe et al., 2017), denoising auto-encoding (DAE) (Vincent et al., 2008) and iterative back-translation (BT) (Hoang et al., 2018).",
"However, Yang et al. (2018) argued that using a shared encoder that maps sentences of different languages to the same latent space is a bottleneck in such models.",
"They proposed to use two independent encoders sharing part of their weights and achieved better results.",
"However, all of the aforementioned approaches train the translation models almost from scratch (with only some prior knowledge in the pre-trained embeddings), making it hard to further advance their performance.",
"With the success of pre-trained language models (Peters et al., 2018; Devlin et al., 2019), researchers have begun to explore the possibility of using pre-trained models for unsupervised NMT.",
"Conneau and Lample (2019) extended the pre-training from a single language to multiple languages, referred to as cross-lingual pre-training.",
"By using pre-trained cross-language models (XLMs) to initialize encoder and decoder, they achieved good unsupervised MT performance on multiple language pairs.",
"In related work, Song et al. (2019) proposed masked sequence to sequence pre-training (MASS), which directly pre-trains a whole encoder-decoder model.",
"Üstün et al. (2021) proposed a language-specific denoising-adapter architecture to increase the multilingual modeling capacity of the pre-trained model mBART (Liu et al., 2020) and used these adapters for multilingual unsupervised NMT.",
"Although these adapters are trained with monolingual data only, the fine-tuning step relies on parallel data.",
"Current NMT frameworks rely heavily on the attention mechanism (Bahdanau et al., 2015; Vaswani et al., 2017) to capture alignments.",
"However, attention-based context vectors can fail to extract sufficiently accurate sentence-level semantics and thus result in incorrect translations or translation ambiguity (Tu et al., 2016; Zhang et al., 2016).",
"To tackle this issue, several variational frameworks for modeling the translation process have been proposed (Zhang et al., 2016; Eikema and Aziz, 2019; Setiawan et al., 2020).",
"These approaches incorporate sentence-level latent representations into NMT.",
"A latent representation, in the context of this paper, is a fixed-size continuous vector from an unknown distribution that captures the semantics of a source sentence.",
"The target sentence is then generated from this latent representation using a simple transformation along with the attention mechanism commonly found in transformer architectures.",
"In this way, when the attention mechanism learns incorrect alignments, the latent representation plays a complementary role in guiding the translation.",
"Prior work in this vein has only been conducted in supervised NMT.",
"In this paper, we propose a flow-adapter architecture for unsupervised NMT.",
"Similar to variational methods, we model the distribution of sentence-level representations.",
"However, unlike variational methods, which model the distribution in an implicit way, we use a pair of normalizing flows to explicitly model the distributions of source and target languages.",
"Secondly, different from some previous unsupervised NMT models that assume that the representations of source and target sentences share a common semantic space, we assume the representations are different because of language-specific characteristics.",
"Hence they are modeled separately for each language.",
"Subsequently a simple transformation converts source representations into target representations.",
"This makes it possible to better capture sentence semantics in a language-specific manner.",
"Lastly, instead of minimizing KL loss, the flows are directly trained by maximum likelihood estimation (MLE) of sentence-level latent representations.",
"This gives the latent representations more flexibility.",
"Our main contributions: (1) We propose a novel flow-adapter architecture.",
"It uses normalizing flows to explicitly model the distributions of sentence-level representations and performs a latent representation transformation from source to target.",
"To the best of our knowledge, this is the first work that uses latent variables and normalizing flows for unsupervised NMT.",
"(2) Experiments show the validity and effectiveness of our flow-adapter architecture.",
"It performs very well in unsupervised NMT on several language pairs on the Multi30K dataset.",
"When additionally using pre-trained models, we achieve results competitive with the state of the art on WMT datasets, especially for en-fr (WMT'14) and en-ro (WMT'16).",
"Normalizing flows (NFs) are a special type of deep generative model.",
"Different from generative adversarial networks (GAN) (Goodfellow et al., 2014) and variational auto-encoding (VAE) (Kingma and Welling, 2014), NFs allow for not only sampling but also exact density estimation.",
"Due to such desirable properties, in recent years, they have been successfully applied to fields such as image (Ho et al., 2019; Kingma and Dhariwal, 2018), audio (Esling et al., 2019; van den Oord et al., 2018) and video generation (Kumar et al., 2019).",
"In addition to significant achievements in modeling continuous data, NFs have also been used for modeling discrete data, either by directly modeling the data in discrete space (Tran et al., 2019; Hesselink and Aziz, 2020) or by transforming the discrete data into continuous space (Ziegler and Rush, 2019; Tang et al., 2021).",
"NFs transform between two distributions based on the following change-of-variables formula (we follow the introduction of Dinh et al. (2015, 2017)): p_x(x) = p_z(z) |det(∂f(z)/∂z)|^{-1} (1),",
"where z ∼ p_z(z) and x ∼ p_x(x) denote two vectors from a simple latent distribution p_z(z) and the complex distribution of the observed data p_x(x), f is an invertible and differentiable function (a neural network with parameters θ) with f(z) = x, and det(∂f(z)/∂z) denotes the determinant of the Jacobian matrix of f.",
"The idea of NFs is to learn an f such that f and f 1 transform between the latent space p z ( z ) and the observed space p x ( x ) .",
"Constructing a single arbitrarily complex invertible and differentiable function is usually cumbersome.",
"Therefore, a generally adopted approach is to stack multiple transformations f_i together, i.e., x = f(z) = f_K ∘ ⋯ ∘ f_1(z).",
"Similarly, for the reverse direction we have z = f^{-1}(x) = f_1^{-1} ∘ ⋯ ∘ f_K^{-1}(x), whose Jacobian matrix is efficient to compute.",
"Here K denotes the number of sequential flows (e.g., K = 3 in Table 1).",
"Normalizing flows are usually optimized by MLE of the parameters θ, i.e., log p(D|θ) = Σ_{n=1}^{N} log p_x(x^{(n)}|θ), where N is the data size.",
"By applying a variant of the change-of-variables formula in Equation (1), i.e., log p_x(x) = log p_z(f^{-1}(x)) + log |det(∂f^{-1}(x)/∂x)|, the MLE objective can be reformulated as: log p(D|θ) = Σ_{n=1}^{N} [log p_z(f^{-1}(x^{(n)})|θ) + log |det(∂f^{-1}(x^{(n)})/∂x^{(n)})|] (2).",
"2.2 Latent-variable (variational) NMT: Compared with standard encoder-decoder based NMT models, latent-variable (variational) approaches (Zhang et al., 2016; Eikema and Aziz, 2019; Ma et al., 2019; Calixto et al., 2019; Setiawan et al., 2020; Shu et al., 2020) additionally leverage latent random variables.",
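As a concrete instance of this change-of-variables computation, a single scalar affine flow x = a·z + b with a standard normal base distribution can be written out by hand (illustrative only; real NFs stack many learned layers):

```python
import math

def standard_normal_logpdf(z):
    """log-density of the scalar standard normal base distribution."""
    return -0.5 * (z * z + math.log(2 * math.pi))

def affine_flow_logpdf(x, a, b):
    """log p_x(x) for x = f(z) = a*z + b with z ~ N(0, 1), via the
    change-of-variables formula log p_z(f^{-1}(x)) + log|det df^{-1}/dx|.
    Here f^{-1}(x) = (x - b)/a, so the log-det term is -log|a|."""
    z = (x - b) / a
    return standard_normal_logpdf(z) - math.log(abs(a))
```

Summing this log-density over a dataset gives exactly the MLE objective in Equation (2) for this toy flow.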
"Let x be a sentence from the source language and y be its translation in the target language.",
"Then, the variational NMT framework introduces a continuous random latent variable z for the translation modeling, i.e., p ( y | z , x ) .",
"With the introduction of z, the conditional probability p(y|x) can be reformulated as follows: p(y|x) = ∫_z p(y|z,x) p(z|x) dz (3). In this way, z serves as a global semantic signal that helps counteract incorrect alignments the model has learned and uses through attention.",
"However, the integration of z poses challenges for inference.",
"To address this problem, variational NMT adopts techniques from VAE (Kingma and Welling, 2014; Rezende et al., 2014), namely, neural approximation and the reparameterization trick.",
"Neural approximation leverages a neural network to approximate the posterior distribution p(z|x,y) with q_φ(z|x,y), where φ denotes the parameters of the neural network.",
"In most works, q_φ(z|x,y) is designed as a diagonal Gaussian N(μ, diag(σ²)), where the mean μ and the variance σ² are parameterized with neural networks.",
"Reparameterization means that the latent random variable z is parameterized as a function of the mean μ and the variance σ².",
"In this way, the gradient with respect to the parameters μ and σ² can be computed.",
"The reparameterization of z is often carried out in a location-scale manner: z = μ + σ ⊙ ε, where ε ∼ N(0, I).",
"With these two techniques, the learning objective of variational NMT is the evidence lower bound (ELBO) of the conditional probability p(y|x): L(θ, φ; x, y) = −KL(q_φ(z|x,y) || p(z|x)) + E_{q_φ(z|x,y)}[log p(y|z,x)] (4), where p(z|x) is the prior distribution modeled by a neural network and p(y|z,x) is modeled by the decoder given the input source sentence x and the latent variable z.",
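The location-scale reparameterization can be sketched as follows (the helper name is ours; in practice this is a tensor operation inside the model):

```python
import random

def reparameterize(mu, sigma, eps=None):
    """Location-scale reparameterization z = mu + sigma * eps with
    eps ~ N(0, I): the randomness is isolated in eps, so gradients can
    flow to mu and sigma. Vectors are plain Python lists here."""
    if eps is None:
        eps = [random.gauss(0.0, 1.0) for _ in mu]
    return [m + s * e for m, s, e in zip(mu, sigma, eps)]
```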
"The KL term minimizes the discrepancy between the prior p ( z | x ) and the posterior q ( z | x , y ) .",
"In the inference step, z can therefore be sampled from the prior, which only requires x instead of the posterior that requires both x and y .",
"Although this variational framework leverages latent variables, which are helpful for translation, it still has some flaws: 1) training a variational NMT framework requires parallel corpora to construct the posterior q_φ(z|x,y), and such parallel corpora are not available for unsupervised MT; 2) the distribution family of the latent variables, e.g., p(z|x), is pre-defined, e.g., a Gaussian, which might restrict the advantage of using a complex posterior; 3) as variational NMT leverages z sampled from p(z|x) for inference, an underlying assumption is that z should be the same whether only x is considered or both x and y are considered.",
"In other words, this framework assumes z is language-agnostic, which might not be true since language-specific characteristics can influence the generation of z .",
"In this work, we want to reap the benefits of introducing latent variables into unsupervised MT while at the same time avoiding the flaws of variational NMT we just discussed.",
"Therefore, we propose a flow-adapter based framework that uses two NFs to explicitly model the distribution of the sentence-level latent representations of the source and target sentences.",
"In this way, we can take account of multilinguality in unsupervised MT and make use of language-specific sentence-level representations.",
"During the translation process, a latent code transformation is performed to transform the source-language representation into the target-language representation so that the decoder can leverage them to generate a better target-language sentence.",
"We will first introduce the sentence-level representation as well as the latent code transformation in Section 3.1, followed by the description of the flow-adapter based framework for unsupervised MT in Section 3.2.",
"As previously mentioned, variational methods such as those of Zhang et al. (2016) and Setiawan et al. (2020) assume that the semantics of the source sentence x and the target sentence y are the same, and thus that the generated latent variable z is the same regardless of whether we consider only x or both x and y.",
"Unsupervised NMT methods such as those of Lample et al. (2018a) and Conneau and Lample (2019) similarly assume that a shared encoder maps source and target sentences into a shared latent space.",
"In this work, however, we diverge from this assumption and follow Yang et al. (2018) in adopting the desideratum that the unique and internal characteristics of each language be respected.",
"One could think that the semantics of a pair of sentences should theoretically be the same; but in reality, because of language-specific characteristics, the latent representations z obtained by an encoder can be different for source and target sentences.",
"Differences in vocabulary, pragmatics and other linguistic properties all influence the generation of the latent representations.",
"Therefore, we consider the latent representations from a different perspective as follows.",
"We can view z_x and z_y as expressions of the sentence-level representations in two distinct languages based on the same underlying semantics u, where u is truly language-agnostic.",
"z x and z y are obtained by applying parameter-free techniques such as pooling to the output of the encoder fed with source and target languages (see Section 3.2 for details).",
"Modeling by NFs.",
"For our unsupervised scenario, we propose to explicitly model the distributions of the sentence-level representations of both source and target sentences, i.e., p_{z_x}(z_x) and p_{z_y}(z_y), using NFs with K sequential flows: p_{z_x}(z_x) = p(u) ∏_{i=1}^{K} |det(∂f_x^{(i)}(z^{(i)})/∂z^{(i)})|^{-1} (5) and p_{z_y}(z_y) = p(u) ∏_{i=1}^{K} |det(∂f_y^{(i)}(z^{(i)})/∂z^{(i)})|^{-1} (6), where p(u) is a base distribution, e.g., the standard normal distribution; f_x^{(i)} and f_y^{(i)} are the i-th transformations for the source and target languages, respectively; and z^{(i)} is the intermediate variable, where we define z^{(1)} = u and z^{(K)} = z_x or z_y for notational convenience.",
"The base distribution can be viewed as the true underlying semantic space, abstracting away from language specifics.",
"Our transformation to the sentence-level representations is similar to (Li et al., 2020).",
"They argued that BERT induces a non-smooth anisotropic semantic space of sentences, which can harm its accurate representation of semantic similarity.",
"Therefore, they also used NFs to transform the anisotropic BERT sentence-level distribution to a standard Gaussian distribution that is smooth and isotropic and reported better performance on some sentence-level similarity tasks.",
"By using this type of sentence-level representation, the semantics of sentences from different languages can be aligned in a simple common space in an unsupervised way, which we show is effective for unsupervised MT. For simplicity, we denote the NFs transforming the distributions of the source and target sentence-level representations to the base distribution as mappings G_{z_x→u} and G_{z_y→u}.",
"(Figure 2, top two diagrams: denoising auto-encoding for source and target sentences.)",
"Because of the invertibility property of NFs, these mappings are also invertible, and we have G_{u→z_x} = G_{z_x→u}^{-1} and G_{u→z_y} = G_{z_y→u}^{-1}.",
"Latent Code Transformation.",
"Inspired by AlignFlow (Grover et al., 2020), we consider the cross-domain transformation between z x and z y .",
"In this way, we can formulate a language-specific latent code for the decoder.",
"We formalize the cross-language latent code transformation from the source to the target language as follows: G ( z x z y ) = G ( z y ) G ( z x ) (7) The target-to-source latent code transformation is then the composition of G ( z x ) and G ( z y ) .",
"As G ( z y ) and G ( z x ) are the inverse mappings of G ( z y ) and G ( z y ) , we can easily obtain them with normalizing flows, such as realNVP (Dinh et al., 2017) and Glow (Kingma and Dhariwal, 2018).",
"We also note that G ( z x z y ) and G ( z y z x ) are both invertible since they are compositions of two invertible mappings.",
"Moreover, G ( z x z y ) is the inverse of G ( z y z x ) and vice versa (see Appendix A.1 for details).",
"The general architecture is shown in Figure",
"1. The transformer architecture (Vaswani et al., 2017) is used for both encoder and decoder.",
"We use source encoder/decoder to denote the encoder/decoder for encoding/generating the source-language sentence.",
"Similarly, target encoder/decoder refer to the encoder/decoder encoding/generating the target-language sentence.",
"The decoders work in an autoregressive way.",
"Source flow and target flow are NFs for modeling the sentence-level latent representations of the source and target language, respectively, as introduced in Section 3.1.",
"Encoding.",
"The source encoder and the target encoder work in the same way; for brevity, we only describe the procedure of encoding the source sentence and how z x is generated.",
"The source encoder takes the source sentence x = { x 0 , , x S } as input and generates the hidden representations { h 0 , , h S } .",
"These hidden representations will be used as encoder-decoder attentional inputs.",
"In addition, we use the hidden representations to generate a sentence-level representation for the source sentence by applying max-pooling and mean-pooling to the token-level representations.",
"After that, we sum up the results with the CLS representation h 0 , which usually encodes some global information.",
"Finally, we use a projection matrix W to project the resulting vector to a latent space.",
"The output is referred to as z x , i.e., the sentence-level representation of the source sentence (see Appendix A.2 for equation and illustration).",
"Cross-lingual Translation.",
"We hypothesize that the decoder can better leverage language-specific latent representations (i.e., z x for the source decoder and z y for the target decoder) than indiscriminately using the same representational space for source and target, e.g., z x for the target decoder.",
"Therefore, we propose to perform a latent code transformation for cross-language translation as shown in Figure",
"1. If the model is performing the translation in the source-to-target direction, the source flow first transforms the source latent representation z x into , which is a vector in the 1257 semantic base space.",
"Then the target flow transforms back into z y , which is in the target latent representation space.",
"Then z y is used in the target decoder for generating the target sentence.",
"Denoising Auto-Encoding (DAE) and Back Translation (BT) Processes.",
"The DAE reconstructs a sentence from its noised version.",
"For inducing noise, we use the same strategy which is used by (Lample et al., 2018a) (For more details, please refer to Appendix A.3).",
"Since we train the DAEs separately for source and target languages, hence we don't need a latent code transformation there.",
"For BT, however, such a latent code transformation is performed twice; taking BT for the source language as an example: first in the source-to-target direction, then in the target-to-source direction as shown in Figure",
"2. Decoding.",
"To enable the decoder to capture the global semantics and mitigate improper alignments, we use the procedure outlined in (Setiawan et al., 2020), and incorporate the latent representation z into the output of the last layer of the decoder { s 0 , , s T } : o i = (1 g i ) s i + g i z (8) where g i = ([ s i ; z ]) , ( ) is the sigmoid function, denotes Hadamard product between two vectors, and o i is the logit vector used to generate a prediction at the i th position.",
"The values in g i control the contribution of z to o i .",
"In case the dimension of the latent representation does not match the dimension of the decoder output, a linear projection maps z to the desired dimension.",
"Training.",
"Our flow-adapter framework has three learning objectives: DAE, BT and MLE of the sentence-level representations.",
"The description of DAE and BT is omitted here as they are well known from related work (Lample et al., 2018a; Artetxe et al., 2018).",
"A single training iteration consists of a DAE step followed by a BT step as shown in Figure",
"2. MLE computation is integrated into the DAE step to calculate the likelihood of the sentence-level representations.",
"Our MLE learning objective for the source monolingual dataset can be formulated as follows (similar for the target dataset, omitted): LMLE ( G ( z x ) ) = E z p z x [log p z x ( z )] (9) where p z x ( z ) = p ( G ( z x ) ( z )) (cid:12)(cid:12)(cid:12)(cid:12) det G ( z x ) z x (cid:12)(cid:12)(cid:12)(cid:12) z x = z (10) by definition of the source NFs in Equation 5.",
"E z p z x is approximated via mini-batches of sentence-level latent representations generated by the encoder in the training process.",
"By training the source flow and the target flow with this MLE loss, the flows can therefore transform between the language-specific latent space of the representations and the base semantic space.",
"In this way, the latent code transformations, i.e., G ( z x z y ) and G ( z y z x ) can be constructed.",
"Multi30K task1 dataset (Elliott et al., 2016, 2017).",
"1 This is a multi-modal dataset that has 30,000 images annotated with captions in English, German and French.",
"Similar to (Lample et al., 2018a), we only use the caption of each image.",
"The officially provided train, validation and test sets are used.",
"We use this dataset as a small-scale test for validating the effectiveness of our methods.",
"WMT datasets.",
"2 Our experiments are run with the settings that were used for XLM (Conneau and Lample, 2019).",
"XLM uses the monolingual data from the WMT News Crawl datasets 3 .",
"We report results on newstest2014 en-fr , newstest2016 en-de and newstest2016 en-ro .",
"Preprocessing.",
"We tokenize the sentences with the Moses script (Koehn et al., 2007).",
"For the Multi30K dataset, we process it similar to Lample et al. (2018a).",
"Specifically, the sentences are randomly divided into two parts.",
"The source-language monolingual dataset is built from the source-language sentences in the first part and the target-language dataset from the second part.",
"In this way, there will be no exact translations of any sentences in the datasets.",
"For the WMT datasets, we use the preprocessing methods from (Conneau and Lample, 2019).",
"For the English-Romanian dataset, we remove the diacritics as done by Sennrich et al. (2016a) to avoid their inconsistent usage in the Romanian part of the dataset.",
"Metric & Performance.",
"We use BLEU as metric (Papineni et al., 2002) for all our experiments.",
"Although Artetxe et al. (2020) recommended to use 1 https://github.com/multi30k/dataset 2 http://www.statmt.org/ 3 https://github.com/facebookresearch/XLM/blob/main/get-data-nmt.sh 1258 Models en-de de-en en-fr fr-en de-fr fr-de baseline 11.87 19.31 16.52 19.24 11.03 8.36 3-scf 12.25 19.83 16.98 20.12 11.67 8.98 3-glow 11.91 20.14 16.86 19.55 11.49 8.61 Table 1: BLEU of our flow-adapter model for multilingual translation on Multi30K.",
"unsupervised validation criteria for systematic tuning, we follow the setting of (Conneau and Lample, 2019; Song et al., 2019) and use the provided parallel validation sets for tuning hyperparameters.",
"We report the results on the test sets of the models that achieve best performance on the validation sets.",
"Pre-trained Embeddings & Models.",
"We use the pre-trained MUSE 4 (Lample et al., 2018b) embeddings for the multilingual unsupervised MT experiment (Table 1).",
"We also leverage pre-trained cross-lingual models in the experiment of shared & separate decoder(s) (Table 2).",
"Specifically, XLM models from HuggingFace 5 (Wolf et al., 2020) are used to initialize the encoder.",
"Moreover, we also incorporate our flow-adapter architecture directly into the codebase of the original implementation of XLM 6 for the WMT dataset experiment (Table 3).",
"In this case, the encoder and decoder are both initialized with pre-trained models.",
"Details of these models can be found in Appendix A.3.",
"As Multi30K provides parallel test data for English, French and German, we first conduct experiments to show the multilingual translation ability of our flow-adapter models.",
"The results are shown in Table",
"1. The baseline model (without flow-adapter architecture) is trained with only DAE loss, while the flow-adapter based models (3-scf and 3-glow) are additionally trained with MLE loss for the NFs.",
"3-scf (resp. 3-glow) is the baseline model with two realNVP NF models (Dinh et al., 2017) (resp.",
"Glow NF models (Kingma and Dhariwal, 2018)) , each of which consists of 3 sequential flows.",
"Each NF model is used to model the sentence-level represen-4 https://github.com/facebookresearch/MUSE 5 https://github.com/huggingface 6 https://github.com/facebookresearch/XLM tations of one specific language, and two NF models then construct a flow-adapter for that translation direction (as shown in Figure 1).",
"The flow-adapter based models additionally perform the latent code transformation to generate a language-specific representation while the baseline model does not perform such a transformation.",
"For this experiment, we use the pre-trained cross-lingual word embeddings (MUSE embeddings) and randomly initialize a shared encoder and a shared decoder for all three languages.",
"It is worth noting that the training objective does not contain the iterative back-translation.",
"For further research where there are far more languages accommodated, random online back-translation (ROBT) proposed by Zhang et al. (2020) could be considered.",
"Table 1 shows improvements over all six translation directions by using the flow-adapter architecture.",
"Notably, our 3-scf and 3-glow models achieve 19.83 and 20.14 BLEU scores, respectively, on de-en , which is 0.52 and 0.83 higher than the baseline model.",
"Similar improvements can also be seen on other translation directions.",
"Our 3-scf model has BLEU scores that are 0.46 to 0.88 higher than the baselines while our 3-glow model has BLEU scores that are 0.04 to 0.83 higher than the baselines.",
"The overall improvements show that the flow-adapter can generate more suitable sentence-level representations by performing the latent code transformation, which is helpful for the decoder to capture the semantics and generate more suitable translations.",
"We also find that the translation performance is closely related to the language pair and the translation direction for both the baseline models and flow-adapter models.",
"Our models obtain very good performance on en-fr , with performances in both the en-fr or fr-en directions better by 16 BLEU points.",
"For other language pairs (including en-fr ), there is always one direction showing better performance than the other.",
"Specifically, de-en achieves more than 19 BLEU points compared with 12 points for en-de , and de-fr achieves more than 11 BLEU points compared with 8.5 for fr-de .",
"We present the performance of our flow-adapter models under the shared-decoder and separate-decoder settings on Multi30K.",
"For this experiment, the encoder is initialized with the pre-trained XLM model and fixed; the decoder parameters are ran-1259 Models en-de de-en en-fr fr-en baseline (shared decoder) 0.25 0.17 0.13 0.11 3-scf (shared decoder) 25.80 28.92 39.26 36.84 3-glow (shared decoder) 26.09 29.48 39.21 36.66 baseline (separate decoders) 27.54 28.97 39.17 36.27 3-scf (separate decoders) 28.24 30.63 39.64 36.45 3-glow (separate decoders) 28.79 30.45 39.31 36.29 UNMT (Lample et al., 2018a) 22.74 26.26 32.76 32.07 Table 2: BLEU of the flow-adapter models and unsupervised SOTA model, i.e., UNMT (Lample et al., 2018a), on Multi30K.",
"domly initialized and then trained.",
"We also report the performance of a previous SOTA model, i.e., UNMT (Lample et al., 2018a).",
"7 The results are shown in Table",
"2. First, we notice that the shared-decoder baseline model obtains very low BLEU scores.",
"By checking the translation generated, we find the model only copies the input as translation.",
"This phenomenon shows that this baseline, which does not perform the latent code transformation, cannot model two languages simultaneously well, and thus cannot generate translations in the desired language domains.",
"However, by incorporating the flow-adapter, the models will no longer have this limitation.",
"Both shared-decoder models, i.e., 3-scf and 3-glow, achieve very good performance on all translation pairs.",
"For example, the 3-scf model obtains BLEU scores of 25.80, 28.92, 39.26 and 36.84 on en-de , de-en , en-fr and fr-en , which are much higher than the baseline.",
"Compared with the shared-decoder scenario, the models under the separate-decoder setting do not suffer from the copying problem, because different decoders are used to specifically model and generate sentences in distinct language domains.",
"The downside, however, is that using multiple decoders at the same time can substantially increase the number of trainable parameters.",
"Within the separate-decoder models, the flow-adapter models generally perform better than the baseline model, with about 1 BLEU increase on en-de and de-en directions and relatively smaller improvements on en-fr and fr-en .",
"Those improvements demonstrate that the model can benefit from the flow-adapter architectures as language-specific latent representations are used, thus advancing the translation quality.",
"els generally perform better than the shared-decoder models.",
"The separate-decoder baseline is much better than its counterpart as it avoids the copying problem.",
"For the 3-scf flow-adapter models, we find that the separate-decoder model outperforms the shared-decoder model by 2.44, 1.71, 0.38 on en-de , de-en and en-fr .",
"However, on fr-en , the shared-decoder model achieves a BLEU socre that is by 0.39 BLEU points better.",
"A similar phenomenon can also be seen for the 3-glow model.",
"We conjecture this is due to the similarity between languages.",
"As English and French share common vocabulary, some common features can therefore be captured by a shared decoder, thus improving its generalization.",
"Lastly, when compared with UNMT, our models show superiority, improving performance by more than 4 BLEU points in each direction.",
"We attribute the improvements to the usage of the pre-trained model and incorporation of language-specific sentence-level representations obtained by our latent code transformation.",
"We further integrate our flow-adapter architecture into the original implementation of XLM (Con-neau and Lample, 2019) and conduct experiments on the WMT datasets.",
"To fully leverage the pre-trained models, we initialize both the encoder and decoder with XLM models and set them trainable.",
"In contrast to the experiment in Section 4.4, a single shared decoder is used for this experiment, since the decoder is also initialized with the pre-trained model and has far more parameters compared with the randomly initialized transformer decoder we use in Section 4.4.",
"We report the performance of the flow-adapter based models (5-scf and 5-1260 Models en-de de-en en-ro ro-en en-fr fr-en XLM (EMD + EMD) (Conneau and Lample, 2019) 21.30 27.30 27.50 26.60 29.40 29.40 XLM (MLM + MLM) (Conneau and Lample, 2019) 26.40 34.30 33.30 31.80 33.40 33.30 5-scf 26.50 32.63 34.11 31.69 35.77 33.72 5-glow 26.43 32.04 33.87 31.32 35.25 33.12 MASS (Song et al., 2019) 28.30 35.20 35.20 33.10 37.50 34.90 CSP and fine-tuning (Yang et al., 2020) 28.70 35.70 -37.90 34.50 Table 3: BLEU of the flow-adapter models (5-scf and 5-glow) and SOTA models on WMT datasets.",
"glow 8 ) as well as the performance of the SOTA models, namely XLM, MASS and CSP.",
"9 The results are shown in Table",
"3. Noticeably, both of our flow-adapter models achieve remarkable performance on all language pairs.",
"Compared with the results of XLM (EMD + EMD), which uses the pre-trained cross-lingual embeddings instead of pre-trained models, both 5-scf and 5-glow achieve overall better performance.",
"For example, 3-scf achieves BLEU scores higher by 5.20, 5.33, 6.61, 5.09, 6.37 and 4.32 on en-de , de-en , en-ro , ro-en , en-fr and fr-en , respectively.",
"While not being as good as 5-scf, 5-glow still shows superiority over XLM (EMD + EMD).",
"These improvements can be contributed to (1) the usage of pre-trained models and (2) the introduction of the flow-adapter.",
"We further compare our flow-adapter based models with XLM (MLM + MLM), which is also initialized with pre-trained models.",
"We find the performance of x-en directions is consistently lower than en-x directions for our models except for ende .",
"This pattern is not limited to our architecture but is consistently present in prior work.",
"We, again, speculate this is relating to the complexity of languages as well as similarity between languages.",
"We leave this finding for future investigation.",
"Our flow-adapter based models, though achieving similar or relatively worse BLEU scores on de-en and ro-en compared with XLM (MLM + MLM), obtain higher scores on other directions, i.e., en-de and en-ro , suggesting that our models might be more helpful on specific translation directions, as the flow-adapter generates language-specific rep-8 Preliminary experiments showed that using 5 flows provides slightly better results than 3 flows for WMT as WMT has many more sentences than Multi30K and therefore more powerful generative models (by adding more intermediate flows) are needed to model the sentence-level representations.",
"9 We follow prior convention and compare directly with MASS and CSP even though dataset processing for MASS and CSP (e.g., filtering, sampling) are not strictly the same as for XLM.",
"But the difference is small and results would not be much different as Yang et al. (2020) mentions.",
"resentations.",
"Lastly, 5-scf achieves scores by 2.37 and 0.42 better than XLM (MLM + MLM) on en-fr and fr-en .",
"As in the other experiments, the improvement due to flow adapters seems to be related to the languages involved in that language pair and the translation directions.",
"We would like to investigate this phenomenon in future research.",
"Finally, out models are competitve with MASS and CSP, with only small differences in BLEU.",
"In general, the experiments shows the validity and effectiveness of our flow-adapter architecture integrated into pre-trained models.",
"In this work, we propose a novel flow-adapter architecture for unsupervised NMT.",
"The flow-adapter employs a pair of NFs to explicitly model the distributions of the sentence-level representations.",
"A latent code transformation is performed in translation, which enables the decoder to better capture the semantics of sentences in certain language domains.",
"Through extensive experiments, we show the flow-adapter can improve multilingual translation ability.",
"Moreover, it can alleviate the copying problem.",
"By integrating the flow-adapter into pre-trained XLM models, we achieve results competitive to state-of-the-art models on WMT datasets.",
"In the future, we would like to explore the possibility of pre-training the flow-adapter simultaneously when pre-training the language models so that the flows can learn more information.",
"Moreover, we would like to extend normalizing flows to language generation.",
"By using different flows for different languages, multilingual language generation of the same semantics can be performed.",
"We are grateful to Alex Fraser and Alexandra Chronopoulou for their insightful input.",
"This work was funded by the European Research Council (ERC #740516)."
] | [
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"result",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"objective",
"abstain",
"other",
"other"
] |
[
"When primed with only a handful of training samples, very large, pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised, fine-tuned, large, pretrained language models.",
"We demonstrate that the order in which the samples are provided can make the difference between near state-of-the-art and random guess performance: essentially some permutations are fantastic and some not.",
"We analyse this phenomenon in detail, establishing that: it is present across model sizes (even for the largest current models), it is not related to a specific subset of samples, and that a given good permutation for one model is not transferable to another.",
"While one could use a development set to determine which permutations are performant, this would deviate from the true few-shot setting as it requires additional annotated data.",
"Instead, we use the generative nature of language models to construct an artificial development set and based on entropy statistics of the candidate permutations on this set, we identify performant prompts.",
"Our method yields a 13% relative improvement for GPT-family models across eleven different established text classification tasks.",
"Large pretrained language models (PLMs, Devlin et al., 2019; Peters et al., 2018; Raffel et al., 2020; Liu et al., 2019; Yang et al., 2019; Radford et al., 2019) have shown remarkable performance when conditioned with an appropriate textual context (Petroni et al., 2019, 2020; Jiang et al., 2020; Shin et al., 2020; Davison et al., 2019).",
"For example, when conditioned on a long document and a TL;DR: token, they can generate a summary of said document, and when provided a partial question (The theory of relativity was developed by __), they can generate the correct answer.",
"Perhaps most strikingly, when primed with a context consisting of very few training examples, they produce 0.1 0.3 0.8 1.5 2.7 6.7 13 175 Model Parameters (Billion) 50 60 70 80 90 100 SST2 A cc u r a c y ( % ) 0.1 0.3 0.8 1.5 2.7 6.7 13 175 Model Parameters (Billion) 50 60 70 80 90 100 S u b j A cc u r a c y ( % ) Figure 1: Four-shot performance for 24 different sample orders across different sizes of GPT-family models (GPT-2 and GPT-3) for the SST-2 and Subj datasets.",
"text classification results that can match those of fully supervised models.",
"This type of few shot setting, is commonly referred to as In-context Learn-ing (Brown et al., 2020).",
"A core component of in-context learning is the text-based prompt that serves as the context.",
"Composing a prompt requires:",
"(i) text linearisation using a template; and",
"(ii) training sample concatenation (See Table 1 for an example).",
"It has been established that the structure of the template has a large impact on performance (Shin et al., 2020; Gao et al., 2020; Schick and Schtze, 2020; Jiang et al., 2020).",
"However, to the best of our knowledge, no work has studied the effect of the sample ordering on In-context Learning performance.",
"Perhaps counter-intuitively, we find that the right sample order can make as much of a difference as 8086 Example trainingset (thegreatestmusicians,1)(redundantconcept,0) linearization Review: thegreatestmusicians.",
"the right template.",
"As can be seen in Figure 1, some permutations have comparable performance (over 85% accuracy) to supervised training for sentiment classification, while others perform close to random (around 50%).",
"This order sensitivity is universal across models, and although increasing the model size somewhat addresses it, the problem is still present for some text classification tasks (Subj in Figure 1) for models with billions of parameters.",
"In our analysis, we find no common denominator between performant sample orders and that they are not transferable across different model sizes and tasks.",
"In a fully-supervised setting, we could rely on a development set to select among sample orders.",
"However, this is not desirable in a few-shot setting where the size of the development set is very limited, even unavailable (Perez et al., 2021) .",
"Instead, we use the generative nature of language models to construct an unlabelled artificial development set and refer to it as a probing set .",
"As the probing set is unlabelled, we use the predicted label distribution statistics and propose entropy-based metrics to measure the quality of candidate prompts.Experimental results show that we can achieve on average 13% relative improvement across eleven different established text classification tasks across all different sizes (four orders of magnitude) of PLMs.",
"To summarise, our contributions are as follows: 1. We study order sensitivity for In-context Learning, which we show is crucial for the success of pretrained language models for few-shot learning.",
"2. We propose a simple, generation-based probing method to identify performant prompts without requiring additional data.",
"3. Our probing method is universally applicable and effective across different sizes of pretrained language models and for different types of datasets achieving on average a Figure 2: Training sample permutations for the In-context Learning setting.",
"In this section, we study the relationship between permutation performance and various factors.",
"For the ease of visualisation, we use a fixed random subset of four samples with a balanced label distribution from the SST-2 dataset and consider all 24 possible sample order permutations.",
"This setup is illustrated in Figure 2. We also test five randomly-selected sets of examples and summarised variance statistics in the experiment section (Section 5).",
"Although beneficial, increasing model size does not guarantee low variance We evaluate the order permutations for four different sizes of GPT-2 (0.1B1.5B) 1 and GPT-3 (2.7B175B).",
"As we can observe in Figure 1, models can obtain remarkable few-shot performance.",
"We see that the GPT2-XL (1.5B) model can even surpass 90% accuracy given just four samples.",
"This result is comparable to those of supervised models trained on more than 60,000 samples.",
"However, the performance variation of different permutations remain a big issue, especially for smaller models.",
"2 The same model can exhibit nearly perfect behaviour given one sample order, but then fall back to be on par with a random baseline for another.",
"While increasing the model size (by a few order of magnitudes) can sometimes alleviate the issue, it still cannot resolve it entirely (especially if we consider tasks other than SST-2).",
"In contrast, different initialisations of supervised fine-tuning approaches typically result in less than 1% standard deviation for their test set performance (Gao et al., 2020).",
"Adding training samples does not significantly reduce variance To further explore the order sensitivity of few-shot prompts, we increase the number of training samples and then sample a subset of at most 24 different orderings.",
"3 We use the GPT2 family models for this experiment.",
"In Figure 3, we can observe that increasing the number of training samples leads to increases in performance.",
"However, a high level of variance remains, even with a large number of samples and can even increase.",
"Based on this, we draw the conclusion that order sensitivity is likely to be a fundamental issue of In-context Learning regardless of the number of training samples.",
"Performant prompts are not transferable across models We find that a specific permuta-tion's performance may drop from 88.7% to 51.6% by changing the underlying model from GPT2-XL (1.5B) to GPT2-Large (0.8B).",
"This suggests that a particular permutation working well for one model does not imply that it will provide good results for another model.",
"To validate this hypothesis, we use all possible order permutations of the four samples as prompts 24 in total.",
"We then perform prediction conditioned on each of these prompts for different models and calculate the pairwise Spearman's rank correlation coefficient between the scores.",
"These results are shown in Figure 4.",
"If there is a common pattern for performant prompts, we should then be able to observe high correlation across models.",
"However, the behaviour of permutations is seemingly random even across 3 Bounded at the lower limit by the total number of samples given, and at the upper limit as there can be up to 64! possible orders.",
"different sizes of the same model.",
"For example, the 175B and 2.7B model only has a correlation of 0.05, this means a good permutation for the 2.7B model is in no way guaranteed that it will also yield good performance for the 175B model.",
"across models In addition to training example ordering, we also explore label ordering for training prompts.",
"We use all patterns of the abovementioned full permutations six different label patterns.",
"4 We then compute the pairwise Spearman correlation across different models as described in the previous paragraph.",
"As shown in Figure 5, the behaviour of label orderings is once again seemingly random across different sizes of the same model.",
"It is thus not possible to identify a label 4 NNPP, NPNP, NPPN, PNNP, PNPN, PPNN, where P/N respectively denotes positive/negative 8088 51.6 85.2 SST-2 Accuracy(%) 0 50 100 150 200 250 N u m b e r o f E x a m p l e s positivenegative orginal calibrated 55 60 65 70 75 80 85 SST2 A cc u r a c y ( % ) Figure 6: Left: Predicted SST-2 label distribution under different prompts.",
"Degenerate behaviour of bad prompts We perform error analysis across performant and non-performant prompts and observe that the majority of failing prompts suffer from highly unbalanced predicted label distributions (Figure 6, left).",
"An intuitive way to address this would be by calibrating the output distribution, along the lines of Zhao et al. (2021).",
"However, we find that although calibration leads to much higher performance, the variance remains high (Figure 6, right).",
"The previous section demonstrates that prompt order can have a substantial effect on performance, with some orderings of the same prompts for the same model providing random performance, and other better orderings providing performance competitive with supervised approaches.",
"This suggests that there could be various ways of selecting prompt orders to achieve better performance, but the challenge is to do so automatically and without the need for additional labels (e.g., a development set).",
"Hence, in this section, we explore the question of: How can we automatically generate a prob-ing set' to find performant prompt orderings?",
"We approach this by:",
"(i) for a randomly-selected set of training samples, we use every possible ordering permutation of this set as candidates;",
"(ii) constructing a probing set by querying the language model using all candidate prompts as context; and",
"(iii) use this probing set to identify the best ordering by ranking them using a probing metric.",
"We propose a simple methodology to automatically construct a probing set, by directly sampling",
"sampling from the language model itself.",
"This approach makes it possible to generate probing sets automatically, without access to any additional data.",
"Concretely, given a set of training samples S = { ( x i , y i ) } , i = 1 , , n , where x i and y i denote the sentence and label of the i th training sample.",
"We then define a transformation T , mapping each sample into natural language space, such that t i = T ( x i , y i ) .",
"t i is therefore a text sequence of the i th training sample using the template defined by T .",
"In this work, we use a simple transformation function T such that T ( x i , y i ) = input: x i type: y i .",
"This transforms each sample into a standard format sentence, which linearises each element in the set into natural language space defined as S (cid:48) = { t i } , i = 1 , , n .",
"We then define a full permutation function group of n training samples, F = { f m } , m = 1 , , n !",
", where each function f m takes S (cid:48) as input and outputs c m : the concatenation of a unique permutation.",
"In our case, sampling four training samples at random gives up to 24 possible ordering permutations of the transformed samples.",
"For each prompt candidate c m , we then sample from the language model to obtain the probing sequence g m P ( | c m ; ) , where denotes the parameters of the pretrained language model.",
"We stop decoding from the language model upon generating the special end-of-sentence token defined by a template, or reach the generation length limit.",
"Our probing set construction method is illustrated in Figure 7, where the objective is to generate a probing set that shares a similar distribution to the training samples.",
"We run this sampling process for all possible prompt ordering permutations and extract probing samples from them ( T 1 ( g ) ).",
"Then gather extracted samples together to form the probing set D = T 1 ( g 1 ) ... T 1 ( g n ! ) .",
"Although the probing set contains predicted label for each sentence, there is no guarantee on the validity of these labels.",
"Therefore, we discard them from the probing set as we are only interested in sampling probes from the language model corresponding to the input distribution.",
"Once we have constructed a probing set for a given set of samples, we can now use that probing set to identify the best possible prompt ordering for that particular sample set.",
"Here, we explore two 8089 Figure 7: Our probing set construction method, showing the various possible ordering permutations of the randomly selected training samples, the resulting generation for each permutation, and the concatenation of each into a probing set.",
"Global Entropy (GlobalE) The motivation behind GlobalE is to identify prompts of specific sample orderings that avoid the issue of extremely unbalanced predictions (as we have previously established it as key problem for non-performant prompts).",
"We compute the predicted label y i for data point ( x (cid:48) i , y (cid:48) i ) under context c m as follows: y i,m = argmax v VP ( v | c m T ( x (cid:48) i ); ) (1) For each label v V (where V denotes the target label set), we compute the label probability over the probing set as: p vm = (cid:80) i 1 { y i,m = v } | D | (2) We then use the predicted category label entropy as the GlobalE score for c m as follows: GlobalE m = (cid:88) v V p vm log p vm (3) Local Entropy (LocalE) The motivation behind LocalE is that if a model is overly confident for all probing inputs, then it is likely that the model is not behaving as desired.",
"At the very least, it is poorly calibrated, which could also be an indication of a poor capability to appropriately differentiate between classes.",
"Similar to the GlobalE computation, we calculate the prediction probability of a data point ( x (cid:48) i , y (cid:48) i ) over the target labels v V under context c m , as follows: p vi,m = P ( x (cid:48) i ,y (cid:48) i ) D ( v | c m T ( x (cid:48) i ); ) , v V (4) We then calculate the average prediction entropy per data point as the LocalE score: LocalE m = (cid:80) i (cid:80) v V p vi,m log p vi,m | D | (5) As we now have a way to score each prompt ordering, based on its effect against the probing set, we can rank each prompt ordering by performance as measured by GlobalE or LocalE respectively.",
"We use four different sizes of GPT-2 (Radford et al., 2019) (with 0.1B, 0.3B, 0.8B, and 1.5B parame-teers) and two sizes of GPT-3 (Brown et al., 2020) (with 2.7B, and 175B parameters).",
"Due to limited context window size (up to 1024 word-pieces for the GPT-2 series of models), we use a 4-shot setting for all datasets except AGNews and DBPedia.",
"Our experiments are based on the open-source checkpoints of GPT-2 models and access to the OpenAI GPT-3 API.",
"5 For probing set generation, we restrict the maximum generation length to 128.",
"We also use sampling with a temperature, t , of 2, and we also make use of block n -gram repetitions (Paulus et al., 2018) to encourage diverse generation.",
"We use 24 different permutations for each set of randomly selected training samples and use 5 different sets (except for GPT-3 with 175B parameters, where we only do two sets with 12 different permutation due to the high monetary cost) for each experiment, giving a total of 120 runs.",
"We report the mean and standard deviation of the corresponding evaluation metric over 5 different sets.",
"For performant prompt selection, we rank candidate prompts using the LocalE and GlobalE prob-5 https://openai.com/api/ 8090 SST-2 SST-5 DBPedia MR CR MPQA Subj TREC AGNews RTE CB Majority 50.9 23.1 9.4 50.0 50.0 50.0 50.0 18.8 25.0 52.7 51.8 Finetuning (Full) 95.0 58.7 99.3 90.8 89.4 87.8 97.0 97.4 94.7 80.9 90.5 GPT-2 0.1B 58 .",
"ing metrics over the automatically generated probing set.",
"We then select top k samples ranked by highest entropy values, where k = 4 in our experiments, of the available 24 permutations as performant prompts.",
"Finally, we use these performant prompts to evaluate performance on various datasets and demonstrate both better performance and reduced variance.",
"We also provide results for a majority baseline, which always predicts the majority label in the dataset, as a lower-bound of performance.",
"We also provide an oracle to show the upper-bound of performance by selecting the top four performant orderings based on prompt performance on the validation set.",
"Similar to previous work (Gao et al., 2020; Zhao et al., 2021), we use eleven text classification datasets ranging from sentiment classification to textual entailment.",
"Further details of the datasets are provided in the Appendix.",
"For evaluation, we sub-sample 256 samples of the validation sets for all datasets to control for the GPT-3 inference costs as it requires the usage of a monetary paid-for API.",
"We report experimental results in Table 2 and observe consistent improvements for both LocalE and GlobalE across all tasks.",
"Entropy-based probing is effective for performant prompt selection regardless of model size We find that GlobalE achieves, on average, a 13% relative improvement across the eleven different sentence classification tasks in comparison to prompts that do not make use of probing.",
"LocalE provides results slightly inferior to GlobalE, with an average 9.6% relative improvement over the baseline model.",
"Our selected performant prompts also demonstrate considerably lower variance than using all candidate prompts.",
"In Figure 8, we visualise the average performance when varying K for the top K prompt selection.",
"K = 24 corresponds to using all sampled prompt orders, which is equivalent to the baseline model performance in Table 2. We can observe that the slope of curves are negative for all datasets, suggesting that our method can rank performant prompts effectively.",
"Though K = 1 can provide good performance for most cases, in our experiments, we use K = 4 as preliminary experiments indicated that it yielded stable performance across datasets.",
"Entropy-based probing is effective across templates We evaluate Entropy-based probing for four different templates similar to Gao et al. (2020) and Zhao et al. (2021) (Table 4) for the SST-2 dataset.",
"Experimental results in Table 3 indicate that Entropy-based probing is valid for different templates.",
"We also observe that the randomness across different templates is similar to Section 2. These findings suggest that Entropy-based probing is not sensitive to specific templates, as it consistently provides improvements for all cases.",
"Performant permutation selection is a safe option for In-context Learning We find that for models that suffer from high prompt variance, our prompt selection process can show large improvements up to 30% relative improvement.",
"Furthermore, for tasks with low initial prompt performance variance, our method does not negatively impact performance.",
"Our prompt selection provides marginal improvement at worse and on average a 13% relative improvement in the most cases.",
"Sentence-pair tasks remain challenging for smaller-sized models even with performant permutation selection For the CB and RTE datasets, Template 1 Template 2 Template 3 Template 4 GPT-2 0.1B 58.9 7 .",
"the performance of GPT-2 models is not significantly different from that of a random baseline.",
"Despite this, we find that our method for identifying performant prompts can still provide minimal performance gains, although these are still within the levels of a random guess or majority vote.",
"One reason for this could be that, for these particular sizes of models on these tasks, no good prompt exists.",
"As such, optimising the prompt is not particularly effective in this setting.",
"This is further supported by the observation that prompt selection can considerably improve performance on both CB and RTE at larger model sizes (particularly so for the GPT-3 175B parameter model).",
"In fact, we find that prompt selection using GlobalE improves performance by 4.9% for GPT-3 175B on CB.",
"This indicates that our method is widely applicable to all model sizes, and across all tasks, as long as they already possess some existing classification ability that can be improved through prompt design.",
"Entropy-based probing outperforms using subsets of the training data for tuning If one was not to rely on generation, an alternative approach to prompt selection could be to split the (limited) training data to form a validation set.",
"To compare 8092 GPT-2 0.1B GPT-2 0.3B GPT-2 0.8B GPT-2 1.5B Baseline 58 .",
"against this approach, we split the 4-shot training samples (same setting as in Table 2) in half.",
"We then select the top four performing prompts using validation set performance.",
"As can be seen in Table 5, this approach consistently outperforms the baseline.",
"However, both Entropy-based probing methods consistently provides better performance across all model sizes.",
"Unified Interface Design for NLP Most previous work focuses on shared-parameters models, pretrain on some tasks, then fine-tune for different tasks, e.g. ELMo (Peters et al., 2018), BERT (De-vlin et al., 2019), etc.",
"Eventually, leading to multiple task-specific models.",
"There has for some time been attempts to design a unified interface for NLP tasks (Kumar et al., 2016; Raffel et al.,",
"2020).In parallel with these works, GPT-2 (Radford et al., 2019) shows that appending trigger tokens (e.g. TL;DR) at the end of language model input can cause language models to behave like summarisation models.",
"The zero-shot capability of language models shows the potential to unify NLP tasks into a language modelling framework where fine-tuning is not necessary to achieve good performance.",
"Furthermore, GPT-3 (Brown et al., 2020) shows that task-agnostic, few-shot performance can be improved by scaling up language models.",
"It can sometimes even become competitive with prior state-of-the-art fine-tuning approaches.",
"Prompt Design for PLMs The core challenge of prompt design is to convert training data (if it exists) into a text sequence.",
"Most work on prompt design focuses on how to make prompts more compatible with language models.",
"Petroni et al. (2019) uses human effort to design natural language sentences and then perform token prediction given the input context.",
"However, hand-crafted templates require significant human effort and is likely to end up with sub-optimal performance.",
"Recent work has explored automatic template construction: Schick and Schtze (2020) uses cloze-style tasks to construct templates, Gao et al. (2020) uses an external language model to generate templates, and Shin et al. (2020) uses gradient-guided search to find templates that maximise performance.",
"Jiang et al. (2020) uses a mining-based method to create multiple diverse templates automatically.",
"Order Sensitivity of Prompt Design Gao et al. (2020) demonstrated that finetuning-based approaches are not as order sensitive as In-context Learning.",
"Making use of a standard-size training set, Liu et al. (2021) used nearest neighbour search to retrieve the most relevant training samples for a specific test sample.",
"They were successful in retrieving relevant samples and concluded that after retrieving them the order in which they are provided in the prompt has little to no effect on performance.",
"While our study is fundamentally different from theirs in that we do not make use of a standard-size training set, we do come to the opposite conclusion.",
"All previous work on prompt design focuses on the textual quality of the prompt and, to the best of our knowledge, none has studied order sensitivity in detail.",
"True Few-shot Learning Perez et al. (2021) evaluated few-shot capability of LMs when a held-out validation set is not available.",
"Experimental result suggested that previous work overestimate the few-shot ability of LMs in this (true few-shot learning) setting.",
"Our work instead use the generative nature of language models to construct a probing set without relying on held-out examples.",
"We show that our probing method is better than relying on held out examples (Figure 5) and thus enables true few-shot learning.",
"We have shown that few-shot prompts suffer from order sensitivity, in that for the same prompt the order in which samples are provided can make the difference between state-of-the-art and random performance.",
"In our analysis of the problem, we established that it is present across tasks, model sizes, prompt templates, samples, and number of training samples.",
"To alleviate this problem, we introduced a novel probing method that exploits the generative nature of language models to construct an artificial development set.",
"We were able to identity performant permutations using entropy-based statistics over this set, leading to an on average 13% improvement across eleven text classification tasks."
] | [
"abstain",
"objective",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"result",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"method",
"abstain",
"result",
"method",
"objective",
"method"
] |
[
"Existing question answering (QA) techniques are created mainly to answer questions asked by humans.",
"But in educational applications, teachers often need to decide what questions they should ask, in order to help students to improve their narrative understanding capabilities.",
"We design an automated question-answer generation (QAG) system for this education scenario: given a story book at the kindergarten to eighth-grade level as input, our system can automatically generate QA pairs that are capable of testing a variety of dimensions of a student's comprehension skills.",
"Our proposed QAG model architecture is demonstrated using a new expert-annotated F airytale QA dataset, which has 278 child-friendly storybooks with 10,580 QA pairs.",
"Automatic and human evaluations show that our model outperforms state-of-the-art QAG baseline systems.",
"On top of our QAG system, we also start to build an interactive story-telling application for the future real-world deployment in this educational scenario.",
"There has been substantial progress in the development of state-of-the-art (SOTA) question-answering (QA) models in the natural language processing community in recent years (Xiong et al., 2019; Karpukhin et al., 2020; Cheng et al., 2020; Mou et al., 2021).",
"However, the opposite of QA tasksquestion-answer generation (QAG) tasks that generate questions based on input textis yet under-explored.",
"We argue, being able to ask a reasonable question is also an important indicator whether the Equal contributions from the first authors: [email protected], [email protected]; Work was done while Mo was at IBM Research.",
"F airytale QA Dataset Source (Section) Maie sighed.",
"she knew well that her husband was right, but she could not give up the idea of a cow.",
"the buttermilk no longer tasted as good as usual in the co ff ee; ... ... they were students, on a boating excursion, and wanted to get something to",
"eat.'bring us a junket, good mother,' cried they to",
"reader comprehends the document, thus belongs to the reading comprehension(RC) task family.",
"QAG also contributes to important real-world applications, such as building automated systems to support teachers to e ffi ciently construct assessment questions (and its correct answer) for the students at a scale (Xu et al., 2021; Snyder et al., 2005).",
"Similar to training QA models, QAG model training requires high-quality and large-scale RC datasets (e.g., NarrativeQA (Ko cisk`y et al., 2018)).",
"However, many of the existing datasets are either collected via crowd-sourcing (Rajpurkar et al., 2016; Kocisk`y et al., 2018; Reddy et al., 2019), or using automated retrievers (Nguyen et al., 2016; 731 Joshi et al., 2017; Dunn et al., 2017; Kwiatkowski et al., 2019), thus risking the quality and validity of labeled QA-pairs.",
"This risk becomes especially problematic when building applications in the education domain: While existing QA models may perform well for the general domain, they fall short in understanding what are the most useful QA pairs to generate for educational purposes.",
"Specifically, RC is a complex skill vital for children's achievement (Snyder et al., 2005), the datasets should contain questions that focus on a well-defined construct (e.g., narrative comprehension) and measure a full coverage of sub-skills within this construct (e.g., reasoning causal relationship and understanding emotion within narrative comprehension) using items of varying di ffi culty levels (e.g., inference making and information retrieval) (Paris and Paris, 2003).",
"In this work, we aim to develop a QAG system to generate high-quality QA-pairs, emulating how a teacher or parent would ask children when reading stories to them (Xu et al., 2021).",
"Our system is built on a novel dataset that was recently released, F airytale QA (Xu et al., 2022).",
"This dataset focuses on narrative comprehension for elementary to middle school students and contains 10,580 QA-pairs from 278 narrative text passages of classic fairytales.",
"As reported in Xu et al. (2022), F airy tale QA was annotated by education experts and includes well-defined and validated narrative elements laid out in the education research (Paris and Paris, 2003), making it particularly appealing for RC research in the education domain.",
"Our QAG system design consists of a three-step pipeline: (1) to extract candidate answers from the given storybook passages through carefully designed heuristics based on a pedagogical framework; (2) to generate appropriate questions corresponding to each of the extracted answers using a state-of-the-art (SOTA) language model; and (3) to rank top QA-pairs with a specific threshold for the maximum amount of QA-pairs for each section.",
"We compare our QAG system with two existing SOTA QAG systems: a 2-step baseline system (Shakeri et al., 2020) fine-tuned on F airytale QA, and the other is an end-to-end generation system trained on a large-scale automatically generated RC dataset ( PAQ ) (Lewis et al., 2021).",
"We evaluate the generated QA-pairs in terms of similarity by Rouge-L precision score with di ff erent thresholds on candidate QA-pair amounts and semantic as well as syntactic correctness by human evaluation.",
"We demonstrate that our QAG system performs better in both automated evaluation and human evaluation.",
"Table 1 is a sample of F airytale QA story as input and the QA pairs generated by human education experts, 2-step baseline model, PAQ baseline, and our QAG System.",
"We conclude the paper by demoing an interactive story-telling application that built upon our QAG system to exemplify the applicability of our system in a real-world educational setting.",
"There exists a large number of datasets available for narrative comprehension tasks.",
"These datasets were built upon di ff erent knowledge resources and went through various QA-pair creating approaches.",
"For instance, some focus on informational texts such as Wikipedia and website articles(Rajpurkar et al. (2016), Nguyen et al. (2016), Dunn et al. (2017), Kwiatkowski et al. (2019), Reddy et al. (2019)).",
"Prevalent QA-pair generating approaches include crowd-sourcing (Rajpurkar et al., 2016; Kocisk`y et al., 2018; Reddy et al., 2019), using automated QA-pair retriever (Nguyen et al., 2016; Joshi et al., 2017; Dunn et al., 2017; Kwiatkowski et al., 2019), and etc.",
"Datasets created by the approaches mentioned above are at risk of not consistently controlling the quality and validity of QA pairs due to the lack of well-defined annotation protocols specifically for the targeting audience and scenarios.",
"Despite many of these datasets involving large-scale QA pairs, recent research (Kocisk`y et al., 2018) found that the QA pairs in many RC datasets do not require models to understand the underlying narrative aspects.",
"Instead, models that rely on shallow pattern matching or salience can already perform very well.",
"NarrativeQA, for instance, (Kocisk`y et al., 2018) is a large dataset with more than 46,000 human-generated QA-pairs based on abstractive summaries.",
"Di ff ering from most other RC datasets that can be answerable by shallow heuristics, the NarrativeQA dataset requires the readers to integrate information about events and relations expressed throughout the story content.",
"Indeed, NarrativeQA includes a significant amount of questions that focus on narrative events and the relationship among events (Mou et al., 2021).",
"One may expect that NarrativeQA could also be used for QAG tasks.",
"In fact, a couple of recent works use this dataset and train a network by combining a QG module and a QA module with a reinforcement learning approach(Tang et al., 2017).",
"For example, Wang et al. (2017) use the QA result to reward the QG module then jointly train the two sub-systems.",
"In addition, Nema and Khapra (2018) also explore better evaluation metrics for the QG system.",
"However, the NarrativeQA dataset is in a di ff erent domain than the educational context of our focus.",
"Thus the domain adaptation di ffi culty is unknown.",
"As previously mentioned, the general-purpose QA datasets ( e.g., SQuAD (Rajpurkar et al., 2016), MS MARCO (Nguyen et al., 2016)) are unsuitable for children's education context, as they impose little structure on what comprehension skills are tested and heavily rely on crowd workers typically with limited education domain knowledge.",
"F airytale QA (Xu et al., 2022) is a newly released RC dataset that precisely aims to solve those issues and complement the lack of a high-quality dataset resource for the education domain.",
"This dataset contains over 10,000 high-quality QA-pairs from almost 300 children's storybooks, targeting students from kindergarten to eighth grade.",
"As discussed in Xu et al. (2022), F airy tale QA has two unique advantages that make it particularly useful for our project.",
"First, the F airytale QA was developed based on an evidence-based reading comprehension framework (Paris and Paris, 2003), which comprehensively focuses on seven narrative elements / relations contributing to reading comprehension: character , setting , feeling , action , causal relationship , outcome resolution , and prediction (Detailed definition and example of each aspect is described in Appendix A).",
"Second, the development of F airytale QA followed a rigorous protocol and was fulfilled by trained annotators with educational research backgrounds.",
"This process ensured that the annotation guideline was followed, the style of questions generated by coders was consistent, and the answers to the questions were factually correct.",
"F airytale QA was reported to have high validity and reliability through a validation study involving actual students (Xu et al., 2022).",
"A few years back, rule-based QAG systems (Heil-man and Smith, 2009; Mostow and Chen, 2009; Yao and Zhang, 2010; Lindberg et al., 2013; Labu-tov et al., 2015) were prevalent, but the generated QA su ff ered from the lack of variety.",
"Neural-based models for question generation tasks (Du et al., 2017; Zhou et al., 2017; Dong et al., 2019; Scialom et al., 2019; Zhao et al., 2022) have been an emerging research theme in recent years.",
"But their focus are on the general domain QAG thus they only used the available general QA dataset for training, we have no idea how these models may perform in an education contxt.",
"In this paper, we use one recent work Shakeri et al. (2020) as our baseline.",
"They proposed a two-step and two-pass QAG method that firstly generate questions (QG), then concatenates the questions to the passage and generates the answers in a second pass (QA).",
"In addition, we include the recently-published Probably-Asked Questions (PAQ) (Lewis et al., 2021) work as a second baseline.",
"The PAQ system is an end-to-end QAG system trained on the PAQ dataset, a very large-scale QA dataset containing 65M automatically generated QA-pairs from Wikipedia.",
"The primary issue with deep-learning-based models in the targeted children education application is that existing datasets and models do not consider the specific audience's language preference and the educational purposes (Hill et al., 2015; Yao et al., 2012).",
"Because both rule-based and neural-network-based approaches have their limitations inherently, in our work, we combine these two approaches to balance both the controllability of what types of QA pairs should be generated to better serve the educational purpose, and the diversity of the generated QA sequences.",
"The released F airytale QA contained 10,580 QA-pairs from 278 books, and each question comes with a label indicating the narrative element(s) / relation(s) the question aims to assess.",
"We split the dataset into train / validation / test splits with 232 / 23 / 23 books and 8,548 / 1,025 / 1,007 QA pairs.",
"The split is random, but the statistical distributions in each split are consistent.",
"Table 2 shows core statistics of the F airytale QA dataset in each split, and Figure 1 shows the distribution of seven types of annotations for the QA pairs across 733 F airytale QA Dataset Train Validation Test 232 Books with 8548 QA-pairs 23 Books with 1025 QA-pairs 23 Books with 1007 QA-pairs Mean S.D. Min Max Mean S.D. Min Max Mean S.D. Min Max # section per story 14.4 8.8 2 60 16.5 10.0 4 43 15.8 10.8 2 55 # tokens per story 2160.9 1375.9 228 7577 2441.8 1696.9 425 5865 2313.4 1369.6 332 6330 # tokens per section 149.6 64.8 12 447 147.8 56.7 33 298 145.8 58.6 24 290 # questions per story 36.8 28.9 5 161 44.5 29.5 13 100 43.7 28.8 12 107 # questions per section 2.8 2.440 0 18 2.9 2.3 0 16 3.0 2.4 0 15 # tokens per question 10.2 3.2 3 27 10.9 3.2 4 24 10.5 3.1 3 25 # tokens per answer 7.1 6.0 1 69 7.7 6.3 1 70 6.8 5.2 1 44 Table 2: Core statistics of the F airytale QA dataset, which has 278 books and 10580 QA-pairs.",
"There are three sub-modules in our QA generation (QAG) pipeline: a heuristics-based answer generation module (AG), followed by a BART-based (Lewis et al., 2019) question generation module (QG) module fine-tuned on F airytale QA dataset, and a DistilBERT-based(Sanh et al., 2019) ranking module fine-tuned on F airytale QA dataset to rank and select top N QA-pairs for each input section.",
"The complete QAG pipeline of our system is shown in Figure 2.",
"Based on our observation of the FairytaleQA dataset, educational domain experts seem to have uniform preferences over certain types of question and answer pairs (Figure 1).",
"This may be because these experts take young children's learning objectives into consideration: children's learning should be oriented toward specific types of answers to maximize their learning outcomes.",
"This may also explain why educational experts rarely ask yes / no questions in developing or assessing children's reading comprehension.",
"For automated QAG systems, we can design the system to mimic this behavior either by defining heuristic rules for the answer extraction module, or by leaving the filtering step until after the QA pairs are generated.",
"However, the latter approach carries the inherent risk that the training data could influence the types of answers generated.",
"We decided to develop and apply the heuristic rules to the answer extraction module.",
"We observed that some narrative elements, such as characters, setting, and feelings, are mostly made up of named entities and noun chunks, for instance, a character name in a story, a particular place where the story takes place, or a specific emotional feeling.",
"Figure 2: QAG system design with three steps: rule-based answer extraction, NN-based question generation, and NN-based ranking.",
"We then leverage the spaCy (https://spacy.io/) English model for part-of-speech tagging on the input content to",
"extract named entities and noun chunks as candidate answers to cover these three types of narrative elements.",
"We further observed that the QA pairs created by education experts around the action , causal relationship , prediction , and outcome resolution categories are all related to a particular action event in the story.",
"Thus, the answers to these four types of questions are generally the description of the action event.",
"We found that PropBank's semantic role labeling toolkit (Palmer et al., 2005) is well suited for extracting the action itself and the event description related to the action.",
"We then leverage this toolkit to extract the trigger verb as well as other dependency nodes in the text that can be put together as a subject-verb-object combination, and use these as candidate answers for the latter four categories.",
"In this way, we obtain candidate answers that cover all seven narrative elements with the carefully designed heuristics.",
"Following the answer extraction module that yields candidate answers, we design a QG module which takes a story passage and an answer as input, and generates the corresponding question as output.",
"The QG task is basically a reversed QA task.",
"Such a QG model could be either transfer-learned from another large QA dataset or fine-tuned on our FairytaleQA dataset.",
"Mainstream QA datasets cover various types of questions in order to comprehensively evaluate a QA model's reading comprehension ability; for instance, NarrativeQA (Kocisky et al., 2018) is a large-scale QA corpus with questions that examine high-level abstractions to test a model's narrative understanding.",
"We choose the NarrativeQA dataset as an alternative fine-tuning option for our QG model because this dataset required human annotators to provide a diverse set of questions about characters, events, etc., similar to the types of questions that education experts created for our FairytaleQA dataset.",
"In addition, we leverage BART (Lewis et al., 2019) as the backbone model because of its superior performance on NarrativeQA according to the study by Mou et al. (2021).",
"We perform a QG task comparison to examine the quality of questions generated for the FairytaleQA dataset by three models: one fine-tuned on NarrativeQA, one on FairytaleQA, and one on both NarrativeQA and FairytaleQA.",
"We fine-tune each model with different parameters and keep the one with the best performance on the validation and test splits of the FairytaleQA dataset.",
"Results are shown in Table 3.",
"We notice that the model fine-tuned on FairytaleQA alone outperforms the other methods.",
"We attribute this to the domain and distribution differences between the two datasets.",
"This is likely why the model fine-tuned on both NarrativeQA and FairytaleQA may be polluted by the NarrativeQA training.",
"The best-performing model is selected for our QG module in the QAG pipeline.",
"Our QAG system generates hundreds of candidate QA-pairs through the first two modules.",
"However, we do not yet know the quality of these generated QA-pairs, and it is unrealistic to return all candidate QA-pairs to users in a real-world scenario.",
"Consequently, a ranking module is added to rank and select the top candidate QA-pairs, and the user is able to determine the upper limit of generated QA-pairs for each input text content.",
"Here, the ranking task can be viewed as a classification task between the ground-truth QA-pairs created by education experts and the QA-pairs generated by our system.",
"We put together QA-pairs generated by the first two modules of our QAG system as well as ground-truth QA-pairs from the train / validation / test splits of the FairytaleQA dataset, forming new splits for the ranking model, and fine-tune a pre-trained DistilBERT model on them.",
"We test different input settings for the ranking module, including the concatenation of text content and answer only, as well as the concatenation of text content, question, and answer in various orders.",
"All input settings achieve over 80% accuracy on the test split, while the concatenation of text content, question, and answer achieves F1 = 86.7%, leading the other settings by more than 5%.",
"Thus, we adopt the best-performing ranking model for the ranking module in our QAG system and allow users to determine the number N of top generated QA-pairs to output.",
"We conduct both automated evaluation and human evaluation for the QAG task.",
"The input of the QAG task is a section of the story (which may contain multiple paragraphs), and the outputs are generated QA pairs.",
"Unlike QA or QG tasks, where each input corresponds to a single generated output no matter what model is used, the QAG task does not have a fixed number of QA-pairs to be generated for each section.",
"Besides, various QAG systems will generate different numbers of QA-pairs for the same input content.",
"Therefore, we carefully define an evaluation metric that is able to examine the quality of generated QA-pairs over different numbers of candidate QA-pairs.",
"The comparison is on the validation and test splits of FairytaleQA.",
"We select a SOTA QAG system that uses a two-step generation approach (Shakeri et al., 2020) as one baseline system (referred to as the 2-Step Baseline).",
"In the first step, it feeds a story content to a QG model to generate questions; then, it concatenates each question to the content passage and generates a corresponding answer through a QA model in the second pass.",
"The quality of the generated questions relies on the quality of the training data for the QG and QA models, and the outputs are not guaranteed to be semantically or syntactically correct because of the nature of neural models.",
"We replicate this work by fine-tuning a QG model and a QA model on the FairytaleQA dataset with the same procedures that we used to select the best model for our QG module.",
"We use pre-trained BART, just like ours, as the backbone model to ensure that different model architectures do not influence the evaluation results.",
"Unlike our QG module that takes both an answer and text content as the input, their QG model only takes the text content as input.",
"Thus, we are not able to evaluate the QG model on its own for this baseline.",
"We replicate the fine-tuning parameters for our QG module to fine-tune the baseline QG model.",
"For the selection of the QA model used in the 2-Step Baseline, similar to the QG experiments we present in Table 3, we fine-tune a pre-trained BART on each of three settings: NarrativeQA only, FairytaleQA only, and both datasets.",
"According to Table 4, the model fine-tuned on both the NarrativeQA and FairytaleQA datasets performs much better than the other settings and outperforms the model fine-tuned on FairytaleQA only by at least 6%.",
"We use the best-performing QA model for the 2-Step Baseline system.",
"We select the PAQ system as our second baseline system (Lewis et al., 2021).",
"The PAQ dataset is a semi-structured, very large-scale knowledge base of 65M QA-pairs.",
"The PAQ system is an end-to-end QA-pair generation system made up of four modules: Passage Scoring, Answer Extraction, Question Generation, and Filtering Generated QA-pairs.",
"The PAQ system is trained on the PAQ dataset.",
"It is worth pointing out that during the end-to-end generation process, their filtering module requires loading the complete PAQ corpus into memory for passage retrieval, which led to an out-of-memory issue for us even with more than 50 GB of RAM.",
"In comparison, our QAG system requires less than half that RAM in the fine-tuning process.",
"In Table 1, we show a sample FairytaleQA story section as input and the QA pairs generated by human education experts, the 2-Step Baseline model, the PAQ baseline, and our QAG system.",
"A few more examples are provided in Appendix C.",
"Since the goal of QAG is to generate QA-pairs that are most similar to the ground-truth QA-pairs given the same text content, we concatenate the question and answer to calculate the Rouge-L precision score for every single QA-pair evaluation.",
"However, the number of QA-pairs generated by various systems is different.",
"It is unfair and inappropriate to directly compare all the generated QA-pairs from different systems.",
"Moreover, we would like to see how QAG systems perform with different thresholds on the number of candidate QA-pairs.",
"In other words, we need a ranking metric that, given an upper bound N on the maximum number of QA-pairs generated per section, measures how similar the generated QA-pairs are to the ground-truth QA-pairs.",
"Generally, there are three common ranking metrics: Mean Reciprocal Rank (MRR), Mean Average Precision (MAP), and Normalized Discounted Cumulative Gain (NDCG).",
"Because MRR only evaluates a single best item from the candidate list and NDCG requires complete relevance ratings for each item, neither metric is appropriate in our case.",
"As a result, we decide to use MAP@N, where N ∈ {1, 3, 5, 10}, as our evaluation metric for the QAG task.",
"Furthermore, since the average number of ground-truth answers is close to 3 per section in the FairytaleQA dataset (Table 2), we expect MAP@3 to be the most similar to the actual use case, and we provide four values of N to describe the comparison results and trends for QAG systems on FairytaleQA.",
"We do not use the filtering module of the PAQ system because we were unable to solve the memory issue with their provided code.",
"The detailed evaluation process for MAP@N is as follows: for each ground-truth QA-pair, we find the highest Rouge-L precision score on the concatenation of generated question and answer among the top N generated QA-pairs from the same story section.",
"Then we average over all ground-truth QA-pairs to get the MAP@N score.",
"This metric evaluates the QAG system's performance at different candidate levels and is computable even when there is no ranking module in the system.",
"For our QAG system, we take the top N QA-pairs from our ranking module; for the 2-Step Baseline and the PAQ baseline system, we simply adjust a top-N parameter in the configuration.",
"Table 5 presents the evaluation results of our system and two SOTA baseline systems in terms of MAP@N, N ∈ {1, 3, 5, 10}.",
"We observe that our system outperforms both the 2-Step Baseline system and the PAQ system in all settings, with significantly better Rouge-L precision on both the validation and test splits of the FairytaleQA dataset.",
"According to the evaluation results, the 2-Step Baseline system suffers from the inherent lack of quality control of neural models over both generated answers and questions.",
"We notice that the ranking module in our QAG system is an essential component for locating the best candidate QA-pairs across different limits on the number of candidate QA-pairs.",
"The more candidate QA-pairs allowed to be selected for each section, the better our system performs compared to the other two baseline systems.",
"Still, the Rouge-L score lacks the ability to evaluate the syntactic and semantic quality of generated QA-pairs.",
"As a result, we further conduct a human evaluation to provide qualitative interpretations.",
"We recruited five human participants (N = 5) to conduct a human evaluation to further evaluate the quality of our model-generated QA pairs against the ground truth and the baseline (only against the PAQ system, as it outperforms the 2-Step Baseline).",
"In each trial, participants read a storybook section and multiple candidate QA pairs for the same section: three generated by the baseline PAQ system, three generated by our system (top-3), and the others were the ground truth.",
"Table 5: Results of the QAG task by our system and two baseline systems, as MAP@N with Rouge-L precision on Q + A for validation / test splits: Ours 0.620 / 0.596 (N = 10), 0.543 / 0.523 (N = 5), 0.485 / 0.452 (N = 3), 0.340 / 0.310 (N = 1); 2-Step Baseline 0.443 / 0.422, 0.370 / 0.353, 0.322 / 0.305, 0.225 / 0.216; PAQ Baseline 0.504 / 0.485, 0.436 / 0.424, 0.387 / 0.378, 0.288 / 0.273.",
"Participants did not know which model each QA pair was from.",
"Participants were asked to rate the QA pairs along three dimensions using a five-point Likert scale.",
"Readability : The generated QA pair is in readable English grammar and words.",
"Question Relevancy : The generated question is relevant to the storybook section.",
"Answer Relevancy : The generated answer is relevant to the question.",
"We first randomly selected 7 books and further randomly selected 10 sections out of these 7 books (70 QA pairs).",
"Each participant was asked to rate these same 70 QA pairs to establish coding consistency.",
"The intercoder reliability scores (Krippendorff's alpha (Krippendorff, 2011)) among the five participants along the three dimensions were between 0.73 and 0.79, which indicates an acceptable level of consistency.",
"Then, we randomly selected 10 books (5 from test and 5 from validation splits), and for each book, we randomly selected 4 sections.",
"Each section, on average, has 9 QA-pairs (3 from each model).",
"We assigned each section randomly to two coders.",
"In sum, each coder coded 4 books (i.e. 16 sections and roughly 140 QA-pairs), and in total 722 QA-pairs were rated.",
"We conducted t-tests to compare each model's performance.",
"The result (Table 6) shows that for the Readability dimension, our model (avg = 4.71, s.d. = 0.70) performed significantly better than the PAQ model (avg = 4.08, s.d. = 1.13, t(477) = 7.33, p < .01), but was not as good as the ground truth (avg = 4.95, s.d. = 0.28, t(479) = 4.85, p < .01).",
"For the Question Relevancy dimension, the ground truth also has the best rating (avg = 4.92, s.d. = 0.33), which was significantly better than the other two models.",
"Our model (avg = 4.39, s.d. = 1.15) comes in second and outperforms the baseline (avg = 4.18, s.d. = 1.22, t(477) = 1.98, p < .05).",
"The result suggests that questions generated by our model are more relevant to the story plot than those generated by the baseline model.",
"For the Answer Relevancy dimension, in which we consider how well the generated answer answers the generated question, the ground truth (avg = 4.83, s.d. = 0.57) significantly outperformed both models again.",
"Our model (avg = 3.99, s.d. = 1.51) outperformed the PAQ baseline model (avg = 3.90, s.d. = 1.62, t(477) = 0.58, p = .56), but the difference is not significant.",
"All results show our model has above-average (> 3) ratings, which suggests it reaches acceptable user satisfaction along all three dimensions.",
"To exemplify the real-world application of our QAG system, we developed an interactive storytelling application built upon our QAG system.",
"This system is designed to facilitate the language and cognition development of pre-school children via interactive QA activities during a storybook reading session.",
"For example, as children move on to a new storybook page, the back-end QAG system will generate questions for the current section.",
"Furthermore, to optimize child engagement in the QA session, the QAG system also generates follow-up questions for each answered question (Figure 3).",
"Figure 3: The QA panel of our interactive storytelling application built upon our QAG system.",
"A conversational chatbot interacts with children, reads the story, and facilitates questioning-and-answering via speech.",
"The system can also keep track of child performance for the parents.",
"A preliminary user study with 12 pairs of parents and children between the ages of 3 and 8 suggests that this application, powered by our QAG system, can successfully maintain engaging conversations with children about the story content.",
"In addition, both parents and children found the system useful, enjoyable, and easy to use.",
"Further evaluation and deployment details of this interactive storytelling system can be found in (Zhang et al., 2022).",
"In this work, we explore the question-answer pair generation task (QAG) in an education context for young children.",
"Leveraging a newly-constructed expert-annotated QA dataset built upon child-oriented fairytale storybooks (FairytaleQA), we implemented a QA-pair generation pipeline which, as observed in human and automated evaluation, effectively supports our objective of automatically generating high-quality questions and answers at scale.",
"To examine the model's applicability in the real world, we further built an interactive conversational storybook reading system that can surface the QAG results to children via speech-based interaction.",
"Our work lays a solid foundation for the promising future of using AI to automate educational question answering tasks.",
"In the future, we plan to recruit educational experts to evaluate the educational efficacy of the QA-pairs as an additional evaluation dimension.",
"Another future direction is to develop a context-aware multi-turn QAG system grounded in the story narratives (similar to Li et al., 2021), where the generation of a new turn of QA is conditioned on previous generations as well as the book, so that it can enable new automated dialogue systems in the education setting.",
"This work was supported by the Rensselaer-IBM AI Research Collaboration (http://airc.rpi.edu), part of the IBM AI Horizons Network (http://ibm.biz/AIHorizons).",
"The FairytaleQA dataset portion of the project is funded by Schmidt Futures."
] | [
"abstain",
"abstain",
"method",
"objective",
"result",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"objective",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"abstain",
"other",
"other",
"objective",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"method",
"abstain",
"other",
"other"
] |
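The MAP@N evaluation with Rouge-L precision described in the sentences above can be sketched as follows. This is an illustrative reading of the metric only: the function names, the whitespace tokenization, and the exact averaging are assumptions, not the authors' implementation.

```python
# Hedged sketch of MAP@N over Rouge-L precision on concatenated
# "question answer" strings (tokenization and names are assumptions).

def lcs_len(a, b):
    # Longest common subsequence length via classic dynamic programming.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l_precision(candidate, reference):
    # Rouge-L precision: LCS length divided by the candidate's token count.
    c, r = candidate.split(), reference.split()
    return lcs_len(c, r) / len(c) if c else 0.0

def map_at_n(ground_truth, generated, n):
    # For each ground-truth QA string, keep the best Rouge-L precision among
    # the top-N generated QA strings, then average over the ground truth.
    top_n = generated[:n]
    if not ground_truth:
        return 0.0
    return sum(
        max((rouge_l_precision(g, gt) for g in top_n), default=0.0)
        for gt in ground_truth
    ) / len(ground_truth)
```

In this reading, a system that ranks a near-verbatim copy of a ground-truth QA pair into its top N scores close to 1.0 for that pair, which matches the paper's intent of rewarding generated pairs most similar to the expert-written ones.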
[
"Current textual question answering (QA) models achieve strong performance on in-domain test sets, but often do so by fitting surface-level patterns, so they fail to generalize to out-of-distribution settings.",
"To make a more robust and understandable QA system, we model question answering as an alignment problem.",
"We decompose both the question and context into smaller units based on off-the-shelf semantic representations (here, semantic roles), and align the question to a subgraph of the context in order to find the answer.",
"We formulate our model as a structured SVM, with alignment scores computed via BERT, and we can train end-to-end despite using beam search for approximate inference.",
"Our use of explicit alignments allows us to explore a set of constraints with which we can prohibit certain types of bad model behavior arising in cross-domain settings.",
"Furthermore, by investigating differences in scores across different potential answers, we can seek to understand what particular aspects of the input lead the model to choose the answer without relying on post-hoc explanation techniques.",
"We train our model on SQuAD v1.1 and test it on several adversarial and out-of-domain datasets.",
"The results show that our model is more robust than the standard BERT QA model, and constraints derived from alignment scores allow us to effectively trade off coverage and accuracy.",
"Current text-based question answering models learned end-to-end often rely on spurious patterns between the question and context rather than learning the desired behavior.",
"They may ignore the question entirely (Kaushik and Lipton, 2018), focus primarily on the answer type (Mudrakarta et al., 2018), or otherwise bypass the intended mode of reasoning for the task (Chen and Durrett, 2019; Niven and Kao, 2019).",
"Thus, these models are not robust to adversarial attacks (Jia and Liang, 2017; Iyyer et al., 2018; Wallace et al., 2019): they can be fooled by surface-level distractor answers that follow the spurious patterns.",
"Methods like adversarial training (Miyato et al., 2016; Wang and Bansal, 2018; Lee et al., 2019; Yang et al., 2019), data augmentation (Welbl et al., 2020), and posterior regularization (Pereyra et al., 2016; Zhou et al., 2019) have been proposed to improve robustness.",
"However, these techniques often optimize for a certain type of error.",
"We want models that can adapt to new types of adversarial examples and work under other distribution shifts, such as on questions from different text domains (Fisch et al., 2019).",
"In this paper, we explore a model for text-based question answering through sub-part alignment.",
"The core idea behind our method is that if every aspect of the question is well supported by the answer context, then the answer produced should be trustable (Lewis and Fan, 2018); if not, we suspect that the model is making an incorrect prediction.",
"The sub-parts we use are predicates and arguments from Semantic Role Labeling (Palmer et al., 2005), which we found to be a good semantic representation for the types of questions we studied.",
"We then view the question answering procedure as a constrained graph alignment problem (Sachan and Xing, 2016), where the nodes represent the predicates and arguments and the edges are formed by relations between them (e.g. predicate-argument relations and coreference relations).",
"Our goal is to align each node in the question to a counterpart in the context, respecting some loose constraints, and in the end the context node aligned to the wh-span should ideally contain the answer.",
"Then we can use a standard QA model to extract the answer.",
"Figure 1 shows an adversarial example of SQuAD (Jia and Liang, 2017) where a standard BERT QA model predicts the wrong answer August 18, 1991 .",
"In order to choose the adversarial answer, our model must explicitly align Super Bowl 50 to Champ Bowl .",
"Even if the model still makes this mistake, this error is now exposed directly, making it easier to interpret and subsequently patch.",
"In our alignment model, each pair of aligned nodes is scored using BERT (Devlin et al., 2019).",
"These alignment scores are then plugged into a beam search inference procedure to perform the constrained graph alignment.",
"This structured alignment model can be trained as a structured support vector machine (SSVM) to minimize alignment error with heuristically-derived oracle alignments.",
"The alignment scores are computed in a black-box way, so these individual decisions aren't easily explainable (Jain and Wallace, 2019); however, the score of an answer is directly a sum of the score of each aligned piece, making this structured prediction phase of the model faithful by construction (Jain et al., 2020).",
"Critically, this allows us to understand what parts of the alignment are responsible for a prediction, and if needed, constrain the behavior of the alignment to correct certain types of errors.",
"We view this interpretability and extensibility with constraints as one of the principal advantages of our model.",
"We train our model on the SQuAD-1.1 dataset (Rajpurkar et al., 2016) and evaluate on SQuAD Adversarial (Jia and Liang, 2017), Universal Triggers on SQuAD (Wallace et al., 2019), and several out-of-domain datasets from MRQA (Fisch et al., 2019).",
"Our framework allows us to incorporate natural constraints on alignment scores to improve zero-shot performance under these distribution shifts, as well as explore coverage-accuracy tradeoffs in these settings.",
"Finally, our model's alignments serve as explanations for its prediction, allowing us to ask why certain predictions are made over others and examine scores for hypothetical other answers the model could give.",
"Our approach critically relies on the ability to decompose questions and answers into a graph over text spans.",
"Our model can in principle work for a range of syntactic and semantic structures, including dependency parsing, SRL (Palmer et al., 2005), and AMR (Banarescu et al., 2013).",
"We use SRL in this work and augment it with coreference links, due to the high performance and flexibility of current SRL systems (Peters et al., 2018).",
"Throughout this work, we use the BERT-based SRL system from Shi and Lin (2019) and the SpanBERT-based coreference system from Joshi et al. (2020).",
"An example graph we construct is shown in Figure 2. Both the question and context are represented as graphs where the nodes consist of predicates and arguments.",
"Edges are undirected and connect each predicate and its corresponding arguments.",
"Since SRL only captures the predicate-argument relations within one sentence, we add coreference edges as well: if two arguments are in the same coreference cluster, we add an edge between them.",
"Finally, in certain cases involving verbal or clausal arguments, there might exist nested structures where an argument to one predicate contains a separate predicate-argument structure.",
"In this case, we remove the larger argument and add an edge directly between the two predicates.",
"This is shown by the edge from was to determine (labeled as nested structure) in Figure 2.",
"Breaking down such large arguments helps avoid ambiguity during alignment.",
"Graph alignment approaches like this have proven useful for question answering in previous work (Sachan et al., 2015; Sachan and Xing, 2016; Khashabi et al., 2018).",
"Our framework differs from theirs in that it incorporates a much stronger alignment model (BERT), allowing us to relax the alignment constraints and build a more flexible, higher-coverage model.",
"Alignment Constraints Once we have the constructed graph, we can align each node in the question to its counterpart in the context graph.",
"In this work, we control the alignment behavior by placing explicit constraints on this process.",
"We place a locality constraint on the alignment: adjacent pairs of question nodes must align no more than k nodes apart in the context.",
"k = 1 means we align the question to a connected sub-graph in the context; k = ∞ means we can align to a node anywhere in a connected component of the context graph.",
"In our experiments, we set k = 3.",
"In the following sections, we will discuss more constraints.",
"Altogether, these constraints define a set A of possible alignments.",
"Let T represent the text of the context and question concatenated together.",
"Assume a decomposed question graph Q with nodes q_1, q_2, . . . , q_m represented by vectors, and a decomposed context C with nodes c_1, . . . , c_n represented by vectors.",
"Let a = (a_1, . . . , a_m) be an alignment of question nodes to context nodes, where a_i ∈ {1, . . . , n} indicates the alignment of the i-th question node.",
"Each question node is aligned to exactly one context node, and multiple question nodes can align to the same context node.",
"We frame question answering as a maximization of an alignment scoring function over possible alignments: max_{a ∈ A} f(a, Q, C, T).",
"In this paper, we simply choose f to be the sum of the scores of all aligned pairs, f(a, Q, C, T) = Σ_{i=1}^{m} S(q_i, c_{a_i}, T), where S(q, c, T) denotes the alignment score between a question node q and a context node c.",
"This function relies on BERT (Devlin et al., 2019) to compute embeddings of the question and context nodes and will be described more precisely in what follows.",
"We will train this model as a structured support vector machine (SSVM), described in Section 3.2.",
"Scoring Our alignment scoring process is shown in Figure 3. We first concatenate the question text with the document text into T and then encode them using the pre-trained BERT encoder.",
"We then compute a representation for each node in the question and context using a span extractor, which in our case is the self-attentive pooling layer of Lee et al. (2017).",
"The node representation in the question can be computed in the same way.",
"Then the score of a node pair is computed as a dot product: S(q, c, T) = q · c.",
"Answer Extraction Our model so far produces an alignment between question nodes and context nodes.",
"We assume that one question node contains a wh-word and this node aligns to the context node containing the answer.",
"1 Ideally, we can use this aligned node to extract the actual answer.",
"However, in practice, the aligned context node may only contain part of the answer, and in some cases answering the question based only on the aligned context node can be ambiguous.",
"We therefore use the sentence containing the wh-aligned context node as the new context and use a standard BERT QA model to extract the actual answer post-hoc.",
"In the experiments, we also show the performance of our model when using only the aligned context node without the sentence, which is only slightly worse.",
"We train our model as an instance of a structured support vector machine (SSVM).",
"Ignoring the regularization term, this objective can be viewed as a sum over the training data of a structured hinge loss with the following formulation: Σ_{i=1}^{N} max(0, max_{a ∈ A} [f(a, Q_i, C_i, T_i) + Ham(a, a*_i)] − f(a*_i, Q_i, C_i, T_i)). (Footnote 1: We discuss what to do with other questions in Section 4.1.)",
"where a denotes the predicted alignment, a*_i is the oracle alignment for the i-th training example, and Ham is the Hamming loss between the two.",
"To get the predicted alignment a during training, we need to run loss-augmented inference as we will discuss in the next section.",
"When computing the alignment for node j, if a_j ≠ a*_j, we add 1 to the alignment score to account for the loss term in the above equation.",
"Intuitively, this objective requires the score of the gold prediction to be larger than that of any other hypothesis a by a margin of Ham(a, a*).",
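For toy problem sizes, this per-example structured hinge loss can be computed by brute force. The sketch below is ours, not the authors' code: it enumerates every alignment instead of running loss-augmented beam search, which is only feasible for tiny inputs.

```python
import itertools

def hamming(a, a_gold):
    # Number of question nodes whose alignment differs from the oracle.
    return sum(x != y for x, y in zip(a, a_gold))

def structured_hinge(scores, a_gold):
    # scores[i][j] = S(q_i, c_j, T); f(a) is the sum of chosen pair scores.
    n_ctx = len(scores[0])
    f_gold = sum(scores[i][j] for i, j in enumerate(a_gold))
    best = 0.0
    for a in itertools.product(range(n_ctx), repeat=len(scores)):
        # Loss-augmented score of hypothesis a, relative to the gold.
        f_a = sum(scores[i][j] for i, j in enumerate(a))
        best = max(best, f_a + hamming(a, a_gold) - f_gold)
    return best  # >= 0, since a = a_gold contributes exactly 0
```

When the gold alignment beats every alternative by at least its Hamming distance, the loss is zero; otherwise the most-violating hypothesis sets it.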
"When training our system, we first do several iterations of local training where we treat each alignment decision as an independent prediction, imposing no constraints, and optimize log loss over this set of independent decisions.",
"The local training helps the global training converge more quickly and achieve better performance.",
"Since our alignment constraints do not strongly restrict the space of possible alignments (e.g., by enforcing a one-to-one alignment with a connected subgraph), searching over all valid alignments is intractable.",
"We therefore use beam search to find the approximate highest-scoring alignment: (1) Initialize the beam with top b highest aligned node pairs, where b is the beam size.",
"(2) For each hypothesis (partial alignment) in the beam, compute a set of reachable nodes based on the currently aligned pairs under the locality constraint.",
"(3) Extend the current hypothesis by adding each of these possible alignments in turn and accumulating its score.",
"Beam search continues until all the nodes in the question are aligned.",
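The three steps above can be sketched as follows. This is a simplified sketch under assumptions of our own: question nodes are processed left to right rather than seeded from the top-scoring pairs, and the graph locality constraint is approximated by a linear distance of k between consecutive alignments.

```python
def beam_search_align(scores, beam_size=2, k=1):
    # scores[i][j]: score for aligning question node i to context node j.
    # Each hypothesis is (partial alignment, accumulated score).
    beam = sorted([((j,), scores[0][j]) for j in range(len(scores[0]))],
                  key=lambda h: -h[1])[:beam_size]
    for i in range(1, len(scores)):
        cands = []
        for align, s in beam:
            for j in range(len(scores[i])):
                if abs(j - align[-1]) <= k:      # toy locality constraint
                    cands.append((align + (j,), s + scores[i][j]))
        beam = sorted(cands, key=lambda h: -h[1])[:beam_size]
    return beam[0]  # highest-scoring complete alignment
```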
"An example of one step of beam hypothesis expansion with locality constraint k = 2 is shown in Figure 4. In this state, the two played nodes are already aligned.",
"In any valid alignment, the neighbors of the played question node must be aligned within 2 nodes of the played context node to respect the locality constraint.",
"We therefore only consider aligning to the game , on Feb 7, 2016 and Super Bowl 50 .",
"The alignment scores between these reachable nodes and the remaining nodes in the question are computed and used to extend the beam hypotheses.",
"Note that this inference procedure allows us to easily incorporate other constraints as well.",
"For instance, we could require a hard match on entity nodes (Figure 4 shows an example of constraints during beam search), meaning that two nodes containing entities",
"can only align if they share entities.",
"With this constraint, as shown in the figure, Super Bowl 50 can never be aligned to on February 7, 2016 .",
"We discuss such constraints more in Section 5. Oracle Construction (Section 3.4): Training assumes the existence of gold alignments a*, which must be constructed via an oracle given the ground-truth answer.",
"This process involves running inference based on heuristically computed alignment scores S oracle , where S oracle ( q, c ) is computed by the Jaccard similarity between a question node q and a context node c .",
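A token-level Jaccard score of this kind is straightforward to compute. The sketch below is a simplification of the oracle: instead of the wh-seeded beam inference the paper describes, it just takes the argmax context node per question node, and the names are ours.

```python
def jaccard(q_node, c_node):
    # S_oracle(q, c): token-level Jaccard similarity of the two spans.
    q, c = set(q_node.lower().split()), set(c_node.lower().split())
    return len(q & c) / len(q | c) if (q | c) else 0.0

def oracle_align(question_nodes, context_nodes):
    # Align each question node to its highest-Jaccard context node;
    # nodes with zero similarity everywhere are left unaligned (None).
    alignment = []
    for q in question_nodes:
        scores = [jaccard(q, c) for c in context_nodes]
        best = max(range(len(scores)), key=lambda j: scores[j])
        alignment.append(best if scores[best] > 0 else None)
    return alignment
```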
"Instead of initializing the beam with the b best alignment pairs, we first align the wh-argument in the question with the node(s) containing the answer in the context and then initialize the beam with those alignment pairs.",
"If the Jaccard similarity between a question node and all other context nodes is zero, we set these as unaligned nodes.",
"During training, our approach can gracefully handle unaligned nodes by treating these as latent variables in the structured SVM: the gold target is then the highest-scoring set of alignments consistent with the gold supervision.",
"This involves running a second decoding step on each example to impute the values of these latent variables for the gold alignment.",
"Our focus in this work is primarily robustness, interpretability, and controllability of our model.",
"We focus on adapting to challenging settings in order to stress test our approach.",
"Table 1 (the performance and ablations of our proposed model on the development sets of SQuAD, adversarial SQuAD, and four out-of-domain datasets; each cell is ans-in-wh / F1): Sub-part Alignment: SQuAD normal 84.7/84.5; SQuAD addSent 49.5/50.5; Natural Questions 65.8/61.5; NewsQA 49.3/48.1; BioASQ 63.5/53.4; TBQA 35.1/38.4. – global train+inf: 85.8/85.2; 45.0/46.8; 65.9/62.3; 48.9/47.1; 62.5/52.1; 31.9/34.6. – ans from full sent: 84.7/81.8; 49.5/46.7; 65.8/57.8; 49.3/45.0; 63.5/51.1; 35.1/37.5. BERT QA (F1 only): 87.8; 39.2; 59.4; 48.5; 52.4; 25.3.",
"For all experiments, we train our model only on the English SQuAD-1.1 dataset (Rajpurkar et al., 2016) and examine how well it can generalize to adversarial and out-of-domain settings with minimal modification, using no fine-tuning on new data and no data augmentation that would capture useful transformations.",
"We evaluate on the addSent and addOneSent proposed by Jia and Liang (2017), and the Universal Triggers on SQuAD (Wallace et al., 2019).",
"We also test the performance of our SQuAD-trained models in zero-shot adaptation to new English domains, namely Natural Questions (Kwiatkowski et al., 2019), NewsQA (Trischler et al., 2017), BioASQ (Tsatsaronis et al., 2015), and TextbookQA (Kembhavi et al., 2017), taken from the MRQA shared task (Fisch et al., 2019).",
"Our motivation here was to focus on text from a variety of domains where transferred SQuAD models may at least behave credibly.",
"We excluded, for example, HotpotQA (Yang et al., 2018) and DROP (Dua et al., 2019), since these are so far out-of-domain from the perspective of SQuAD that we do not see them as a realistic cross-domain target.",
"We compare primarily against a standard BERTQA system (Devlin et al., 2019).",
"We also investigate a local version of our model, where we only try to align each node in the question to its oracle node without any global training (the global train+inf ablation); this can still perform reasonably because BERT embeds the whole question and context.",
"When comparing variants of our proposed model, we only consider the questions that have a valid SRL parse and have a wh word (results in Table 1, Table 2, and Figure 5).",
"When comparing with prior systems, for questions that do not have a valid SRL parse or wh word, we back off to the standard BERT QA system (results in Table 3).",
"We set the beam size b = 20 for the constrained alignment.",
"We use BERT-base-uncased for all of our experiments, and fine-tune the model using Adam (Kingma and Ba, 2014) with learning rate set to 2e-5.",
"Our preprocessing uses a SpanBERT-based coreference system (Joshi et al., 2020) and a BERT-based SRL system (Shi and Lin, 2019).",
"We limit the length of the context to 512 tokens.",
"For our global model, we initialize the weights using a locally trained model and then fine-tune using the SSVM loss.",
"We find the initialization helps the model converge much faster and it achieves better performance than learning from scratch.",
"When doing inference, we set the locality constraint k = 3 .",
"Our model is not as good as BERT QA on normal SQuAD but outperforms it in challenging settings.",
"Compared to the BERT QA model, our model is fitting a different data distribution (learning a constrained structure), which makes the task harder.",
"This kind of training scheme does cause some performance drop on normal SQuAD, but we can see that it consistently improves F1 on the adversarial datasets (an 11.3 F1 improvement over BERT QA on SQuAD addSent) and the cross-domain datasets except NewsQA (where it is 0.4 F1 worse).",
"This demonstrates that learning the alignment helps improve the robustness of our model.",
"Here we omit SQuAD addOneSent for simplicity, since the performance on it shows the same trend as SQuAD addSent.",
"Refer to the Appendix for the results on SQuAD addOneSent .",
"Global training and inference improve performance in adversarial settings, despite having no effect in-domain.",
"Normal SQuAD is a relatively easy dataset and the answer for most questions can be found by simple lexical matching between the question and context.",
"From the ablation of global train+inf , we can see that more than 80% of answers can be located by matching the wh-argument.",
"We also observe a similar pattern on Natural Questions.",
"However, as there are very strong distractors in SQuAD addSent, the wh-argument matching is unreliable.",
"In such situations, the constraints imposed by other argument alignments in the question are useful to correct the wrong wh-alignment through global inference.",
"We see that the global training plus inference is consistently better than the local version on all other datasets.",
"Using the strict wh answer extraction still gives strong performance. From the ablation of ans from full sent, we observe that our strictest system, which extracts the answer using only the wh-aligned node, is worse by only 3-4 points of F1 on most datasets.",
"Using the full sentence gives the system more context and maximal flexibility, and allows it to go beyond the argument spans introduced by SRL.",
"We believe that better semantic representations tailored for question answering (Lamm et al., 2020) will help further improvement in this regard.",
"The results on subsets of the universal triggers dataset are shown in Table 2. We see that every trigger results in a bigger performance drop on BERT QA than on our model.",
"Our model is much more stable, especially on who and where question types, in which case the performance drops by only around 2%.",
"(Footnote 3: For the MRQA task, only the paragraph containing the short answer of NQ is provided as context, which eliminates many distractors.",
"In such cases, those NQ questions have a similar distribution as those in SQuAD-1.1, and similarly make no use of the global alignment.)",
"Several factors may contribute to the stability: (1) The triggers are ungrammatical and their arguments often contain seemingly random words, which are likely to get lower alignment scores.",
"(2) Because our model is structured and trained to align all parts of the question, adversarial attacks on span-based question answering models may not fool our model as effectively as they do BERT.",
"In Table 3, we compare our best model (not using constraints from Section 5) with existing adversarial QA models in the literature.",
"We note that the performance of our model on SQuAD-1.1 data is lower than that of those methods, yet we achieve the best overall performance; we trade some in-distribution performance to improve the model's robustness.",
"We also see that our model achieves the smallest normal vs. adversarial gap on addSent and addOneSent , which demonstrates that our constrained alignment process can enhance the robustness of the model compared to prior methods like adversarial training (Yang et al., 2019) or explicit knowledge integration (Wang and Jiang, 2018).",
"One advantage of our explicit alignments is that we can understand and inspect the model's behavior more deeply.",
"This structure also allows us to add constraints to our model to prohibit certain behaviors, which can be used to adapt our model to adversarial settings.",
"In this section, we explore how two types of constraints enable us to reject examples the model is less confident about.",
"Hard constraints can enable us to reject questions where the model finds no admissible answers.",
"Soft constraints allow us to set a calibration threshold for when to return our answer.",
"We focus on evaluating our model's accuracy at various coverage points, the so-called selective question answering setting (Kamath et al., 2020).",
"Constraints on Entity Matches By examining addSent and addOneSent , we find the model is typically fooled when the nodes containing entities in the question align to adversarial entity nodes.",
"An intuitive constraint we can place on the alignment is that we require a hard entity match for each argument in the question that contains entities. Table 3 (performance of our systems compared to the literature on both addSent and addOneSent; columns per the original header: Normal, then addSent and addOneSent overall/adv): R.M-Reader (Hu et al., 2018): 86.6; 58.5/31.1; 67.0/19.6. KAR (Wang and Jiang, 2018): 83.5; 60.1/23.4; 72.3/11.2. BERT + Adv (Yang et al., 2019): 92.4; 63.5/28.9; 72.5/19.9. Our BERT: 87.8; 61.8/39.2/27.0; 70.4/52.6/18.4. Sub-part Alignment*: 84.7; 65.8/47.1/18.9; 72.8/60.1/11.9.",
"Constraints on Alignment Scores The hard entity constraint is quite inflexible and does not generalize well, for example to questions that do not contain an entity.",
"However, the alignment scores we get during inference time are good indicators of how well a specific node pair is aligned.",
"For a correct alignment, every pair should get a reasonable alignment score.",
"However, if an alignment is incorrect, there should exist some bad alignment pairs which have lower scores than the others.",
"We can reject those samples by finding bad alignment pairs, which both improves the precision of our model and also serves as a kind of explanation as to why our model makes its predictions.",
"We propose to use a simple heuristic to identify the bad alignment pairs.",
"We first find the max score S_max over all possible alignment pairs for a sample; then, for each alignment pair (q_i, c_j) of the prediction, we calculate the worst alignment gap (WAG) g = min_{(q,c) ∈ a} (S_max − S(q, c)).",
"If g is beyond some threshold, it indicates that the alignment pair is not reliable.",
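The heuristic can be written directly from the formula as stated. A minimal sketch with hypothetical names; the score dictionary is toy data, not values from the paper.

```python
def worst_alignment_gap(pair_scores, predicted_pairs):
    # S_max over all possible alignment pairs for this example.
    s_max = max(pair_scores.values())
    # g = min over predicted pairs of (S_max - S(q, c)); predictions
    # whose g exceeds a threshold are rejected as unreliable.
    return min(s_max - pair_scores[p] for p in predicted_pairs)

scores = {("q0", "c0"): 5.0, ("q0", "c1"): 2.0, ("q1", "c2"): 1.0}
g = worst_alignment_gap(scores, [("q0", "c1"), ("q1", "c2")])
```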
"Comparison to BERT Desai and Durrett (2020) show that pre-trained transformers like BERT are well-calibrated on a range of tasks.",
"Since we are rejecting the unreliable predictions to improve the precision of our model, we reject the same number of examples for the baseline using the posterior probability of the BERT QA predictions.",
"To be specific, we rank the predictions of all examples by the sum of the start and end posterior probabilities and compute the F1 score on the top k predictions.",
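Selective evaluation of this kind reduces to ranking by a confidence score and measuring F1 at each coverage level. A small sketch with made-up numbers; `f1_at_coverage` is our name, not a function from the paper.

```python
def f1_at_coverage(confidences, f1_scores, coverage):
    # Rank predictions by confidence (for the BERT QA baseline, the sum
    # of start and end posterior probabilities) and keep the top fraction.
    order = sorted(range(len(confidences)), key=lambda i: -confidences[i])
    k = max(1, int(len(order) * coverage))
    return sum(f1_scores[i] for i in order[:k]) / k
```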
"(Footnote 4: The reason we look at differences from the max alignment is to calibrate the scores based on what typical scores look like for that instance.",
"We find that these are on different scales across different instances, so the gap is more useful than an absolute threshold.)",
"On Adversarial SQuAD, the confidence scores of a normal BERT QA model do not align with its performance.",
"From Figure 5, we find that the highest-confidence answers from BERT (i.e., in low coverage settings) are very inaccurate.",
"One possible explanation of this phenomenon is that BERT overfits to the pattern of lexical overlap, and is actually most confident on adversarial examples highly similar to the input.",
"In general, BERT's confidence is not an effective heuristic for increasing accuracy.",
"Hard entity constraints improve the precision but are not flexible.",
"Figure 5 also shows that by adding a hard entity constraint, we achieve a 71.4 F1 score, an 8.6-point improvement over the unconstrained model, at the cost of covering only 60% of samples.",
"Under the hard entity constraint, the model is not able to align to the nodes in the adversarial sentence, but the performance is still",
"lower than what it achieves on normal SQuAD.",
"We examine some of the error cases and find that for a certain number of samples, there is no path from the node satisfying the constraint to the node containing the answer (e.g., they hold a more complex discourse relation, while we only consider coreference as a cross-sentence relation).",
"In such cases, our method cannot find the answer.",
"A smaller worst alignment gap indicates better performance.",
"As opposed to BERT, our alignment score is well calibrated on those adversarial examples.",
"This substantiates our claim that the learned alignment scores are good indicators of how trustworthy alignment pairs are.",
"Also, we see that when the coverage is the same as the entity constraint, the performance under the alignment score constraint is even better.",
"The alignment constraints are simultaneously more flexible than the hard constraint and also more effective.",
"In this section, we give several examples of the alignment and demonstrate how those scores can act as an explanation to the model's behavior.",
"Those examples are shown in Figure 6.",
"As shown by the dashed arrows, all adversarial alignments contain at least one alignment with significantly lower alignment score.",
"The model is overconfident towards the other alignments with a high lexical overlap as shown by the bold arrows.",
"These overconfident alignments also show that the predicate alignment learned on SQuAD-1.1 is not reliable.",
"To further improve the quality of predicate alignment, either a more powerful training set or a new predicate alignment module is needed.",
"Crucially, with these scores, it is easy for us to interpret our model's behavior.",
"For instance, in example",
"(a), the very confident predicate alignment forces Luther's 95 Theses to have no choice but to align to Jeff Dean, which is unrelated.",
"Because we have alignments over the sub-parts of a question, we can inspect our model's behavior in a way that the normal BERT QA model does not allow.",
"We believe that this type of debuggability provides a path forward for building stronger QA systems in high-stakes settings.",
"Adversarial Attacks in NLP.",
"Adversarial attacks in NLP may take the form of adding sentences like adversarial SQuAD (Jia and Liang, 2017), universal adversarial triggers (Wallace et al., 2019), or sentence perturbations: Ribeiro et al. (2018) propose deriving transformation rules, Ebrahimi et al. (2018) use character-level flips, and Iyyer et al. (2018) use controlled paraphrase generation.",
"The highly structured nature of our approach makes it more robust to such attacks and provides hooks to constrain the system to improve performance further.",
"Neural module networks.",
"Neural module networks are a class of models that decompose a task into several sub-tasks, addressed by independent neural modules, which make the model more robust and interpretable (Andreas et al., 2016; Hu et al., 2017; Cirik et al., 2018; Hudson and Manning, 2018; Jiang and Bansal, 2019).",
"Like these, our model is trained end-to-end, but our approach uses structured prediction and a static network structure rather than dynamically assembling a network on the fly.",
"Our approach could be further improved by devising additional modules with distinct parameters, particularly if these are trained on other datasets to integrate additional semantic constraints.",
"Unanswerable questions Our approach rejects some questions as unanswerable.",
"This is similar to the idea of unanswerable questions in SQuAD 2.0 (Rajpurkar et al., 2018), which have been studied in other systems (Hu et al., 2019).",
"However, techniques to reject these questions differ substantially from ours: many SQuAD 2.0 questions require not only a correct alignment between the question and context but also modeling the relationship between arguments, which is beyond the scope of this work and could be promising future work.",
"Also, the setting we consider here is more challenging, as we do not assume access to such questions at training time.",
"Graph-based QA Khashabi et al. (2018) propose to answer questions through a similar graph alignment using a wide range of semantic abstractions of the text.",
"Our model differs in two ways: (1) Our alignment model is trained end-to-end while their system mainly uses off-the-shelf natural language modules.",
"(2) Our alignment is formed as node pair alignment rather than finding an optimal sub-graph, which is a much more constrained and less flexible formalism.",
"Sachan et al. (2015); Sachan and Xing (2016) propose to use a latent alignment structure most similar to ours.",
"However, our model supports a more flexible alignment procedure than theirs does, and can generalize to handle a wider range of questions and datasets.",
"Past work has also decomposed complex questions to answer them more effectively (Talmor and Berant, 2018; Min et al., 2019; Perez et al., 2020).",
"Wolfson et al. (2020) further introduce a Question Decomposition Meaning Representation (QDMR) to explicitly model this process.",
"However, the questions they answer, such as those from HotpotQA (Yang et al., 2018), are fundamentally designed to be multi-part and so are easily decomposed, whereas the questions we consider are not.",
"Our model theoretically could be extended to leverage these question decomposition forms as well.",
"We note a few limitations and some possible future directions of our approach.",
"First, errors from SRL and coreference resolution systems can propagate through our system.",
"However, because our graph alignment is looser than those in past work, we did not observe this to be a major performance bottleneck.",
"The main issue here is the inflexibility of the SRL spans.",
"For example, not every SRL span in the question can be appropriately aligned to a single SRL span in the context.",
"Future work focusing on automatic span identification and alignment, like recent work on end-to-end coreference systems (Lee et al., 2017), would be promising.",
"Second, from the error analysis we see that our proposed model is good at noun phrase alignment but not predicate alignment, which calls for better modeling of the predicate alignment process.",
"For example, we can decompose the whole alignment procedure into separate noun phrase and predicate alignment modules, in which predicate alignment could be learned using different models or datasets.",
"Finally, because our BERT layer looks at the entire question and answer, our model can still leverage uninterpretable interactions in the text.",
"We believe that modifying the training objective to more strictly enforce piecewise comparisons could improve interpretability further while maintaining strong performance.",
"In this work, we presented a model for question answering through sub-part alignment.",
"By structuring our model around explicit alignment scoring, we show that our approach can generalize better to other domains.",
"Having alignments also makes it possible to filter out bad model predictions (through score constraints) and interpret the model's behavior (by inspecting the scores).",
"This work was partially supported by NSF Grant IIS-1814522 and NSF Grant SHF-1762299.",
"The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research.",
"Results presented in this paper were obtained using the Chameleon testbed supported by the National Science Foundation.",
"Thanks as well to the anonymous reviewers for their helpful comments."
] | [
"abstain",
"abstain",
"result",
"method",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"method",
"objective",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"other",
"other",
"method",
"abstain",
"method",
"other",
"objective",
"method",
"other",
"objective",
"abstain",
"other",
"abstain",
"other",
"other",
"method",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"method",
"result",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Progress with supervised Open Information Extraction (OpenIE) has been primarily limited to English due to the scarcity of training data in other languages.",
"In this paper, we explore techniques to automatically convert English text for training OpenIE systems in other languages.",
"We introduce the Alignment-Augmented Consistent Translation (AACTRANS) model to translate English sentences and their corresponding extractions consistently with each other, avoiding the changes in vocabulary or semantic meaning that may result from independent translations.",
"Using the data generated with AACTRANS, we train a novel two-stage generative OpenIE model, which we call GEN2OIE, that outputs for each sentence: 1) relations in the first stage and 2) all extractions containing the relation in the second stage.",
"GEN2OIE increases relation coverage using a training data transformation technique that is generalizable to multiple languages, in contrast to existing models that use an English-specific training loss.",
"Evaluations on 5 languages (Spanish, Portuguese, Chinese, Hindi, and Telugu) show that GEN2OIE with AACTRANS data outperforms prior systems by a margin of 6-25% F1.",
"Open Information Extraction (OpenIE) is the task of converting unstructured text into semi-structured tuples of the format <subject; relation; object>, where these three components are textual phrases broadly extracted from the original text (Etzioni et al., 2011).",
"OpenIE tuples have shown utility in various downstream tasks (Mausam, 2016) like Question Answering (Fader et al., 2013; Khot et al., 2017), Machine Reading (Poon et al., 2010), Multi-Document Summarization (Christensen et al., 2014; Fan et al., 2019), Schema Induction (Balasubramanian et al., 2013), and Knowledge Base Construction (Gupta et al., 2019; Chandrahas and Talukdar, 2021). (Code and models are released at github.com:dair-iitd/moie.)",
"With widespread adoption of Deep Learning in NLP, Open Information Extraction (OpenIE) systems have gone through a paradigm shift from using rule-based, statistical systems to supervised neural models.",
"However, both types of systems have been limited to only a few languages: earlier systems required language-specific OpenIE insights, and current systems require annotated training corpora that pose a barrier, particularly for low-resource languages.",
"Related tasks such as Semantic Role Labeling face similar challenges in extending to multiple languages.",
"X-SRL (Daza and Frank, 2020) addresses this by automatic translation of English sentences to the target language followed by label projection to infer the semantic role labels in the translated sentence.",
"However, translating the sentence alone may be insufficient for OpenIE because the generated tuples (also referred to as extractions) can include additional words absent in the sentence or require some changes to the word morphology used in the sentence.",
"Although less prevalent in English, these characteristics need to be addressed in other languages.",
"The X-SRL approach may be extended such that each extraction can also be automatically translated and the subject, relation, and object labels projected from the English extractions.",
"However, independent translation of sentence and extraction may introduce unwanted lexical (e.g., synonyms) or semantic (e.g., change in gender) variations between the translations, as shown in Table 1.",
"Such translation inconsistencies in the training data lead to invalid OpenIE examples.",
"The sentence and extraction translations must use the same words or their morphological variants as much as possible.",
"Hence, we propose Alignment-Augmented Consistent Translation (AACTRANS), a seq2seq model that translates the given input text in a way that is consistent with a reference translation by biasing the translation to use words similar to the reference.",
"To ensure that translations of sentence and extractions are consistent with each other, we use AACTRANS model to translate each of them with the same reference.",
"In Section 4.1, we describe the reference used in training and inference.",
"Both generation based (Kolluru et al., 2020b) and labeling based (Ro et al., 2020) architectures have shown competitive performance on English OpenIE.",
"However, labeling based models cannot naturally introduce new words or change morphology of sentence words required in some languages.",
"Therefore, we use a new generative model, GEN2OIE, that contains two stages: the first stage produces all the relations in the sentence and the second stage generates the extractions containing the given relation.",
"We also use a training heuristic specific to two stage models that increases relation coverage across multiple languages.",
"Our major contributions are that we:",
"1. introduce a novel technique for transferring data from English to other languages using the AACTRANS model and label projection,",
"2. propose two-stage generative model, GEN2OIE, for training OpenIE system in multiple languages,",
"3. release OpenIE evaluation datasets for two Indian languages, Hindi and Telugu, and",
"Our work is in line with the recent trend of extending IE and knowledge-based NLP systems to multiple languages.",
"Recent works have explored distantly supervised relation extraction (Rathore et al., 2022; Bhartiya et al., 2022), knowledge-base completion (Singh et al., 2021), and fact linking (Kol-luru et al., 2021).",
"Our focus is OpenIE.",
"Many of the prior OpenIE systems, both nonneural (OpenIE-4 (Pal and Mausam, 2016; Christensen et al., 2011), OpenIE-5 (Saha et al., 2017; Saha and Mausam, 2018), ClausIE (Del Corro and Gemulla, 2013)) and neural (RnnOIE (Stanovsky et al., 2018), OpenIE-6 (Kolluru et al., 2020a)) have been deployed for English.",
"Moreover, OpenIE systems built for other languages often work only for a single language due to their reliance on language-specific resources.",
"For example, Bassa et al. (2018); Rahat and Talebpour (2018); Romadhony et al. (2018); Guarasci et al. (2020); Papadopoulos et al. (2021) focus on German, Persian, Indonesian, Italian, and Greek, respectively.",
"Claro et al. (2019) present the importance of and various challenges involved with building multilingual OpenIE systems.",
"Neural models like Logician (Sun et al., 2018) and CrossOIE (Cabral et al., 2020) use language-specific training data.",
"Reliance on manually-annotated data or language-specific resources makes it infeasible to develop systems for the plurality of languages in the world, due to the cost and effort involved.",
"In contrast, our automated data conversion method can handle even low-resource languages like Telugu.",
"Non-neural systems such as PredPatt (White et al., 2016) and ArgOE (Gamallo and Garcia, 2015) work for multiple languages by using CoNLL-X and Universal Dependency parses respectively, to extract predicate-argument structures.",
"Owing to their pipelined nature, their performance is below that of neural systems like Multi 2 OIE (Ro et al., 2020).",
"Multi 2 OIE is a two-stage labeling model that works for English, Spanish and Portuguese.",
"GEN2OIE extends this 2-stage design to the generative paradigm which allows for better modeling of the OpenIE task.",
"The underlying mBERT encoder in Multi 2 OIE allows for cross-lingual generalization across various languages even after training with only English supervised data.",
"However, dependence on zero-shot generalization limits the performance of the model.",
"Two types of methods have been proposed for constraining the outputs of the machine translation systems: 1) altering the decoding algorithm (Hasler et al., 2018), or 2) modifying the training methodology (Chen et al., 2020; Dinu et al., 2019).",
"We follow the second approach, constraining translations by AACTRANS to be consistent with a reference sentence.",
"Unlike prior work, which focuses on constraining the translations of a few words, our task requires constraining the entire translation.",
"We make use of awesome-align (Dou and Neubig, 2021a), an unsupervised word alignment technique (Och and Ney, 2003), that outputs the alignment between words in sentences of two languages.",
"Awesome-align is trained using only a parallel set of sentences in the two languages and generates aligned target words for each source word.",
"Transferring linguistic annotations from a source to a target language was pioneered by David et al. (2001) and has been used in the context of Semantic Role Labeling (Annesi and Basili, 2010) and PoS-tagging (Zennaki et al., 2019).",
"After consistent translation, we make use of Crosslingual Projection (Faruqui, 2015), to transfer OpenIE tags.",
"For the transfer of OpenIE data from one language to another, we represent the source language as E and the target language as F .",
"Further, we use sent E and ext E to represent a sentence and extraction in the source language and aact sent F and aact ext F to represent the transferred sentence and extraction in the target language.",
"To aid in the translation of extractions, we create a sub-sentence from each extraction by concatenating the phrases in all the fields of the extraction.",
"The order of concatenation is such that the formed sub-sentence is grammatically valid.",
"We refer to this sub-sentence as an ext-sentence and represent it as es L , where the subscript L represents its language.",
"For most English extractions, the ext-sentence corresponds to concatenating the fields in the order of subject, relation and object.",
"However, other languages may follow a different order or allow for multiple orders.",
"We rely on the output of the system that translates the English ext-sentence to determine the ext-sentence in other languages.",
"Moreover, each extraction can be seen as a labeling over the words of the ext-sentence with either the Subject, Relation or Object tags.",
"Tags for each word in the ext-sentence can also be regarded as the extraction.",
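As an illustration of the ext-sentence construction described above, the following sketch forms the sub-sentence in the English subject, relation, object order and recovers the equivalent tag view; the function names and the dict-based extraction representation are hypothetical, not the paper's code.

```python
def make_ext_sentence(extraction, field_order=("subject", "relation", "object")):
    # Concatenate the extraction fields into an ext-sentence
    # (English default order: subject, relation, object).
    return " ".join(extraction[f] for f in field_order if extraction.get(f))

def ext_sentence_tags(extraction, field_order=("subject", "relation", "object")):
    # View the same extraction as S/R/O tags over the ext-sentence words.
    tag_of = {"subject": "S", "relation": "R", "object": "O"}
    tags = []
    for f in field_order:
        tags.extend(tag_of[f] for _ in extraction.get(f, "").split())
    return tags
```

For the extraction (jenny; liked; fresh fish), this yields the ext-sentence "jenny liked fresh fish" with tags S, R, O, O.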
"In this section we describe the technique used to convert OpenIE training data from source language E to a target language F .",
"The source sentence, sent E , and all its corresponding ext-sentences, es E , are consistently translated to language F (Section 4.1), and then, for each extraction in language E , ext E , the S , R or O labels are projected to the translated ext-sentence, es F , to form the extraction, ext F , in language F (Section 4.2).",
"Figure 1 describes the pipeline with the help of an example.",
"We introduce a new Seq2Seq-based translation model called Alignment-Augmented Consistent Translation (AACTRANS) to ensure that sentences and ext-sentences are translated consistently from languages E to F .",
"We define two translations as consistent if similar phrases have the same grammatical structure, vocabulary and morphology, while allowing for minimal changes necessary to ensure fluency.",
"To ensure consistency among translations of multiple pieces of text (both the sentence and respective ext-sentences present in an English OpenIE instance), we make use of a reference text in language F to guide all of their translations.",
"By individually maintaining consistency with the reference, their respective translations end up being consistent to one another as well.",
"To generate a translation f (language F ) of text e (language E ), consistent with a reference r (lan-guage F ), we use the following procedure.",
"Firstly, given e = e_1 e_2 ... e_N and r = r_1 r_2 ... r_M, we find the set of aligned words A_{e_i} = { r_j } for each word e_i in e, using a word alignment model.",
"Secondly, the aligned text e' is constructed by concatenating each of the words e_i in e with their aligned words A_{e_i}, using ## as a separator (shown as <1>, <3> <4> and <2>, <3> <5> in Figure 1).",
"If e_i is aligned to the words r_j, r_k (j < k), then e' contains e_i ## r_j r_k #.",
"If e_i has no aligned words, then e' contains e_i #.",
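The construction of the alignment-augmented input described above can be sketched as follows; the exact separator conventions and the dict-based alignment format are assumptions read off the description, not the paper's implementation.

```python
def build_aligned_input(src_words, ref_words, alignment):
    # For each source word, append its aligned reference words after '##'
    # and close the group with '#'; unaligned words get just '#'.
    # `alignment` maps a source index to a list of reference indices.
    parts = []
    for i, word in enumerate(src_words):
        aligned = [ref_words[j] for j in sorted(alignment.get(i, []))]
        if aligned:
            parts.append(word + " ## " + " ".join(aligned) + " #")
        else:
            parts.append(word + " #")
    return " ".join(parts)
```

For example, with source ["she", "sat"], reference ["ella", "se", "sento"] and the second word aligned to the last two reference words, the input becomes "she # sat ## se sento #".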
"Thirdly, the AACTRANS model takes e' as input and produces the sequence f as output, which represents a translation of e that is biased to use the aligned reference words (shown as <4> <7> and <5> <8> in Figure 1).",
"Next, we discuss the training and inference of the AACTRANS model.",
"Training : We use parallel sentences of languages E and F that are available in existing translation corpora for training the AACTRANS model.",
"For each parallel sentence pair e and f , we use the sentence f itself as the reference r .",
"Using the alignments between the words of e and f , we form the input e', as discussed.",
"The AACTRANS Seq2Seq model is trained with e' as input and f as output.",
"Since e' contains words from f , the model learns to use them during training.",
"Inference : Here, we consistently translate English sentence sent E and each of its ext-sentences es E .",
"We use an off-the-shelf translation system to translate sent E to language F , represented as t sent F .",
"t sent F is used as the common reference r for constructing the aligned sentence al sent EF and the aligned ext-sentence al es EF from the sentence sent E and ext-sentence es E , respectively.",
"We then apply the trained AACTRANS model on al sent EF and al es EF to generate the target sentence aact sent F and target ext-sentence aact es F , respectively.",
"Each word in the target ext-sentence, aact es F , must be labeled with either the Subject, Relation, or Object tag to form the completed extraction in language F .",
"The tags from the corresponding ext E are projected onto aact es F using the Crosslingual Projection algorithm (Faruqui, 2015) (described in Appendix A), which uses word alignments between es E and aact es F and produces as output, the tags over aact es F , giving extraction aact ext F .",
"The final set of <sentence, extractions> pairs constitutes the data for training an OpenIE system in language F .",
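A minimal sketch of projecting S, R, O tags through word alignments, in the spirit of the label-projection step above; the real CLP algorithm (Faruqui, 2015) handles unaligned and many-to-many cases more carefully, so this is illustrative only.

```python
def project_tags(src_tags, alignments, tgt_len, default="O"):
    # Copy each source word's tag onto its aligned target word.
    # `alignments` is a list of (source index, target index) pairs;
    # unaligned target words fall back to the `default` tag.
    tgt_tags = [default] * tgt_len
    for i, j in alignments:
        tgt_tags[j] = src_tags[i]
    return tgt_tags
```

Applied to the tags of an English ext-sentence and its alignment to the translated ext-sentence, this produces the tag sequence that constitutes the target-language extraction.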
"Thus the overall flow is: 1) AACTRANS model training is done on parallel corpus, 2) AACTRANS model inference is applied on language E OpenIE examples, 3) CLP projection is used to obtain the labelled extractions, and 4) the generated data is used to train OpenIE system like GEN2OIE, which is discussed next.",
"To train OpenIE systems in multiple languages, we use a novel GEN2OIE model that extends the 2-stage design of Multi 2 OIE (Ro et al., 2020) to a generative paradigm.",
"The first stage generates all possible relations in the sentence and the second stage generates all extractions that contain a given relation, thus overcoming the limitations of the Multi 2 OIE model.",
"Moreover, due to its generative nature, GEN2OIE can add new words or introduce changes in morphology that may be necessary for producing correct extractions, which cannot be achieved by labeling models.",
"Stage-1 Seq2Seq : The input sentence is passed to the encoder, and the decoder generates a string formed by concatenating the set of relations from all the extractions, separated by a [SEP] token.",
"During training, the target relations are concatenated in the order in which they occur in the sentence.",
"We find that a deterministic order is important for adding stability to the model training.",
"Stage-2 Seq2Seq : To produce extractions corresponding to each relation generated in Stage-1, the relation r is concatenated with the input sentence s and passed to the encoder as r [SEP] s .",
"The decoder is trained to generate all the extractions containing the relation r .",
"Multiple extractions are separated by an <e> token and each extraction contains delimiter tokens to identify its various parts.",
"The surrounding <s>...</s>, <r>...</r> and <o>...</o> tokens identify the subject, relation and object phrases.",
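Given this output format, a Stage-2 decoding can be parsed back into structured extractions roughly as follows; this parser is an illustration of the format, not the paper's implementation.

```python
import re

def parse_stage2_output(decoded):
    # Split the decoder output on the <e> extraction separator, then read
    # the <s>/<r>/<o> delimited subject, relation and object fields.
    extractions = []
    for chunk in decoded.split("<e>"):
        fields = {}
        for tag in ("s", "r", "o"):
            match = re.search(r"<{0}>(.*?)</{0}>".format(tag), chunk)
            if match:
                fields[tag] = match.group(1).strip()
        if fields:
            extractions.append(fields)
    return extractions
```

For instance, "<s> jenny </s> <r> liked </r> <o> fresh fish </o>" parses to a single extraction with subject "jenny", relation "liked" and object "fresh fish".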
"Labeling models like OpenIE-6 (Kolluru et al., 2020a) have used constrained training to increase the relation coverage.",
"However, the constraints are limited to English and specific to labeling architectures.",
"We introduce a simple parts-of-speech based heuristic during Stage-1 training of GEN2OIE that increases the relation coverage in the generative paradigm while being applicable across languages.",
"Relation Coverage (RC) : We observe that for generating all possible extractions, all the verbs in the sentence must be contained in some relation.",
"However, the extractions of training data may be incomplete and not satisfy this property.",
"Therefore, during the training phase, we modify the input to the Stage-1 model by removing the verbs in the sentence that are not present in the relation of any extraction.",
"Thus the model learns that every verb must be included in some relation and applies the same during inference as well.",
"This heuristic does not affect Stage-2 model training.",
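The relation-coverage heuristic can be sketched as a simple filter over the Stage-1 training input; the "VERB" tag is assumed to come from an external POS tagger, and the function name is illustrative.

```python
def apply_relation_coverage(words, pos_tags, gold_relations):
    # Drop verbs that appear in no gold relation, so that every verb the
    # Stage-1 model sees at training time is covered by some relation.
    covered = {w for rel in gold_relations for w in rel.split()}
    return [w for w, p in zip(words, pos_tags) if p != "VERB" or w in covered]
```

A verb missing from every gold relation is thus removed from the input sentence, teaching the model that all remaining verbs belong to some relation.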
"The word log probabilities assigned by the Stage-2 decoder can be summed to serve as a confidence score for the extractions generated by GEN2OIE.",
"We experiment with using a separate model for obtaining the confidence scores.",
"A sequence-labeling model is trained on each language's extractions, with the ext-sentence as input and S, R, O labels over the ext-sentence as the output.",
"The log probabilities given by the sequence-labeling model to the labels predicted by the GEN2OIE model are summed up to get the new confidence scores.",
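Either confidence score reduces to summing per-token log probabilities over an extraction; a minimal sketch of scoring and ranking, with illustrative names:

```python
def rank_extractions(extractions, token_logprobs):
    # Score each extraction by the sum of its per-token log probabilities
    # (its sequence log-likelihood) and sort by descending confidence.
    scored = sorted(zip(extractions, token_logprobs),
                    key=lambda pair: -sum(pair[1]))
    return [ext for ext, _ in scored]
```

Swapping in the labeling model's log probabilities for the decoder's gives the rescored ranking described above without changing this scheme.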
"We train OpenIE systems in 5 languages, Spanish (ES), Portuguese (PT), Chinese (ZH), Hindi (HI) and Telugu (TE), by using the training data transferred from English to the respective language.",
"For training the Seq2Seq models used in the data generation pipeline and the OpenIE systems based on the GEN2OIE architecture, we choose either the mBART (Liu et al., 2020) or mT5 (Xue et al., 2020) model depending on the particular language.",
"Both of them are pre-trained multilingual Seq2Seq models that are trained with a span denoising objective on a large corpus of text containing many languages.",
"mBART is pre-trained on the CC25 corpus and mT5 on the mC4 corpus, which contain text in 25 and 101 languages, respectively.",
"Since mBART does not support Portuguese and Telugu, we use mT5 for these two languages and mBART for the remaining 3 languages.",
"We use the default hyperparameters recommended for these models; they are reported in Appendix F.",
"Training Datasets : For training the AACTRANS model, we make use of parallel (English, language F ) sentences available in standard translation corpora, using the method described in Section 4.",
"For Spanish we use parallel sentences from the EuroParl corpus (Koehn et al., 2005), and for Portuguese we use a subset of the ParaCrawl corpus (Ban et al., 2019), as chosen by Lopes et al. (2020).",
"For Hindi we use the IIT-B corpus (Kunchukuttan et al., 2018), and for Telugu we use the Samanantar corpus (Ramesh et al., 2021).",
"For Chinese we use the data released for WMT19 (Barrault et al., 2019).",
"We list the BLEU scores of the various systems in Appendix C.",
"We use the OIE4 training corpus from Kolluru et al. (2020b) and transfer it to other languages for training OpenIE systems.",
"Evaluation Datasets and Metrics : For evaluating translation systems we use the test sets available in the respective corpora and use SacreBLEU (Post, 2018) as the metric.",
"For evaluating different OpenIE systems we use the Optimal F1 and Area Under Curve (AUC) as computed by the CaRB (Bhardwaj et al., 2019) scoring function.",
"For Spanish, Portuguese OpenIE we use test sets provided in Ro et al. (2020).",
"For Chinese OpenIE, we randomly choose 10% of the SAOKE dataset (Sun et al., 2018).",
"In order to evaluate our method on medium and low resource languages, we release new OpenIE test sets in Hindi and Telugu.",
"Human annotators who are fluent in both languages and knowledgeable about the OpenIE task translated about 300 randomly chosen sentences and their corresponding extractions from the CaRB test set.",
"They were paid $2.5 per sentence.",
"Table 2 lists the number of examples in different languages used for training and evaluating translation and OpenIE systems.",
"We perform experiments to answer the questions:",
"1. How effective is the GEN2OIE model?",
"2. What is the quality of data generated with the AACTRANS+CLP pipeline, assessed both by the final performance of systems trained using it and with metrics defined for evaluating consistency?",
"(SacreBLEU signature: BLEU+case.mixed+numrefs.1+smooth.none+tok.intl+version.1.5.1.)",
"Table 2: Data statistics for OpenIE examples and (English, language F ) parallel sentences. Translation Train: ES 1.9M, PT 5M, ZH 1M, HI 1.6M, TE 4.8M; Translation Test: ES 38,473, PT 99,087, ZH 2,001, HI 2,507, TE 2,390. OpenIE Train: 91K for every language; OpenIE Test: EN 641, ES 594, PT 594, ZH 3,833, HI 298, TE 302.",
"3. What are the roles of different components in the GEN2OIE and AACTRANS+CLP data?",
"To study the baseline monolingual effectiveness of GEN2OIE, we first train and evaluate the system on English data.",
"The results are shown in Table 3.",
"We compare with previously proposed English OpenIE models such as Multi 2 OIE (Ro et al., 2020), OpenIE6 (Kolluru et al., 2020a) and IMoJIE (Kolluru et al., 2020b).",
"We also consider individual components in OpenIE6, the IGL and Constrained-IGL (CIGL) architectures.",
"CIGL achieves the highest performance among all prior models but makes use of English-specific constraints in training.",
"We find that GEN2OIE, which uses the proposed language-agnostic relation coverage (RC), outperforms CIGL by 0.4% F1.",
"However, its AUC remains lower.",
"Therefore, we rescore the generated extractions with a labeling-based rescoring model (Section 6).",
"This results in a new state of the art for English in F1 and AUC with the labeling-based rescoring resulting in a 2.9% AUC gain over CIGL.",
"To further analyze the effectiveness of our 2-stage architecture, we introduce another model called GENOIE that outputs all extractions for a sentence as a single string, separated by an <e> token.",
"We find that using GENOIE results in a (2.3, 2.0)% drop in (F1, AUC) compared to GEN2OIE, which leverages RC.",
"We also report GEN2OIE performance without using RC.",
"In order to test the quality of the OpenIE examples generated using the AACTRANS+CLP pipeline, we train both the GENOIE and GEN2OIE models over the data generated for different languages.",
"In Table 4, we compare it with examples generated from two other methods, SentTrans and SentExtTrans.",
"SentTrans+CLP represents an adaptation of X-SRL (Daza and Frank, 2020) for OpenIE, where only the sentence is translated and each extraction, expressed as a labeling over the words in the sentence, is projected onto the translated sentence using the CLP algorithm described in Section 4.2.",
"The projected extraction is then a labeling over the translated sentence; hence it uses the same morphology as the sentence and cannot add new words.",
"SentExtTrans+CLP uses independent translation of English sentence and ext-sentences followed by CLP algorithm between the English and translated ext-sentences to transfer the labels.",
"Although this allows for adding new words and changing morphology, it can result in a lack of consistency between the translations.",
"We find that both GENOIE and GEN2OIE show consistent gains with AACTRANS+CLP data across various languages, when compared with SentExtTrans+CLP and SentTrans+CLP data.",
"We further use rescoring models that are trained on the same AACTRANS+CLP data.",
"Labeling-based rescoring achieves significantly higher AUC, with as much as 8.3% gain in Telugu.",
"We experiment with two versions of Multi 2 OIE: 1) trained only on English OpenIE data and applied to other languages in a zero-shot manner and 2) using language-specific training data generated from SentTrans+CLP.",
"We specifically choose SentTrans+CLP data as all the extractions can be expressed as labels over the sentence, which is a requirement for training Multi 2 OIE, which is itself a labeling model.",
"We find that Multi 2 OIE model trained with SentTrans+CLP data improves over the zero-shot setting in all languages other than Chinese (discussed below).",
"However, it performs significantly worse than GEN2OIE by (5.2, 3.3)% in (F1, AUC) on average, even on training with the same SentTrans+CLP data.",
"This can be attributed to Multi 2 OIE's inability to handle: 1) overlapping relations, 2) multiple extractions per relation, 3) adding auxiliary words, or 4) changing inflectional forms, as shown in Table 5.",
"We train IMoJIE and OpenIE6 (initialized with mBERT) on AACTRANS+CLP and SentTrans+CLP data.",
"We find that they underperform GEN2OIE and Multi 2 OIE.",
"Table 5: Sentence and OpenIE predictions of GEN2OIE in English, Telugu and Hindi. English example: for the sentence 'George Bluth Sr., patriarch of the Bluth family, is the founder and former CEO of the Bluth Company.', GEN2OIE predicts <s> George Bluth Sr. </s> <r> is patriarch of </r> <o> the Bluth family </o>, <s> George Bluth Sr. </s> <r> is </r> <o> the founder and former CEO of the Bluth Company </o>, and <s> George Bluth Sr. </s> <r> is </r> <o> patriarch of the Bluth family </o>. The Telugu example is based on 'Sharon's longtime rival Benjamin Netanyahu was elected as leader of Likud' and the Hindi example on 'John Lambert put forward a new constitution known as the Instrument of Government'; the Telugu and Hindi script predictions are not reproduced here.",
"Compared to the two-stage models, both IMoJIE and OpenIE6 generate all the extractions autoregressively, which makes them more susceptible to noise in the automatically generated training data.",
"We additionally compare with Faruqui (2015), where the test sentence is translated into English, extractions are generated using OpenIE6 and they are projected back onto the test sentence.",
"We find that this system performs poorly due to the lack of language-specific training.",
"We observe that all systems have low performance on Chinese.",
"We attribute this to various artifacts present in the SAOKE test set, which includes special relations such as DESC , TIME , ISA , etc.",
"Since these extractions cannot be generated in our pipeline, we observe performance of only 33.2% F1 and 15.8% AUC with our best model, when compared to training GEN2OIE with SAOKE training data, which gives 52.5% F1 and 32% AUC.",
"We additionally train the GEN2OIE model using mT5 on AACTRANS data for all five languages (GEN2OIE-mT5 in Table 4) and find improvements of (2.1, 3.5, 0.8)% F1 over the mBART models used for ES, ZH and HI, respectively.",
"In order to measure the inconsistency of the generated extractions with respect to the sentence, we compute the fraction of words that occur in the extraction but are absent from the sentence.",
"In Table 7, we find that across languages, the fraction is lower for training examples generated through the consistent translation methodology (AACTRANS+CLP) when compared against independent translations (SentExtTrans+CLP).",
"This indicates that AACTRANS+CLP indeed achieves better consistency.",
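The inconsistency measure described above is simply a word-overlap fraction; a minimal sketch, with whitespace tokenization as an assumption:

```python
def inconsistency(sentence, extraction):
    # Fraction of extraction words that do not occur in the sentence;
    # lower values indicate a more consistent extraction.
    sent_words = set(sentence.split())
    ext_words = extraction.split()
    if not ext_words:
        return 0.0
    return sum(w not in sent_words for w in ext_words) / len(ext_words)
```

For example, an extraction sharing three of its four words with the sentence scores 0.25.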
"In order to analyze the reasons for the improvement in CaRB performance, we compute the fraction of words that are present in model predictions but absent from the gold extractions of the test set (denoted AG, for Absent in Gold).",
"In Table 8, we see that GEN2OIE trained on AACTRANS+CLP achieves lower values than the same model trained on SentExtTrans+CLP data, and this correlates with the increased CaRB performance.",
"This shows that the model generates words closer to gold extractions (and hence closer to input sentence), which contributes to higher performance.",
"We choose three representative languages to conduct the ablation study: Spanish, Chinese, and Hindi.",
"Portuguese and Telugu belong to the same language family as Spanish and Hindi, respectively.",
"In Table 6, we show the results of individually removing components from the GEN2OIE trained on AACTRANS+CLP data.",
"In AACTRANS w/o Sentence Consistency, we use a regular translation of the sentence while using a consistent translation of the extraction.",
"This leads to a drop of (1.9, 0.2, 0.9)% in F1 for the three languages, showing the importance of using consistent translation for both the sentence and the extraction.",
"In GEN2OIE w/o Relation Ordering, we train Stage-1 GEN2OIE with randomly shuffled relations.",
"This reduces performance, as our model uses auto-regressive training that benefits from a fixed order; we choose the order in which the relations occur in the sentence.",
"In GEN2OIE w/o Relation Coverage, we find that performance decreases in Spanish and Chinese by 5.3% and 5.9% in F1, respectively, but remains the same in Hindi, possibly due to the smaller number of examples in the test set.",
"Error Analysis : We find that the AACTRANS+CLP pipeline suffers from: 1) missing or 2) wrong word alignments, and 3) an inability to label discontinuous S, R, O phrases.",
"We show examples of these cases in Appendix B.",
"We develop a novel AACTRANS+CLP pipeline for consistently transferring English OpenIE examples to other languages and present a novel two-stage generative model, GEN2OIE, for training OpenIE systems in various languages.",
"We show improvements over the existing baseline of Multi 2 OIE, with an average improvement of 7.2% in F1 and 16.1% in AUC.",
"It is effective in five languages, which is the largest number of languages covered by a single OpenIE technique known to us.",
"To encourage research in medium and low-resource languages, we additionally release new OpenIE evaluation examples in Hindi and Telugu.",
"Keshav is supported by a TCS Research Fellowship.",
"Mausam is supported by grants from Huawei, Google, Bloomberg and IBM, and a Jai Gupta Chair Fellowship.",
"Soumen is partly supported by a Jagadish Bose Fellowship and an AI Horizons Network grant from IBM.",
"We thank IIT Delhi HPC facility and TFRC program for compute resources."
] | [
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"With the recent advances of open-domain story generation, the lack of reliable automatic evaluation metrics becomes an increasingly imperative issue that hinders the fast development of story generation.",
"Prior research in this area suggests that learnable evaluation metrics promise more accurate assessments by correlating better with human judgments.",
"A critical bottleneck of obtaining a reliable learnable evaluation metric is the lack of high-quality training data for classifiers to efficiently distinguish plausible and implausible machine-generated stories.",
"Previous works relied on heuristically manipulated plausible examples to mimic possible system drawbacks such as repetition, contradiction, or irrelevant content at the text level, which can be unnatural and oversimplify the characteristics of implausible machine-generated stories.",
"We propose to tackle these issues by generating a more comprehensive set of implausible stories using plots , which are structured representations of controllable factors used to generate stories.",
"Since these plots are compact and structured, it is easier to manipulate them to generate text with targeted undesirable properties, while at the same time maintaining the grammatical correctness and naturalness of the generated sentences.",
"To improve the quality of generated implausible stories, we further apply the adversarial filtering procedure presented by Zellers et al. (2018) to select a more nuanced set of implausible texts.",
"Experiments show that the evaluation metrics trained on our generated data result in more reliable automatic assessments that correlate remarkably better with human judgments compared to the baselines.",
"The surge of downstream applications for open-domain natural language generation (NLG), such as dialog systems (Zhang et al., 2020) and story",
"Human Written Story: jenny liked fresh fish.",
"she decided to go fishing to catch her own.",
"she brought her worms and pole and a chair.",
"she sat there all day but didn't catch anything.",
"she packed it up and went home disappointed.",
"Sentence Manipulation: jenny liked fresh fish.",
"she decided to go fishing to catch her own.",
"she wrote songs every single day.",
"she sat there all day but didn't catch anything.",
"she packed it up and went home disappointed.",
"Keyword Manipulation: jenny liked fresh fish.",
"she decided to go fishing to catch her own.",
"she brought her worms and pole and a chair. she sat there all day but didn't catch anything.",
"she unpacked it up and went home disappointed.",
"UNION: jenny liked fresh fish.",
"jim has a very structured workout program to help him achieve goals. she brought her worms and pole and a relaxer.",
"she sat there all day but didn't catch anything.",
"she unpack it up and went home disappointed.",
"Plot: jenny fresh fish -> decided fishing catch -> brought worms chair -> sat -> packed home disappointed. Manipulated Plot: jenny fresh fish -> tasha offered woman store -> brought worms chair -> sat -> got wet packed home disappointed. Manipulated Plot Guided Generation (Ours): jenny was out of fresh fish.",
"tasha offered to buy her some from the woman at the store.",
"she brought her worms and a chair and decided to play with them.",
"jenny sat down and laid down on the chair.",
"when she got wet, she packed up and went home disappointed.",
"generators (Rashkin et al., 2020a) necessitates automatic evaluation metrics for quality assessment.",
"The existence of accurate automatic evaluation metrics can accelerate the development cycle by facilitating the process of model comparison and hyper-parameter search.",
"Many existing reference-based approaches such as BLEU (Papineni et al., 2002) or ROUGE (Lin, 2004) fail to correlate well with human judgment in open-domain settings due to the fact that there can be potentially many plausible generations that do not have significant overlap with the limited set of given references.",
"This failure invites research on more sophisticated and reliable evaluation metrics.",
"Recently, learning-based approaches have been proposed to overcome this limitation by training classifiers to distinguish between plausible and implausible texts (Li and Jurafsky, 2016; Holtzman et al., 2018).",
"The choice of training data for learning such classifiers is a key determinant of the metric effectiveness.",
"Existing works take human-written texts as plausible (positive) examples, while the negative samples are heuristically generated by randomly substituting keywords or sentences (See Figure 1) (Li and Jurafsky, 2016; Guan and Huang, 2020).",
"Guan and Huang (2020) further improved the quality of evaluators by applying heuristic rules such as adding repetition, reordering and negation (See the UNION story in Figure 1).",
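The random sentence-substitution heuristic used in these prior works can be sketched as follows; the function name and seeding are illustrative, not taken from any cited implementation.

```python
import random

def sentence_substitution(story, foreign_sentences, seed=0):
    # Replace one randomly chosen sentence of the story with a sentence
    # drawn from an unrelated story, producing an implausible negative
    # example for training an evaluation classifier.
    rng = random.Random(seed)
    corrupted = list(story)
    corrupted[rng.randrange(len(story))] = rng.choice(foreign_sentences)
    return corrupted
```

The resulting story differs from the original in exactly one sentence, mirroring the "Sentence Manipulation" example in Figure 1.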
"In this work, we hypothesize that heuristically generated data cannot adequately reflect the characteristics of the implausible texts generated by language models, thus resulting in suboptimal trained evaluation metrics.",
"This deficiency can be mitigated by generating high-quality implausible examples that are closer to the test data.",
"Toward this goal, we propose an approach based on the manipulation of plots , which are high-level structured representations of generated texts originally used as a content-planning tool for better text generation (Fan et al., 2019; Goldfarb-Tarrant et al., 2020).",
"Specifically, we propose to manipulate plots by injecting incoherence sources into them.",
"The generation models conditioned on such manipulated plots lead to implausible texts that have pertinent similarities with implausible machine-generated texts and thus can serve as good negative examples for training evaluation metrics.",
"We further improve the quality of training data by incorporating the adversarial filtering technique proposed by Zellers et al. (2018) to select more challenging negative samples generated from the manipulated plots (See Figure 1).",
"Eventually, these samples result in more reliable evaluation metrics.",
"The contributions of this work are four-fold: We study the importance of training data for learnable automatic evaluation metrics in the open-domain story generation task and show the inadequacy of heuristically generated negative examples in this setting.",
"We propose a novel technique to generate negative samples by introducing plot-level incoherence sources that guide generation models to produce implausible texts.",
"We show the affirmative role of adversarial filtering techniques in constructing training data for learnable open-domain story generation evaluation metrics.",
"We demonstrate that the evaluation metrics trained on our generated data have a significantly higher correlation with human judgments compared to strong baselines.",
"Existing work on automatic evaluation of generation models can be classified into two subgroups, non-learning-based and learning-based methods, which we briefly summarize below.",
"Non-learning-based Metrics.",
"Some metrics in this group consider the centrality of a text around a specific topic as a proxy for measuring its quality.",
"The transitions of entities across neighboring sentences and their distribution throughout the text have served as measurements for quality assessment (Miltsakaki and Kukich, 2004; Lapata and Barzilay, 2005).",
"Perplexity is another commonly used metric to evaluate the quality of text and story generation models (Fan et al., 2018; Peng et al., 2018).",
"Learning-based Metrics.",
"This group of metrics is based on neural classifiers trained on a set of positive (plausible) and negative (implausible) texts.",
"The common point between these metrics is using random sentence substitution to construct training examples, while the architectures are slightly different.",
"Li and Jurafsky (2016) trained a neural network with a sigmoid function on top of sentence embeddings extracted from LSTM.",
"Lai and Tetreault (2018) designed SENTAVG that gets the sentence vectors from LSTM, takes the average of these vectors to represent the whole text, and then passes it through a hidden layer.",
"Recently, Guan and Huang (2020) proposed a more accurate automatic evaluation metric called UNION.",
"This metric achieved better performance by using BERT (Devlin et al., 2019) as a more effective classification model and by drawing on a broader set of negative samples coming from different heuristics.",
"For all learning-based metrics, the simplicity of heuristically generated data samples makes them inadequate for an accurate evaluation of plausibility in open-domain generated texts.",
"We formulate the evaluation of open-domain story generation as a binary classification task where the goal is to distinguish plausible and implausible generated stories, also referred to as positive and negative examples.",
"Clearly, the availability of high-quality positive and negative examples is essential for training reliable and generalizable metrics.",
"While human-generated stories can be considered as positive examples, what constitutes good negative examples is a non-trivial question.",
"Specifically, consider a hypothetical decision boundary that separates positive and negative stories.",
"While any point on one side of the boundary will be a negative example, intuitively we want examples that are not too far away from that boundary.",
"To achieve this, we will start from positive examples, and modify them in a controllable manner to generate corresponding negative samples.",
"There are some widely-used approaches to heuristically manipulate positive examples and change their structure to generate negative examples.",
"Sentence Substitution.",
"Sentence substitution (briefly HEUR_SENT_SUB) replaces a fraction of sentences in the plausible text with random ones (See Figure 1).",
"This breaks the discourse-level coherence, making a story not interpretable (Li and Jurafsky, 2016; Holtzman et al., 2018).",
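A minimal sketch of this heuristic; the story, corpus, and replacement fraction used here are illustrative assumptions, not the paper's exact implementation:

```python
import random

def sentence_substitute(story, corpus, frac=0.5, seed=0):
    """HEUR_SENT_SUB sketch: replace a fraction of a story's sentences
    with random sentences drawn from other stories, breaking
    discourse-level coherence."""
    rng = random.Random(seed)
    sents = list(story)
    k = max(1, int(len(sents) * frac))
    for i in rng.sample(range(len(sents)), k):
        sents[i] = rng.choice(corpus)  # off-topic replacement
    return sents
```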
"Keyword Substitution.",
"Guan and Huang (2020) proposed to apply random substitutions at the keyword level (briefly HEUR_KEY_SUB), where a fraction of keywords are randomly substituted with their corresponding antonyms from a commonsense knowledge base such as ConceptNet (Speer and Havasi, 2012) to corrupt the plausibility of the text.",
"ConceptNet consists of ( object , relation , subject ) triplets.",
"For each selected keyword that exists as an object or subject in ConceptNet, its counterpart is extracted from one of the contradiction-type relations: Antonym, NotDesires, NotCapableOf, or NotHasProperty.",
"For instance, the word packed in the second implausible example in Figure 1 is substituted with its antonym unpacked.",
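A minimal sketch of HEUR_KEY_SUB; the small ANTONYMS table is a toy stand-in for querying ConceptNet's contradiction-type relations:

```python
import random

# Toy antonym table standing in for ConceptNet's contradiction-type
# relations (Antonym, NotDesires, NotCapableOf, NotHasProperty).
ANTONYMS = {"packed": "unpacked", "happy": "sad", "open": "closed"}

def keyword_substitute(tokens, frac=0.15, seed=0):
    """Swap a fraction of substitutable keywords with their antonyms;
    stories without substitutable keywords are ignored (return None)."""
    rng = random.Random(seed)
    idxs = [i for i, t in enumerate(tokens) if t in ANTONYMS]
    if not idxs:
        return None
    k = max(1, int(len(idxs) * frac))
    out = list(tokens)
    for i in rng.sample(idxs, k):
        out[i] = ANTONYMS[out[i]]
    return out
```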
"UNION Manipulations.",
"Alongside the keyword and sentence substitutions, Guan and Huang (2020) proposed to use repetition, reordering, and negation techniques to generate a more complete and nuanced set of implausible examples.",
"The sentences and keywords are repeated throughout the text to reflect the repetition issue of language models.",
"The order of sentences is changed and negation words are added to make texts implausible due to wrong causal dependencies and conflicted logic.",
"They simultaneously apply some of these techniques to human-written texts to construct negative examples (See third negative story in Figure 1).",
"We refer to this data as UNION_DATA.",
"Despite the demonstrated effectiveness of UNION_DATA in open-domain story evaluation, heuristically constructed negative samples are quite far from machine-generated texts, and thus inadequate to represent the broad set of machine-generated implausible texts.",
"As we stated above, applying heuristic rules at the utterance level results in negative examples that are usually unnatural and do not reflect the complex characteristics of machine-generated texts.",
"Instead, we propose to introduce perturbations at a more abstract plot level.",
"Namely, we seek to improve the quality of negative samples using plot-controlled generation with adversarial filtering techniques.",
"Studies have shown that high-quality fluent stories can be generated by planning in advance and leveraging plots (Yao et al., 2019; Fan et al., 2019; Goldfarb-Tarrant et al., 2019, 2020; Rashkin et al., 2020b; Brahman et al., 2020).",
"Yao et al. (2019) leverage a sequence of keywords as the plot representation (also called storyline).",
"Fan et al. (2019) use a semantic role labeling tool to extract plots as abstract representations of stories over actions and entities.",
"Their experiments affirm that plots have positive effects on generating high-quality stories.",
"Here we leverage this idea for generating implausible texts, by controllable injection of implausibility sources, or perturbations, into the ground-truth plots.",
"The resulting plot-level manipulations will force the model to reflect the applied implausibility in the generated text, negatively impacting the text's plausibility.",
"In contrast to Guan and Huang (2020), our proposed plot-level manipulations (MANPLTS) do not directly change the text at the token level; instead, we inject incoherence into the language at the concept level.",
"The plot-guided generation guarantees the naturalness of generations since it leverages a well-trained conditional language model.",
"The generated samples are also anticipated to be closer and more congruous to the machine-generated texts that will be assessed at inference time.",
"Concept-level incoherence creates implausibility factors that guide models to include those implausibility sources.",
"Figure 2 demonstrates various proposed plot-level manipulations in dotted boxes.",
"All proposed manipulations are described in the following sections.",
"We refer to this data as MANPLTS.",
"Non-logically Ordered Plots.",
"Logical conflict is one source of implausibility that results from non-logically ordered concepts in the text.",
"While Guan and Huang (2020) covered this type of implausibility by changing the order of sentences, we hypothesize that disrupting the logical order at the concept-level is more efficient.",
"To accomplish concept reordering, we first randomly choose verbs from the plot and leverage the COMET (Bosselut et al., 2019) model to predict their subsequent events.",
"Then we flip the order of the resulting concept pairs.",
"COMET, which is trained on tuples of the form ( subject , relation , object ), can be used to predict an object given a pair of subject and relation .",
"As an example, given the pair ( work , Causes ), COMET will predict get pay , showing that work causes getting paid.",
"We focus on COMET relations HasPrerequisite , HasFirstSubevent , Causes and HasLastSubevent that imply ordering.",
"In the first two relations, object should appear before subject, while in the other two the order is reversed.",
"Therefore, the subject work comes before get pay due to the Causes relation that holds between them.",
"We flip the correct order of the concepts and attach them, with or without randomly selected connective words such as then, later, and subsequently, to generate implausible texts (the purple box in Figure 2).",
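The reordering manipulation can be sketched as follows; NEXT_EVENT is a toy stand-in for querying the trained COMET model, and the connective list follows the ones named above:

```python
import random

# Toy stand-in for COMET's event prediction under the Causes relation;
# a real implementation would query the trained COMET model over the
# HasPrerequisite/HasFirstSubevent/Causes/HasLastSubevent relations.
NEXT_EVENT = {("work", "Causes"): "get pay"}
CONNECTIVES = ["then", "later", "subsequently"]

def disorder_plot(plot, seed=0):
    """Place a predicted subsequent event *before* its trigger concept,
    flipping the logical order implied by the relation."""
    rng = random.Random(seed)
    out = []
    for concept in plot:
        follow = NEXT_EVENT.get((concept, "Causes"))
        if follow is not None:
            # wrong order: effect first, then (optionally) a connective,
            # then the cause
            out.extend([follow, rng.choice(CONNECTIVES), concept])
        else:
            out.append(concept)
    return out
```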
"Contradiction Insertion.",
"The coherence of a text depends on the consistent relationship between its words.",
"It can be harmed by accompanying words with their antonyms or other conflicting concepts that add contradiction to the text and make it hard to grasp.",
"To add this kind of implausibility, we propose to insert contradictory counterparts of randomly selected plot elements in consecutive positions.",
"For each selected plot, we use ConceptNet (Speer and Havasi, 2012) to extract concepts that hold negation relations such as Antonym , NotDesires , NotCapableOf , and NotHasProperty with it and insert them as neighbor plots.",
"In the navy blue box of Figure 2, purse has been added before wallet as its antonym.",
"This guides the generation model to include consecutive contradictory elements in the generated text that harms the coherence of sections and makes it difficult to interpret.",
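This insertion step can be sketched as follows; the CONTRADICTS table is a toy stand-in for ConceptNet's negation-type relations:

```python
# Toy contradiction table standing in for ConceptNet's negation-type
# relations (Antonym, NotDesires, NotCapableOf, NotHasProperty).
CONTRADICTS = {"wallet": "purse"}

def insert_contradictions(plot):
    """Insert a conflicting concept immediately before a selected plot
    element (e.g. 'purse' before 'wallet'), injecting incoherence at
    the concept level."""
    out = []
    for concept in plot:
        if concept in CONTRADICTS:
            out.append(CONTRADICTS[concept])  # neighbor contradictory plot
        out.append(concept)
    return out
```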
"Repetition Insertion.",
"Repetition is one of the common issues that many generative models suffer from.",
"The recently proposed top-k (Fan et al., 2018) and top-p (Holtzman et al., 2020) sampling techniques have partially mitigated, but not completely solved, this issue.",
"Guan and Huang (2020) proposed to replicate this problem in negative implausible text construction by repeating N-grams in consecutive positions.",
"These heuristically constructed outputs only mirror local repetition issues, while the state-of-the-art generative models produce more complex and subtle repetitions throughout the whole text.",
"We propose to repeat random plots of each text in various positions that would force the language model to duplicate them throughout the text and exhibit more realistic machine-generated repetitive examples.",
"In Figure 2, the repetition of floor and jake decided compels the model to generate boring and repetitive sentences.",
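The repetition manipulation can be sketched as follows; the fraction of repeated plot elements is an illustrative assumption:

```python
import random

def insert_repetitions(plot, frac=0.15, seed=0):
    """Duplicate randomly chosen plot elements at random other positions,
    so that a generator conditioned on the plot reproduces realistic,
    text-wide repetition rather than only local n-gram repeats."""
    rng = random.Random(seed)
    out = list(plot)
    k = max(1, int(len(plot) * frac))
    for item in rng.sample(plot, k):
        out.insert(rng.randrange(len(out) + 1), item)
    return out
```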
"Random Substitution.",
"Random sentence substitutions employed by many evaluation models amplify the implausibility sources in the text by inserting completely off-topic sentences that could potentially result in topical inconsistency throughout the text.",
"Such scenarios are less likely for state-of-the-art high-quality generation models that use encoded context to generate tokens.",
"Once again, we propose to do the replacement at the plot level.",
"Within our approach, even though the inserted random plots are completely irrelevant, the model would attempt to incorporate them into the text as much as possible by using encoded context sentences.",
"This can be seen in the third sentence of Figure 2.",
"Even though this sentence's plots are randomly inserted, the model is able to generate a sentence that does not have significant topical inconsistency, thanks to the contextualized nature of the generative process.",
"Table 1 depicts four different machine-generated stories, each containing five sentences that are conditioned on the manipulated plots.",
"Bold italic keywords represent manipulated plots resulting from the proposed approaches shown in the middle column.",
"The adversarial filtering (AF) technique was originally proposed to generate high-quality negative examples for a grounded commonsense inference task (Zellers et al., 2018).",
"AF uses a committee of trained models to identify more appropriate negative endings from a pool of candidate samples generated for a given context.",
"For each human-written text, there are N machine-generated endings.",
"The goal is to select the most unbiased subset ( A ) of generated endings with stylistic features similar to the human-written ones.",
"AF starts by randomly specifying the best endings in the assignment set ( A ) from all N endings of each context (Zellers et al., 2018).",
"In each iteration, the data is divided into two parts.",
"The first part is used for training a classifier to distinguish high/low quality endings, and the second part is used for replacing easy endings in A with adversarial endings from N .",
"Easy endings are the ones to which a trained classifier assigns a much lower score compared to human-written texts, e.g., due to their significantly different writing styles.",
"Adversarial texts have a higher positive-class probability than easy texts, indicating that a classifier finds it challenging to distinguish them from human-written texts.",
"The replacement of easy texts with adversarial ones maximizes the empirical error of the trainable classifier.",
"The steps outlined above are repeated until the assignment set is filled with high-quality endings for each context.",
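The AF loop can be sketched as follows; `score` is a toy stand-in for the trained classifier committee (higher = harder to distinguish from human text), and the per-iteration data splitting and classifier retraining are omitted for brevity:

```python
import random

def adversarial_filter(candidates, score, k=3, iters=5, seed=0):
    """AF sketch (after Zellers et al., 2018): iteratively replace
    'easy' (low-scoring) endings in the assignment set with the most
    adversarial unused candidates, approximately maximizing the
    empirical error of the classifier."""
    rng = random.Random(seed)
    chosen = rng.sample(candidates, k)  # random initial assignment
    for _ in range(iters):
        unused = [c for c in candidates if c not in chosen]
        chosen.sort(key=score)                 # easiest first
        unused.sort(key=score, reverse=True)   # most adversarial first
        for i in range(len(chosen)):
            if unused and score(unused[0]) > score(chosen[i]):
                chosen[i] = unused.pop(0)      # swap easy for adversarial
    return chosen
```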
"We use AF on top of the plot-based manipulations for generating implausible texts (briefly, AF_MANPLTS).",
"Our approach for negative text construction has two main stages: 1) generate a set of N implausible texts conditioned on manipulated plots; 2) apply the adversarial filtering technique to pick out the A most challenging high-quality implausible texts without stylistic biases, increasing the quality of negative samples.",
"We assess the plausibility of a text by training a classification model on the data that consists of human-written texts (positive examples) and constructed implausible stories (negative examples).",
"Binary classifiers trained on this data can produce the probability of plausible/implausible labels for each text.",
"The predicted probability of the positive class is interpreted as the text's plausibility score.",
"The effectiveness of large pretrained language models has been proven on NLP downstream tasks (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019; Beltagy et al., 2020).",
"RoBERTa, introduced by Liu et al. (2019), is one of these models, achieving impressive performance on text classification.",
"We employ RoBERTa for our plausibility classification task.",
"We start from pretrained RoBERTa parameters and fine-tune them on the constructed evaluation dataset to predict plausibility scores.",
"One of the main limitations of RoBERTa is its length requirement of at most 512 tokens.",
"Recently, this limitation was addressed by considering a sparser set of attention mechanisms such as locality-sensitive hashing and sliding window attentions, which reduce the computation complexity from O ( n 2 ) to O ( n log n ) and O ( n ) respectively (Kitaev et al., 2020; Beltagy et al., 2020).",
"In this work, we broaden the scope of the text plausibility evaluation to cover not only short but also long texts with more than 512 tokens.",
"To this end, we examine and evaluate the quality of long texts using Longformer (Beltagy et al., 2020) that has linear complexity in terms of the number of tokens in a text.",
"We fine-tune the pretrained Longformer for long text plausibility evaluation.",
"We benchmark both classifiers, fine-tuned on the manipulated data, against the two following baselines.",
"UNION.",
"Recently, Guan and Huang (2020) proposed an automatic evaluation metric by training a BERT model (Devlin et al., 2019) with an auxiliary reconstruction objective which helps to recover the perturbation from a negative sample.",
"The proposed model is trained on negative implausible texts constructed by adopting repetition, substitution, reordering, and negation sampling techniques.",
"This model and its proposed approach for data construction were compared with previously proposed methods and shown to be more effective.",
"SENTAVG.",
"We complete our investigation by selecting SENTAVG (Lai and Tetreault, 2018) as another baseline model for the plausibility evaluation task.",
"SENTAVG leverages an LSTM to get sentence representations from the words' GloVe embeddings.",
"All the sentence vectors are averaged to form the representation of the whole text, and this vector is passed through a hidden layer.",
"A softmax layer at the end computes the probability distribution of texts over positive and negative labels.",
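The SENTAVG scoring head described above can be sketched as follows; here the per-sentence vectors are given directly (the paper derives them from an LSTM over GloVe embeddings), and W and b are illustrative toy weights for the hidden layer and two-way softmax:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sentavg(sentence_vecs, W, b):
    """Average the per-sentence vectors, apply one linear layer, and
    softmax over the (negative, positive) labels."""
    d = len(sentence_vecs[0])
    n = len(sentence_vecs)
    mean = [sum(v[i] for v in sentence_vecs) / n for i in range(d)]
    logits = [sum(w * x for w, x in zip(row, mean)) + bj
              for row, bj in zip(W, b)]
    return softmax(logits)
```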
"We investigate the effectiveness of our proposed approach versus heuristic negative sampling techniques by focusing on the evaluation of open-domain story generation models in two datasets with short and long stories.",
"We show the generalizability of metrics trained on our proposed plot manipulation data.",
"We also separately assess the impact of each manipulation technique on the metric accuracy.",
"We conduct our experiments on two English story datasets that are significantly different in terms of length and topic, ROCStories (shortly ROC) and Writing Prompt (shortly WP), with on average 49.4 and 734.5 tokens in each story, respectively.",
"ROCStories.",
"ROCStories is a resource of five-sentence commonsense stories collected via crowd-sourcing (Mostafazadeh et al., 2016) covering a logically linked set of daily events.",
"We follow the approach proposed by Yao et al. (2019) to extract story plots (storylines) for the stories and manipulate them to guide conditional language models to generate negative samples.",
"Writing Prompt.",
"Writing Prompt dataset contains abstract high-level prompts and their corresponding long human-written stories from an online forum (Fan et al., 2018).",
"To apply the plot manipulation technique for implausible text construction, we follow the procedure proposed by Fan et al. (2019) to extract the plots with verb and argument type role labeling tags.",
"Data Preparation.",
"We split the stories from both datasets into two subsets for training generation and evaluation models, respectively.",
"We use 70 percent of stories in ROC (ROC_LM) and WP (WP_LM) for fine-tuning the GPT2 (Radford et al., 2019) language model with a batch size of 4.",
"After 3 epochs of fine-tuning, the perplexity on the validation sets of the ROC and WP datasets is 8.28 and 25.04, respectively.",
"The remaining 30 percent of stories from ROC (ROC_Eval) and WP (WP_Eval) are used for training and evaluating the evaluation models.",
"All stories in the original dataset represent plausible texts.",
"We apply approaches from Section 3 to augment negative samples.",
"Table 2 and Table 3 summarize the resulting datasets for ROC and WP.",
"In HEUR_SENT_SUB, we extract all stories with at least 2 sentences and replace 50% of their sentences with random ones.",
"For HEUR_KEY_SUB, we randomly substitute 15% of keywords with their corresponding antonyms extracted from ConceptNet and ignore stories without substitutable keywords.",
"The UNION_DATA is constructed by following the rules from Guan and Huang (2020) and is applied to both datasets.",
"We fine-tune the GPT2 language model using https://github.com/huggingface/transformers .",
"To create the MANPLTS dataset, we first fine-tune the BART model (Lewis et al., 2019) with a batch size of 8 for three epochs on pairs of ground-truth plots and stories from the ROC_LM and WP_LM data, with resulting perplexities of 3.44 and 6.79 on the validation sets.",
"Afterward, 15% of plots are selected, and between two and four of the manipulation techniques proposed in Section 3.2 are randomly chosen and applied.",
"We leverage the fine-tuned BART model and use the top-50 sampling technique with a temperature of 0.8.",
"We specify a maximum length of 200 for the ROC dataset and 1024 for the WP dataset to generate implausible texts from the manipulated plots.",
"In the AF_MANPLTS dataset, we apply the adversarial filtering technique on top of six implausible stories generated by the fine-tuned BART model conditioned on the manipulated plots.",
"The output contains each human-written story and its three most challenging implausible samples.",
"The performance of automatic evaluation metrics is assessed based on their correlations with human judgments.",
"To this end, we gather human evaluations and examine the Spearman (ρ) and Kendall (τ) correlations with the metrics' predicted scores (Newman et al., 2010; Lai and Tetreault, 2018; Guan and Huang, 2020).",
"Spearman and Kendall correlations are beneficial for estimating monotonic associations on non-normally distributed and ranked scores.",
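Spearman's rho, the Pearson correlation of rank vectors, can be computed from scratch as below (a minimal stdlib sketch; in practice a library routine such as scipy.stats.spearmanr would be used):

```python
def ranks(xs):
    """1-based average ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for t in range(i, j + 1):
            r[order[t]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman's rho between metric scores and human judgments:
    the Pearson correlation of the two rank vectors."""
    ra, rb = ranks(a), ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)
```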
"We collect human judgments through Amazon Mechanical Turk (AMT) experiments.",
"We randomly choose 150 human-written stories from ROC_Eval and WP_Eval test sets and 150 machine-generated texts by the fine-tuned GPT2 models.",
"Five distinct participants are asked to rate each story on a scale of 0 to 5 (from not at all plausible to completely plausible ).",
"We prepare an attention check test to guarantee the accuracy of human annotations and recollect evaluations for users who do not pass the test.",
"The average score of the five annotators is treated as the final human score for each text.",
"We normalize human scores to the same 0-1 range as the models' output scores.",
"Table 4 shows the statistics and agreements in the conducted experiments.",
"We conduct a comprehensive set of experiments to examine and show the importance of training data in the plausibility evaluation task.",
"We train both evaluation and language models on a machine with a GeForce RTX 2080 Ti GPU.",
"In our experiments, we have SENTAVG as the baseline model.",
"We compare SENTAVG against more powerful classifiers: RoBERTa for ROC stories and Longformer for WP stories (FT_LM).",
"We fine-tune the pretrained RoBERTa-base model with a learning rate of 2e-5 and batch size 8 for three epochs, processing the ROC stories with a maximum of 128 tokens.",
"To evaluate the lengthy WP stories, we fine-tune the pretrained Longformer-base model with a learning rate of 2e-5 and batch size 3, encoding texts with at most 1024 tokens, for three epochs.",
"We complete the model comparisons by incorporating the recently proposed UNION model (Guan and Huang, 2020) into our experiments.",
"We retrain it on the ROC_Eval and WP_Eval sets with the same hyper-parameters stated in their paper.",
"Table 5 depicts the quantitative results of correlation analysis between human and automatic evaluation metrics.",
"For almost all constructed evaluation datasets, RoBERTa and Longformer (for short and long stories, respectively) surpass the baseline models, which shows the impact of large transformer-based models on this evaluation task.",
"The models trained on heuristically generated implausible samples by random sentence/keyword substitutions show the lowest correlations.",
"The main reason for this weakness is the huge dissimilarity between the heuristically generated training data and the machine-generated test data, which has a significant negative impact on the model's performance.",
"The positive impact of UNION_DATA is visible in Table 5.",
"It demonstrates that the construction of implausible stories based on a more complete set of heuristic alterations yields better training data but still has its own shortcomings.",
"We fine-tune the RoBERTa and Longformer models using https://github.com/huggingface/transformers .",
"This could be due to the fact that text-level manipulations introduce artifacts that break the naturalness of the texts and have quite different styles compared to machine-generated implausible texts.",
"The superiority of the RoBERTa and Longformer models trained on the MANPLTS and AF_MANPLTS datasets shows the effectiveness of our proposed plot manipulation technique in enhancing the similarity between the training and test data.",
"Adversarial filtering technique further helps to increase the quality of negative samples and generate better implausible machine-generated texts, which consequently improves the accuracy of evaluation.",
"By applying hypothesis testing to compare the metrics correlations with human scores (Diedenhofen and Musch, 2015), we verify that these improvements are statistically significant (p<.05).",
"We also note that the correlations between plot-manipulation-based metrics and human evaluation are much higher on the WP dataset.",
"This could result from the limited ability of the current generative models to generate plausible long stories, thus making them easily distinguishable both by humans and automated metrics.",
"One of the desirable features of automated evaluation metrics for story generation is their generalizability or robustness to different datasets (Sellam et al., 2020; Guan and Huang, 2020).",
"Table 6 (correlation of plausibility metrics with human judgements): ROC→WP — UNION_DATA 0.17, 0.15; MANPLTS 0.57, 0.39; AF_MANPLTS 0.60, 0.42. WP→ROC — UNION_DATA 0.12, 0.07; MANPLTS 0.23, 0.16; AF_MANPLTS 0.26, 0.18.",
"The dataset-shifting robustness shows the metric's success in accurately evaluating texts from different datasets.",
"We examine the robustness of metrics by leveraging ROC and WP as two distributionally different types of stories datasets.",
"We train models on various training data constructed from negative sampling techniques on the ROC dataset and test them on human scores collected through AMT experiments conducted on the WP dataset (ROC→WP), and vice versa (WP→ROC).",
"In Table 6, we show the robustness of fine-tuned language models trained on the last three datasets of Table 5 as the best performing models in comparison to models trained on sentence and keyword substitutions.",
"According to Table 6, the correlation drops due to the quite different structure of the two datasets.",
"RoBERTa/Longformer models fine-tuned on AF_MANPLTS in the ROC/WP datasets and subsequently tested on the WP/ROC dataset have the highest correlations with human judgments and generalize well across the two datasets.",
"Data shifting from ROC to WP preserves the performance of the metrics better than the reverse shift.",
"The reason for the correlation decline of models trained on WP and tested on ROC could be that WP contains forms of implausibility that cannot be found in the ROC data, since ROC stories are shorter and exhibit fewer sources of implausibility.",
"The positive impact of plot-level manipulations in precisely evaluating plausibility can be assessed with regard to the four different manipulation techniques.",
"We conduct an ablation study on the WP dataset to examine each manipulation technique's impact separately.",
"We construct different training data each time by excluding one of the manipulation techniques and generating a new set of negative samples.",
"Then we fine-tune Longformer on all these training datasets with different negative samples and compute the correlation of the fine-tuned Longformer as the evaluation metric with human judgments.",
"The lower correlations shown in Table 7, in comparison to Table 5, illustrate the harm caused by eliminating each of the proposed approaches from the construction of training data.",
"This attests to the effectiveness of all proposed manipulation techniques in generating higher-quality training data and, subsequently, more accurate evaluation metrics.",
"As this table demonstrates, the correlation drops the most when the reordering and repetition of plots are ablated, which shows that these are the major problems of language models in generating long texts and that they play the most significant role in constructing high-quality implausible samples and, consequently, accurate evaluation metrics.",
"Automatic plausibility evaluation models that are trained on heuristically generated data show low correlation with human judgement.",
"We address this issue by creating a better quality set of implausible texts.",
"In contrast to existing methods that modify text at token level, our approach introduces incoherence sources at a more abstract plot level, which helps to guide the generative model conditioned on those manipulated plots to generate negative samples that are more similar to machine-generated incoherent texts.",
"We further improve the data quality by applying adversarial filtering to select more challenging and refined negative samples.",
"Our experiments demonstrate that negative examples generated according to the proposed method result in more realistic implausible texts and consequently lead to more accurate evaluation metrics that have higher correlation with human judgement.",
"All co-authors of this work fully understand and agree with the ACM Code of Ethics and its importance in expressing the conscience of the profession.",
"We ensure this work is compatible with the provided code, specifically in terms of non-offensive dataset construction.",
"1) Training data construction: In our approach, we use the BART model conditioned on manipulated story plots to construct implausible samples that better reflect the implausibility of generation models.",
"The main concern that arises here is the probability of generating abusive language samples from manipulated plots.",
"Indeed, these plots originate from human-written stories without abusive language, provided by Mostafazadeh et al. (2016) and Fan et al. (2018), where users are not allowed to write profanity or inappropriate content.",
"Accordingly, our manipulated version of plots and the BART model conditioned on them generate samples unlikely to contain strong biases or abusive content.",
"It is worth noting that even though the source plots are relatively benign, the process of altering them could still create objectionable texts.",
"Another potential attack could be the dual use of the metrics by presenting offensive-language texts as plausible samples.",
"This would harm underlying tasks, e.g., by encouraging story generation models to generate inappropriate stories.",
"Such attacks can be identified and resolved by security-oriented studies, which are outside this work's scope.",
"2) Testing data collection: We collect human judgments by conducting Amazon Mechanical Turk (AMT) experiments, which are leveraged to compare the accuracy of the trained metrics in terms of their correlations with human scores.",
"The conducted AMT experiments do not disrupt user privacy, as we do not collect personal information.",
"This reduces the possibility of gender bias problems and the need for IRB approval.",
"Annotators were asked to rate the coherence of stories on each HIT page of AMT in the range of 0 up to 5.",
"We fairly compensated annotators.",
"The average time for annotating each HIT in AMT was 25 minutes (including three stories for evaluation and their explanations), and according to a per-hour wage of $13, we fairly paid them $6 per HIT.",
"This work targets the NLP open-domain generation community.",
"Our metrics establish a basis for achieving higher-quality generations by automatically assessing the outputs, saving time, cost, and human effort.",
"We do not anticipate specific failure modes in our work, since the provided approach's success has been investigated through a comprehensive set of comparisons with other existing metrics.",
"This work is supported by the CwC program under the Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).",
"We would like to thank the anonymous reviewers for their helpful comments and the members of PLUSlab from USC/UCLA, Shushan Arakelyan, and Ninareh Mehrabi for their constructive feedback."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Contextual word embedding models, such as BioBERT and Bio_ClinicalBERT, have achieved state-of-the-art results in biomedical natural language processing tasks by focusing their pre-training process on domain-specific corpora.",
"However, such models do not take into consideration structured expert domain knowledge from a knowledge base.",
"We introduce UmlsBERT, a contextual embedding model that integrates domain knowledge during the pre-training process via a novel knowledge augmentation strategy.",
"More specifically, the augmentation on UmlsBERT with the Unified Medical Language System (UMLS) Metathesaurus is performed in two ways:",
"(i) connecting words that have the same underlying concept' in UMLS and",
"(ii) leveraging semantic type knowledge in UMLS to create clinically meaningful input embeddings.",
"By applying these two strategies, UmlsBERT can encode clinical domain knowledge into word embeddings and outperform existing domain-specific models on common named-entity recognition (NER) and clinical natural language inference tasks.",
"In recent years, the volume of data being collected in healthcare has grown considerably.",
"A signifi-cant proportion of the data is in text form, which requires advanced Natural Language Processing (NLP) models to process.",
"This has led to the cre-ation of high-performing, optimized NLP models focused on the biomedical domain.",
"Contextual word embedding models, such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) have achieved state-of-the-art results in many NLP tasks.",
"Initially tested in a general domain, these models have also been successfully applied in the biomedical domain by pre-training them on biomedical corpora, leading to the best performances in a variety of biomedical NLP tasks (Lee et al., 2019), (Alsentzer et al., 2019).",
"However, current biomedical applications of transformer-based Natural Language Understanding (NLU) models do not incorporate structured expert domain knowledge from a knowledge base into their embedding pre-training process.",
"The Unified Medical Language System (UMLS) (Bodenreider, 2004) Metathesaurus is a compendium of many biomedical terminologies with the associated information, such as synonyms and categorical groupings.",
"It allows for the connection of words that represent the same or similar concept'.",
"For example, the words lungs' and pulmonary' share a similar meaning and thus can be mapped to the same concept unique identifier (CUI) CUI: C0024109 .",
"Additionally, UMLS allows the grouping of concepts according to their semantic type (McCray et al., 2001).",
"For example, skeleton' and skin' have the same Body System' semantic type, and inflammation' and bleed' are in the Pathologic Function' semantic type.",
"In this paper, we present and publicly release 1 a novel architecture for augmenting contextual embeddings with clinical domain knowledge.",
"Specifically:",
"(i) We are the first, to the best of our knowledge, to propose the usage of domain (clin-ical) knowledge from a clinical Metathesaurus (UMLS Metathesaurus) in the pre-training phase of a BERT-based model (UmlsBERT) in order to build semantically enriched' contextual representations that will benefit from both the contextual learning (BERT architecture) and the domain knowledge (UMLS Metathesaurus).",
"(ii) We propose a new multi-label loss function for the pre-training of the Masked Language Modelling (Masked LM) task in the UmlsBERT that incorporates the connections between clinical words using the CUI attribute of UMLS.",
"(iii) We introduce a semantic type embedding that enriches the input embeddings process of the UmlsBERT by forcing the model to take into 1 https://github.com/gmichalo/UmlsBERT consideration the association between words that are of the same semantic type.",
"(iv) Finally, we demonstrate that UmlsBERT outperforms two popular clinical-based BERT models (BioBERT and Bio_ClinicalBERT) and a general domain BERT model on different clinical named-entity recognition (NER) tasks and on one clinical natural language inference task.",
"The rest of paper is organized as follows.",
"Related work is presented in Section 2. The data that were used to pre-train and test the new UmlsBERT are described in Section 3. The characteristics of the proposed UmlsBERT architecture for augmenting contextual embeddings with clinical knowledge are detailed in Section 4. Finally, the results of the down-stream tasks and the qualitative analysis are reported in Section 5, and a conclusion and a plan for future work are presented in Section 6.",
"In (Peters et al., 2018), contextualized word embeddings were introduced in a bidirectional language model (ELMo).",
"This allowed the model to change the embedding of a word based on its imputed meaning, which was derived from the surrounding context.",
"Subsequently, (Devlin et al., 2019) proposed the Bidirectional Encoder Representations from Transformers (BERT) which used bidirectional transformers (Vaswani et al., 2017) to create context-dependent representations.",
"For both models, pre-training is done on massive corpora and the context-sensitive embeddings can be used for downstream tasks.",
"Other approaches enhance the BERT's performance by injecting external knowledge from a knowledge base.",
"Sense-BERT (Levine et al., 2020) is pre-trained to predict the supersenses (seman-tic class) of each word by incorporating lexical semantics (from the lexical database WordNet (Miller, 1995)) into the model's pre-training objective and by adding supersense information to the input embedding.",
"In addition, GlossBERT (Huang et al., 2019) focuses on improving word sense disambiguation by using context-gloss pairs on the sentence-pair classification task of a BERT model.",
"Furthermore, there have been multiple attempts to improve the performance of contextual models in the biomedical domain.",
"BioBERT is a BERT-based model which was pre-trained on both general (BooksCorpus and English Wikipedia) and biomedical corpora (PubMed abstracts and PubMed Central full-text articles) (Lee et al., 2019).",
"The authors demonstrate that incorporating biomedical corpora in the pre-training process improves the performance of the model in downstream biomedical tasks.",
"This is likely because medical corpora contains terms that are not usually found in a general domain corpus (Habibi et al., 2017).",
"Finally, Bio_ClinicalBERT (Alsentzer et al., 2019) further pre-trains BioBERT on clinical text from the MIMIC-III v1.4 database (Johnson et al., 2016).",
"It is shown that the usage of clinical specific contextual embeddings can be beneficial for the performance of a model on different clinical NLP downstream tasks.",
"We use the Multiparameter Intelligent Monitoring in Intensive Care III (MIMIC-III) dataset (John-son et al., 2016) to pre-train the UmlsBERT model.",
"MIMIC dataset consists of anonymized electronic medical records in English of over forty-thousand patients who were admitted to the intensive care units of the Beth Israel Deaconess Medical Center (Boston, MA, USA) between 2001 and 2012.",
"In particular, UmlsBERT is trained on the NO-TEEVENTS table, which contains 2,083,180 rows of clinical notes and test reports.",
"We evaluate the effects of the novel features of the UmlsBERT model on the English MedNLI natural language inference task (Romanov and Shivade, 2018) and on four i2b2 NER tasks (in IOB format (Ramshaw and Marcus, 1995)).",
"More specifically, we experiment on the following English i2b2 tasks: the i2b2 2006 de-identification challenge (Uzuner et al., 2007), the i2b2 2010 concept extraction challenge (Uzuner et al., 2011), the i2b2 2012 entity extraction challenge (Uzuner et al., 2011) and the i2b2 2014 de-identification challenge (Stubbs et al., 2015).",
"These datasets are chosen because of their use in benchmarking prior biomedical BERT models, thereby allowing for performance comparison.",
"In addition, these publicly available datasets enable the reproducibility of our results and meaningful comparison with future studies.",
"Table 1 lists the statistics of all the datasets.",
"Finally, it should be noted that for the identification of the UMLS terms, we use the UMLS 2020AA version.",
"The original BERT model (Devlin et al., 2019) is based on multi-layer bidirectional transformers (Vaswani et al., 2017), which generates contextualized word representations.",
"Incorporating information from bidirectional representations allows the BERT model to capture more accurately the meaning of a word based on its surrounding context, i.e. sentence.",
"The pre-training phase of the BERT model consists of two self-supervised tasks: Masked Language Modelling (LM), in which a percentage of the input is masked at random and the model is forced to predict the masked tokens, and Next Sentence Prediction, in which the model has to determine whether two segments appear consecutively in the original text.",
"Since our UmlsBERT model is focused on augmenting the Masked LM task with clinical information from the UMLS Metathesaurus, we omit the description of the Next Sentence Prediction task and only describe the details of the Masked LM task herein.",
"In Masked LM, 15% of the tokens of each sentence are replaced by a [MASK] token.",
"For the j th input token in the sentence, an input embedding vector u ( j ) input is created by the following equation: u ( j ) input = p ( j ) + SEGseg ( j ) id + Ew j (1) where p ( j ) R d is the position embedding of the j th token in the sentence, and d is the transformer's hidden dimension.",
"Additionally, SEG R d 2 is called the segment embedding, and seg id R 2 , a 1-hot vector, is the segment id that indicates the sentence to which the token belongs.",
"In Masked LM, the model uses only one sentence and therefore, the segment id indicates that all the tokens belong to the first sentence.",
"E R d D is the token embedding where D is the length of the model's vocabulary and w j RD is a 1-hot vector corresponding to the j th input token.",
"The input embedding vectors pass through multiple attention-based transformer layers where each layer produces a contextualized embedding of each token.",
"Finally, for each masked token w , the model outputs a score vector y w RD with the goal of minimizing the cross-entropy loss between the softmax of y w and the 1-hot vector corresponding to the masked token ( h w ) : loss = log ( exp ( y w [ w ]) (cid:80) w (cid:48) exp ( y w [ w (cid:48) ])) (2) 4.2 Enhancing Contextual Embeddings with Clinical Knowledge In the UmlsBERT model, we update the Masked LM procedure to take into consideration the associations between the words specified in the UMLS Metathesaurus.",
"We introduce a new embedding matrix called ST RD s d into the input embedding of the BERT model, where d is BERT's transformer hidden dimension and D s = 44 is the number of unique UMLS semantic types that can be identified in the vocabulary of our model.",
"In particular, in this matrix, each row represents the unique semantic type in UMLS that a word can be identified with (for example the word heart' is associated with the semantic type T023:Body Part, Organ, or Organ Component' in UMLS).",
"To incorporate the ST embedding matrix into the input embedding of our model, all words with a clinical meaning defined in UMLS are identified.",
"For each of these words, the corresponding concept unique identifier (CUI) and semantic type are extracted.",
"We use s w RD s as a 1-hot vector corresponding to the semantic type of the medical word w .",
"The identification of the UMLS terms and their UMLS semantic type is accomplished using the open-source Apache clinical Text Analysis and Knowledge Extraction System (cTakes) (Savova et al., 2010).",
"Thus, by introducing the semantic type embedding, the input vector (equation 1) for each word is updated to: u ( j ) (cid:48) input = u ( j ) input + ST (cid:62) s w (3) where the semantic type vector ST (cid:62) s w is set to a zero-filled vector for words that are not identified in UMLS.",
"tensor could be beneficial for the performance of the model as the semantic type representation can be used to enrich the input vector of words that are rare in the training corpus and the model do not have the chance to learn meaningful information for their representation.",
"Figure 1 presents an overview of the insertion of the semantic type embeddings into the standard BERT architecture.",
"Furthermore, we update the loss function of the Masked LM pre-training task to take into consideration the connection between words that share the same CUI.",
"As described in Subsection 4.1, the loss function of the Masked LM pre-training task of a BERT model is a cross-entropy loss between the softmax vector of the masked word and the 1-hot vector that indicates the actual masked word.",
"We proposed to soften' the loss function and updated it to a multi-label scenario by using information from the CUIs.",
"More specifically, instead of using a 1-hot vector ( h w ) that corresponds only to the masked word w , we use a binary vector indicating the presence of all the words which shared the same CUI of the masked word ( h (cid:48) w ) .",
"Finally, in order for the model to properly function in a multi-label scenario, the cross entropy loss (equation 2) is updated to a binary cross entropy loss: loss = D (cid:88) i =0 ( h (cid:48) w [ i ] log ( y w [ i ]) + (1 h (cid:48) w [ i ]) log (1 y w [ i ])) (4) These changes force UmlsBERT to learn the semantic relations between words, which are associated with the same CUI in a biomedical context.",
"An example of predicting the masked word lungs' with and without the clinical information is presented in Figure 2. As seen in this figure, the UmlsBERT model tries to identify the words lung', lungs' and pulmonary' because all three words are associated with the same CUI: C0024109 in the UMLS Metathesaurus.",
"We initialize UmlsBERT with the pre-trained Bio_ClinicalBERT model (Alsentzer et al., 2019), and then we further pre-train it with the updated Masked LM task on MIMIC-III notes.",
"Afterwards, in order to perform the downstream tasks, we add a single linear layer on top of UmlsBERT and fine-tuned' it to the task at hand, using either the associated embedding for each token or the embedding of the [CLS] token.",
"The same fine-tuning method is applied to all other models used for comparison.",
"In order to keep the experiment controlled, we use Dataset BERT based BioBERT Bio_ClinicalBERT UmlsBERT MedNLI epochs 4 4 4 3 batch size 16 16 32 16 learning rate 5e 5 3e 5 3e 5 3e 5 i2b2 2006 epochs 20 20 20 20 batch size 32 16 16 32 learning rate 2e 5 2e 5 2e 5 5e 5 i2b2 2010 epochs 20 20 20 20 batch size 16 32 32 16 learning rate 3e 5 3e 5 5e 5 5e 5 i2b2 2012 epochs 20 20 20 20 batch size 16 32 16 16 learning rate 3e 5 3e 5 5e 5 5e 5 i2b2 2014 epochs 20 20 20 20 batch size 16 16 32 16 learning rate 2e 5 2e 5 5e 5 3e 5 Table 2: Hyperparameter selection of all the models for each dataset the same vocabulary and WordPiece tokenization (Wu et al., 2016) across all the models.",
"WordPiece divides words not in the vocabulary into frequent sub-words.",
"Since our goal is to demonstrate the beneficial effect of incorporating domain knowledge in this study, we haven't experimented with a more complicated layer on top of UmlsBERT (e.g. the Bi-LSTM layer in (Si et al., 2019)).",
"This is because our goal is to demonstrate that incorporating domain knowledge was beneficial for the performance of the model by showing that UmlsBert outperformed the other medical-based BERT models on a variety of medical NLP tasks (Section 5).",
"It should be noted that we chose the UMLS Metathesaurus in our process of augmenting the UmlsBERT model for two reasons: 1. We aim to create a clinical contextual embedding model that is capable of integrating domain (medical) knowledge.",
"2. The UMLS Metathesaurus is a compendium of many popular biomedical vocabularies (e.g. MeSH (Dhammi and Kumar, 2014) and ICD-10 (Organization, 2004)).",
"By choosing to utilize the domain (medical) knowledge of UMLS, we actually incorporate domain knowledge from all major internationally standardized clinical terminologies.",
"In the pre-training phase, UmlsBERT is trained for 1 , 000 , 000 steps with a batch size of 64 , maximum sequence length of 128 and learning rate of 5 10 5 .",
"All other hyper-parameters are kept to their default values.",
"UmlsBERT is trained by using 2 nVidia V100 16GB GPU's with 128 GB of system RAM running Ubuntu 18.04.3 LTS.",
"In this section, we present the results of an empirical evaluation of the UmlBERT model.",
"In particular, we provide a comparison between different available BERT models to show the efficiency of our proposed model on different clinical NLP tasks.",
"In addition, we provide the results of an ablation test to exam the effect of the semantic type embeddings on the performance of the model.",
"Furthermore, we conduct a qualitative analysis of the embedding of each model in order to illustrate how medical knowledge improves the quality of medical embeddings.",
"Finally, we provide a visualized comparison of the embeddings of the words that are associated with semantic types between UmlsBERT and Bio_ClinicalBert.",
"In this section, we report the results of the comparison of our proposed UmlsBERT model with the other BERT-based models on different downstream clinical NLP tasks described in Section 3. All BERT-based models are implemented using the transformers library (Wolf et al., 2019) on PyTorch 0.4.1.",
"All experiments are executed on a Tesla P100 16.3 GB GPU with 32G GB of system RAM on Ubuntu 18.04.3 LTS.",
"For the clinical NER tasks, we take a similar approach to (Lee et al., 2019) and set the number of training epochs to 20 to allow for maximal performance, except for MedNLI, for which we train the models on 3 and 4 epochs.",
"The best values are chosen based on validation set F1 values using the seqevals python framework for sequence labeling evaluation, due to the fact that it can provide an evaluation of a NER task on entity-level 2 for the i2b2 tasks and validation set accuracy, which is the standard metric for this task 3 for the MedNLI dataset.",
"In the interest of providing a fair comparison, we also tune the hyperparame-ters of each model in order to demonstrate its best 2 https://github.com/chakki-works/ seqeval 3 https://tinyurl.com/ transformers-metrics performance.",
"The final hyper-parameters selection of all the models for each dataset can be found in Table 2. In order to achieve more robust results, we run our model on five different (random) seeds (6809, 36275, 5317, 82958, 25368) and we provide the average scores and standard deviation for the testing and the validation set.",
"It should be noted that BERT base , BioBERT and Bio_ClinicalBERT have the exact same number of parameters as they use the same BERT-based architecture.",
"However, because we introduce the semantic type embeddings into the UmlsBERT model, our model has an additional 33792 [the number of unique UMLS semantic types (44) transformer's hidden dimen-sion(768)] parameters 4 .",
"In Table 3, we provide the number of parameters for each dataset where we include the linear layer on top of the BERT-based models for the text and token classification.",
"The mean and standard deviation (SD) of the scores for all the competing models on different NLP tasks are reported in Table 3. UmlsBERT achieves the best results in 4 out of the 5 tasks.",
"It achieves the best F1 score in three i2b2 tasks (2006, 2010 and 2012) ( 93 . 6% , 88 . 6% and 79 . 4% ) and the best accuracy on the MedNLI task ( 83 . 0% ).",
"Because our model is initialized with Bio_ClinicalBERT model and pre-trained on the MIMIC-III dataset, it is not surprising that it does not outperform the BERT model on i2b2 2014 (The BERT base model achieved 95 . 2% on i2b2 2014).",
"This is probably due to the nature of the de-ID challenges which is described in detail in (Alsentzer et al., 2019).",
"In summary, protected health information (PHI) are replaced with a sentinel PHI' marker in the MIMIC dataset, but in the de-ID challenge dataset (i2b2 2014), the PHI is replaced with different synthetic masks, and thus, the sentence structure that appears in BERT's training is not present at the down-stream task (Alsentzer et al., 2019).",
"However, even in this task, UmlsBERT achieves a better performance than the other biomedical BERT models.",
"These results confirm that augmenting contextual embedding through domain (biomedical) knowledge is indeed beneficial for the model's performance in a variety of biomedical down-stream tasks.",
"In order to understand the effect that semantic type embeddings have on the model performance, we conduct an ablation test where the performance of",
"two variations of the UmlsBERT model are compared, where in one model the semantic type embeddings are available to it, and in the other, they are not.",
"The results of this comparison are listed in Table 5.",
"We observe that for every dataset, UmlsBert achieves its best performance when semantic type embeddings are available.",
"This experiment further confirms the positive effect of the semantic type embeddings on the performance of the UmlsBERT model.",
"Table 4 shows the nearest neighbors for 6 words from 3 semantic categories using UmlsBERT, Bio_ClinicalBERT, BioBERT and BERT.",
"The first two categories (ANATOMY' and DISORDER') are chosen to demonstrate the ability of the models to identify similar words in a clinical context, and the third category (GENERIC') is used to validate that the medical-focus BERT models can find meaningful associations between words in a general domain even if they are trained on medical-domain text datasets.",
"This analysis demonstrates that augmenting the contextual embedding of UmlsBERT with Clinical Metathesaurus (UMLS) information is indeed beneficial for discovering associations between words with similar meanings in a clinical context.",
"For instance, only UmlsBERT discovers the connection between kidney' and ren' (from the latin word renes', which means kidneys), between mass' and lump', between bleeding' and hem' (a commonly used term to refer to blood) and between feet' and pedal'(a term pertaining to the foot or feet in a medical context).",
"These associations are the result of changing the nature of the Masked LM training phase of UmlsBERT to a multi-label scenario by connecting different words which share a common CUI in UMLS.",
"In the previously mentioned examples, kidney' and ren' have CUI:C0022646 ; mass' and lump' have CUI:C0577559 ; bleeding' and hem' have CUI:C0019080 and feet' and pedal' have CUI:C0016504 .",
"Finally, the results in the generic list of words indicate that the medical-focused BERT models did not trade their ability to find meaningful associations in a general domain in order to be more precise in a clinical context as there is no meaningful difference observed in the list of neighbour words that the four models identified.",
"In order to demonstrate the effect of the semantic types on the input embeddings, we present in Figure 3, a UMAP dimensionality reduction (McInnes and Healy, 2018) mapping comparison between Bio_ClinicalBERT and UmlsBERT.",
"We compare the input embedding of Bio_ClinicalBERT with the input embedding of UmlsBERT for all the clinical terms that UMLS identified in the standard BERT vocabulary.",
"It should be noted that in the graph, we group the medical terms by their semantic groups, which are clusters that consist of different semantic types.",
"For example, the semantic types Cell' and Body System' are grouped in the semantic group ANATOMY'.",
"It is evident that the clustering according to the semantic group that exists in the UmlsBERT embeddings (Figure 3b) cannot be found in the Bio_ClinicalBERT embeddings (Figure 3a).",
"Thus, we can conclude that more meaningful input embeddings can be provided to the model, by augmenting the input layer of the BERT architecture with the semantic type vectors, as they force the embeddings of the words of the same semantic type to become more similar.",
"This paper presents UmlsBERT, a novel BERT-based architecture that incorporates domain (biomedical) knowledge in the pre-training process of a contextual word embeddings model.",
"We demonstrate that UmlsBERT can learn the association of different clinical terms with similar meaning in the UMLS Metathesaurus.",
"UmlsBERT can also create more meaningful input embeddings by leveraging information from the semantic type of each (biomedical) word.",
"Finally, we confirm that these modifications can improve the model's performance as our UmlsBERT model outperforms other biomedical BERT models in various downstream tasks.",
"As for future work, we plan to address the limitations of this study including:",
"(i) Examining the effect of augmenting contextual embeddings with medical knowledge when more complicated layers are used atop of the output embedding of UmlsBERT.",
"(ii) Exploring the UMLS hierarchical associations between words that extend the concept connection that we investigated in this paper.",
"(iii) Testing our model in other datasets and biomedical tasks (e.g. relation extraction task (Krallinger et al., 2017)) to investigate further the strengths and weaknesses of our model.",
"We acknowledge the generous support from Mi-crosoft AI for Health Program, MITACS Accelerate grant (#IT19239), Semantic Health Inc., NSERC and Canada Research Chairs Program.",
"Contextual word embeddings models have achieved state-of-the-art results in many (clinical) NLP tasks such as NER or relation extraction (Devlin et al., 2019; Lee et al., 2019).",
"These results suggest that medical-based contextual word embeddings models, such as our model (UmlsBERT), can be a valuable tool for better processing and understanding the vast volume of health data that is amassed at a rapid speed in health and biomedical domain.",
"However, one of obstacles for adopting such a model in any system lies in the computing cost of pre-training.",
"For example, our UmlsBERT model was trained for 10 days using 2 nVidia V100 16GB GPU's with 224 GB of system RAM running Ubuntu 18.04.3 LTS, and we acknowledge that investing these types of computational resources or even time is not a viable option for many research groups, let alone regular healthcare providers.",
"This is the reason for making the UmlsBert model publicly available, as we hope that the clinical NLP community can benefit from using our model.",
"In addition, UmlsBERT is the first contextual word embedding model, to the best of our knowledge, that integrated structured medical-domain knowledge into its pre-training phase.",
"Although this study demonstrates the beneficial effect of incorporating structured biomedical domain knowledge in the pre-training phase of a contextual embedding model on the performance of the model, it is not a far-fetched hypothesis that similar pretraining strategy can be applied to incorporate structured domain-knowledge in different disciplines (e.g. environment, sciences, etc) to improve the performance of the model in the respective domain-specific down-stream tasks.",
"Finally, we believe that many research groups in the clinical NLP field could benefit from the use of our models by either using the contextual embeddings of our model or fine-tuning our model in specific down-stream tasks, for example, automatic encoding of diseases and procedures in electronic medical records.",
"This automatic encoding model can significantly reduce time and cost in data extraction and reporting.",
"Success in such task will have huge impact in clinical practices and research since assigning correct codes for diseases and clinical procedures are important for making care or operational decisions in healthcare."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"objective",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain"
] |
[
"Effective adversary generation for neural machine translation (NMT) is a crucial prerequisite for building robust machine translation systems.",
"In this work, we investigate veritable evaluations of NMT adversarial attacks, and propose a novel method to craft NMT adversarial examples.",
"We first show the current NMT adversarial attacks may be improperly estimated by the commonly used mono-directional translation, and we propose to leverage the round-trip translation technique to build valid metrics for evaluating NMT adversarial attacks.",
"Our intuition is that an effective NMT adversarial example, which imposes minor shifting on the source and degrades the translation dramatically, would naturally lead to a semantic-destroyed round-trip translation result.",
"We then propose a promising black-box attack method called Word Saliency speedup Local Search (WSLS) that could effectively attack the mainstream NMT architectures.",
"Comprehensive experiments demonstrate that the proposed metrics could accurately evaluate the attack effectiveness, and the proposed WSLS could significantly break the state-of-art NMT models with small perturbation.",
"Besides, WSLS exhibits strong transferability on attacking Baidu and Bing online translators.",
"Recent studies have revealed that neural machine translation (NMT), which has achieved remarkable progress in advancing the quality of machine translation, is fragile when attacked by some crafted perturbations (Belinkov and Bisk, 2018; Cheng et al., 2019, 2020; Wallace et al., 2020).",
"Even if the perturbations on inputs are small and imperceptible to humans, the translation quality could be degraded The four authors contributed equally.",
"dramatically, raising increasing attention to adversarial defenses for building robust machine translation systems as well as its prerequisite researches on building effective NMT adversarial attacks.",
"As character level perturbations usually lead to lexical errors and are easily corrected by spell checking tools (Ren et al., 2019; Zou et al., 2020), in this work, we focus on crafting word level adversarial examples that could maintain lexical and grammatical correctness and hence are more realistic.",
"An essential issue of crafting NMT adversarial examples is how to define what is an effective NMT adversarial attack.",
"Researchers have provided an intuitive definition that an NMT adversarial example should preserve the semantic meaning on the source but destroy the translation performance with respect to the reference translation (Michel et al., 2019; Niu et al., 2020).",
"Correspondingly, the attack criteria are proposed as the absolute degradation or relative degradation against the reference translation (Ebrahimi et al., 2018; Michel et al., 2019; Niu et al., 2020; Zou et al., 2020).",
"To craft a perturbation that maintains the semantics as well as grammatical correctness following the above definition and evaluation, a variety of methods to impose word replacements have been proposed in recent studies (Michel et al., 2019; Cheng et al., 2019, 2020; Zou et al., 2020), making it a commonly used paradigm for NMT attacks.",
"However, there exist potential pitfalls overlooked in existing researches.",
"First, it is possible to craft an effective attack on the NMT models by reversing the semantics on the source, as illustrated in Table 1 1 .",
"Meanwhile, since the antonyms are potentially in the neighborhood of the victim word in the embedding space, just as the same as the synonyms, it is entirely possible to produce opposing semantics when replacing a word with its neighbors, making the proposed attack method break the definition.",
"Furthermore, there is a risk of evaluating the attacks directly using the reference translation.",
"Differs to the classification tasks, even if the perturbation is small to be synonymous with the original word in the source, the actual ground-truth reference may be changed due to the substitution.",
"Table 2 illustrates a typical failing adversarial example x (cid:48) and a successful example x (cid:48) , where x (cid:48) could be falsely distinguished as effective due to the missing of ground-truth reference Ref. (cid:48) 2 .",
"Obviously, x (cid:48) would be correctly distinguished if we have the actual ground-truth reference of x (cid:48) .",
"However, the actual ground-truth reference of the perturbed input is notoriously difficult to be built beforehand, making the NMT attack hardly to be evaluated veritably.",
"In this work, in order to craft appropriate NMT adversarial examples, we introduce new definition 1 This is a real case reported on Google translation community in October, 2020.",
"See details in:",
"https://support.google.com/translate/thread/78771708?hl=en.",
"2 BLEU ( ref , y ) = 39 .",
"20 BLEU ( ref , y (cid:48) ) = 2 .",
"86 , BLEU ( x , x (cid:48) ) = 61 .",
"34 BLEU ( y , y (cid:48) ) = 49 .",
"83 .",
"and metrics for the machine translation adversaries by leveraging the round-trip translation, the process of translating text from the source to target language and translating the result back into the source language.",
"Our intuition is that an effective NMT adversarial example, which imposes minor shifting on the input and degrades the translation dramatically, would naturally lead to a semantic destroying round-trip translation result.",
"Based on our new definition and metrics, we propose a promising black-box attack method called Word Saliency speedup Local Search (WSLS) that could effectively attack the mainstream NMT architectures, e.g .",
"RNN and Transformer.",
"We introduce an appropriate definition of NMT adversary and the deriving evaluation metrics, which are capable of estimating the adversaries only using source information, and tackle well the challenge of missing ground-truth reference after the perturbation.",
"We propose a novel black-box word level NMT attack method that could effectively attack the mainstream NMT models, and exhibit high transferability when attacking popular online translators.",
"Let X denote the source language space consisting of all possible source sentences and Y denote the target language space.",
"Given two NMT models, the primal source-to-target NMT model M x y aims to learn a forward mapping f : X Y to maximize P ( y ref | x ) where x X and y ref Y , while the dual target-to-source NMT model M y x aims to learn the backward mapping g : Y X .",
"After the training, NMT can correctly reconstruct the source sentence x = g ( f ( x )) .",
"In the following, we first give the definition of NMT adversarial examples, then introduce our word substitution based black-box adversarial attack method.",
"Given a subset of (test) sentences T X and a small constant (cid:15) , we summarize previous works (Belinkov and Bisk, 2018; Ebrahimi et al., 2018; Michel et al., 2019) and give their conception of NMT adversarial examples as follows.",
"Definition 1 (NMT Adversarial Example).",
"An NMT adversarial example is a sentence in A = { x (cid:48) X | x T , (cid:107) x (cid:48) x (cid:107) < (cid:15) S t ( y, y ref ) S t ( y (cid:48) , y ref ) < (cid:48) } , where y = f ( x ) , y (cid:48) = f ( x (cid:48) ) , and S t ( , ) is a metric for evaluating the similarity of two sentences, and (or (cid:48) , (cid:48) < ) is threshold we can accept (or refuse) for the translation quality .",
"A smaller (cid:48) indicates a more strict definition of the NMT adversarial example.",
"In contrast to the adversarial examples in image domain (Szegedy et al., 2014), we argue that taking y ref as the reference sentence for x (cid:48) is not appropriate because the perturbation might change the semantic of x to some extent, causing that Definition 1 is not appropriate.",
"To address this problem, we propose to evaluate the similarity between the benign sentence x and the reconstructed sentence x , as well as the similarity between the adversarial sentence x (cid:48) and the reconstructed adversarial sentence x (cid:48) .",
"We introduce a new definition of NMT adversarial example basing on the round-trip translation.",
"Definition 2 (NMT adversarial example).",
"An NMT adversarial example is a sentence in A = { x (cid:48) X | x T , (cid:107) x (cid:48) x (cid:107) < (cid:15) S t ( y, y ref ) S t ( x, x ) E ( x, x (cid:48) ) } , where E ( x, x (cid:48) ) = S t ( x, x ) S t ( x (cid:48) , x (cid:48) ) is defined as the adversarial effect for NMT.",
"And, the reconstructed x and x (cid:48) are generated with round-trip translation: x = g ( f ( x )) , x (cid:48) = g ( f ( x (cid:48) )) .",
"A larger E indicates that the generated sentence x (cid:48) can not be well reconstructed by round-trip translation when compared with the reconstruction quality of the source sentence x .",
"Here is a threshold ranging in [0 , 1] to determine whether x (cid:48) is an NMT adversarial example.",
"A larger indicates a more strict definition of the NMT adversarial example.",
"In this work, we use the BLEU score (Papineni et al., 2002) to evaluate the similarity between two sentences.",
"Based on Definition 2, we further provide two metrics, i.e",
"., Mean Decrease (MD) and Mean Percentage Decrease (MPD) to estimate the translation adversaries appropriately.",
"MD directly presents the average degradation of the reconstruction quality, and MPD reduces the bias of the original quality in terms of the relative degradation.",
"The proposed MD is defined as: MD = 1 NN (cid:88) i D i , (1) where N is the number of victim sentences, D i is the decreasing reconstruction quality of the adversarial example x (cid:48) i , denoted as: D i = (cid:26) 0 if S t ( x i , x i ) = 0 , S t ( x i , x i ) S t ( x (cid:48) i , x (cid:48) i ) otherwise.",
"(2) Similarly, MPD is defined as: MP D = 1 NN (cid:88) i P D i , (3) where P D i is denoted as: P D i = (cid:40) 0 if S t ( x i , x i ) = 0 , S t ( x i , x i ) S t ( x (cid:48) i , x (cid:48) i ) S t ( x i , x i ) otherwise.",
"(4) In practice, except for the constraints in Definition 2, adversarial examples should also satisfy the lexical and syntactical constraints so that they are hard for human to perceive.",
"Therefore, the correct word in the source sentence must be replaced with other correct words instead of misspelled word to meet the lexical constraint.",
"Besides, to keep the grammatical correctness and syntax consistency, the modification should not change the syntactic relation of each word in the source sentence.",
"To meet all the above constraints, we propose a novel NMT adversarial attack method by substituting words with their neighbors selected from the parser filter to generate reasonable and effective adversarial examples.",
"There are two phases in the proposed Word Saliency speedup Local Search (WSLS) attack",
"method.",
"At the first phase, we design initial strategies to obtain an initial example x (cid:48) .",
"At the second phase, we present a local search algorithm accelerated by word saliency to optimize the perturbed example.",
"Candidates .",
"For a word w i in the source sentence x = { w 1 , . . . , w i , . . . , w n } , where i denotes the position of word w i in the sentence, we first build a candidate set W i D where D is the dictionary consisting of all the legal words.",
"In this work, we build the candidate set by finding the k closest neighbors in the word embedding space: W i = { w 1 i , . . . , w ki } .",
"Then we filter the candidates based on the parsing, as shown in Part A of Figure 1 3 .",
"Note that the combination of them can impose minor shifting on the source so as to meet the lexical and semantic constraints, as discussed in Section 2.1.",
"In our experiments, we use the pre-trained mask language model (MLM) to extract the embedding space to follow the black-box setting.",
"3 This is important to rule out invalid victim locations wherein the token ( e.g ., punctuation) is nonsense, and ensure the perturbations keep grammatical correctness.",
"Greedy Substitution .",
"For each position i , we can substitute word w i with w ji W i to obtain an adversary x (cid:48) = { w 1 , . . . , w ji , . . . , w n } , and evaluate the adversarial effect E ( x, x (cid:48) ) by reconstruction.",
"Then we select a word w i that yields the most significant degradation: w i = arg max w ji W i E ( x, x (cid:48) ) .",
"It is straightforward to generate an initial adversary through a Random Order Greedy Replacement (ROGR) method, which is to randomly select positions expected to make substitutions, then iteratively replace the word with its neighbors by Eq.",
"5 on the selected positions in a random order.",
"However, the initial result has a significant im-pact on the final result of the local search.",
"If the local search phase starts with a near-optimal solution, it is likely to find a more powerful adversary after the local search process.",
"Therefore, we design a greedy algorithm called Greedy Order Greedy Replacement (GOGR) for the initialization, which is depicted in Part B of Figure 1. In the GOGR algorithm, at each step we enumerate all possible positions we haven't attacked yet, and for each position we try to substitute word w i x with word w i W i according to Eq.",
"5, then we choose the best w among the possible positions, and iteratively substitute words until we substitute enough words.",
"To speed up the local search process, we adopt the word saliency , used for text classification attack, to sort the word positions in which the word has not been replaced yet.",
"In this way, we can skip the positions that may lead to low attack effect so as to speedup the search process.",
"For text classification task, Li et al. (2016) propose the concept of word saliency that refers to the degree of change in the output of text classification model when a word is set to the unknown token.",
"Ren et al. (2019) incorporate the word saliency to generate adversarial examples for text classification.",
"To adopt the concept of word saliency for NMT, we regard the output of a MLM for the word as a more general concept of word saliency, which is independent of the specific tasks.",
"Definition 3 (Word Saliency).",
"For a sentence x = { w 1 , . . . , w i , . . . , w n } and a mask language model (MLM) M , the word saliency of w i is defined as S ( x, w i ) = 1 P ( w i | x i , M ) where x i = { w 1 , . . . , w i 1 , mask , w i +1 . . . , w n } and mask means the word is masked in the sentence.",
"Through Definition 3, the higher word saliency represents the lower context-dependent probability, which can be caused by numerous reasonable substitutions or rare syntax structure, indicating weaker word positions that are easier to be attacked.",
"In this work, as shown in Part C of Figure 1, we calculate the word saliency S ( x, w i ) for all positions before the local search phase, making the local search efficiently inquire the word saliency.",
"In the local search phase, as shown in Part D of Figure 1 and detailed in Figure 2, there are three types of walks, namely saliency walk , random walk and certain walk , used to update x (cid:48) to promote the attack quality.",
"To explore and exploit the search space, we define some basic operations and walks to evolve the adversaries.",
"A mute operator is to restore an executed perturbation w i to its original word w i to mutate the adversary.",
"A prune operator is to exclude a portion of candidate locations where the perturbations will not be imposed to narrow down the search area.",
"A tabu operator indicates that the last perturbed location is forbidden to be manipulated in the current iteration.",
"As illustrated in Figure 2, the three operators are utilized in the local search walks ( Part D ).",
"We interpret the three walks as follows.",
"Saliency Walk .",
"We first design an efficient walk for the search, called the saliency walk (SW), to make a balanced exploration and exploitation in the neighbourhood of the well initialized solution generated by the aforementioned GOGR algorithm.",
"During the saliency walk, as shown in Figure 2a, at the current iteration ( t ) , we mute each perturbed word to generate a set of partial solutions, sorted in the ascending order of the saliency score, so as to give higher priority to the perturbations with higher word saliency on the locations.",
"Then we prune other unperturbed words according to the descending order of the saliency score, and query candidate substitutions for each of the remaining words.",
"Then candidate adversaries, consisting of the concatenation of each partial solution with each candidate substitution, are evaluated by Eq.",
"2 iteratively.",
"To accelerate the saliency walk, we have an early stop strategy: if the current best adversarial effect in the enumeration of the candidate adversaries at the present iteration ( t ) , denoted as pbest ( t ) = E , is better than pbest ( t 1) (the best adversarial effect at the previous iteration ( t 1) ), i.e .",
"pbest ( t ) pbest ( t 1) , then we terminate the enumeration of the candidates and pass the state of pbest ( t ) as well as the tabu operator to the next walk, otherwise the state of pbest ( t 1) will be passed to the next walk and the tabu location is expired.",
"Random Walk .",
"To avoid the current adversarial example get trapped in a local optimum, we design an effective mutation walk, called the random walk (RW), to mutate the current solution.",
"During the random walk, as shown in Figure 2b, we randomly mute a perturbed word to generate a partial solution, and query the candidate substitutions for each of the unperturbed words as in saliency walk.",
"Then we concatenate the partial solution with each candidate substitution to build the candidate adversaries, among which the best solution is used to update pbest ( t ) .",
"After that, the tabu operator will be forcibly passed to the next walk, reinforcing the exploration ability of the WSLS algorithm.",
"Certain Walk .",
"To do a sufficient exploitation after the random walk as a mutation, we design the certain walk (CW).",
"As shown in Figure 2c, certain walk is similar to saliency walk but it removes the prune operation to enlarge the neighborhood space.",
"To trade off the efficiency and search time, we adopt one saliency walk followed by random walk, certain walk, random walk and certain walk, to construct one round of local search, denoted as { SW, RW, CW, RW, CW } , as shown in Part D of Figure 1. Besides, we bring an early-stop-finetune mechanism to the WSLS method.",
"For any walk in WSLS, if there exists an adversarial candidate that updates the historically best adversarial effect, this adversarial candidate will be immediately set as the initial solution to start a new local search.",
"Otherwise, the WSLS will stop after the ending of the current round 4 .",
"We conduct experiments on the Chinese-English (Zh-En), English-German (En-De), and English-Russian (En-Ru) translation tasks.",
"For the Zh En translation task, we use LDC corpus 5 consisting of 1.25M sentence pairs, and use NIST (MT) datasets 6 to craft the attacks.",
"Following the preprocessing in Zhang et al. (2019), we limit the source and target vocabulary to the most frequent 30K words, remove sentences longer than 50 words from the training data, and use NIST 2002 as the validation set for the model selection.",
"For this translation task, we implement our attacks on two state-of-art word-level NMT models.",
"1) RNNsearch (Bahdanau et al., 2015) has an encoder consists of forward and backward RNNs each having 1000 hidden units and a decoder with 1000 hidden units.",
"Denote this model as Rnns. for abbreviation.",
"2) Transformer comprises six layers of transformer with 512 hidden units and 8 heads in both encoder and decoder, which mimics the hyperparameters in (Vaswani et al., 2017).",
"Denote this model as Transf. for abbreviation.",
"For the or-4 Code is available at https://github.com/JHL-HUST/ AdvNMT-WSLS/.",
"acle back-translation (En Zh), we use a sub-word level transformer as our oracle model which was trained with LDC datasets and then finetuned with the NIST datasets.",
"For the En De and En Ru translation tasks, We use WMT19 test sets to craft the adversaries, and implement our attacks on the winner models of the WMT19 En De and En Ru sub-tracks 7 .",
"Specifically, the En De model and En Ru model are both subword-level transformer, where a joint byte pair encodings (BPE) with 32K split operations is applied for En De, and separate BPE encodings with 24K split operations is applied for each language in En Ru (Ng et al., 2019).",
"We denote these two models as BPE-Transf. for abbreviation.",
"For the oracle back-translation (De En, Ru En), the best submitted NMT models in WMT19 are used as our oracle models which are further finetuned with 90% of the previous WMT test sets and validated with the remaining sets.",
"As for the reference result, Table 3 and Table 4 show the case-insensitive BLEU scores for forward-translation, back-translation, and round-trip translation on the selected language pairs.",
"We observe that the word-level victim models ( Rnns. and Transf. ) achieve an average BLEU score of 36.71 and 41.55 for Zh En translation respectively, demonstrating the accuracy of these two models on translating the original Chinese sentences.",
"For the back-translation, the oracle models achieve an average BLEU score of 82.9 for En Zh translation, as well as a BLEU score of 54.83 and 57.24 for De En and Ru En translations respectively, indicating that the oracle models are reliable enough in the back-translation stage for the source reconstruction.",
"Besides, the reconstruction quality of the victim models are reported in Table 3 and Table 4, where the source sentences are back-translated by the oracle models in the round-trip translation, showing that the source language is reconstructed well enough by the cooperation of forward-translation and oracle back-translation.",
"Furthermore, to enhance the authenticity of the attack performance, we removed the noisy data, which could not be correctly identified as the corresponding language sentences by online translators, and we also excluded sentences longer than 50 words in the NIST datasets, ensuring that the attack 7 https://github.com/pytorch/fairseq/tree/master/examples/ translation.",
"As for the parameter settings of the attack methods, we use pyltp 9 as the parser checking tool and generate the top 10 nearest parser-filtered words to construct the candidate sets for each word.",
"To generate the word saliency, two state-of-art whole word masking BERT are utilized as the MLM for the Chinese 10 and English 11 languages respectively.",
"And the prune operators implemented in SW and RW will reserve the highest five word saliency locations and their word candidates.",
"Finally, the adversaries are crafted by substituting 20% words.",
"To demonstrate our proposed WSLS method, we implement AST-lexcial (Cheng et al., 2018) as a black-box baseline, wherein AST-lexcial shares the same idea of random order random replacement.",
"Besides, the naive ROGR method can be considered as another black-box counterpart of the white-box kNN method in Michel et al. (2019) that randomly selects the word positions and greedily selects the neighbor words based on the gradient loss.",
"8 After the preprocessing, the size of the original NIST datasets are reduced from 878 to 617 (MT02), 919 to 793 (MT03), 1788 to 1495 (MT04), 1082 to 907 (MT05), 1664 to 988 (MT06), and 1357 to 789 (MT08).",
"9 https://github.com/HIT-SCIR/pyltp.",
"10 https://huggingface.co/hfl/chinese-bert-wwm-ext.",
"11 https://huggingface.co/bert-large-uncased-whole-word-masking.",
"As shown in Table 5 and Table 6, both GOGR and WSLS have the MD scores close to the original reconstruction scores for Rnns.",
", Transf.",
", and BPE-Transf.",
", and their attack results are much better than that of AST-lexical as well as ROGR.",
"It shows that both WSLS and GOGR can effectively attack various NMT models under the standard of Definition 2. WSLS is superior to GOGR, indicating that the local search phase can further promote the attack quality.",
"Specifically, the MPD score of WSLS is almost 1.5 higher than that of GOGR, which is more obvious as compared to the MD metric, revealing the rationality of MPD also.",
"We do ablation study on the WSLS algorithm in Table 7.",
"Here Init is for the method used for initialization, WS indicates whether we use word saliency to speedup the local search, LS indicates whether we use local search or other variants of walk sequence for the local search.",
"From Table 7 we observe that: 1) The initialization of GOGR exhibits significantly better results than ROGR, and also converges faster than ROGR; 2) WSLS without word saliency speedup, denoted as WSLS 1 , exhibits slightly higher attack results but the running times are much longer than WSLS.",
"Thus, we choose WSLS to have a good tradeoff on attack quality and time.",
"To test the transferability of our method, we transfer our crafted adversarial examples on NIST 2002 dataset to attack the online Baidu and Bing translators.",
"As shown in Table 8, the attack effectiveness is significant.",
"It degrades the reconstruction quality of Baidu and Bing with more than 20 BLEU points, demonstrating the high transferability.",
"In addition, we provide two adversarial examples in Table 9, generated by WSLS on the Rnns.",
"model, that can effectively attack the online Bing Metrics Model Method MT02 MT03 MT04 MT05 MT06 MT08 AVG MD Rnns.",
"and Baidu translators, respectively.",
"It demonstrates that WSLS could craft adversarial examples with strong readability and high transferability.",
"In recent years, adversarial examples have attracted increasing attention in the area of natural language processing (NLP), mainly on text classification (Jia and Liang, 2017; Ren et al., 2019; Wang et al., 2021).",
"For neural machine translation (NMT), there are also some adversary works emerging quickly (Belinkov and Bisk, 2018; Ebrahimi et al., 2018; Michel et al., 2019; Cheng et al., 2019; Niu et al., 2020; Wallace et al., 2020).",
"On the character level, a few adversarial attacks by manipulating character perturbations have been proposed since 2018.",
"Belinkov and Bisk (2018) confront NMT models with synthetic and natural misspelling noises, and show that character-based NMT models are easy to be attacked by character level perturbation.",
"Ebrahimi et al. (2018) propose to attack the character level NMT models by manipulating the character-level insertion, swap and deletion.",
"Similarly, Michel et al. (2019) perform a gradient-based attack that processes words in source sentences to maximize the translation loss.",
"To attack against production MT systems, Wallace et al. (2020) imitate the popular online translators and manipulate the perturbations based on the gradient of the adversarial loss with the imitation models.",
"The above four works also incorporate adversarial training to improve the robustness of NMT.",
"However, the character level perturbations are hard to be applied into confronting practical NMT models, as these perturbations significantly reduce x (cid:48) : ,",
"in which the adversaries are generated on the Rnns.",
"model using WSLS.",
"the readability and also could be easily corrected by spell checkers (Ren et al., 2019; Zou et al., 2020).",
"On the other hand, word level adversaries could maintain lexical and grammatical correctness, which are more realistic but more challenging to generate.",
"Cheng et al. (2018) craft the adversaries with randomly sampled perturbed positions, and then replace the words according to the cosine similarity of the embedding vectors between the original word and the neighbors.",
"Cheng et al. (2019) propose a gradient-based attack method that replaces the original word with the candidates generated by integrated language model.",
"Michel et al. (2019) generate adversaries by substituting the word with its nearest neighbors, which are informed by the gradient of the victim models.",
"(Zou et al., 2020) introduce a reinforced learning based method to craft the attacks following Michel et al. (2019) to define the reward and substitution candidate set.",
"Existing word level translation attacks are mainly white-box, wherein the attacker can access all the information of the victim model.",
"Besides, there is a risk of guiding the attacks to directly use the degradation of reference translation, since the actual references may be changed by word substitution.",
"Thus, there exists few study on the effective word level attack for NMT, especially in the black box setting.",
"This study fills this gap and sheds light on black-box word level NMT attacks.",
"We introduce an appropriate definition of adversarial examples as well as the deriving evaluation measures for the adversarial attacks on neural machine translation (NMT) models.",
"Following our definition and metrics, we propose a promising black-box NMT attack method called the Word Saliency speedup Local Search (WSLS), in which a general definition of word saliency by leveraging the strong representation capability of pre-trained language models is also introduced.",
"Experiments demonstrate that the proposed method could achieve powerful attack performance, that effectively breaks the mainstream RNN and Transformer based NMT models.",
"Further, our method could craft adversaries with strong readability as well as high transferability to the popular online translators.",
"This work is supported by National Natural Science Foundation (62076105) and Microsft Research Asia Collaborative Research Fund (99245180).",
"We thank Xiaosen Wang for helpful suggestions on our work."
] | [
"abstain",
"objective",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other"
] |
[
"While deep learning models are making fast progress on the task of Natural Language Inference, recent studies have also shown that these models achieve high accuracy by exploiting several dataset biases, and without deep understanding of the language semantics.",
"Using contradiction-word bias and word-overlapping bias as our two bias examples, this paper explores both data-level and model-level debiasing methods to robustify models against lexical dataset biases.",
"First, we debias the dataset through data augmentation and enhancement, but show that the model bias cannot be fully removed via this method.",
"Next, we also compare two ways of directly debiasing the model without knowing what the dataset biases are in advance.",
"The first approach aims to remove the label bias at the embedding level.",
"The second approach employs a bag-of-words sub-model to capture the features that are likely to exploit the bias and prevents the original model from learning these biased features by forcing orthogonality between these two sub-models.",
"We performed evaluations on new balanced datasets extracted from the original MNLI dataset as well as the NLI stress tests, and show that the orthogonality approach is better at debiasing the model while maintaining competitive overall accuracy.",
"In this work, we focus on investigating and reducing biases in the task of Natural Language Inference (NLI), where the target of the model is to classify the relation between a pair of sentences into three categories: entailment, neutral, and contradiction.",
"With the release of large-scale standard datasets (Bowman et al., 2015; Williams et al., 2018), significant success has been made on 1 Our code and data are available at: https://github.",
"this task, and recent state-of-the-art neural models have already reached competitive performance even compared to humans.",
"However, a number of papers (Gururangan et al., 2018; Poliak et al., 2018; Nie et al., 2019; Naik et al., 2018) have shown that despite the high accuracy on these datasets, these models are far from mastering the required nature of natural language inference.",
"Instead of deeply understanding the sentences in the correct semantic way, these models tend to exploit shortcuts or annotation artifacts in the dataset and actually overfit to these datasets to predict the label using simple patterns.",
"However, most shortcuts are only valid within the datasets and fail to hold for general natural language.",
"Hence, these models fail to generalize to other datasets for the same task (Talman and Chatzikyriakidis, 2019), perform badly on challenge analysis datasets (Glockner et al., 2018; McCoy et al., 2019; Wang et al., 2019b), and are fooled by adversarial attacks (Naik et al., 2018).",
"One major cause of this problem is the existence of dataset biases.",
"Since most NLP datasets are often collected and processed by crowdworkers, bias can be added to the data at every step of data collection.",
"For example, when writing contradiction pairs, workers are likely to use negation words such as 'not', and when creating entailment pairs, workers are likely to keep most of the words in the premise sentence.",
"This results in 'annotation artifacts' in the dataset (Gururangan et al., 2018).",
"In reality, almost every dataset contains countless such diverse biases.",
"In our paper, we focus on the Multi-Genre Natural Language Inference (MNLI) dataset (Williams et al., 2018) in English, and on two specific kinds of dataset bias: Contradiction Word Bias (CWB) : If the hypothesis sentence contains some specific words (such as negation words) that are always used by the crowd-workers to generate contradiction pairs, then the sentence pair is very likely to be contradiction.",
"Word Overlapping Bias (WOB) : If the premise sentence and the hypothesis sentence have a high word-overlap, then the sentence pair is very likely to be entailment.",
"These two types of biases are selected as the focus of our experiments because: (1) there exist a significant number of samples in the dataset where they are a major problem; (2) they are conceptually easy to understand and relatively easier to evaluate.",
"In our experiments, we not only used current existing evaluation datasets from Naik et al. (2018), but also extracted balanced evaluation datasets from the original data to evaluate these two biases.",
"Although we only focus on these two kinds of dataset biases throughout our experiments, our methods are not specifically designed for these two biases and should be able to reduce other similar lexical biases simultaneously.",
"paper discusses the following three questions: Q1.",
"Is lexical bias a problem that can be solved by only balancing the dataset?",
"Q2.",
"Can the lexical bias problem be solved using existing ideas from the gender bias problem?",
"Q3.",
"What are some promising new modeling directions towards reducing lexical biases?",
"As responses to these three questions, we conduct three lines of experiments.",
"Firstly, we expand the discussion of Q1 by studying whether and how the bias can be reduced by debiasing the dataset.",
"For this, we add new training data which does not follow the bias pattern.",
"This new data can come from two sources, either from the original training set or via manually generated synthetic data.",
"We show that both methods can slightly reduce the model's bias.",
"However, even after adding a large amount of additional data, the model still cannot be completely bias-free.",
"Another critical problem with these data augmentation/enhancement based debiasing methods is that we need to know the specific behaviour of the biases before making some related changes to the dataset.",
"However, in reality, models are always faced with new training datasets containing unknown and inseparable biases.",
"Hence, the answer to Q1 is mostly negative for simple data-level approaches and we also need to focus on designing direct model-debiasing methods.",
"Therefore, we turn our focus to directly debiasing the model (Q2 and Q3).",
"The first method is to debias the model at the lower level, i.e., by directly debiasing the embeddings so that they do not show strong biases toward any specific label.",
"This is one of the most prevalent methods for reducing gender biases, so through the examination of this idea, we aim to compare lexical bias problems to gender bias problems and highlight its uniqueness (hence answering Q2).",
"Finally, we debias the model at the higher level, i.e., by designing another bag-of-words (BoW) sub-model to capture the biased representation, and then preventing the primary model from using the highly-biased lexical features by forcing orthogonality between the main model and the BoW model (via HEX projection (Wang et al., 2019a)).",
"In our experiments, we show that debiasing the prediction part of the model at higher levels using BoW-orthogonality is more effective towards reducing lexical biases than debiasing the model's low-level components (embeddings).",
"This approach can significantly robustify the model while maintaining its overall performance, hence providing a response to Q3.",
"We also present qualitative visualizations using LIME-analysis for the important features before and after applying the BoW-orthogonality projection.",
"Problems with NLI Models and Datasets.",
"Despite the seemingly impressive improvements in NLI tasks, recently a number of papers revealed different problems with these models.",
"Gururangan et al. (2018) showed that annotation artifacts in the datasets are exploited by neural models to get high accuracy without understanding the sentence.",
"Poliak et al. (2018) showed a similar phenomenon by showing models getting good performance but only taking one sentence as the input.",
"Nie et al. (2019) showed that NLI models achieved high accuracy by word/phrase level matching instead of learning the compositionality.",
"Naik et al. (2018) constructed bias-revealing datasets by modifying the development set of MNLI.",
"In our evaluation, besides using the datasets from Naik et al. (2018), we also extract new datasets from the original MNLI dataset to maintain the consistency of input text distribution.",
"Adversarial Removal Methods.",
"Adversarial removal techniques are used to control the content of representations.",
"They were first used to do unsupervised domain adaptation in Ganin and Lempitsky (2015).",
"Xie et al. (2017) later generalized this approach to control specific information learned by the representation.",
"Li et al. (2018) used a similar approach to learn privacy-preserving representations.",
"However, Elazar and Goldberg (2018) showed that such adversarial approach fails to completely remove demographic information.",
"Minervini and Riedel (2018) generate adversarial examples and regularize models based on first-order logic rules.",
"Belinkov et al. (2019a,b) showed that adversarial removal methods can be effective for the hypothesis-only NLI bias.",
"Our focus is on two different lexical biases and our results are complementary to theirs.",
"2 Recently, Wang et al. (2019a) proposed HEX projection to force the orthogonality between the target model and a superficial model to improve domain generalization for image classification tasks.",
"Here, to make the model less lexically biased, we apply the HEX projection with specially-designed NLP model architectures to regularize the representation in our models.",
"Even more recently, Clark et al. (2019) and He et al. (2019) propose to robustify the task model with the help of an additional simple model, using ensembling to encourage cooperation of the two models.",
"On the other hand, our main motivation is to compare the advantages/limitations of dataset vs. embedding vs. classifier debiasing methods (against two different types of problematic lexical biases in NLI), and our classifier debiasing method forces the task model to capture orthogonal information via HEX projection.",
"There is also a line of work in NLP on analyzing and reducing gender bias in NLP models.",
"Bolukbasi et al. (2016); Caliskan et al. (2017); Zhao et al. (2018a) studied the bias problem in word embeddings.",
"Zhao et al. (2017) reduced gender bias in visual recognition using corpus-level constraints.",
"Zhao et al. (2018b) discussed the gender bias problem in co-reference resolution.",
"These problems are related to our work, but lexical biases are more complex.",
"Multiple inseparable lexical dataset biases can influence one single example and the same word can have different lexical biases in different contexts.",
"Later in our experiments, we show that these two problems behave differently and we present the need for different solutions.",
"Models naturally learn the biases from the dataset they are trained on.",
"Therefore, as we mentioned in Q1 in Sec. 1, one may first wonder if lexical bias can be completely removed by fixing the source of the bias, i.e., datasets.",
"While collecting large-scale datasets (Bowman et al., 2015; Williams et al., 2018) already takes a lot of time and effort, collecting bias-free datasets is even more time-consuming and hard to control.",
"Therefore, here we focus on getting additional data from currently-available resources.",
"We conducted experiments using two resources of data.",
"The first one is to do 'data enhancement' by repeating samples in the original training data.",
"The second source is 'data augmentation' by manually creating synthetic data.",
"We follow the construction of existing synthetic bias-revealing datasets to create new samples for the training set so that these targeted biases can be reduced.",
"Data Enhancement by Repeating Training Data.",
"For most kinds of biases, there still exists a small portion of samples that don't follow the bias.",
"Therefore, we reduce biases in datasets by repeating this portion of samples.",
"For CWB, we select non-contradiction samples containing contradiction words (details see Sec. 5.1) in the hypothesis sentence but not in the premise sentence.",
"For WOB, we select non-entailment samples with the highest word overlap (measured by the Jaccard similarity of the word sets (Hamers et al., 1989)).",
"Next, since the number of these unbiased samples may not be large enough, we repeatedly add those selected samples to make the training set more balanced.",
"The results from adding 500 new samples to 50,000 new samples are shown in Sec. 6.1.",
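The overlap-based selection step described above can be sketched in a few lines of Python. The function names, the dictionary field names, and the whitespace tokenization below are illustrative assumptions, not the paper's exact implementation:

```python
def jaccard(premise, hypothesis):
    """Jaccard similarity between the word sets of two sentences."""
    a = set(premise.lower().split())
    b = set(hypothesis.lower().split())
    return len(a & b) / len(a | b)

def select_wob_counterexamples(pairs, threshold=0.7):
    """Keep non-entailment pairs with high word overlap, i.e. samples that
    contradict the word-overlapping bias and can be repeated for enhancement."""
    return [
        p for p in pairs
        if p["label"] != "entailment"
        and jaccard(p["premise"], p["hypothesis"]) >= threshold
    ]
```

In the paper's setting a fixed number of the top-overlap samples is selected rather than a hard threshold; the threshold here is only for illustration.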
"Data Augmentation by Adding Synthetic Data.",
"Researchers have been using synthetic rules to generate harder or perturbed samples to fool the model.",
"Here, besides using these datasets only as the evaluation set, we also add these samples back to the training set, similar to the concept of adversarial training (Jia and Liang, 2017; Wang et al., 2019c; Niu and Bansal, 2018) where the adversarial examples are added back to the training set so that the resulting model will be more robust to similar adversarial attacks.",
"In our experiments, we follow Naik et al. (2018) to append meaningless sentences at the end of the hypothesis sentence like in Table 1 to create additional new samples.",
"The detailed construction of these samples can be seen in Appendix.",
"By learning from these augmented datasets, the model should also be more robust to certain types of perturbations/biases of the data.",
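The augmentation step can be sketched as follows, following the construction style of the NLI stress tests (Naik et al., 2018), in which a label-preserving tautology is appended to the hypothesis. The exact appended phrases below are assumptions based on that dataset's public description:

```python
# Tautologies appended to the hypothesis; the gold label is unchanged because
# the appended clause is always vacuously true (assumed phrasing).
TAUTOLOGIES = {
    "negation": "and false is not true",  # stresses contradiction-word bias (CWB)
    "overlap": "and true is true",        # stresses word-overlapping bias (WOB)
}

def augment(sample, kind):
    """Return a new sample with the tautology appended to the hypothesis."""
    aug = dict(sample)  # shallow copy; the original sample is left intact
    aug["hypothesis"] = (
        sample["hypothesis"].rstrip(".") + " " + TAUTOLOGIES[kind] + "."
    )
    return aug
```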
"In Sec. 6.1, our experiments showed that while this approach can lead to less biased models, it cannot make the model completely bias-free.",
"Another disadvantage of these data enhancement/augmentation approaches is that we need to know all the specific kinds of biases in advance.",
"For instance, in order to reduce the CWB for 'not', one needs to carefully balance the samples containing 'not' in the training set.",
"However, many other words exhibit similar biases (e.g., the model tends to predict neutral when it sees 'also'), and it is impractical to identify and debias the dataset w.r.t. every type of bias.",
"Therefore, besides fixing the dataset, we should also focus on directly debiasing models against lexical biases.",
"Model-level debiasing methods have the advantage that there is no need to know the specific bias type in advance.",
"Here we propose two different methods.",
"The first method focuses on debiasing the content of word/sentence embeddings, where we aim to remove strong bias in the embeddings towards any of the labels so that there will be fewer shortcuts for models to exploit.",
"The second method builds a separate shallow bag-of-words (BoW) sub-model and projects the primary model's representation onto the subspace orthogonal to this BoW sub-model via the HEX projection algorithm (Wang et al., 2019a).",
"Our proposed methods can be applied to a wide range of baseline model architectures.",
"In addition, none of our methods is bias-type specific, so the results on CWB and WOB should generalize to other similar lexical biases.",
"We use sentence-embedding based models as our baseline since they are more controllable, and because the interaction of sentences only appears at the top classifier, which makes it easier to compare the different effects of different regularization.",
"Our baseline structure can be divided into three stages.",
"The first stage is to embed the words into word embeddings.",
"The second stage is to get the representations for each sentence.",
"We use three layers of BiLSTM to get the representation.",
"We also added residual and skip-connections as in Nie et al. (2019), and find that this leads to better performance.",
"For the final stage, our baseline follows Mou et al. (2016); Conneau et al. (2017) to concatenate the two sentence embeddings, their difference, and their element-wise product as follows: m = [h_1; h_2; h_1 - h_2; h_1 ⊙ h_2] (1) The resulting vector is passed through another multi-layer perceptron (MLP) to get the final classification result.",
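The matching layer in Eqn. 1 can be sketched with a minimal NumPy snippet; this is a standard construction (Mou et al., 2016; Conneau et al., 2017), shown here only to make the feature layout concrete:

```python
import numpy as np

def match_features(h1, h2):
    """Eqn. 1: concatenate the two sentence embeddings, their difference,
    and their element-wise product into a single matching vector m."""
    return np.concatenate([h1, h2, h1 - h2, h1 * h2])
```

The resulting vector has four times the sentence-embedding dimension and is fed to the MLP classifier.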
"Next, we will describe two different methods to directly debias the model.",
"Word embeddings are an important component in all neural NLP models.",
"They contain the most basic semantics of words.",
"Recent studies have shown that removing gender bias from word embeddings can lead to less biased models (Zhao et al., 2018a).",
"In our work, as we discussed in Q2 in Sec. 1, we explore whether similar ideas can be applied to reducing lexical dataset biases.",
"For a large number of lexical dataset biases (e.g., CWB), the model tends to predict the label based only on the existence of certain words.",
"Hence, one natural conjecture is that there is a strong bias towards some labels in the word embeddings.",
"3 Another popular choice of NLI model architecture is the cross-attention based models (Chen et al., 2017; Devlin et al., 2019).",
"In our current work, we choose to only apply our BoW sub-model approach on sentence-embedding based models, since our approach directly regularizes the representation vector learned by the main model, and hence it is most suitable for models with a single vector containing rich information.",
"On the other hand, cross-attention based models do most of the inference through cross-attention and do not learn such a single vector, making it hard to regularize the model effectively in a similar way.",
"Investigation of similar HEX regularization methods for cross-attention models is future work.",
"4 Our baseline models achieve performance close to the best sentence-embedding based/cross-attention based models reported on the NLI stress tests (Naik et al., 2018) and are hence good starting points for this bias/debias analysis.",
"Since the label bias is not an attribute of the word itself but is introduced by the model above it, in order to remove such label bias from the embeddings at training time we differ from Zhao et al. (2018a) and use the gradient-reversal trick (Ganin and Lempitsky, 2015; Xie et al., 2017).",
"The architecture of this approach is illustrated in Figure",
"1. We denote the embeddings of the two input sequences for our model as w^(a) = {w^(a)_1, w^(a)_2, ..., w^(a)_{l_a}} and w^(b) = {w^(b)_1, w^(b)_2, ..., w^(b)_{l_b}} respectively, where a denotes the premise sentence while b denotes the hypothesis sentence.",
"In order to apply the reverse gradient trick (Ganin and Lempitsky, 2015) to the embeddings, we add a small embedding-debias network (the left blue box in Figure 1) for each embedding w_i in our model.",
"The embedding-debias network is a simple MLP.",
"Since the other parts of the sentence context may also contribute to the bias, the debiasing network for w^(a)_i takes both w^(a)_i and the sentence embedding of b as the input (and vice versa for debiasing w^(b)) and predicts the label y.",
"Therefore, the total loss of this method is: L(θ_c, θ_e, θ_ed) = L_c(θ_c, θ_e) - (λ / (l_a + l_b)) L_ed(θ_e, θ_ed). Here, λ is the multitask coefficient.",
"l_a and l_b are the lengths of the two input sentences.",
"L_c is the standard classification loss using the main model, and L_ed is the sum of all the classification losses using the debias networks.",
"θ_e are the parameters of the embeddings and sentence encoder of the main model, θ_c are the parameters of the top classifier of the main model, and θ_ed are the parameters of the embedding-debias networks.",
"In order to find the optimal parameters, we follow Ganin and Lempitsky (2015) to reverse the gradient for θ_e w.r.t. L_ed.",
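The gradient-reversal trick (Ganin and Lempitsky, 2015) can be summarized in two lines: the layer is the identity in the forward pass but multiplies incoming gradients by -λ in the backward pass, so the encoder is pushed to remove the label information the debias network can exploit. This is a framework-agnostic sketch of the two passes, not the paper's actual training code:

```python
def grad_reverse_forward(x):
    """Forward pass: activations pass through unchanged (identity)."""
    return x

def grad_reverse_backward(grad_output, lam=1.0):
    """Backward pass: the gradient is negated and scaled by lambda, so the
    encoder ascends the debias network's loss instead of descending it."""
    return -lam * grad_output
```

In an autograd framework this would be implemented as a custom function with these two rules (e.g., a `torch.autograd.Function` subclass in PyTorch).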
"Besides this approach, we also tried two variants by changing the input of the debias network.",
"The first one is emb_basic, where we only take the single embedding w_i as the input.",
"The second one only takes one sentence embedding as the input and is called ind_sent.",
"The results of our embedding-debias methods are shown in Sec. 6.2.",
"While debiasing the embeddings can robustify the models against certain biases, it may not be effective for all the lexical biases.",
"Some lexical bias may exist at the deeper compositionality level (e.g., WOB), while debiasing the embeddings can regularize only the most basic semantics units instead of how these semantics units are composed by the model.",
"In addition, removing the label biases may also hurt the useful semantics contained in the embeddings, leading to significant performance drops.",
"A better approach is to leave the embedding intact, but try to regularize how the classifier uses these features.",
"We observe that models exploiting dataset biases in the training set (e.g., CWB and WOB) tend to use very simple and superficial features to make the prediction.",
"These models tend to ignore the order of the words, fail to learn compositionality, and do not have a deep semantic understanding of the sentences.",
"Therefore, we aim to robustify the model by letting it use fewer simple and superficial features.",
"With this motivation, we train a bag-of-words (BoW) model that only captures superficial patterns of the words without any word order/compositionality information.",
"Then we use HEX projection (Wang et al., 2019a) to project the representation of the original primary model to the orthogonal space of the representation of the BoW model.",
"BoW Model.",
"For the BoW sub-model, we first get the embedding of all the words.",
"Then, in order to capture more co-occurrence information of the words, we add a multi-head self-attention layer like the one used in Vaswani et al. (2017) (but without position embeddings), because we empirically find that this improves the performance.",
"Finally, we use mean-pooling among all the vectors to get the BoW sentence embedding: h_bow = (1/l) Σ self-att(w).",
"To get a single representation for the sentence-pair, we used the same concatenation layer as in Eqn 1 and pass the vector through an additional MLP to get the representation u bow .",
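The pooling step of the BoW sub-model can be sketched as follows. The self-attention layer is omitted here for brevity; the point of the sketch is that mean-pooling discards all word-order information by construction, which is what makes this sub-model a pure bag-of-words feature extractor:

```python
import numpy as np

def bow_sentence_embedding(word_vectors):
    """h_bow = (1/l) * sum over positions: mean-pool the (attended) word
    vectors into a single order-invariant sentence embedding."""
    w = np.asarray(word_vectors)  # shape (l, d)
    return w.mean(axis=0)         # shape (d,)
```

Because the output is invariant to any permutation of the input vectors, the sub-model can only capture word-occurrence patterns, exactly the superficial features the HEX projection is meant to factor out.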
"HEX Projection.",
"Figure 2 shows the overall architecture for debiasing the model via orthogonal projection w.r.t. the BoW sub-model. Next, in order to encourage the",
"primary model to learn better features that are not learn-able by the BoW model, we used the HEX projection layer from Wang et al. (2019a), which was originally proposed to improve the domain generalization performance of computer vision models; here we combine HEX with BoW sub-model to robustify NLI models.",
"With the addition of the BoW sub-model, we can get two representations of the sentence pair, u_main and u_bow.",
"In order to let the final prediction use high-level features that are to some extent independent of the shallow and highly biased BoW features, the HEX projection layer projects these two representations into orthogonal spaces to achieve this independence.",
"The inputs of the HEX projection layer are the BoW model output u_bow and the corresponding output of the main model u_main.",
"We use f to denote the final classification network, parameterized by ξ.",
"Next, by zero-masking one of the two inputs, the HEX projection layer can receive three different inputs and calculate three different vector outputs: F_A = f([u_bow; u_main], ξ), F_P = f([0; u_main], ξ), F_G = f([u_bow; 0], ξ). (2) To ensure that the overall model learns different features than the BoW model, we project the joint output F_A onto the orthogonal space of F_G to get F_L: F_L = (I - F_G (F_G^T F_G)^{-1} F_G^T) F_A. (3) The resulting output learns good representations for both sentences but lies in the orthogonal space of the output obtained from the BoW sub-model's input, thus not overemphasizing word-pattern information.",
"This vector goes through the softmax layer to calculate the probabilities for each label.",
"Finally, we follow the original paper (Wang et al., 2019a) to minimize a weighted combination of the losses for F_L and F_G, and use F_P for testing.",
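The projection in Eqn. 3 is a standard orthogonal-complement projection and can be sketched directly in NumPy. This is a minimal sketch of the linear-algebra step only (assuming F_G has full column rank), not of the full HEX training procedure:

```python
import numpy as np

def hex_project(F_A, F_G):
    """Eqn. 3: project the joint output F_A onto the orthogonal complement of
    the column space of the BoW-only output F_G, removing the directions the
    superficial sub-model can already explain."""
    # P projects onto the column space of F_G.
    P = F_G @ np.linalg.inv(F_G.T @ F_G) @ F_G.T
    # F_L = (I - P) F_A lies in the orthogonal complement of that space.
    return (np.eye(F_G.shape[0]) - P) @ F_A
```

By construction, F_G^T F_L = 0, i.e., the retained logits are orthogonal to everything the BoW sub-model produced.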
"In Sec. 6.2, we show that by adding the BoW sub-model orthogonality, the model can be more robust against CWB and WOB while maintaining competitive overall accuracy.",
"Hence, as a response to Q3 in Sec. 1, our results indicate that debiasing models at the upper level with regularization on the compositionality is a more promising direction against lexical biases.",
"We evaluate our models using both off-the-shelf testing datasets as well as new datasets extracted from the original MNLI dataset.",
"We use the word overlap and the negation sets from the NLI stress tests dataset (Naik et al., 2018).",
"These two evaluation sets from the NLI stress tests modified the original MNLI development set by appending some meaningless phrases (examples shown in Table 1).",
"If the model has certain biases, then the model will be fooled by such perturbations and make the wrong classification.",
"In addition, we also extract samples from the original MNLI development dataset to get bias testing sets with exactly the same data distribution.",
"We first select samples that follow the bias pattern from the matched development set.",
"For CWB, we use 'not', 'no', 'any', 'never', and 'anything' as five example contradiction words.",
"To make this testing set balanced for labels (contradiction vs non-contradiction for CWB and entailment vs non-entailment for WOB), we move some samples with the same pattern from the training set to this testing set.",
"Later we refer to this dataset as Bal.",
"Since the negation dataset from NLI stress tests dataset only considers the word not', it fails to evaluate the bias for other contradiction words.",
"We augment this dataset by creating new samples for other contradiction words.",
"We denote the original NLI stress tests dataset as Stress and this augmented one as Stress* .",
"Please refer to the Appendix for a detailed description of how we chose the example contradiction words and created our test sets.",
"5 While this makes our model's performance incomparable to other literature, we train all the models in our experiments in this same setting to ensure the fairness of our analysis comparisons.",
"All our experiments use the same val/test set.",
"We select the best model during training on the MNLI mismatched development dataset, and we tune all the hyper-parameters on the NLI stress mismatched datasets.",
"All the other datasets are only used as test sets and we only report results on these test sets.",
"We use the MNLI matched development dataset to evaluate the overall performance of the model.",
"Overall accuracy is widely used as the only metric for NLI.",
"However, models can get very high accuracy by exploiting the bias patterns.",
"Hence, in order to test how the model performs when it cannot exploit the bias pattern, we focus on the model's accuracy on the harder parts of the data (Acc hr), where the bias pattern leads to the wrong label. 6",
"For the balanced testing set, this subset means samples with the 'non-contradiction' label for the CWB case and samples with the 'non-entailment' label for the WOB case.",
"For the NLI stress tests dataset 7 , this subset means the samples with the 'non-contradiction' label for the CWB set and the samples with the 'entailment' label for the WOB set.",
"Ideally, for an unbiased model, it should both have competitive overall performance and perform almost equally well on these harder parts of the data.",
"Hence, we focus on maintaining the accuracy on the whole dataset and improving the Acc hr metric.",
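The two metrics above can be sketched in a few lines; the function names and the boolean hard-subset mask are illustrative assumptions about how the evaluation is organized:

```python
def accuracy(preds, golds):
    """Overall accuracy over all evaluation samples."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def acc_hr(preds, golds, is_hard):
    """Acc hr: accuracy restricted to the 'hard' samples where the bias
    pattern gives the wrong answer (e.g., non-contradiction pairs that
    contain a contradiction word)."""
    hard = [(p, g) for p, g, h in zip(preds, golds, is_hard) if h]
    return sum(p == g for p, g in hard) / len(hard)
```

An unbiased model should score similarly on both metrics; a large gap between accuracy and Acc hr indicates reliance on the bias pattern.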
"All training details and hyper-parameter settings are presented in Appendix.",
"6 One may wonder if biases can also be evaluated simply using generalization performance.",
"However, good generalization to current datasets (e.g., SNLI (Bowman et al., 2015), MNLI (Williams et al., 2018), SICK (Marelli et al., 2014),",
"etc.) is different from being bias-free.",
"As shown in Gururangan et al. (2018), similar annotation artifacts can appear in multiple different datasets.",
"So by overfitting to common lexical biases across multiple datasets, biased models might still reach higher generalization accuracy.",
"7 Another metric on NLI-stress can be checking the portion of model predictions on the hard data that is correct both before and after adding the extra words.",
"We empirically verified that this metric shows the same result trends as Acc hr.",
"Since we observe similar performance for CWB and WOB, we leave the results for WOB in the Appendix.",
"On every dataset, there's a significant gap between Acc and Acc hr, showing the baseline has both strong CWB bias and strong WOB bias.",
"For the data augmentation/enhancement experiments, we report results after adding 500/20,000/50,000 additional samples.",
"We demonstrate the effect of adding a small portion of data for the 500 case and the limitation of this method using the 20,000 and 50,000 cases.",
"8 The results are again shown in Table",
"2. We use +origin to denote the results from data enhancement using the original dataset and use +synthetic to denote the results from data augmentation by generating new synthetic data similar to NLI stress tests.",
"9 With a small number of additional data (500), wherever the data comes from, the performance on the balanced testing set remains very close.",
"However, the performance on the NLI stress tests improves significantly when it sees 500 synthetic new samples generated in the same way.",
"The gap between the overall accuracy and the Acc hr on NLI stress tests is reduced to less than 5%, which means that the models can easily learn how the synthetic data is generated through only 500 samples.",
"Next, we compare the performance after adding 20,000 and 50,000 additional data to check the limitation of the improvement from adding additional data.",
"With this amount of additional original data, the Acc hr on the balanced dataset improves and the model is less biased.",
"However, adding 20,000/50,000 synthetic samples doesn't always lead to the improvement on the balanced dataset.",
"This reflects that the generation rules of NLI stress tests dataset are too simple so that training on these adversarial samples is not a good way to robustify the model.",
"However, more natural and diverse synthetic data may be helpful to robustify the models.",
"There is still a significant gap between overall accuracy and Acc hr even after 50,000 additional samples.",
"8 Adding additional data (e.g., 50,000) can change the label distribution, but we have experimented with different numbers of additional data between 500 and 50,000 and the reported trend always holds.",
"9 We run all the experiments 5 times and report the mean.",
"Also, the effect of adding the last 30,000 samples is very small, indicating a clear limitation of this method.",
"Thus, doing simple data augmentation/enhancement only using the currently available resources is insufficient to fully debias the model.",
"In addition, one has to carefully select which data to add for each different bias, so we need to also design inherently more robust models.",
"Debiasing Embeddings (Lower-Level Model Debiasing).",
"We compared three variants of debiasing embeddings in Table",
"3. Empirically, we observe that training the whole model with the debias network from a pre-trained baseline can significantly improve the stability of results, so we perform our experiments from one baseline with average performance for fair comparisons.",
"The multi-task coefficient λ controls the trade-off between high accuracy and little bias.",
"Here we report the results with λ = 1, which we find to be a good balance point.",
"From both tables, none of the methods achieved a significant improvement on the Acc hr metrics.",
"The best results come from the emb basic approach, but even this method only achieves small improvement on the Acc hr metric for CWB but does worse on WOB and has a comparable loss on overall Acc.",
"We do not observe any significantly larger improvements with smaller or larger coefficient values.",
"We also tried other techniques to further stabilize the training (e.g., freezing the main model when training, using different optimization algorithms), but we observe no significant improvement.",
"While such embedding-level debiasing works for gender bias (e.g., removing the male bias from the word 'doctor' to make the embedding gender-neutral), it does not help in debiasing certain lexical biases.",
"Directly removing information from the embedding only slightly debiases the model but also hurts the overall performance.",
"The difference in these results highlights the difference between gender bias and lexical bias problems.",
"As shown in these experiments, lexical biases cannot be effectively reduced at the embedding level.",
"We argue that this is because a majority of lexical biases appear at the compositionality level.",
"For example, for WOB, a biased model will predict entailment entirely relying on the overlapping word embeddings on both sides.",
"Here, even when we make the embeddings completely unbiased, as long as the upper model learns to directly compare the overlapping of embeddings on both sides, there will still exist a strong WOB bias in the model.",
"Hence, in order to robustify models towards lexical bias, we need to develop methods that regularize the upper-interaction part of the model.",
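The orthogonality idea above can be illustrated with a minimal numpy sketch of a HEX-style projection. The function name, the shapes, and the small ridge term `eps` are our own assumptions, not the paper's implementation; the sketch only shows the core operation of removing from the combined model's logits the component that the BoW sub-model can already explain.

```python
import numpy as np

def hex_project(f_joint, f_bow, eps=1e-8):
    """HEX-style projection (simplified sketch, not the paper's exact code).

    f_joint: (batch, k) logits of the combined main + BoW model
    f_bow:   (batch, k) logits of the bag-of-words sub-model alone
    Returns f_out = (I - F_g (F_g^T F_g)^{-1} F_g^T) f_joint, i.e. the part
    of f_joint orthogonal to the column space of the BoW logits.
    """
    # Small ridge term keeps the Gram matrix invertible (our addition).
    gram = f_bow.T @ f_bow + eps * np.eye(f_bow.shape[1])
    # Component of f_joint that lies in the column space of f_bow.
    explained = f_bow @ np.linalg.solve(gram, f_bow.T @ f_joint)
    return f_joint - explained
```

After projection, the remaining logits are numerically orthogonal to every column of the BoW logits, so the final classifier cannot lean on what the shallow bag-of-words model already captures.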
"BoW Sub-Model with HEX Projection (Upper Level Model Debiasing).",
"Results for adding the BoW sub-model are shown in Table 4.",
"Here, we also show that the improvement trend holds regardless of minor hyper-parameter changes in the model (number of layers).",
"On both CWB and WOB, the model shows a large improvement on Acc hr for both Bal and stress-test datasets.",
"We achieve close or higher Acc on all the bias testing sets, and the overall Acc is only 1.4%/1.3% lower than the baseline, showing that enforcing orthogonality with a BoW sub-model only slightly hurts the model.",
"In conclusion, this approach significantly robustifies the model against CWB and WOB while maintaining competitive overall performance.",
"In comparison to the debiasing embeddings results, we can see that instead of regularizing the content in the word embeddings, regularizing the model's compositionality at the upper interaction level is a more promising direction for debiasing lexical biases.",
"We have also tried combining this method with the data-level debiasing approach above but get no further improvement.",
"6.3 Qualitative Feature Analysis: We use LIME (Ribeiro et al., 2016) to qualitatively visualize how the orthogonal projection w.r.t. the BoW sub-model changes the features used by the model.",
"We selected one example from the CWB Bal dataset to see how applying the BoW model with HEX corrects previous mistakes.",
"From Fig. 3, we can see that before applying the BoW sub-model (the upper part of the figure), the model predicts the contradiction label almost solely based on the existence of the word no in the hypothesis.",
"However, after applying our BoW sub-model with HEX projection, our model gives higher importance to other useful features (e.g., the match of the two bad tokens, and the match of important temporal words such as passed and longer in the premise-hypothesis pair), even though no still has a high influence on the contradiction label.",
"Another example from the CWB Stress* dataset can be seen in Appendix.",
"We study the problem of lexical dataset biases using WOB and CWB as two examples.",
"We first show that lexical dataset biases cannot be solved by simple dataset changes, motivating the importance of directly designing model-level changes to solve this problem.",
"For model-level changes, we first show the ineffectiveness of embedding-debiasing approaches, highlighting how the lexical bias problem differs from the gender bias problem.",
"10 We also tried some initial simple ensembles of 2 different initializations of BoW sub-models, so that we can potentially regularize against a more diverse set of lexical biases.",
"During training, the main model is paired with each BoW sub-model to go through its HEX layer, and the output logits are averaged to get the final logits.",
"The ensembling results also outperform the baseline significantly and are higher than the single BoW sub-model on WOB Stress, but equal or worse in the other cases.",
"We leave the exploration of different/better ways of ensembling to future work.",
"[Figure caption fragment: the 6 most important features used by the model.]",
"Next, we robustify the model by forcing orthogonality between a BoW sub-model and the main model, and demonstrate its effectiveness through several experiments.",
"Since none of our methods is bias-type specific, we believe these results can also be generalized to other similar lexical biases.",
"Finally, we would like to point out that our methods and results here do not mean to belittle the importance of collecting clean/unbiased data.",
"We strongly believe in the importance of unbiased data for model design and evaluation.",
"However, some biases are inherent and inevitable in the natural distribution of the task (e.g., for NLI, it is natural that sentence pairs with high word overlap are most likely entailment pairs).",
"Therefore, our work stresses that it is also very important to encourage the development of models that are unlikely to exploit these inevitable biases/shortcuts in the dataset.",
"Neither model-level debiasing nor data-level debiasing alone is the conclusive solution for this problem.",
"Joint efforts are needed for promoting unbiased models that learn true semantics; and we hope our paper can encourage more work towards this important direction.",
"We thank Snigdha Chaturvedi, Shashank Srivas-tava, and the reviewers for their helpful comments.",
"This work was supported by DARPA YFA17-D17AP00022, NSF-CAREER Award 1846185, ONR Grant N00014-18-1-2871.",
"The views in this article are the authors', not of the funding agency."
] | [
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"Bingyu Wang Khoury College of CS Northeastern University [email protected]",
"Kechen Qin Khoury College of CS Northeastern University [email protected]",
"Abstract: Extreme multi-label classification (XML) is an important yet challenging machine learning task that assigns to each instance its most relevant candidate labels from an extremely large label collection, where the numbers of labels, features and instances can be in the thousands or millions.",
"XML is increasingly in demand in Internet industries, driven by growing business scale and scope and by data accumulation.",
"The extremely large label collections yield challenges such as computational complexity, inter-label dependency and noisy labeling.",
"Many methods have been proposed to tackle these challenges, based on different mathematical formulations.",
"In this paper, we propose a deep learning XML method, with a word-vector-based self-attention, followed by a ranking-based AutoEncoder architecture.",
"The proposed method has three major advantages: 1) the autoencoder simultaneously considers the inter-label dependencies and the feature-label dependencies, by projecting labels and features onto a common embedding space; 2) the ranking loss not only improves the training efficiency and accuracy but also can be extended to handle noisy labeled data; 3) the efficient attention mechanism improves feature representation by highlighting feature importance.",
"Experimental results on benchmark datasets show the proposed method is competitive to state-of-the-art methods.",
"In multi-label classification (Tsoumakas and Katakis, 2007; Zhang and Zhou, 2014), one assigns multiple labels to each instance.",
"Multi-label classification has many real-world applications: for example, a movie may be associated with multiple genres, a web page may contain several topics, and an image can be tagged with a few objects.",
"In these classification tasks, labels often exhibit complex dependencies: for example, Documentary and Sci-Fi are usually mutually exclusive movie genres, while Horror and Thriller are typically highly correlated.",
"Predicting labels independently fails to capture these dependencies and suffers suboptimal performance (Tsoumakas and Katakis, 2007; Ghamrawi and McCallum, 2005; Li et al., 2016).",
"Several methods that capture label dependencies have been proposed, including Conditional Random Fields (CRF) (Lafferty et al., 2001; Ghamrawi and McCallum, 2005), Classifier Chains (CC) (Read et al., 2011; Dembczynski et al., 2010), Conditional Bernoulli Mixtures (CBM) (Li et al., 2016), and Canonical Correlated AutoEncoder (C2AE) (Yeh et al., 2017).",
"However, these methods typically only work well on small-to-medium scale datasets.",
"Extreme multi-label classification (XML) is a multi-label classification task in which the number of instances, features and labels are very large, often on the order of thousands to millions (Zubiaga, 2012; Bhatia et al., 2015).",
"It has numerous real-world applications such as merchandise tagging and text categorization.",
"Although the label vocabulary is large, typically each instance only matches a few labels.",
"The scale of the classification task, the inter-dependent labels, and label sparsity all pose significant challenges for accurate and efficient classification.",
"Many methods have been proposed for extreme multi-label classification.",
"We group them into different categories and describe representative methods in each category.",
"Independent Classification: A popular method is to divide the multi-label classification problem into multiple binary classification problems (Tsoumakas and Katakis, 2007; Hariharan et al., 2012; Babbar and Scholkopf, 2017; Yen et al., 2016, 2017).",
"A typical implementation is to treat labels independently and train one-vs-all classifiers for each of the labels.",
"These independent classifiers can be trained in parallel and thus are computationally efficient in practice.",
"Ignoring the inter-label dependency also enables efficient optimization algorithms, which further reduces computational cost.",
"However, ignoring label dependency inherently limits prediction accuracy.",
"A competitive method in this category is called PD-Sparse (Yen et al., 2016), with a variant of the Block-Coordinate Frank-Wolfe training algorithm that exploits data sparsity and achieves complexity sub-linear in the number of primal and dual variables.",
"PD-Sparse (Yen et al., 2016) shows better performance with less training and prediction time than 1-vs-all Logistic Regression or SVM on extreme multi-label datasets.",
"Tree-Based Classifiers: Following the success of tree-based algorithms in binary classification, tree-based algorithms have also been proposed for multi-label classification (Agrawal et al., 2013; Weston et al., 2013; Prabhu and Varma, 2014), achieving promising prediction accuracy.",
"Similar to decision trees, these methods make classification decisions in each branch split.",
"Different from decision trees, each split evaluates all features, instead of one, to make a decision.",
"Also, each decision is for a subset of labels rather than one label.",
"Finally, via ensembling and parallel implementation, trees can boost their prediction accuracy with practically affordable computational cost.",
"Among these tree based classifiers, FastXML (Prabhu and Varma, 2014) further optimizes an nDCG-based ranking loss function and achieves significantly higher accuracy than other peer methods.",
"Embedding: A major difficulty of extreme multilabel classification is the large number of labels.",
"When labels are inter-dependent, one can attempt to find a lower dimensional latent label space from which one can fully reconstruct the original label space.",
"Over the past decade, many methods were proposed to find this latent label space.",
"In early work, methods were proposed to linearly project the original label space into a lower-dimension space and reconstruct predictions from that space (Tai and Lin, 2012; Balasubramanian and Lebanon, 2012).",
"However, these methods rest on two assumptions: (1) the label dependency is linear and (2) the label matrix is low-rank; these do not always hold, as reflected by the low prediction accuracy of these methods.",
"To overcome the limitation of the linear assumption, different methods were proposed using non-linear embeddings, including kernels, sub-sampling (Yu et al., 2014), feature-aware (Lin et al., 2014; Yeh et al., 2017) and pairwise distance preservation (Bhatia et al., 2015).",
"Among these methods, SLEEC (Bhatia et al., 2015) stands out for less training time and higher accuracies.",
"SLEEC introduces a method for learning a small ensemble of local pairwise-distance-preserving embeddings, which allows it to avoid the low-rank and linear-dependency assumptions.",
"Deep Learning: Deep learning has not been well studied for XML, although it has achieved great successes in binary and multi-class classification problems (Lin et al., 2017; Kim, 2014).",
"FastText (Grave et al., 2017) constructs a document representation by averaging the embeddings of the words in the document, followed by a softmax transformation.",
"It is a simple but very effective and accurate multi-class text classifier, as demonstrated in both sentiment analysis and multi-class classification (Grave et al., 2017).",
"However, FastText may not be directly applicable for more complicated problems, like XML.",
"BoW-CNN (Johnson and Zhang, 2014) learns powerful embedding of small text regions by applying CNN to high-dimensional text data.",
"The embeddings of all regions are sent to one or more convolutional layers, a pooling layer, and finally the output layer.",
"XML-CNN (Liu et al., 2017) achieves computational efficiency by training a deep neural network with a hidden bottleneck layer much smaller than the output layer.",
"However, this method has a few drawbacks.",
"First, it is trained using the binary cross entropy loss.",
"This loss tends to be sensitive to label noise, which is frequently observed in extreme multi-label data.",
"Since the label vocabulary is large, it is quite common for human annotators to miss relevant tags.",
"When the classifier's prediction (which might be correct) disagrees with the annotation, the cross entropy loss can potentially assign an unbounded penalty to the classifier during the training procedure.",
"The second issue is that because labels are trained independently as separate binary classification tasks, their prediction probabilities/scores may not be directly comparable.",
"This is problematic because in many applications the requirement is to rank all labels according to their relevance, as opposed to making an independent binary decision on each label.",
"The third defect is that XML-CNN requires raw documents as input since it adopts the CNN structure on top of sentences (Kim, 2014); this is problematic when datasets are given in other formats such as bag-of-words for text.",
"C2AE (Yeh et al., 2017) uses a ranking loss as the training objective.",
"But the ranking loss employed there needs to compare all (positive label, negative label) pairs, and therefore does not scale well to extreme data.",
"Furthermore, C2AE only takes the bag-of-words representation (one-hot encoding) as the input, which makes it harder to learn powerful representations from extreme multi-label dataset.",
"Our Contribution In this paper, we propose a new deep learning method to address extreme multi-label classification.",
"Our contributions are as follows: Motivated by the recent success of attention techniques, we propose an efficient attention mechanism that can learn rich representations from any type of input features, including but not limited to bag-of-words, raw documents and images.",
"Inspired by C2AE, our proposed model projects both features and labels onto common latent spaces wherein correlations between features and labels are exploited.",
"By decoding this latent space into the original label space in the prediction stage, the dependencies between labels are implicitly captured.",
"We propose a margin-based ranking loss that is simultaneously more effective for extreme settings and more tolerant towards noisy labeling.",
"In this section, we introduce the data format in XML and the proposed networks, including the margin-based ranking loss and the attention modules.",
"In XML, we are given a set of label candidates Y = { 1 , 2 , . . . , L } .",
"The dataset D consists of features and labels: D = {(x_i, y_i)}_{i=1}^{N}, wherein N is the number of instances, and each instance x ∈ R^V (V is the feature dimension) matches a label subset y ⊆ Y, which can be written as a binary vector y ∈ {0, 1}^L, with each bit y_l representing the presence or absence of label l.",
"Given such a dataset, our goal is to build a classifier c : R^V → {0, 1}^L, mapping an instance to a subset of labels of arbitrary size.",
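The binary encoding of a label subset described above can be sketched as a trivial numpy helper (the function name is ours):

```python
import numpy as np

def to_binary_vector(label_subset, L):
    """Encode a subset of label indices from Y = {1, ..., L} as y in {0,1}^L."""
    y = np.zeros(L, dtype=np.int8)
    for l in label_subset:
        y[l - 1] = 1  # labels are numbered 1..L in the text
    return y

to_binary_vector({2, 5}, L=6)   # -> [0, 1, 0, 0, 1, 0]
```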
"Inspired by the C2AE (Yeh et al., 2017), we propose a Ranking-based Auto-Encoder ( Rank-AE ), as depicted in Figure 1.",
"Similar to C2AE, Rank-AE includes three mapping functions to be trained: a mapping from input features x to feature embeddings x_h, denoted F(x), where h is the embedding size; an encoder from output labels y to label embeddings y_h, E(y); and a decoder from label embeddings y_h to output labels y', written D(y_h).",
"The proposed model is built on two assumptions: first, each instance can be represented from two different aspects, features x and labels y , so there exists a common latent space between x and y ; second, labels can be reproduced by an autoencoder.",
"Based on these two assumptions, we design the objective function as: L(D) = min_{F, E, D} L_h(x_h, y_h) + λ L_ae(y, y') (1), wherein the loss L_h(x_h, y_h) aims to find the common latent space for input x and output y, and L_ae(y, y') enforces the output to be reproducible.",
"λ is a hyper-parameter to balance these two losses.",
"During the training, the model learns a joint network including F , E and D to minimize the empirical loss Eq (1).",
"During inference, a given input x is first transformed into a vector in the latent space, x_h = F(x), which is then fed into the label decoder to compute the predictions ŷ = D(x_h).",
"It is worth mentioning that although the label encoder E is ignored during the prediction, it is able to exploit cross-label dependency during the label embedding stage (Yeh et al., 2017).",
"Recent work (Kurata et al., 2016; Baker and Korhonen, 2017) also shows that using co-occurring label information to initialize the neural network can further improve accuracy in multi-label classification.",
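The three mappings and objective (1) can be made concrete with a toy numerical sketch. Random linear maps stand in for the trained networks F, E and D, the reconstruction term is a plain squared error, and all names and sizes are illustrative, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
V, L, h = 8, 5, 3                      # toy feature dim, label count, embedding size

W_F = rng.normal(size=(h, V))          # stands in for the feature mapping F
W_E = rng.normal(size=(h, L))          # stands in for the label encoder E
W_D = rng.normal(size=(L, h))          # stands in for the label decoder D

def F(x):  return W_F @ x              # x   -> x_h
def E(y):  return W_E @ y              # y   -> y_h
def D(yh): return W_D @ yh             # y_h -> y'

def joint_loss(x, y, lam=1.0):
    """Eq (1): align the two embeddings (mean squared L_h) and make the
    labels reproducible (a squared error stands in for L_ae here)."""
    x_h, y_h = F(x), E(y)
    L_h = np.mean((x_h - y_h) ** 2)
    L_ae = np.mean((y - D(y_h)) ** 2)
    return L_h + lam * L_ae

# Inference bypasses the encoder E entirely: label scores come from D(F(x)).
x = rng.normal(size=V)
y = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
scores = D(F(x))
```

The design point mirrors the text: E is only needed at training time to shape the latent space; at prediction time the feature path F followed by the decoder D produces the label scores.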
"Learning Common Embedding (L_h).",
"Minimizing the common hidden space loss L h has been proposed based on different considerations (Zhang and Schneider, 2011; Yeh et al., 2017; Shen et al., 2018), ranging from canonical correlation analysis to alignment of two spaces with a perspective of cross-view.",
"Since the hidden space is usually small and requires less computational cost, we simply employ the mean squared loss for L h .",
"Reconstructing Output (L_ae).",
"Unlike L_h, which operates in a small space, the L_ae loss usually involves a large number of labels.",
"Moreover, L ae also directly affects the classification performance significantly since different loss functions lead to their own properties (Hajiabadi et al., 2017).",
"Accordingly, solving such problems at large scale with desirable properties presents open challenges in three aspects: 1) how to improve time efficiency, 2) how to produce comparable label scores, and 3) how to deal with noisy labels.",
"Unfortunately, most of the related deep learning methods only target one or two aspects.",
"C2AE attempts to minimize the number of misclassified pairs between relevant and irrelevant labels; as a result, its computational complexity is quadratic in the number of labels in the worst case, and it fails to scale well to large numbers of input features or labels due to its inefficient implementation 1.",
"XML-CNN (Liu et al., 2017) achieves computational efficiency by training a deep neural network with hidden layers much smaller than the output layer with binary cross-entropy loss (BCE), which has linear complexity in number of labels.",
"Despite this, BCE loss could neither capture label dependencies nor produce directly comparable label scores, since each label is treated independently (footnote 1: https://github.com/dhruvramani/C2AE-Multilabel-Classification).",
"Moreover, BCE loss tends to be sensitive to label noise, which is frequently observed in XML data (Reed et al., 2014; Ghosh et al., 2017).",
"To avoid the aforementioned issues, we propose a margin-based ranking loss in the AutoEncoder: L_ae(y, y') = L_P(y, y') + L_N(y, y') (2); L_P(y, y') = Σ_{n ∈ N(y)} max_{p ∈ P(y)} (m + y'_n − y'_p)_+ (3); L_N(y, y') = Σ_{p ∈ P(y)} max_{n ∈ N(y)} (m + y'_n − y'_p)_+ (4); wherein N(y) is the set of negative label indexes, P(y) is its complement, and the margin m ∈ [0, 1] is a hyper-parameter controlling the minimal distance between positive and negative label scores.",
"The loss consists of two parts: 1) L_P aims to raise the minimal score of the positive labels above every negative label by at least m; 2) L_N aims to penalize the most violated negative label, pushing it below every positive label by m.",
"The proposed loss has the following attractive properties: 1) linear complexity in the number of labels, O(L); 2) it captures the relative rankings between positive and negative labels; 3) it tolerates noisy labels via the tunable hyper-parameter m.",
"To explain the last property, assume y'_n and y'_p are predicted probabilities bounded in [0, 1]; then in one extreme case, with noise-free labels and m = 1, all positive labels are pushed to probability 1 while negatives are penalized to 0; in the other extreme, with completely random labels, e.g. drawn i.i.d. from a Bernoulli distribution, setting m = 0 reflects that the annotated labels are pure noise.",
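The O(L) property of the loss follows because the inner max in Eq (3) reduces to the minimal positive score and the inner max in Eq (4) to the maximal negative score, so no pairwise enumeration is needed. A minimal numpy sketch (our own code, not the authors'):

```python
import numpy as np

def rank_ae_loss(y_true, y_pred, m=0.5):
    """Margin-based ranking loss of Eqs (2)-(4), computed in O(L).

    y_true: {0,1}^L ground-truth vector; y_pred: predicted scores in [0,1].
    """
    pos = y_pred[y_true == 1]
    neg = y_pred[y_true == 0]
    if pos.size == 0 or neg.size == 0:
        return 0.0
    # Eq (3): each negative vs. the minimal positive score.
    L_P = np.maximum(m + neg - pos.min(), 0.0).sum()
    # Eq (4): each positive vs. the maximal (most violated) negative score.
    L_N = np.maximum(m + neg.max() - pos, 0.0).sum()
    return L_P + L_N                                   # Eq (2)

y = np.array([1, 0, 1, 0])
s = np.array([0.9, 0.1, 0.8, 0.2])
rank_ae_loss(y, s, m=0.5)   # -> 0.0: every positive clears every negative by m
```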
"Extracting rich feature representations in XML is helpful for predicting more complicated label structures, but it also requires an efficient and feasible method.",
"A recent work, CBAM (Woo et al., 2018), proposes a block attention module with a Channel-Attention and a Spatial-Attention for image tasks, wherein Channel-Attention emphasizes information from channels, e.g. RGB, and Spatial-Attention attends to partial areas of an image.",
"By sequentially applying channel and spatial attention, CBAM is shown to be effective in image classification and object detection.",
"We take advantage of the attentions in CBAM and apply them to textual data.",
"Our proposed attention module likewise consists of spatial-wise and channel-wise attentions.",
"First, we force the spatial-wise attention to attend to a list of important words by simply multiplying word embeddings by term frequency or tf-idf (whichever is provided in the feature matrix).",
"It is worth noting that the spatial-wise module does not involve any parameters, yet it efficiently captures the importance of words with numerical statistics, like tf-idf.",
"We demonstrate spatial-wise attention on the left side of Figure 2, where the input x = (I, V) contains a bag-of-words vector I = (w_1, w_2, ..., w_n)^T ∈ R^n and a tf-idf vector V = (v_1, v_2, ..., v_n)^T ∈ R^n.",
"The bag-of-words I is fed into an embedding layer E = (e_1, e_2, ..., e_n)^T ∈ R^{n × C} to get the word embeddings, where e_j ∈ R^C is the word embedding vector of w_j.",
"Then we multiply the word embeddings by V to obtain weighted word embeddings: V' = (v_1 e_1, v_2 e_2, ..., v_n e_n)^T ∈ R^{n × C}.",
"The channel attention is designed to emphasize the significant aspects by assigning different weights on bits in a word embedding.",
"For example, in the word embedding of apple, some of the bits may reflect fruit, while others may indicate the company name.",
"To achieve this, we adopt the excitation network from the SENet (Hu et al., 2017) with a slight increase in model complexity.",
"The excitation network includes two fully connected layers with a non-linear activation function in between (see the top-right part of Figure 2): A^T = σ(F_2 δ(F_1 V'^T)) = (a_1, a_2, ..., a_n) (5), wherein A ∈ R^{n × C}, δ and σ refer to the two activation functions ReLU and Sigmoid, and F_1 ∈ R^{(C/r) × C} and F_2 ∈ R^{C × (C/r)} are the two fully connected layers, with word embedding size C and reduction ratio r.",
"Table 1: Dataset Characteristics (train / test / label / feature / cardinality / inst/L, where train, test, label and feature are the numbers of training instances, testing instances, labels and features; cardinality is the average number of labels per instance; inst/L is the average number of instances per label): Delicious* 12,920 / 3,185 / 983 / 500 / 19.03 / 312; Mediamill* 30,993 / 12,914 / 101 / 120 / 4.38 / 1,902; RCV* 623,847 / 155,962 / 2,456 / 47,236 / 4.79 / 1,219; IMDb 27,417 / 6,740 / 28 / 115,554 / 2.4 / 2,937; EURLex 15,539 / 3,909 / 3,993 / 5,000 / 5.31 / 26; Wiki10 14,146 / 6,616 / 30,938 / 101,938 / 18.64 / 9.",
"After obtaining the attention matrix A, we apply these attentions to the weighted word embeddings to get a re-scaled word embedding matrix M = V' ⊙ A = (m_1, m_2, ..., m_n)^T ∈ R^{n × C}, computed via the element-wise product.",
"The obtained attention matrix A introduces dynamics conditioned on the input weighted word embeddings and further boosts feature discriminability.",
"The last step is to feed the re-scaled embedding matrix into an average pooling layer to obtain the feature embedding x' ∈ R^C.",
"With the proposed spatial-wise and channel-wise attentions, Rank-AE can learn rich feature representations in an efficient way.",
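The two attention steps and the pooling can be sketched in a few lines of numpy. This is a simplified illustration under our own assumptions (function names are ours, biases are omitted, and the embedding lookup is taken as given); the reduction ratio r is implicit in the shapes of F1 and F2:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attend_and_pool(E, v, F1, F2):
    """Spatial-wise then channel-wise attention, then average pooling.

    E:  (n, C) embeddings of the n words in a document
    v:  (n,)   tf-idf (or term-frequency) weights     -- spatial attention
    F1: (C//r, C) and F2: (C, C//r) excitation layers -- channel attention
    """
    V_prime = v[:, None] * E                    # spatial: weight each word
    # Eq (5): A^T = sigmoid(F2 @ relu(F1 @ V'^T)), so A has shape (n, C).
    A = sigmoid((F2 @ np.maximum(F1 @ V_prime.T, 0.0)).T)
    M = V_prime * A                             # channel: re-scale each bit
    return M.mean(axis=0)                       # pool to x' in R^C
```

Note that the spatial step introduces no parameters, matching the text: only F1 and F2 (the excitation layers) are trainable.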
"Dataset .",
"Our experiments are conducted on six extreme multi-label datasets, whose characteristics are shown in Table 1; among them, IMDb is crawled from an online movie database 2 and the remaining five datasets are downloaded from the extreme classification repository 3.",
"For datasets from the repository, we adopt the provided train/test split, and for IMDb we randomly choose 20% of the data as test set and the rest of 80% as training set.",
"For all datasets, we reserve another 20% of training data as validation for tuning hyper-parameters.",
"After tuning, all models are trained on the entire training set.",
"Among these datasets, three are provided only as BoW feature matrices: Delicious, Mediamill (a dense feature matrix extracted from image data) and RCV, which are feasible only for the non-deep learning methods (SLEEC, FastXML, PD-Sparse) and Rank-AE.",
"We provide both the feature matrix and raw documents for IMDb, EURLex and Wiki10, which are feasible for both deep learning and non-deep learning methods (footnotes: 2 https://www.imdb.com/; 3 http://manikvarma.org/downloads/XC/XMLRepository.html).",
"For those data with both formats, we remove the words from the raw documents that do not have corresponding BoW features so that the vocabulary size is the same for both deep and non-deep learning methods.",
"Evaluation Metrics .",
"To evaluate the performance of each model, we adopt the metrics that have been widely used in XML: Precision at top k (P@k) and Normalized Discounted Cumulative Gain at top k (n@k) (Bhatia et al., 2015; Prabhu and Varma, 2014; Yen et al., 2016; Liu et al., 2017).",
"P@k is a measure based on the fraction of correct predictions among the top k predicted scoring labels, and n@k is a normalized metric for Discounted Cumulative Gain: P@k = (1/k) Σ_{l ∈ rank_k(ŷ)} y_l (6); DCG@k = Σ_{l ∈ rank_k(ŷ)} y_l / log(l + 1) (7); nDCG@k = DCG@k / Σ_{l=1}^{min(k, |y|)} 1/log(l + 1) (8); wherein rank_k returns the indices of the k largest entries of the prediction ŷ in descending order, and |y| is the number of positive labels in the ground truth.",
"In the results, we report the average P@k and n@k on the testing set with k = 1, 3, 5, respectively.",
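Eqs (6)-(8) translate directly into code. A small numpy sketch (our own helpers; we use log base 2, which leaves nDCG unchanged because the base cancels in the DCG/ideal ratio):

```python
import numpy as np

def precision_at_k(y_true, scores, k):
    """Eq (6): fraction of the top-k scored labels that are relevant."""
    topk = np.argsort(-scores)[:k]
    return float(y_true[topk].sum()) / k

def ndcg_at_k(y_true, scores, k):
    """Eqs (7)-(8): DCG over the top-k ranks, normalized by the ideal DCG."""
    topk = np.argsort(-scores)[:k]
    # Discount at rank positions 1..k is 1/log2(rank + 1).
    dcg = (y_true[topk] / np.log2(np.arange(2, k + 2))).sum()
    n_pos = int(y_true.sum())
    if n_pos == 0:
        return 0.0
    ideal = (1.0 / np.log2(np.arange(2, min(k, n_pos) + 2))).sum()
    return dcg / ideal

y = np.array([1, 0, 1, 0, 0])
s = np.array([0.9, 0.8, 0.7, 0.2, 0.1])
precision_at_k(y, s, 3)   # -> 2/3: labels 0 and 2 are correct in the top 3
```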
"Hyper-parameters .",
"In Rank-AE, we use a fixed neural network architecture, with two fully connected layers in both the Encoder and the Decoder, and one fully connected layer following the Embedding & Attention network in the Feature Embedding.",
"We also fix most of the hyper-parameters, including the hidden dimension h (100 for datasets with few labels and 200 for those with many), the word embedding size C = 100, and the reduction ratio r = 4.",
"The remaining hyper-parameters, such as the balance λ between L_h and L_ae, the margin m in L_ae, and others (decay, learning rate) in the optimization algorithm, are tuned on the validation set.",
"In addition, if the vocabulary for the BoW features is available, e.g. for IMDb and Wiki10, the Word Embedding component is initialized with GloVe 4, pre-trained word embeddings of 100 dimensions; if it is not, e.g. for Mediamill, Delicious and RCV, random initialization is used.",
"For the existing methods with the same train/test split, we take the scores from the original papers for SLEEC, FastXML and PD-Sparse directly.",
"For the new datasets and splits, the hyper-parameters are tuned on the validation set for all methods, as suggested in their papers.",
"We evaluate the proposed Rank-AE with other six state-of-the-art methods, SLEEC, FastXML, PD-Sparse, FastText, Bow-CNN and XML-CNN, which are the leading methods among their categories.",
"Among them, FastText, Bow-CNN and XML-CNN only take raw documents, which are not available for Delicious, Mediamill and RCV datasets.",
"For Rank-AE, we adopt the raw text as the input for IMDb, and feature matrix for the rest.",
"The performances evaluated on P @ k and n @ k with k = 1 , 3 , 5 are summarized in Table 2",
"(a) and",
"(b) separately.",
"As reported, Rank-AE reaches the best performances on two datasets (IMDb and EURLex) out of 6 datasets, while SLEEC achieves the best performances on Mediamill and Wiki10, and FastXML performs the best on Delicious and RCV.",
"In general, SLEEC and FastXML are very competitive to each other in non-deep learning methods, but PD-Sparse performs worse.",
"Rank-AE always performs better than PD-Sparse with at least 1% increase, up to almost 20% improvement on Delicious data.",
"When compared with FastXML, Rank-AE outperforms on 4 datasets with 1% to 10% growth, but underperforms on Delicious and RCV with 1% decrease.",
"SLEEC, as the best non-deep learning method in our experiments, performs almost identical to Rank-AE, but on IMDb data, it performs 7% 15% less than non-deep methods, and even worse than Rank-AE.",
"Comparing Rank-AE with deep learning methods, we narrow down to three datasets with available raw documents: IMDb, EURLex and Wiki10.",
"As shown in Table 2, FastText and Bow-CNN, not planned for XML but for multi-class, perform much worse than XML-CNN and Rank-AE as expected.",
"On the other hand, XML-CNN achieves close performance to Rank-AE: with similar performance on IMDb dataset, but lower scores on EURLex and Wiki10 with 2% drop in P @ k and n @ k .",
"In spite of this, Rank-AE, trained on feature matrix for EURLex and Wiki10, surprisingly performs better than XML-CNN on raw data.",
"In the comparisons, there is no such method that Dataset Metrics SLEEC FastXML PD-Sparse FastText Bow-CNN XML-CNN Rank-AE Delicious P@1 67.59 69.61 51.82 --69.26 P@3 61.38 64.12 44.18 --62.72 P@5 56.56 59.27 38.95 --57.63 Mediamill P@1 87.82 84.22 81.86 --86.53 P@3 73.45 67.33 62.52 --70.17 P@5 59.17 53.04 45.11 --55.44 RCV P@1 90.25 91.23 90.08 --90.9 P@3 72.42 73.51 72.03 --72.82 P@5 51.88 53.31 51.09 --52.05 IMDb P@1 51.37 66.45 66.84 69.55 66.59 75.55 75.91 P@3 34.46 48.32 46.29 48.76 48.42 52.59 52.66 P@5 27.34 36.28 35.04 36.53 36.56 38.90 38.48 EURLex P@1 79.26 71.36 76.43 71.51 64.99 76.38 79.52 P@3 64.30 59.90 60.37 60.37 51.68 62.81 65.14 P@5 52.33 50.39 49.72 50.41 42.32 51.41 53.18 Wiki10 P@1 85.88 83.03 81.03 68.86 81.16 84.11 83.6 P@3 72.98 67.47 57.36 54.65 50.67 70.24 72.07 P@5 62.70 57.76 44.10 47.61 36.03 59.87 62.07 Rank Score avg 2.83 3.33 4.56 4.56 5.78 2.56 1.78 Dataset Metrics SLEEC FastXML PD-Sparse FastText Bow-CNN XML-CNN Rank-AE Delicious n@1 67.59 69.61 51.82 --69.26 n@3 62.87 65.47 46.00 --64.16 n@5 59.28 61.90 42.02 --60.39 Mediamill n@1 87.82 84.22 81.86 --86.53 n@3 81.50 75.41 70.21 --78.36 n@5 79.22 72.37 63.71 --75.28 RCV n@1 90.25 91.23 90.08 --90.9 n@3 88.86 89.63 88.50 --89.29 n@5 89.49 90.33 88.79 --89.75 IMDb n@1 51.37 66.45 66.84 69.55 66.59 75.55 75.91 n@3 49.75 67.14 64.84 68.47 67.26 74.02 73.5 n@5 54.43 71.72 69.69 72.99 72.07 78.48 77.37 EURLex n@1 79.26 71.36 76.43 71.51 64.99 76.38 79.52 n@3 68.13 62.87 64.31 63.32 55.03 66.28 68.76 n@5 61.60 58.06 58.78 58.56 49.92 60.32 62.33 Wiki10 n@1 85.88 83.03 81.03 68.86 81.16 84.11 83.6 n@3 76.02 75.35 62.62 56.72 56.14 73.52 74.78 n@5 68.13 63.36 52.03 51.19 45.29 65.50 67.18 Rank Score avg 2.83 3.22 4.33 4.67 5.78 2.56 1.89 Table 2: Comparisons with other methods ( P @ k and n @ k are reported in the top and bottom tables respectively).",
"could perform the best on all datasets.",
"We discover that each dataset has its own intrinsic properties, such as diversity of labels, number of features, average number of relevant labels per instance and average number of training instances per label, see Table 1.",
"All those properties will affect training procedure, for example, how much flexibility a model should be in order to explain labels well by the given training data.",
"Because those factors are always changing from data to data, they also in-fluence the performances on different models.",
"In order to have a reasonable comparisons, we report the average ranking score for each method.",
"To compute the average ranks, we first rank the methods based on their performance in each row in Table 2, then average them through all rows, and report the final ranking scores in the last row of each table.",
"The average ranking scores show that Rank-AE is the best model with ranking scores 1 .",
"78 in P @ k and 1 .",
"89 in n @ k .",
"As mentioned previously, noisy labels in XML are a quite common issue in the real-world applications (Yeh et al., 2017; Ghosh et al., 2017), but our proposed marginal ranking loss naturally mitigates this problem.",
"Since IMDb is a real-world dataset with relatively clean labels, we conduct the noise experiments on it.",
"In the experiments, we control the noise labels in two different ways: 1) missing labels: changing each positive label from y l = 1 to y l = 0 with certain rate, 2) both 4045505560657075 p@1 40 42 44 46 48 50 52 p@3 FastXML PD-Sparse XML-CNN Rank-AE BCE-AE 32 34 36 38 p@5 0 10 20 30 40 50 60 4045505560657075 n@1 0 10 20 30 40 50 60 55 60 65 70 75 n@3 0 10 20 30 40 50 60 60 65 70 75 Missing Rate P e r f o r m a n c e ( % ) n@5 50 55 60 65 70 75 p@1 35 40 45 50 p@3 25.0 27.5 30.0 32.5 35.0 37.5 p@5 0 10 20 30 40 50 60 50 55 60 65 70 75 n@1 0 10 20 30 40 50 60 45 50 55 60 65 70 75 n@3 0 10 20 30 40 50 60 50 55 60 65 70 75 80 Missing and Invalid Rate P e r f o r m a n c e ( % ) n@5 Figure 3: Comparisons on noisy labelling IMDb data.",
"missing and invalid labels: flipping either from positive to negative or from negative to positive with a noise rate.",
"The noise rates are varied from 0% to 60% on 80% of the training set, and the rest of 20% is noise-free validation set for model selection.",
"We select five algorithms: FastXML, PD-Sparse, XML-CNN, Rank-AE and BCE-AE, wherein BCE-AE is our proposed method but using binary cross-entropy loss in L ae ( y, y (cid:48) ) .",
"Comparing BCE-AE with Rank-AE can be used to verify whether the robustness to label noise is due to the use of marginal ranking loss.",
"The performances are reported on the same clean test set, shown in Figure 3.",
"Rank-AE consistently outperforms other four approaches and has the best robustness tolerating noise labels.",
"Besides, FastXML and PD-Sparse are more tolerant to missing noises than XML-CNN, which may due to XML-CNN has greater capacity and thus more prone to over-fitting the noise.",
"Furthermore, when comparing Rank-AE with BCE-AE, both of which share the same structure but have different loss functions, the proposed marginal-based ranking loss seems to be robuster than binary cross-entropy loss.",
"Ablation Study .",
"The effectiveness and robustness of Rank-AE have been demonstrated in the previous section.",
"However, it is not clear to us 50.0 52.5 55.0 57.5 60.0 62.5 65.0 67.5 70.0 Delicious 45 50 55 60 65 70 75 80 85 90 Mediamill No attn No loss Rank-AE 40 50 60 70 80 90 RCV P@1 P@3 P@5 35 40 45 50 55 60 65 70 75 80 IMDb P@1 P@3 P@5 50 55 60 65 70 75 80 85 EURLex P@1 P@3 P@5 20 30 40 50 60 70 80 Precision at Top KP e r f o r m a n c e ( % ) Wiki10 Figure 4: Precision at top k comparisons: No attn (/) is no attention Rank-AE; No loss (+) is using binary cross entropy lose instead of marginal ranking loss; Rank-AE (x) is our proposed model.",
"yet that if the effectiveness benefits from the proposed components, such as attention mechanism and marginal ranking loss.",
"To further understand the impacts from these two factors, we conduct a controlling experiment with three different settings: 1) removing the Attention component A in Figure 2 from Rank-AE, in which case V (cid:48) is directly passed to the average pooling to obtain x (cid:48) , called No attn ; or 2) examining the performances by replacing the marginal ranking loss ( L ae ) with a binary cross entropy loss, named No loss ; or 3) keeping the original Rank-AE without any change.",
"In Figure 4, P @ k is reported on the six datasets for the ablation experiment, because n @ k is similar to P @ k , thus eliminated here.",
"The comparisons results show that Rank-AE without any change works better than the other two on all datasets consistently, especially on Wiki10.",
"First, channel-attention extracts richer information from the word embeddings by introducing the channel weights.",
"Thus, it is more suitable when classification tasks become more complicated and a word more likely represents multiple aspects.",
"Second, Rank-AE gains some advantage of tolerating noise labels with marginal ranking loss comparing to BCE loss.",
"We could even further infer that IMDb and RCV may have relatively less noise labels since the performance does not benefit much from the marginal ranking loss.",
"Channel-Attention Visualization .",
"Our channel-attention is implemented by an excitation network, which is adopted from SENet (Hu et al., 2017) and only applied to images before.",
"To demonstrate its effectiveness and feasibility on textual data, we Figure 5: Visualization for Attention in Rank-AE.",
"employ the visualization tool (Lin et al., 2017) to highlight important words based on the attention output.",
"Specifically, we run our method on IMDb dataset, wherein each instance is a movie story associated with relevant genres as labels.",
"Instead of extracting V (cid:48) matrix using the proposed spatial-wise attention, we obtain a fixed size embeddings from a bidirectional LSTM on variable length of sentence, fed to our channel-attention network.",
"Through the channel-attention network, we can observe the attention matrix A for each input document.",
"By summing up the attention weights of each word embedding vector, we can visualize the overall attention for that word with the visualization tool 5 .",
"We randomly select three movies from IMDb testing set (See Figure 5).",
"By looking at the highlighted regions, we can see that the proposed channel-attention is able to focus more on the words that are highly related to the topics.",
"In this paper, we propose a marginal ranking loss, which not only predicts comparable labels scores between labels, more suitable for ranking metrics,",
"5 The visualization tool is provided by",
"but also consistently performs better on noisy labeling data, with both missing and invalid labels.",
"In addition, the dual-attention component allows Rank-AE to learn more powerful feature representations efficiently.",
"By integrating those components, Rank-AE usually achieves the best or the second best on six benchmark XML datasets comparing with other outstanding methods in state-of-the-art.",
"Most of this work was done when Bingyu Wang and Wei Sun were interning at JD.Com Inc.",
"We thank Javed Aslam, Virgil Pavlu and Cheng Li from Khoury College of Computer Sciences at Northeastern University for comments that greatly improved the manuscript.",
"We would also like to show our gratitude to our anonymous NAACL-HLT reviwers for the helpful suggestions to make the paper better."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"other",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Abstract First-order meta-learning algorithms have been widely used in practice to learn initial model parameters that can be quickly adapted to new tasks due to their efficiency and effectiveness.",
"However, existing studies find that meta-learner can overfit to some specific adaptation when we have heterogeneous tasks, leading to significantly degraded performance.",
"In Natural Language Processing (NLP) applications, datasets are often diverse and each task has its unique characteristics.",
"Therefore, to address the overfitting issue when applying first-order meta-learning to NLP applications, we propose to reduce the variance of the gradient estimator used in task adaptation.",
"To this end, we develop a variance-reduced first-order meta-learning algorithm.",
"The core of our algorithm is to introduce a novel variance reduction term to the gradient estimation when performing the task adaptation.",
"Experiments on two NLP applications: few-shot text classification and multi-domain dialog state tracking demonstrate the superior performance of our proposed method.",
"Meta-learning has recently emerged as a promising approach in solving many natural language processing tasks, such as few-shot text classification (Obamuyide and Vlachos, 2019; Bao et al., 2019), low resource language understanding (Gu et al., 2018; Dou et al., 2019; Yu et al., 2020a), and multi-domain dialogue systems (Qian and Yu, 2019; Huang et al., 2020).",
"In particular, model-agnostic meta-learning (MAML) (Finn et al., 2017), a widely-used meta-learning approach, trains an initial model that can be adapted to a new task with a small number of optimization steps and training data.",
"However, MAML requires the computation of second-order derivatives, which can be costly for reinforcement learning and NLP applications.",
"Therefore, numerous computationally-efficient MAML variants (Finn et al., 2017; Li et al., 2017; Nichol et al., 2018; Antoniou et al., 2018; Zintgraf et al., 2019; Song et al., 2020) have been proposed in recent years.",
"First-order meta-learning (Finn et al., 2017; Nichol et al., 2018) is a widely-used method in practice because it is easy to implement, eliminates computationally-intensive second-order derivatives in MAML, and achieves state-of-the-art performance.",
"Although meta-learning including first-order meta-learning has shown promising performances in many applications (Triantafillou et al., 2019), it still somewhat struggles to learn on diverse task distributions (Triantafillou et al., 2020; Rebuffi et al., 2017; Yu et al., 2020c).",
"For first-order meta-learning, it consists of task adaptation and meta updates.",
"Task adaptation aims to obtain a task-specific model for each task by performing several optimization steps based on the current meta model.",
"Then, the meta update aggregates the gradient information of task-specific models to obtain a new meta model.",
"It has been observed in many previous works (Zhao et al., 2018; Karimireddy et al., 2019; Charles and Konecn`y, 2020) that local update methods, including first-order meta-learning, performing multiple optimization steps on local data can lead to overfitting to atypical local data.",
"In the context of first-order meta-learning, due to the large variance of the gradient estimator, task adaptation will drive task-specific models to move away from each other, resulting in that the gradients used in meta update have diverse directions.",
"Furthermore, since the difference in gradient magnitudes will also be large, the task with a much larger gradient in magnitude will dominate the task adaptation.",
"As a result, the meta update will overfit to this dominating task.",
"Similar issues have been studied in multi-task learning: Yu et al. (2020b) showed that conflicting gradients, i.e., two gradients that have a negative cosine similarity, can lead to significantly degraded performance when the difference in gradient magnitudes is large.",
"The above gradient variance issue, i.e., the large variance from the gradient estimator, is significant in NLP applications since many NLP datasets have diverse properties, and the tasks for meta-learning in NLP applications also have their unique characteristics.",
"For example, the MultiWOZ dataset (Budzianowski et al., 2018) for dialog systems and the Spider dataset (Yu et al., 2018) for semantic parsing, both consist of complex and cross domain examples.",
"To address the aforementioned gradient variance issue in NLP applications when applying first-order meta-learning approaches, we propose a variance-reduced first-order meta-learning (VFML) algorithm.",
"The key idea of our algorithm is that we leverage a novel variance reduction term in the task adaptation steps to reduce the variance of the gradient estimator.",
"We evaluate our proposed method on two NLP applications: few-shot text classification and domain adaptation in multi-domain dialog state tracking.",
"We experiment on several benchmark datasets, finding that our method produces models that can achieve better performances than the baseline Reptile (Nichol et al., 2018).",
"Let T = {T i } i I be the set of all tasks and I be the task index set.",
"Suppose T i is drawn from T with probability p i , and we use p to denote the probability distribution over T .",
"Our goal is to find an initial model such that it will have a small loss on a new task T i after a few steps of updates.",
"Therefore, we want to solve the following problem min R d E i p [ L i ( f Ki ( ))] , (2.1) where f Ki ( ) is the function that updates the initial model parameter for K steps on task T i .",
"To solve the problem in equation 2.1, MAML uses task adaptation, i.e., f Ki ( ) , and the following meta update based on sampled tasks",
"where is the step size, I b is the index set of the sampled tasks, and f Ki ( ) is usually K steps of gradient descent.",
"A more efficient and effective MAML variant is the first-order method (Finn et al., 2017; Nichol et al., 2018).",
"For instance, Finn et al. (2017) proposed to replace the Hessian matrix in meta update with an identity matrix, which leads to First-order MAML (FOMAML).",
"Nichol et al. (2018) proposed Reptile to further simplify FO-MAML by using the the following meta update = (cid:80) i I b ( (cid:48) i ) / |I b | , where (cid:48) i = f Ki ( ) .",
"In this work, we propose a new method based on Reptile to improve the performance of first-order meta-learning methods.",
"Our proposed algorithm for meta-learning is illustrated in Algorithm 1.",
"In the following discussion, we use L i, B it to denote the mini-batch stochastic gradient for task i and B it is the sample index set.",
"The main idea of our method is to construct Algorithm 1 Variance-reduced First-order Meta-learning (VFML) Algorithm input initialization 0 , initial variance reduction term v 0 , step size: , , iteration numbers: T , K , parameters: , 1: for t = 0 , 1 , . . . , T 1 do 2: Sample Tasks I t I with |I t | = m 3: for i I t do 4: w i = Task Adaptation ( t , v t , , K, , i ) 5: end for 6: Update t +1 = t + 1 m (cid:80) i I t (cid:0) w i t (cid:1) 7: Update v t +1 = 1 m (cid:80) i I t L i, B it ( t +1 ) + (1 ) (cid:0) v t 1 m (cid:80) i I t L i, B it ( t ) (cid:1) 8: end for output T a variance reduction term v , which is motivated by the stochastic recursive momentum technique proposed in (Cutkosky and Orabona, 2019).",
"v will be used in the task adaptation step (line 4 in Algorithm 1) to reduce the variance of the gradient estimator.",
"More specifically, we use the gradient estimator g ik = L i, B ik ( w ik ) + (1 ) v (line 3 in Algorithm 2) to update the task-specific model for task T i .",
"g ik is a weighted sum of the mini-batch stochastic gradient L i, B ik ( w ik ) and the variance reduction term v , and (1 ) is the weight for v .",
"When = 1 , it reduces to Reptile.",
"We initialize the variance reduction term v 0 by averaging the gradients from a set of tasks which are randomly sampled and computed using the initialization 0 .",
"L i ( ) (cid:13)(cid:13) 2 2 21 and E (cid:13)(cid:13) L i ( ) L ( ) (cid:13)(cid:13) 2 2 22 , where L ( ) = E [ L i ( )] .",
"21 is the variance of using L i, B ik to estimate the gradient L i for task T i .",
"22 is the variance introduced by the dissimilarity between tasks.",
"Intuitively, the variance of the gradient estimator in Reptile, i.e., E (cid:13)(cid:13) L i, B ik ( w ik ) L ( w ik ) (cid:13)(cid:13) 2 2 , will be determined by the following quantity O ( 21 + 22 ) .",
"In addition, the variance of the gradient estimator in VFML, i.e., E (cid:13)(cid:13) g ik L ( w ik ) (cid:13)(cid:13) 2 2 , will be determined by O (cid:0) 21 + 2 22 + (1 ) 2 ( 2 (cid:48) 2 2 + (1 ) 2 2 t +1 ) (cid:1) , where 2 t +1 = E (cid:107) t +1 t (cid:107) 22 , (cid:48) 22 = E (cid:13)(cid:13) (cid:80) i I t L i ( t ) /m L ( t ) (cid:13)(cid:13) 2 2 .",
"If we have a large number of examples for each task, then 21 will be small, and the variance of the gradient estimator in Reptile will be determined by O ( 22 ) .",
"When we have very diverse task distributions, 22 will be large, which can lead to a significant degradation in performance.",
"However, for VFML, the variance will be dominated by O (cid:0) 2 22 +(1 ) 2 ( 2 (cid:48) 2 2 +(1 ) 2 2 t +1 ) (cid:1) .",
"Since (cid:48) 22 can be much smaller than 22 and 2 t +1 goes to zero as our algorithm convergences, the variance of g ik can be much smaller than 22 by choosing appropriate parameters , .",
"Therefore, the role of the variance reduction term v is to alleviate the variance introduced by the task dissimilarity.",
"We evaluate our proposed method on one simulation experiment and two NLP applications: text classification and dialog state tracking.",
"wave regression (Finn et al., 2017; Nichol et al., 2018).",
"Our goal is to learn a neural network that can quickly adapt to a given sine wave function after a few adaptation steps.",
"We follow the same experimental setup in the previous work (Nichol et al., 2018), and we compare our proposed method with Reptile (Nichol et al., 2018) in terms of the mean square error between the output of the adapted neural network and the sine wave function.",
"Parameters: For both methods, we sample 10 tasks at each outer loop iteration and use 10 examples, i.e., b = 10 , to compute the mini-batch stochastic gradients.",
"We choose K = 3 , = 0 .",
"01 for the task adaptation step, and choose = 1 for the meta update.",
"For our proposed method, we choose by searching the grid { 0 .",
"1 , 0 .",
"3 , 0 .",
"5 , 0 .",
"7 , 0 .",
"9 } and by { 0 .",
"05 , 0 .",
"1 , 0 .",
"3 , 0 .",
"5 , 0 .",
"7 , 0 .",
"9 , 1 .",
"0 } .",
"Results: Figures",
"1(a) and",
"1(b) shows the training and test accuracy versus the number of iterations for our method and Reptile.",
"Figures",
"1(c) and",
"1(d) illustrate the adaptation results for both methods.",
"Figures",
"1(a) and",
"1(b) show that VFML can reduce the iteration numbers and achieve better performance in terms of training and test accuracy than Reptile.",
"Figures",
"1(c) and",
"1(d) illustrates that our proposed method can quickly converge to a given sine wave function.",
"These results validate the superiority of VFML.",
"We consider two text classification datasets: Amazon (He and McAuley, 2016) and FewRel (Han et al., 2018).",
"For Amazon dataset, it consists of customer reviews from 24 product categories, and we follow the previous work (Bao et al., 2019) to sample 1000 reviews for each category.",
"For this dataset, our goal is to classify a given review into its corresponding product category.",
"FewRel is a relation classification dataset, and each example is a sentence annotated with a head entity, a tail entity, and their relation.",
"For FewRel , we aim to predict the relation between the head and tail in a given sentence.",
"We follow the experimental setup in previous work (Bao et al., 2019).",
"We consider the N -way K -shot setting, where N is the number of classes in each task, and K is the number of examples in the class.",
"model proposed in (Bao et al., 2019).",
"More specifically, we use a CNN as the embedding model to generate the input representation and a one-hidden-layer neural network with 300 units and ReLU activation as the classifier.",
"Parameters: For both Reptile and our method, we choose K by searching the grid { 1 , 3 , 5 , 10 } , by { 0 .",
"01 , 0 .",
"05 , 0 .",
"1 , 0 .",
"3 , 0 .",
"5 } for the task adaptation step, and choose = 1 for the meta update.",
"For our proposed method, we choose by searching the grid { 0 .",
"1 , 0 .",
"3 , 0 .",
"5 , 0 .",
"7 , 0 .",
"9 } and by { 0 .",
"05 , 0 .",
"1 , 0 .",
"3 , 0 .",
"5 , 0 .",
"7 , 0 .",
"9 , 1 .",
"0 } .",
"Results: Table 1 summarizes the comparisons of different methods on Amazon and FewRel datasets for text classification.",
"The results are averaged over 10 runs.",
"In the 5-way 5-shot setting, our proposed method can achieve 1% and 0 .",
"8% improvements in terms of classification accuracy on Amazon and FewRel datasets, respectively.",
"5-way 50-shot settings.",
"These two settings are used to evaluate our proposed method's performance when the variance of the gradient estimator is dominated by the variance introduced by the task dissimilarity.",
"The results show that, when we have 50 shots, our proposed method can achieve 2 .",
"02% and 2 .",
"5% gains on classification accuracy on Amazon and FewRel datasets, respectively.",
"The results in 10 -shot and 50 -shot settings validate the effectiveness of the variance reduction term, i.e., it is used to alleviate the variance of the gradient estimator introduced by the task dissimilarity.",
"We also compare our proposed method with the MAML and FO-MAML methods proposed in (Finn et al., 2017) on Amazon and FewRel datasets in 5-way 5-shot settings.",
"Table 2 shows that our method outperforms these two baselines.",
"We also test our VRML method on the task of multi-domain dialog state tracking (DST).",
"We experiment on the MultiWOZ (Budzianowski et al., 2018), a large scale, multi-domain human-human dialog state tracking dataset.",
"It had been introduced to help facilitate research to solve the DST problem.",
"This corpus contains 8438 multi-turn dialogues with on average of 13.7 turns per dialogue.",
"Multi-domain dialog state tracking in MultiWOZ is a challenging task for meta-learning, due to the differences in dialogues between each domain.",
"For example, the dialog states, and user utterances for hotel and train are quite different.",
"We use the most frequent five domains: ( restaurant , hotel , attraction , taxi , train ).",
"We follow the same setup in (Huang et al., 2020) by training on three source domains: hotel , restaurant and train , and testing on 1% of the target domains: ( taxi , attraction ).",
"We compare our method with Reptile and the train-from-scratch, i.e., we train a randomly initialized model using data from the target domain.",
"We use joint and slot accuracy (Wu et al., 2019) to evaluate different methods.",
"Joint accuracy measures the accuracy of dialogue states, where a dialogue state is correctly predicted only if all the values for ( domain , slot ) pairs are correctly predicted.",
"Slot accuracy measures the accuracy of each ( domain , slot , value ) tuples for the dialog state.",
"Baseline models: We quantify the benefits of different meta-learning algorithms by comparing the results on top of the TRADE model architecture (Wu et al., 2019).",
"TRADE is an encoder-decoder model utilizing two BiGRUs to encode sequences of dialogue turns, and then generating corresponding (domain, slot, value) tuples.",
"We set the hidden size of the encoder and decoder to be 400 and use Glove embedding (Pennington et al., 2014).",
"Parameters: For both Reptile and our method, we choose K by searching the grid { 1 , 3 , 5 } , by { 0 .",
"01 , 0 .",
"05 , 0 .",
"1 } for the task adaptation step, and choose = 1 for the meta update.",
"For our proposed method, we choose by searching the grid { 0 .",
"1 , 0 .",
"3 , 0 .",
"5 , 0 .",
"7 , 0 .",
"9 } and by { 0 .",
"05 , 0 .",
"1 , 0 .",
"3 , 0 .",
"5 , 0 .",
"7 , 0 .",
"9 , 1 .",
"0 } .",
"Following the previous work (Wu et al., 2019; Huang et al., 2020), we set batch size to 32, dropout rate to 0.2.",
"For the finetune step, we search the batch size by the grid { 4 , 8 , 16 , 32 } and setp size by { 0 .",
"01 , 0 .",
"05 , 0 .",
"1 } .",
"We early stop the training of both methods when the validation accuracy converges.",
"Results: Table 3 reports the joint and slot accuracy for different methods.",
"The results show that, when we have 1% of the target domain data for fine-tuning, our proposed method can achieve 2.44% and 1.01% improvements in slot and joint accuracy compared with Reptile for Attraction .",
"Compared with train-from-scratch, we can obtain 15.27% and 10.59% gains in slot and joint accuracy.",
"Similar improvements can be obtained for Taxi .",
"Analysis: We also consider the case when we have more target domain data for finetuning.",
"Table 3 shows that the more target domain data we have, the more gains our method can obtain.",
"For example, when we have 10% data for Taxi , our method can achieve 6.64%/13.13% improvements in slot/joint accuracy compared with train-from-scratch.",
"Compared with Reptile, we can obtain 3.32%/1.09% gains in slot/joint accuracy.",
"Note that there is no change of performance for the train-from-scratch method on 1%/5%/10% Taxi data, due to the small size of the Taxi dataset.",
"If we train on the entire Taxi data, the joint/slot accuracy would be 75.61%/89.61%.",
"These results show that meta-learning indeed helps when the target data is small, and VRML is very effective on using the small amount of target data compared to Reptile.",
"We propose a novel first-order meta-learning method to reduce the variance of the gradient estimator used in task adaptation for NLP tasks.",
"We show in both few-shot text classification and DST that our method can achieve better performance than existing methods.",
"It is interesting to further study domain adaptation methods built upon our new algorithm."
] | [
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective"
] |
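The row above discusses Reptile-style first-order meta-learning followed by fine-tuning on a small target domain. As an illustrative sketch only (not the paper's VRML implementation; `inner_sgd`, `reptile_update`, the learning rates, and the toy quadratic tasks are all invented here for illustration), a first-order meta-update of this family can be written as:

```python
import numpy as np

def inner_sgd(theta, grad_fn, lr=0.01, steps=5):
    """Adapt the shared initialization to one task with a few SGD steps."""
    phi = theta.copy()
    for _ in range(steps):
        phi -= lr * grad_fn(phi)
    return phi

def reptile_update(theta, task_grad_fns, outer_lr=0.1):
    """First-order (Reptile-style) outer step: move the shared
    initialization toward the mean of the task-adapted parameters."""
    adapted = [inner_sgd(theta, g) for g in task_grad_fns]
    direction = np.mean([phi - theta for phi in adapted], axis=0)
    return theta + outer_lr * direction

# Toy quadratic tasks: each task's loss minimum sits at `target`,
# so the per-task gradient is simply (params - target).
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
grad_fns = [lambda p, t=t: p - t for t in targets]
theta = reptile_update(np.zeros(2), grad_fns)
```

A variance-reduced variant such as the VRML method the row describes would change the gradient estimator used in the inner loop; the shape of the outer update stays the same.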
[
"Negation and uncertainty modeling are longstanding tasks in natural language processing.",
"Linguistic theory postulates that expressions of negation and uncertainty are semantically independent from each other and the content they modify.",
"However, previous works on representation learning do not explicitly model this independence.",
"We therefore attempt to disentangle the representations of negation, uncertainty, and content using a Variational Autoencoder.",
"We find that simply supervising the latent representations results in good disentanglement, but auxiliary objectives based on adversarial learning and mutual information minimization can provide additional disentanglement gains.",
"In formal semantics, negation and uncertainty are operators whose semantic functions are independent of the propositional content they modify (Cann, 1993a,b).",
"That is, it is possible to form fluent statements by varying only one of these aspects while leaving the others the same.",
"Negation, uncertainty, and content can thus be viewed as disentangled generative factors of knowledge and belief statements (see Figure 1).",
"Disentangled representation learning (DRL) of factors of variation can improve the robustness of representations and their applicability across tasks (Bengio et al., 2013).",
"Specifically, negation and uncertainty are important for downstream NLP tasks such as sentiment analysis (Benamara et al., 2012; Wiegand et al., 2010), question answering (Yatskar, 2019; Yang et al., 2016), and information extraction (Stenetorp et al., 2012).",
"We make our implementation and data available at https://github.com/jvasilakes/disentanglement-vae . Specifically, the propositional content can be represented by a variable, such as p.",
"Disentangling negation and uncertainty can therefore provide robust representations for these tasks, and disentangling them from content can assist tasks that rely on core content preservation such as controlled generation (Logeswaran et al., 2018) and abstractive summarization (Maynez et al., 2020).",
"Still, no previous work has tested whether negation, uncertainty, and content can be disentangled, as linguistic theory suggests, although previous works have disentangled attributes such as syntax, semantics, and style (Balasubramanian et al., 2021; John et al., 2019; Cheng et al., 2020b; Bao et al., 2019; Hu et al., 2017; Colombo et al., 2021).",
"To fill this gap, we aim to answer the following research questions: RQ1: Is it possible to estimate a model of statements that upholds the proposed statistical independence between negation, uncertainty, and content?",
"RQ2: A number of existing disentanglement objectives have been explored for text, all giving promising results.",
"How do these objectives compare for enforcing disentanglement on this task?",
"In addressing these research questions, we make the following contributions:",
"1. Generative Model: We propose a generative model of statements in which negation, uncertainty, and content are independent latent variables.",
"Following previous works, we estimate this model using a Variational Autoencoder (VAE) (Kingma and Welling, 2014; Bowman et al., 2016) and compare existing auxiliary objectives for enforcing disentanglement via a suite of evaluation metrics.",
"2. Simple Latent Representations: We note that negation and uncertainty have a binary function (positive or negative, certain or uncertain).",
"We therefore attempt to learn corresponding 1-dimensional latent representations for these variables, with a clear separation between each value.",
"3. Data Augmentation: Datasets containing negation and uncertainty annotations are relatively small (Farkas et al., 2010; Vincze et al., 2008; Jiménez-Zafra et al., 2018), resulting in poor sentence reconstructions according to our preliminary experiments.",
"To address this, we generate weak labels for a large number of Amazon and Yelp reviews using a simple naïve Bayes classifier with bag-of-words features trained on a smaller dataset of English reviews annotated for negation and uncertainty (Konstantinova et al., 2012) and use this to estimate our model.",
"Details are given in Section 4.1.1.",
"We note that, in contrast to other works on negation and uncertainty modeling, which focus on token-level tasks of negation and uncertainty cue and scope detection, this work aims to learn statement-level representations of our target factors, in line with previous work on text DRL.",
"We here provide relevant background on negation and uncertainty processing, disentangled representation learning in NLP, as well as discussion of how this study fits in with previous work.",
"Negation and uncertainty help determine the asserted veracity of statements and events in text",
"(Saurí and Pustejovsky, 2009; Thompson et al., 2017; Kilicoglu et al., 2017), which is crucial for downstream NLP tasks that deal with knowledge and belief.",
"For example, negation detection has been shown to provide strong cues for sentiment analysis (Barnes et al., 2021; Ribeiro et al., 2020) and uncertainty detection assists with fake news detection (Choy and Chong, 2018).",
"Previous works on negation and uncertainty processing focus on the classification tasks of cue identification and scope detection (Farkas et al., 2010) using sequence models such as conditional random fields (CRFs) (Jiménez-Zafra et al., 2020; Li and Lu, 2018), convolutional and recurrent neural networks (CNNs and RNNs) (Qian et al., 2016; Adel and Schütze, 2017; Ren et al., 2018), LSTMs (Fancellu et al., 2016; Lazib et al., 2019), and, most recently, transformer architectures (Khandelwal and Sawant, 2020; Lin et al., 2020; Zhao and Bethard, 2020).",
"While these works focus mostly on learning local representations of negation and uncertainty within a sentence, we attempt to learn global representations that encode high-level information regarding the negation and uncertainty status of statements.",
"There is currently no agreed-upon definition of disentanglement.",
"Early works on DRL attempt to learn a single vector space in which each dimension is independent of the others and represents one ground-truth generative factor of the object being modeled (Higgins et al., 2016).",
"Higgins et al. (2018) give a group-theoretic definition, according to which generative factors are mapped to independent vector spaces.",
"This definition relaxes the earlier assumption that representations ought to be single-dimensional and formalizes the notion of disentanglement according to the notion of invariance.",
"Shu et al. (2019) decompose the invariance requirement into consistency and restrictiveness, which describe specific ideal properties of the in-variances between representations and generative factors.",
"In addition to independence and invariance, interpretability is an important criterion for disentanglement.",
"Higgins et al. (2016) point out that while methods such as PCA are able to learn independent latent representations, because these are not representative of interpretable factors of variation, they are not disentangled .",
"We therefore want our learned representations to be predictive of meaningful factors of variation.",
"Figure 2: Graph of the generative model.",
"Previous works on DRL for text all use some form of supervision to enforce informativeness of the latent representations.",
"Hu et al. (2017), John et al. (2019), Cheng et al. (2020b), and Bao et al. (2019) all use gold-standard labels of the generative factors, while other works employ similarity metrics (Chen and Batmanghelich, 2020; Balasubramanian et al., 2021).",
"In contrast, our approach uses weak labels for negation and uncertainty generated using a classifier trained on a small set of gold-standard data.",
"These previous works on text DRL all use a similar architecture: a sequence VAE (Kingma and Welling, 2014; Bowman et al., 2016) maps inputs to L distinct vector spaces, each of which are constrained to represent a different target generative factor via a supervision signal.",
"We also employ this overall architecture for model estimation and use it as a basis for experimenting with existing disentanglement objectives based on adversarial learning (John et al., 2019; Bao et al., 2019) and mutual information minimization (Cheng et al., 2020b), described in Section 3.4.",
"However, unlike these previous works, which learn high-dimensional representations of all the latent factors, we aim to learn 1-dimensional representations of the negation and uncertainty variables in accordance with their binary function.",
"We describe our overall model in Section 3.1.",
"Section 3.2 enumerates three specific desiderata for disentangled representations, and sections 3.3 and 3.4 describe how we aim to satisfy these desiderata.",
"We propose a generative model of statements according to which negation, uncertainty, and content are independent latent variables.",
"Figure 3: The proposed architecture corresponding to the L_ELBO + L_INF objective (see Section 3.4).",
"A diagram of our model is given in Appendix A. We use a sequence VAE to estimate this model (Kingma and Welling, 2014; Bowman et al., 2016).",
"Unlike a standard autoencoder, the VAE imposes a prior distribution on the latent representation space Z (usually a standard Gaussian) and replaces the deterministic encoder with a learned approximation of the posterior q ( z | x ) parameterized by a neural network.",
"In addition to minimizing the loss between the input and reconstruction, as in a standard AE, the VAE uses an additional KL divergence term to keep the approximate posterior close to the prior distribution.",
"In our implementation, three linear layers map the final hidden state of a BiLSTM encoder to three sets of Gaussian distribution parameters (μ, σ), which parameterize the negation, uncertainty, and content latent distributions ℓ ∈ {n, u, c}, respectively.",
"Because we map each input to three distinct latent spaces, we include three KL divergence terms in the Evidence Lower BOund (ELBO) training objective, given in Equation (1).",
"L_ELBO(θ, φ) = −E_{q_φ(z|x)}[log p_θ(x|z)] + Σ_{ℓ∈{n,u,c}} λ_ℓ KL[q_φ^(ℓ)(z^(ℓ)|x) || p(z^(ℓ))] (1), where φ denotes the encoder's parameters, θ the decoder's parameters, p(z^(ℓ)) is a standard Gaussian prior, and the hyper-parameters λ_ℓ weight the KL divergence term for each latent space ℓ.",
"The latent representations z are sampled from normal distributions defined by these parameters using the reparameterization trick (Kingma and Welling, 2014), i.e., z^(ℓ) = μ^(ℓ) + σ^(ℓ) ⊙ ε, with ε ~ N(0, I).",
"The latent representations are then concatenated z = [ z ( n ) ; z ( u ) ; z ( c ) ] and used to initialize an LSTM decoder, which aims to reconstruct the input.",
"A visualization of our architecture is given in Figure 3 and implementation details are given in Appendix E. We use 1-dimensional negation and uncertainty spaces and a 62-dimensional content space for a total latent size of 64.",
"Notably, we do not supervise the content space, unlike previous works (John et al., 2019; Cheng et al., 2020b), which supervise it by predicting the bag of words of the input.",
"Such a supervision technique would hinder disentanglement by encouraging the content space to be predictive of the negation and uncertainty cues.",
"Therefore, in our model we define three latent spaces ℓ ∈ {n, u, c} but use signals from only two target generative factors k ∈ {n, u}.",
"We aim to satisfy the following desiderata of disentangled representations put forth by previous works.",
"1. Informativeness : the representations should be predictive of the ground-truth generative factors (Higgins et al., 2016; Eastwood and Williams, 2018);",
"2. Independence : the representations for each generative factor in question should lie in independent vector spaces (Higgins et al., 2018);",
"3. Invariance: the mapping from the data to the representations should be invariant to changes in other generative factors (Higgins et al., 2018; Shu et al., 2019). The following sections detail how our model enforces these desiderata.",
"Following Eastwood and Williams (2018), we measure the informativeness of a representation by its ability to predict the corresponding generative factor.",
"Similar to previous works on DRL for text (John et al., 2019; Cheng et al., 2020b), we train supervised linear classifiers on each latent space and back-propagate the prediction error.",
"Thus, in addition to the ELBO objective in Equation (1), we define informativeness objectives for negation and uncertainty.",
"L_INF^(k) = BCE(y^(k), ŷ^(k)), where y^(k) is the true label for factor k, ŷ^(k) is the classifier's prediction, ψ^(k) are the parameters of this classifier, and BCE is the binary cross-entropy loss.",
"1. Informativeness (INF): This is based on the hypothesis that if negation, uncertainty, and content are independent generative factors, the informativeness objective described in Section 3.3 will be sufficient to drive independence and invariance.",
"This approach was found to yield good results on disentangling style from content by Balasubramanian et al. (2021).",
"2. Adversarial (ADV): The latent representations should be predictive of their target generative factor only.",
"Therefore, inspired by John et al. (2019), we train additional adversarial classifiers on each latent space that try to predict the values of the non-target generative factors, while the model attempts to structure the latent spaces such that the predicted distributions of these classifiers are as uninformative as possible.",
"3. Mutual-information minimization (MIN): A natural measure of independence between two variables is mutual information (MI).",
"Therefore, this objective minimizes an upper-bound estimate of the MI between each pair of latent spaces, following (Cheng et al., 2020a,b; Colombo et al., 2021).",
"Implemented as single-layer feed-forward neural networks with sigmoid activation.",
"Adversarial Objective.",
"The adversarial objective (ADV) consists of two parts: 1) adversarial classifiers which attempt to predict the value of all non-target factors from each latent space; 2) a loss that aims to maximize the entropy of the predicted distribution of the adversarial classifiers.",
"For a given latent space ℓ, a set of linear classifiers predict the value of each non-target factor k ≠ ℓ, respectively, and we compute the binary cross-entropy loss for each.",
"where ψ^(ℓ,k) are the parameters of the adversarial classifier predicting factor k from latent space ℓ, and ŷ^(ℓ,k) is the corresponding prediction.",
"For example, we introduce two such classifiers for the content space ℓ = c, one to predict negation and one to predict uncertainty, k ∈ {n, u}.",
"Importantly, the prediction errors of these classifiers are not back-propagated to the rest of the VAE.",
"We impose an additional objective for each adversarial classifier, which aims to make its predicted distribution as close to uniform as possible.",
"We do this by maximizing the entropy of the predicted distribution (Equation (4)) and back-propagating the error, following John et al. (2019) and Fu et al. (2018).",
"As the objective is to maximize this quantity, the total adversarial objective is",
"The ADV objective aims to make the latent representations as uninformative as possible for nontarget factors.",
"Together with the informativeness objective, it pushes the representations to specialize in their target generative factors.",
"MI Minimization Objective.",
"The MI minimization (MIN) objective focuses on making the distributions of each latent space as dissimilar as possible.",
"We minimize the MI between each pair of latent spaces according to Equation (6).",
"where I_CLUB(z^(i); z^(j)) is the Contrastive Learning Upper-Bound (CLUB) estimate of the MI (Cheng et al., 2020a).",
"Specifically, we introduce a separate neural network to approximate the conditional variational distribution p ( i | j ) , which is used to estimate an upper bound on the MI using samples from the latent spaces.",
"The full model objective, along with the relevant hyperparameter weights, is given in Equation (7).",
"Our hyperparameter settings and further implementation details are given in Appendix E. L = L_ELBO + λ_INF L_INF + λ_ADV L_ADV + λ_MIN L_MIN (7). In the sections that follow, we experiment with different subsets of the terms in the full objective and their effects on disentanglement.",
"We train a model using only the ELBO objective as our disentanglement baseline.",
"We describe our datasets, preprocessing, and data augmentation methods in Section 4.1.",
"Section 4.2 describes our evaluation metrics and how these target the desiderata for disentanglement given in Section 3.2.",
"We use the SFU Review Corpus (Konstantinova et al., 2012) as our primary dataset.",
"This corpus contains 17,000 sentences from reviews of various products in English, originally intended for sentiment analysis, annotated with negation and uncertainty cues and their scopes.",
"Many of the SFU sentences are quite long ( > 30 tokens), and preliminary experiments revealed that this results in poor reconstructions.",
"We therefore took advantage of SFU's annotated statement conjunction tokens to split the multi-statement sentences into single-statement ones in order to reduce the complexity and increase the number of examples.",
"Also to reduce complexity, we remove sentences > 15 tokens following previous work (Hu et al., 2017), resulting in 14,000 sentences.",
"We convert all cue-scope annotations to statement-level annotations.",
"Multi-level uncertainty annotations have been shown to be rather inconsistent and noisy, achieving low inter-annotator agreement compared to binary ones (Rubin, 2007).",
"We therefore binarize the certainty labels following Zerva (2019).",
"Despite the efforts above, we found the SFU corpus alone was insufficient for obtaining fluent reconstructions.",
"We therefore generated weak negation and uncertainty labels for a large amount of additional Amazon and Yelp review data using two naïve Bayes classifiers with bag-of-words (BOW) features.",
"These classifiers were trained on the SFU training split to predict sentence level negation and uncertainty, respectively.",
"The Amazon and Yelp datasets fit the SFU data distribution well, as they also consist of user reviews, and have been used in previous works on text DRL with good results (John et al., 2019; Cheng et al., 2020b).",
"Statistics for the combined SFU+Amazon dataset are summarized in Appendix C. In Appendix D, we provide a complementary evaluation on a combined SFU+Yelp dataset.",
"Evaluating disentanglement of the learned representations requires complementary metrics of the desiderata given in Section 3.2: informativeness, independence, and invariance.",
"For measuring informativeness, we report the precision, recall, and F1 score of a logistic regression model trained to predict each of the ground-truth labels from each latent space, following Eastwood and Williams (2018).",
"We also report the MI between each latent distribution and factor, as this gives additional insight into informativeness.",
"For measuring independence, we use the Mutual Information Gap (MIG) (Chen et al., 2018).",
"The MIG lies in [0 , 1] , with higher values indicating a greater degree of disentanglement.",
"Details of the MIG computation are given in Appendix E.1.",
"We evaluate invariance by computing the Pearson's correlation coefficient between each pair of latent variables using samples from the predicted latent distributions.",
"It is also important to evaluate the ability of the models to reconstruct the input.",
"Specifically, we target reconstruction faithfulness (i.e., how well the input and reconstruction match) and fluency.",
"Implementation details and evaluation of these classifiers are given in Appendix B. Due to computational constraints, we randomly sample 100,000 weakly annotated Amazon examples for the final dataset.",
"Preliminary experiments with larger numbers of Amazon examples suggested that 100k is sufficient for our purposes.",
"We evaluate faithfulness in terms of the ability of the model to preserve the negation, uncertainty, and content of the input.",
"Negation and uncertainty preservation are measured by re-encoding the reconstructions, predicting the negation and uncertainty statuses from the re-encoded latent values, and computing precision, recall, and F1 score against the ground-truth labels.",
"Following previous work, we approximate a measure of content preservation in the absence of any explicit content annotations by computing the BLEU score between the input and the reconstruction (self-BLEU) (Bao et al., 2019; Cheng et al., 2020b; Balasubramanian et al., 2021).",
"We evaluate fluency of the reconstruction by computing the perplexities (PPL) under GPT-2, a strong, general-domain language model (Radford et al., 2019).",
"Finally, we evaluate the models' ability to flip the negation or uncertainty status of the input.",
"For each test example, we override the value of the latent factor we want to change to represent the opposite of its ground-truth label.",
"The ability of the model to control negation and uncertainty is measured by re-encoding the reconstructions obtained from the overridden latents, predicting from the re-encoded latent values, and computing accuracy against the opposite of the ground-truth labels.",
"In the following, Section 5.1 reports the disentanglement results and Section 5.2 reports the faithfulness and fluency results.",
"Section 5.3 discusses how these results address the two research questions proposed in Section 1.",
"5.1 Disentanglement: The informativeness of each latent space with respect to each target factor is shown in Table 1, given as predictive performance and MI.",
"The baseline ELBO objective alone fails to disentangle.",
"It puts almost all representation power in the content space, which is nevertheless still uninformative of the negation and uncertainty factors, with low MIs and F1s.",
"The model using the INF auxiliary objective does, however, manage to achieve good disentanglement: the negation and uncertainty spaces are highly informative of their target factors and uninformative of the non-target factors.",
"However, the content space is still slightly predictive of negation and uncertainty, with F1s of 0.684 and 0.649, respectively.",
"Experiments using L_ELBO + L_ADV or L_ELBO + L_MIN did not improve over L_ELBO alone.",
"This improves with the ADV and MIN objectives, where the content space shows near-random prediction performance of negation and uncertainty, with slightly improved prediction performance of the negation and uncertainty spaces for their target factors.",
"These results are corroborated by the visualizations in Figure 4, which show clear separation by classes in the negation and uncertainty latent distributions but no distinction between classes in the content space.",
"Additionally, we note the good predictive performance of the negation and uncertainty latents, despite their simple, 1-dimensional encoding.",
"We see that the INF objective alone results in decent disentanglement, with median MIG values around 0.4.",
"The ADV and MIN objectives give similar increases in MIG, up to 0.55 for both negation and uncertainty, and their combination, ADV+MIN, improves MIG further, up to 0.6, suggesting that these objectives are complementary.",
"We demonstrate the invariance of our models' negation and uncertainty representations in Table 2.",
"While the ELBO objective alone results in highly covariant negation and uncertainty latent distributions (0.706), this drops significantly under INF (0.200), with additional reduction contributed by the ADV and MIN objectives (0.159).",
"Table 3 reports the self-BLEU and perplexity for each disentanglement objective.",
"Example reconstructions are given in Table 9.",
"These results show that the models are quite consistent regarding content reconstruction on the train set, but this consistency drops on dev and test.",
"While the ADV and MIN objectives provide disentanglement gains over INF, the BLEU scores betray a possible trade-off of slightly poorer content preservation, despite better perplexities.",
"While self-BLEU indicates the consistency of the reconstructions with respect to content, it does not necessarily indicate consistency of the reconstructions with respect to negation and uncertainty, which often differ from their opposite value counterparts by a single token.",
"The consistency of the INF and INF+ADV+MIN models with respect to these factors is reported in Table 4.",
"The INF objective alone is only somewhat consistent, with re-encoded F1s of 0.830 and 0.789 for negation and uncertainty respectively.",
"The auxiliary objectives improve these considerably, to 0.914 and 0.893. Table 4 (consistency of the decoder with the ground-truth values of negation and uncertainty, evaluated on the test set): under L_INF, factor n has pass-1 P/R/F1 = 0.969/0.965/0.967 and pass-2 = 0.816/0.848/0.830, and factor u has pass-1 = 0.959/0.961/0.960 and pass-2 = 0.767/0.820/0.789; under L_INF + L_ADV + L_MIN, factor n has pass-1 = 0.959/0.975/0.967 and pass-2 = 0.920/0.908/0.914, and factor u has pass-1 = 0.970/0.981/0.975 and pass-2 = 0.930/0.864/0.893.",
"Table 5 shows the accuracies of each model on the controlled generation task, split by transfer direction.",
"We found that, for both negation and uncertainty, modifying the status of the input works well in only one direction: from negated to positive, and from uncertain to certain.",
"Changing a sentence from negated to positive or from uncertain to certain generally requires the removal of cue tokens (e.g., not, never, might ), while the opposite directions require their addition .",
"Via linear regressions between the content representations and the number of tokens, we found that the content space is highly informative of sentence length, which effectively bars the decoder from adding the required negation or uncertainty tokens.",
"A manual review of correctly and incorrectly modified sentences suggested that the decoder attempts to represent the negation/uncertainty status by modifying tokens in the input, rather than adding or removing them, in order to satisfy the length constraint.",
"When removal is required, the cue token is often simply replaced by new tokens consistent with the representation.",
"The inclusion of negation/uncertainty cue tokens, however, only seems to occur when it is possible to change an existing token to a cue token.",
"Details of the linear regressions as well as example successful/failed transfers are given in Appendix C.3.",
"RQ1: Is it possible to learn disentangled representations of negation, uncertainty, and content?",
"The results suggest that it is indeed possible to estimate a statistical model in which negation, uncertainty, and content are disentangled latent variables according to our three desiderata outlined in Section 3.2.",
"Specifically, Table 1 shows high informativeness of the negation and uncertainty spaces across objectives, and the poor predictive ability of each latent space for non-target factors suggests independence.",
"Figure 5 further suggests independence across models, with median MIG scores in the 0.4-0.6 range.",
"Finally, the low covariances in Table 2 demonstrate the invariance of the latent representations to each other.",
"RQ2: How do the existing disentanglement objectives compare for this task?",
"Notably, the INF objective alone results in good disentanglement according to our three desiderata, suggesting that supervision alone is sufficient for disentanglement.",
"Still, the addition of the ADV and MIN objectives resulted in slightly more informative (Table 1) and independent (Table 2) representations.",
"While the self-BLEU scores reported in Table 3 suggest that content preservation is generally maintained across auxiliary objectives, small dips are seen in those using the MIN objective.",
"This trend also holds for perplexity, suggesting that while the MIN objective can contribute to disentanglement gains, it may result in poorer reconstructions.",
"Motivated by linguistic theory, we proposed a generative model of statements in which negation, uncertainty, and content are disentangled latent variables.",
"We estimated this model using a VAE, comparing the performance of existing disentanglement objectives.",
"Via a suite of evaluations, we showed that it is indeed possible to disentangle these factors.",
"While objectives based on adversarial learning and MI minimization resulted in disentanglement and consistency gains, we found that a decent balance between variable disentanglement and reconstruction ability was obtained by a simple supervision of the latent representations (i.e., the INF objective).",
"Also, our 1-dimensional negation and uncertainty representations achieved high predictive performance, despite their simplicity.",
"Future work will explore alternative latent distributions, such as discrete distributions (Jang et al., 2017; Dupont, 2018), which may better represent these operators.",
"This work has some limitations.",
"First, our model does not handle negation and uncertainty scope, but rather assumes that operators scope over the entire statement.",
"Our model was estimated on relatively short, single-statement sentences to satisfy this assumption, but future work will investigate how operator disentanglement can be unified with models of operator scope in order to apply it to longer examples with multiple clauses.",
"Second, while our models achieved high disentanglement, they fell short on the controlled generation task.",
"We found that this was likely due to the models memorizing sentence length, constraining the reconstructions in a way that is incompatible with the addition of negation and uncertainty cue tokens.",
"Bosc and Vincent (2020) also noticed this tendency for sentence-length memorization in VAEs, and future work will explore their suggested remedies, such as encoder pretraining.",
"This paper is based on results obtained from a project, JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).",
"This work was also supported by the University of Manchester President's Doctoral Scholar award, a collaboration between the University of Manchester and the Artificial Intelligence Research Center, the European Research Council (ERC StG DeepSPIN 758969), and by the Fundação para a Ciência e a Tecnologia through contract UIDB/50008/2020."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"method",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"objective",
"other",
"abstain",
"objective",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"result",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"abstain",
"other",
"other"
] |
[
"In this theme paper, we reflect on the progress of Automated Writing Evaluation ( AWE ), using Ellis Page's seminal 1966 paper to frame the presentation.",
"We discuss some of the current frontiers in the field, and offer some thoughts on the emergent uses of this technology.",
"In a seminal paper on the imminence of automated grading of essays, Page (1966) showed that a high correlation between holistic machine and human scores is possible.",
"He demonstrated automated scoring of 276 essays written by high school students by a system with 32 features, resulting in a multiple R = 0.65 between machine and average human score, after adjustment.",
"He also provided a thoughtful discussion of his ambitions for automated scoring and of the possible objections.",
"Page made the case that automated evaluation of student writing is needed to take some of the evaluation load off the teachers and to provide students with evaluations of their (potentially multiple) drafts with a fast turnaround.",
"He then appealed to the then-burgeoning interest and fascination with machine learning to argue for the feasibility of such an enterprise, namely, that machines can learn how to give the right grades to essays, if trained on an expert-scored sample.",
"As part of the feasibility argument, Page emphasized the need to carefully define the goal so that success can be judged appropriately.",
"The goal is not a real master analysis of the essay the way a human reader would do, but merely an imitation that would produce a correlated result (using what Page called proxes, i.e., approximations).",
"Page considered this goal to be both useful and achievable.",
"Page's minimal desiderata have certainly been achieved: AWE systems today can score in agreement with the average human rater, at least in some contexts.",
"For example, Pearson's Intelligent Essay Assessor (IEA) scores essays written for the Pearson Test of English (PTE) as well as for other contexts: IEA was developed more than a decade ago and has been used to evaluate millions of essays, from scoring student writing at elementary, secondary and university level, to assessing military leadership skills. Besides sole automated scoring as for the PTE, there are additional contexts where the automated score is used in addition to a human score, such as for essays written for the Graduate Record Examination (GRE) or for the Test of English as a Foreign Language (TOEFL).",
"Does this mean that the problem of AWE is solved?",
"Well, not exactly.",
"Page did anticipate some difficulties for AWE systems.",
"It is instructive to see where we are with those.",
"What about the gifted student who is offbeat and original?",
"Won't he be overlooked by the computer?",
"(Page, 1966)",
"An offbeat, original writer may fare no worse with the computer than with an (average) human reader, because originality is a subjective construct.",
"Thus, once research uncovers objective and measurable aspects of original writing, relevant features can be added into an AWE system; finding such aspects, as well as measuring them, is still work in progress.",
"While no current operational scoring system we are aware of is specifically looking for originality, research into aspects of writing that are often considered original is taking place.",
"For example, using data from different tests, Beigman Klebanov and Flor (2013a) and Beigman Klebanov et al. (2018) found that the extent of metaphor use (proportion of metaphorically used words in an essay) correlates with essay quality; Littlemore et al. (2014) likewise found that more skilled writers use metaphor more often.",
"Song et al. (2016) observed a positive correlation between essay quality and the use of parallelism (syntactically similar and semantically related constructions, often used for emphasis or to enhance memorability) in student essays.",
"Some pioneering work has been done on comparing writing that is recognized as outstanding (through receiving prestigious prizes) vs writing that is merely good in the domain of scientific journalism (Louis and Nenkova, 2013).",
"Once various indicators of originality can be successfully measured, additional work may be necessary to incorporate these measurements into scoring ecosystems since such indicators may only occur infrequently.",
"One way to achieve this would be to compute a macro feature that aggregates multiple such indicators; another would be to direct such essays to a human rater for review.",
"Won't this grading system be easy to con?",
"Can't the shrewd student just put in the proxies which will get a good grade?",
"(Page, 1966)",
"Certainly, students can and do employ gaming strategies to discover and exploit weaknesses of AWE systems.",
"Such strategies can involve repeating the same paragraphs over and over, varying sentence structure, replacing words with more sophisticated variants, re-using words from the prompt, using general academic words, plagiarizing from other responses or from material found on the Internet, inserting unnecessary shell language (linguistic scaffolding for organizing claims and arguments), and automated generation of essays (Powers et al., 2001; Bejar et al., 2013, 2014; Higgins and Heilman, 2014; Sobel et al., 2014).",
"Such strategies are generally handled by building in filters or flags for aberrant responses (Higgins et al., 2006; Zhang et al., 2016; Yoon et al., 2018; Cahill et al., 2018).",
"However, developers of AWE systems can never anticipate all possible strategies and may have to react quickly as new ones are discovered in use, by developing new AWE methods to identify them.",
"This cat-and-mouse game is particularly rampant in the context of standardized testing (§3.2).",
"This is one of the reasons standardized tests are often not scored solely by an AWE system but also by a human rater.",
"We are talking awfully casually about grading subject matter like history.",
"Isn't this a wholly different sort of problem?",
"Aren't we supposed to see that what the students are saying makes sense, above and beyond their using commas in the right places?",
"(Page, 1966)",
"Indeed, work has been done over the last decade on automated evaluation of written responses for their content and not their general writing quality (Sukkarieh and Bolge, 2008; Mohler et al., 2011; Ziai et al., 2012; Basu et al., 2013; Madnani et al., 2013; Ramachandran et al., 2015; Burrows et al., 2015; Sakaguchi et al., 2015; Madnani et al., 2016; Padó, 2016; Madnani et al., 2017a; Riordan et al., 2017; Kumar et al., 2017; Horbach et al., 2018; Riordan et al., 2019).",
"Scoring for content focuses primarily on what students know, have learned, or can do in a specific subject area such as Computer Science, Biology, or Music, with the fluency of the response being secondary.",
"For example, some spelling or grammar errors are acceptable as long as the desired specific information (e.g., scientific principles, trends in a graph, or details from a reading passage) is included in the response.",
"Note that most current content scoring systems ascertain the \"correctness\" of a response based on its similarity to other responses that humans have deemed to be correct or, at least, high-scoring; they do not employ explicit fact-checking or reasoning for this purpose. Concerns about specific content extend to other cases where the scoring system needs to pay attention to details of genre and task: not all essays are five-paragraph persuasive essays; the specific task might require assessing whether the student has appropriately used specific source materials (Beigman Klebanov et al., 2014; Rahimi et al., 2017; Zhang and Litman, 2018) or assessing narrative (Somasundaran et al., 2018) or reflective (Beigman Klebanov et al., 2016a; Luo and Litman, 2016), rather than persuasive, writing. Page also emphasized the importance of feedback, and considered the following to be the sort of feedback that can almost be programmed right now (original italics): John [. . . ], please correct the following misspellings: believe, receive.",
"Note the ie/ei problem.",
"You overuse the words interesting, good, nice; then was repeated six times.",
"Check trite expressions.",
"All of your sentences are of the subject-verb variety and all are declarative.",
"Reconstruct.",
"Check subject-verb agreement in second paragraph.",
"You had trouble with this in your last paper.",
"Title lacking.",
"Do the following related assignments for tomorrow . . . (Page, 1966) Today a substantial amount of writing feedback, particularly about spelling and grammar, is incorporated into widely used text editors such as Microsoft Word, Google Docs, and Overleaf.",
"Dedicated writing assistance software such as ETS's Writing Mentor (Burstein et al., 2018), ASU's Writing Pal (Roscoe and McNamara, 2013; Allen et al., 2014), ETS's Criterion (Burstein et al., 2004), Grammarly's Writing Assistant, Cambridge English's Write & Improve, Ginger's Essay Checker, TurnItIn's Revision Assistant, Vantage Learning's MY",
"Access!, and Pearson's My Writing Lab Writing Practice Module and WriteToLearn typically go beyond grammar and spelling.",
"Such tools provide feedback on discourse structure (Criterion), topic development and coherence (Writing Mentor), tone (Writing Assistant; Rao and Tetreault (2018)), thesis relevance (Writing Pal), sentence spicing through suggestions of synonyms and idioms (Ginger's Sentence Rephraser), and style- and argumentation-related feedback (Revision Assistant).",
"Can we then put a green check-mark against Page's agenda for automated feedback, which may magnify and disseminate the best human capacities to criticize, evaluate, and correct?",
"Alas, not yet; research on effectiveness of automated feedback on writing is inconclusive (Englert et al., 2007; Shermis et al., 2008; Grimes and Warschauer, 2010; Choi, 2010; Roscoe and McNamara, 2013; Wilson and Czik, 2016; Wilson, 2017; Bai and Hu, 2017; Ranalli et al., 2017).",
"One potential reason for the different outcomes is difference in user populations: feedback that works for L1 writers might not work for L2 writers; differences in ages, skill levels, and the presence or absence of learning disabilities could all play a role.",
"Adjustment of the evaluation methodology to the specific purpose of the writing assistance tool is another issue for consideration; we will return to this issue in §4. So far, Page's outline of the promises and challenges of AWE has provided a good framework for surveying the field.",
"There are also a number of developments that were not mapped on Page's chart; we turn to reviewing those next.",
"In order to advance the work on understanding and assessing writing quality, there is clearly a need for a multi-lingual perspective, since methods developed for one language or dialect may not work for another.",
"This consideration does not appear in Page (1966), yet it is an active line of subsequent work.",
"While most of the research we cited so far has been on English, various aspects of writing evaluation, e.g., annotation, detection of various types of errors, and building AWE systems, have been researched for a variety of languages: Song et al. (2016), Rao et al. (2017), and Shiue et al. (2017) worked with data in Chinese,",
"Lorenzen et al. (2019) in Danish, Berggren et al. (2019) in Norwegian, Amorim and Veloso (2017) in Portuguese, Stymne et al. (2017) in Swedish, Berkling (2018) and Weiss and Meurers (2019) in German, Mezher and Omar (2016) in Arabic, Kakkonen et al. (2005) in Finnish, Loraksa and Peachavanish (2007) in Thai, Lemaire and Dessus (2001) in French, and Ishioka and Kameda (2006) in Japanese.",
"The list is by no means exhaustive; see Flor and Cahill (2020) for a recent review.",
"The use of automated evaluation technology envisioned by Page was as a service to reduce a teacher's burden; to eventually lift from the shoulders of the English teacher, that brave and harried soul, his perpetual pressure of unassigned papers, or his unassuaged guilt.",
"While such use has certainly been made (Burstein et al., 2004; Grimes and Warschauer, 2010), the most visible use case for AWE technology has arguably evolved to be in the context of standardized testing, be it for a test of English such as TOEFL or PTE, a broader, more advanced psychometric examination such as the GRE or GMAT, or for professional licensure such as AICPA or PRAXIS.",
"This development of often high-stakes usage has led to somewhat different challenges from those that Page had anticipated.",
"These challenges generally fall under the purview of the field of educational measurement (Bennett and Bejar, 1998; Clauser et al., 2002; Williamson et al., 2012): How to ensure that the automatic scores assigned to test takers are (1) valid , i.e., they actually measure the skill that the test developer designed the test to measure, (2) defensible , i.e., there is a reasonably clear explanation of why test takers received the particular scores they did, and (3) fair to all the test takers.",
"We address each of these challenges separately below.",
"Note that an additional challenge of high-stakes usage, not elaborated on here, is how to architect scoring systems for large-scale, low-latency use which requires them to be reliable, scalable, flexible, and attentive to the choice of software and application frameworks (Madnani et al., 2018).",
"Page declares that he is not after generating measures of what the true characteristics of the essays are, as ordinarily discussed by human raters, but rather is content to settle for the correlates of",
"these true characteristics.",
"Page seems to do away rather quickly with trying to measure the actual thing: the set of all and only true characteristics of essays, or trins.",
"Why is that?",
"He explains: Notwithstanding the wonders of the computer, we have to develop a strategy in order to tell the computer what to do.",
"The difficult part is the development of this strategy.",
"It is difficult because we do not really understand what the psychological components are in the judgment of essays.",
"It is easy enough to get persons to expound authoritatively on such judgment, but the fuzziness and inutility of their thinking becomes at once evident when the effort is made to translate it into a computer program.",
"(Page, 1966)",
"Page's argument is that we do not know precisely enough what the human raters are doing to try and implement that.",
"Some work on rater cognition has already been done in the early 1950s and 1960s, e.g., in the context of the College Entrance Examination Board's development of the General Composition Test.",
"Diederich et al. (1961) had 53 distinguished individuals from various academic disciplines and beyond (English, Social Science, Natural Science, Law, Writers and Editors, Business Executives) sort student essays in order of merit, with no definition thereof, instructing readers as follows: Use your own judgment as to what constitutes writing ability.",
"Do not assume that we want you to do this or that.",
"We want you to use whatever hunches, intuitions, or preferences you normally use in deciding that one piece of writing is better than another.",
"You need not even act as a representative of your field, since individuals in any field have varying tastes and standards.",
"Readers were also asked to write brief comments on anything that they liked or disliked about the essay, on as many essays as possible.",
"For the study, a sample of U.S. college freshmen were asked to write essays in response to four topics as part of homework.",
"A total of 300 essays addressing two topics were chosen for the analyses, sampled so as to make sure that the full range of abilities is represented (approximated via SAT Verbal scores).",
"The researchers performed a factor analysis on the matrix of pairwise correlations among the readers, and identified groups of readers (factors) that represent five schools of thought about writing quality.",
"Analyzing the comments made by readers who belong to the different schools of thought, they identified five categories that were each prioritized by one of the groups of readers: (1) Ideas (including relevance, clarity, quantity, development, persuasiveness); (2) Form (including spelling, organization, analysis, coherence); (3) Flavor (including style, originality, quality of ideas, interest, sincerity); (4) Mechanics (including punctuation, grammar, sentence structure, phrasing); and (5) Wording (including felicity of expression, comments on specific word choices, cliches). It is based on such findings that general scoring criteria have emerged (Deane, 2013) and morphed into scoring rubrics.",
"These are explicit criteria set by and for human raters for evaluating essays.",
"For example, to score highly on the GRE Issue essay-writing task, one typically: articulates a clear and insightful position on the issue in accordance with the assigned task; develops the position fully with compelling reasons and/or persuasive examples; sustains a well-focused, well-organized analysis, connecting ideas logically; conveys ideas fluently and precisely, using effective vocabulary and sentence variety; and demonstrates superior facility with the conventions of standard written English (i.e., grammar, usage and mechanics), but may have minor errors. In the current practice of automated scoring of standardized tests, developers of a scoring engine often need to provide a construct validity argument in order to show that what the system is measuring is actually aligned with the writing construct: the actual set of writing skills that the test is supposed to measure.",
"Some of the items in human-oriented scoring rubrics are amenable to reasonably direct implementation, often with the help of human-annotated gold standard data, such as misspellings (Flor, 2012; Flor and Futagi, 2013) and specific grammar errors (Rozovskaya and Roth, 2010; Leacock et al., 2014).",
"It might be the case that the system would miss some grammar errors and declare an error where there is none, but a grammar assessment system can be built for identifying specific, observable instances of errors that a human reader focused on Mechanics would likely pick up on.",
"For other items in a rubric, one might need to drill down, articulate a reliable guideline for humans to assess that particular aspect of the essay, annotate a substantial enough number of essays using the guidelines to make machine learning possible, and then find automatically measurable properties of essays that would provide information relevant to that particular aspect of essay quality.",
"This would be a mix between what Page called a prox and a trin , in that a particular, intrinsically interesting, aspect of an essay can be identified reliably by humans, and an automated system can learn how to approximate that particular construct.",
"Such approaches have been developed for organization (well-organized) (Burstein et al., 2003), coherence (well-focused, conveys ideas fluently) (Burstein et al., 2010; Somasundaran et al., 2014), grammaticality (facility with conventions) (Heilman et al., 2014), thesis clarity (clarity) (Persing and Ng, 2013), as well as aspects of scoring rubrics that are more task-specific, e.g., argumentation (clear position, with compelling reasons) (Stab and Gurevych, 2014; Ghosh et al., 2016; Beigman Klebanov et al., 2017; Stab and Gurevych, 2017; Carlile et al., 2018) and use of evidence in the context of source-based writing (Rahimi et al., 2017).",
"Finally, for some rubric items, it is not clear exactly how to reliably translate the relevant aspect of the writing construct into annotations guidelines, and so proxes might be employed.",
"For example, consider Page's argument for capturing diction (appropriate word choice) through word frequency: a writer who can use many different words, including rarer and often semantically nuanced ones, is likelier to make precise word choices than a writer who uses a more limited vocabulary.",
"Attempts to capture topicality (Beigman Klebanov et al., 2016b) or development (Beigman Klebanov and Flor, 2013b; Somasundaran et al., 2016) through properties of vocabulary distribution, without human annotation of topicality and development, exemplify such approaches.",
"Recent research has shown that more sophisticated machine learning models might perform better than simple regression-based models when it comes to predictive accuracy (Chen and He, 2013; Cummins et al., 2016; Taghipour and Ng, 2016; Alikaniotis et al., 2016; Dong et al., 2017; Dasgupta et al., 2018; Jin et al., 2018).",
"However, unlike linear regression, where stakeholders can understand how much each feature used in the model contributed to the predicted score, many of the more complex models are essentially black boxes and do not really lend themselves to post-hoc interpretability (Lipton, 2016).",
"Although interpretability is an active area of research in the machine learning literature (Ribeiro et al., 2016; Koh and Liang, 2017; Doshi-Velez and Kim, 2017), it currently lags behind the research on machine learning methods.",
"For this reason, some automated scoring systems used for high-stakes standardized testing, like ETS's e-Rater (Attali and Burstein, 2006), still use some variant of least squares linear regression as the machine learning model to predict test taker scores.",
"It would probably not be an overstatement to say that fairness in AI is quickly becoming its own sub-field, with a new annual ACM conference on Fairness, Accountability, and Transparency having been inaugurated in 2018 and relevant research appearing at many impactful publication venues, such as Science (Caliskan et al., 2017), NIPS (Pleiss et al., 2017; Kim et al., 2018), ICML (Kearns et al., 2018), ACL (Hovy and Spruit, 2016; Sun et al., 2019; Sap et al., 2019), KDD (Speicher et al., 2018), AAAI (Zhang and Bareinboim, 2018), and others (Dwork et al., 2012; Hajian and Domingo-Ferrer, 2013).",
"There is also recent work that examines fairness and ethical considerations when using AI in education (Mayfield et al., 2019; Gardner et al., 2019).",
"In the context of assessment, fairness considerations dictate that the test reflects the same construct(s) for the entire test-taking population, that scores from the test have the same meaning for all the test-taking population, and that a fair test does not offer undue advantages (or disadvantages) to some individuals because of their characteristics, such as those associated with race, ethnicity, gender, age, socioeconomic status, or linguistic or cultural background, or because of the characteristics of the test itself, e.g., the different prompts shown to different test-takers at test time.",
"The educational measurement community has long been studying fairness in automated scoring (Williamson et al., 2012; Ramineni and Williamson, 2013; AERA, 2014), and recent progress made by the NLP community towards enhancing the usual accuracy-based evaluations with some of these psychometric analyses, from computing indicators of potential biases in automatic scores across various demographic sub-groups to computing new metrics that incorporate measurement theory to produce more reliable indicators of system performance, is quite promising (Madnani et al., 2017b; Loukina et al., 2019).",
"Page's gedankenexperiment on the potential of automated essay evaluation in a classroom context no doubt appeared audacious in 1966, but nothing back then could have prepared his readers for the pervasiveness of technology we are experiencing today.",
"Today you can very literally carry your AWE system in your pocket; you can even carry several.",
"You can use them (almost) at any time and at any place: not only in classrooms, but at home, at work, and even while texting with a friend.",
"This is perhaps the biggest issue that Page's vision did not address: the possibility of universal availability and the concomitant co-optation of a tool beyond its original intended purpose.",
"Much like the calculator invented by Blaise Pascal to help his father with the tedious arithmetic of tax collection ended up freeing people from the burden of figuring out their intended tip at a restaurant through mental arithmetic, a future writing aid meant to help a student improve his argument writing assignment for a class could end up being used by a lawyer for composing his closing argument.",
"Since such usages are on the horizon, we should consider the implications now.",
"Once an invention is out in the open, it is difficult to predict what specific uses people would put it to.",
"How do we go about evaluating the tool if we don't know what the user's goal is?",
"While it isn't possible to anticipate all specific uses, it is possible, we believe, to consider the types of uses that suggest different evaluation strategies.",
"From the current vantage point, we see three types of uses.",
"The first use is where a consequential decision about the writer or a related entity (such as a class or a school) is being made based on the written product.",
"This use is exemplified by the application of automated scoring in a standardized testing context to decide on admissions to an institution of higher education or the granting of a professional license; other cases such as course placement decisions, coursework grading, or even extension of a job offer (where the submission of a writing sample is a part of the job application process) would belong to this type of use.",
"In all such cases, the automated system needs to provide valid and fair scores (or other types of feedback), since the livelihood or professional trajectory of people might depend on the outcome.",
"We have dealt with the particulars of this case in detail in §3.2.",
"The second type of use is one where the focus is on the final product, namely, the actual piece of writing produced following the writer's use of AWE technology.",
"In this context, it does not much matter exactly what part of the final product is due to the human and which part is due to the machine: perhaps the machine only corrected misspellings, or suggested improvements for the human to vet, or maybe the human only contributed the very first ideation, and the machine has done the rest.",
"Perhaps all the human writer contributed was the thesis ('I think school should start at 8 rather than 7') and then clicked 'submit' to get back an essay making a cogent and convincing case in support of the thesis.",
"Mining large textual databases for arguments and evaluating them are feasible today, as recently demonstrated by IBM's Debater technology (https://www.research.ibm.com/artificial-intelligence/project-debater/) (Rinott et al., 2015; Levy et al., 2017; Gretz et al., 2019); introduce some figuration to make it more appealing (Veale et al., 2017; Veale, 2018) and storify it (Riegl and Veale, 2018; Radford et al., 2019), et voilà!",
"This type of use is essentially a machine's augmentation of human ability, and is hinted at, for example, in a customer testimonial for Grammarly: Grammarly allows me to get those communications out and feel confident that I'm putting my best foot forward. Grammarly is like a little superpower, especially when I need to be at 110%.",
"The human presumably remains at the same level of ability, but the product of the machine-human collaboration is superior to what the human alone could have produced.",
"In this context, the primary evaluation criterion for AWE is the fitness of the resulting communication to its purpose, or, at least, some evidence of improvement of the product over the human's first draft.",
"Indeed, measurements of improvement across drafts and evidence of students' making corrections following feedback are often used for evaluation (Attali, 2004; Lipnevich and Smith, 2008; Foltz et al., 2014; Chapelle et al., 2015).",
"Within the product-centered evaluation paradigm, there could be various specific objectives other than the improvement of the holistic quality of the piece of writing; it could be an increase in the speed of production, or the maximization of click-through rate in an advertisement text, for example.",
"The third type of use for AWE software is to help the writer improve his or her writing skill.",
"Scores or other types of feedback are designed, in this context, to provide tutoring or guidance, not for fixing specific problems in the current piece of writing but to help the user learn more general skills that would make the first draft of their next essay better than the first draft of their current essay.",
"Evaluation of a tool through a demonstration of skill improvement, i.e., the efficacy of the tool, is a complicated endeavor.",
"To demonstrate that the observed improvement in skill is specifically due to the use of the writing tool, and not due to something else happening in students' lives and education at the same time, requires a research design that can take other potential sources of variation in outcomes into account, such as the one used in randomized controlled studies often used to assess interventions, including in education (Connolly et al., 2018); some such studies have been performed with respect to AWE tools (Rock, 2007; Wilson and Roscoe, 2020).",
"A tool that allows for monitoring of improvement in skill (even if the improvement is due to other factors such as school instruction or participation in some activity or community) could also be useful in the broader context of skill-oriented use, as the learner and the teacher would be able to tell that improvement is happening, even if we do not know exactly why.",
"Improvement in important aspects of learning such as motivation and self-efficacy could also provide value to the learner (Grimes and Warschauer, 2010; Wilson and Roscoe, 2020).",
"One could argue that an ideal automated writing assistant would support all the different goals at once: help one produce better writing, help one learn, and do both in a psychometrically responsible fashion, so that benefits are not restricted to certain types of users more than others and decision-making based on the outcome of the usage of the",
"tool can also be supported.",
"Indeed, the uses are not necessarily mutually exclusive.",
"For example, the human augmentation and consequential decision use cases could apply at the same time.",
"It is possible that, at some future point in time, spelling will be deemed to lie outside of the construct targeted by the consequential assessment of writing and spell-correction software will be made available to test-takers.",
"However, this would require a careful examination of the impact of correction on the distributions and interpretations of the scores.",
"In particular, Choi and Cho (2018) found that manually-vetted correction of spelling errors yielded a significant increase in scores assigned to the essays by trained raters, and that, even after controlling for the error quantity and quality predictors, the magnitude of the average gain in the score was smaller for responses with higher original scores.",
"Add to the mix the finding that automated spelling correction systems are more accurate on essays that are of better quality to begin with (Flor, 2012), and it is likely that the automated assessment of an automatically spell-corrected version of an essay might show an unexpected relationship with original scores that would need to be closely examined for bias or for an increase in construct-irrelevant variance.",
"It is also possible that the effect of using a tool optimized for one use case could be the opposite of what another use case requires.",
"If 'use it or lose it' has any truth to it, a potential consequence of extensive, consistent, and pervasive human augmentation for producing superior written products is an adverse impact on the skill of the human in the human-machine team.",
"If the near-universal adoption of calculators is any guide, once a skill (long division) can be reliably outsourced to a machine, humans stop valuing it in daily practice and, therefore, might come to lose it in the long run.",
"Spelling is a likely candidate writing skill where reliable access to high-quality correction software could make humans stop worrying about it rather than invest effort in improving it.",
"Many of the tools mentioned in Section 2.2.4 seem to position themselves somewhere between the skill-improvement and the product-improvement use cases, perhaps assuming that quantity will eventually turn into quality, namely, that extensive work on improving the written product might lead to internalization and generalization of the skill to new contexts.",
"This might or might not be true.",
"Feedback that helps the user fix an error quickly by pointing it out and by suggesting a correction might be good in a product-oriented context, but not in a skill-oriented context; letting users pinpoint and fix the error themselves might be a better skill-development strategy (Hyland and Hyland, 2006).",
"According to Graham and Perin's (2007) meta-analysis of writing interventions for adolescents, explicit grammar instruction tended to be ineffective; this finding is cited by the developers of Writing Pal to support their decision to forgo giving explicit feedback on grammar (McNamara et al., 2013), in contrast to most other AWE systems, which do provide such feedback.",
"In his visionary paper from 1966, Ellis Page provided a proof-of-concept demonstration of the possibility of automated grading of essays, as well",
"19 The 1989 Curriculum and Evaluation Standards for School Mathematics from the National Council of Teachers of Mathematics recommends, in the Summary of Changes to Content and Emphasis in K-4 Mathematics (p. 21), decreasing the attention devoted to long division specifically and to complex paper-and-pencil computations in general; the recommendation for grades 5-8 is likewise to decrease emphasis on tedious paper-and-pencil computations (p. 71).",
"https://archive.org/details/curriculumevalua00nati",
"The document has sparked substantial controversy, including with regard to long division (Klein and Milgram, 2000).",
"as outlined some potential challenges to its adoption.",
"Subsequent research and practice have delivered on Page's minimum desiderata for an AWE system; current research is working to address the outstanding challenges of dealing with a variety of languages, content domains, and writing tasks.",
"The field of AWE has thus progressed according to the trajectory charted by Page to a large extent, though not completely.",
"In particular, while Page imagined the main use case of AWE to be in the service of a harried English teacher and his feedback-thirsty students, in reality, the most visible use case has arguably evolved to be automated scoring of essays for standardized testing, which, in turn, has led to new challenges, such as ensuring the validity and fairness of scores.",
"The other development that Page could not anticipate is the sheer pervasiveness of technology in people's daily lives; AWE software can be made available not only in classrooms to be used under the watchful eye of the English teacher, but (almost) anywhere and at any time, including on mobile devices.",
"While it is difficult to predict specific uses people would find for such software, we outlined a number of types of use, depending on the goal:",
"(a) consequential decision making about the user;",
"(b) delivery of the best possible written product in partnership with the user; and",
"(c) assisting the user in improving her writing skills.",
"We believe that we, as researchers, can help users find value in our technology by considering the goals, engaging partners from other relevant disciplines, and designing the tools as well as their evaluations to focus on specific types of use.",
"We would like to thank our colleagues Anastassia Loukina, Jill Burstein, Aoife Cahill, and Isaac Bejar, as well as ACL reviewers and area chair, for their thoughtful comments on earlier drafts of this paper."
] | [
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other"
] |
[
"Recognizing temporal relations among events and time expressions has been an essential but challenging task in natural language processing.",
"Conventional annotation, in which annotators judge temporal relations between mention pairs, puts a heavy load on annotators.",
"In reality, the existing annotated corpora include annotations on only salient event pairs, or on pairs in a fixed window of sentences.",
"In this paper, we propose a new approach to obtain temporal relations from absolute time value (a.k.a. time anchors ), which is suitable for texts containing rich temporal information such as news articles.",
"We start from time anchors for events and time expressions, and temporal relation annotations are induced automatically by computing relative order of two time anchors.",
"This proposal shows several advantages over the current methods for temporal relation annotation: it requires less annotation effort, can induce inter-sentence relations easily, and increases informativeness of temporal relations.",
"We compare the empirical statistics and automatic recognition results with our data against a previous temporal relation corpus.",
"We also reveal that our data contributes to a significant improvement on the downstream time anchor prediction task, demonstrating a 14.1-point increase in overall accuracy.",
"Temporal information extraction is becoming an active research field in natural language processing (NLP) due to the rapidly growing need for NLP applications such as timeline generation and question answering (Llorens et al., 2015; Meng et al., 2017).",
"It has great potential to create many practical applications.",
"For example, SemEval-2015 Task 4 (Minard et al., 2015) collected news articles about a target entity, and the task required participants to automatically order the events involving that entity in a timeline.",
"The timeline representation of news can help people more easily comprehend a mass of information.",
"This work aims to contribute to such timeline applications by extracting temporal information in specific domains like news articles.",
"TimeBank 1 (Pustejovsky et al., 2003) is the first widely used corpus with temporal information annotated in the NLP community.",
"It contains 183 news articles that have been annotated with events, time expressions and temporal relations between events and time expressions.",
"The annotation follows the TimeML 2 specification (Saurí et al., 2006).",
"Along with TimeBank and other temporal information corpora, a series of competitions on temporal information extraction (TempEval-1, 2, 3) (Verhagen et al., 2009, 2010; UzZaman et al., 2012) has attracted growing research efforts.",
"A majority of temporal information corpora adopt temporal links (TLINKs) to encode temporal information in documents.",
"A TLINK denotes a temporal relation between mentions, i.e., events, time expressions and document creation time (DCT) (Setzer, 2002).",
"However, annotating TLINKs is painful work, because the number of annotation candidates is quadratic in the number of mentions in a document.",
"The original TimeBank only annotated those salient mention pairs judged by annotators, while the definition of salient is not necessarily clear.",
"Annotators had to face a complicated twofold task: identifying salient mention pairs, and labeling their temporal relations.",
"To solve this, many dense annotation schemata have been proposed that force annotators to annotate more pairs, or even the complete graph of pairs.",
"However, dense annotation is time-consuming, and the unstable human judgments on salient pairs are not improved at all. (Footnote 1: https://catalog.ldc.upenn.edu/LDC2006T08; footnote 2: http://www.timeml.org/)",
"As a consequence, a high proportion of vague or no-link pairs appears in these dense corpora such as TimeBank-Dense (Cassidy et al., 2014).",
"In this work, we propose a new approach to obtain temporal relations from the time anchors, i.e., the absolute time values, of all mentions.",
"We assume that a temporal relation can be induced by comparing the relative temporal order of two time anchors (e.g. YYYY-MM-DD ) in a time axis.",
"We use pre-defined rules (Section 3) to generate temporal order (TORDER) relations (e.g. BEFORE, AFTER, SAME DAY, etc.) by taking two annotated time anchors as input.",
"This proposal requires the annotation of time anchors, of which the annotation effort is linear with the number of mentions.",
"This is the first work to obtain temporal relations from the annotation of individual mentions, which distinguishes it from most annotation work, in which mention pairs are annotated manually.",
"This approach brings several advantages over the current temporal relation annotation.",
"First, as long as time anchors of all mentions in a document are given, our pre-defined rules can induce the temporal relations for all the quadratic pairs.",
"This skips the step of identifying salient pairs.",
"Second, annotating the time anchors is relatively easy, as the annotation work is linear to the number of mentions.",
"Third, the automatic generation rules can provide flexible relation types based on our definition and this increased informativeness might contribute positively to downstream tasks.",
"In our first evaluation (Section 4), we compare the correspondence and difference between the new TORDERs and conventional TLINKs.",
"The comparison of empirical statistics shows the new data is label balanced, contains informative relations and reduces vague relations.",
"Besides, the classification performance suggests the new data achieve reasonable accuracy, although accuracy numbers are not directly comparable.",
"Many text processing tasks require knowing the time anchors of events, i.e., when the events occurred in a timeline.",
"In Section 5, we evaluate the data in a downstream time anchor prediction task (Reimers et al., 2016) by using the temporal relation recognizers separately trained with TORDERs or TLINKs.",
"The main results show that the recognizer trained with our TORDERs significantly outperforms the recognizer trained with the TLINKs, with a 14.1-point gain in exact-match accuracy.",
"TimeBank started a wave of data-driven temporal information extraction research in the NLP community.",
"The original TimeBank only annotated relations judged to be salient by annotators and resulted in sparse annotations.",
"Subsequent TempEval-1,2,3 competitions (Verhagen et al., 2009, 2010; UzZaman et al., 2012) mostly relied on TimeBank, but also aimed to improve coverage by annotating relations between all events and time expressions in the same sentence .",
"However, most missing relations between mentions in different sentences are not considered.",
"In order to solve the sparsity issue, researchers started the work towards denser annotation schema.",
"Bramsen et al. (2006) annotated multi-sentence segments of text to build directed acyclic graphs.",
"Kolomiyets et al. (2012) annotated temporal dependency structures, though they only focused on relations between pairs of events.",
"Do et al. (2012) produced the densest annotation, requiring the annotator to annotate as many pairs as possible.",
"Cassidy et al. (2014) proposed a compulsory mechanism to force annotators to label every pair in a given sentence window.",
"They performed the annotation (TimeBank-Dense) on a subset (36 documents) of TimeBank, which achieved a denser corpus with 6.3 TLINKs per event and time expression, compared to 0.7 in the original TimeBank corpus.",
"However, it raises the issue that hand-labeling all dense TLINKs is extremely time-consuming and the unclear definition of salient is not improved at all.",
"The majority of the temporal relation classifiers focus on exploiting a variety of features to improve the performance in TimeBank.",
"Laokulrat et al. (2013) extracted lexical and morphological features derived from WordNet synsets.",
"Mani et al. (2006); D'Souza and Ng (2013) incorporated semantic relations between verbs from VerbOcean.",
"Recently, more researchers have moved on to diverse approaches on the TimeBank-Dense corpus.",
"Chambers et al. (2014) proposed a multi-sieve classifier composed of several rule-based and machine learning based sieves ranked by their precision.",
"Mirza and Tonelli (2016) started to mine the value of low-dimensional word embeddings by concatenating them with traditional sparse feature vectors to improve their classifier.",
"Inspired by the success of deep learning work in the similar task of relation extraction, Cheng and Miyao (2017) proposed a shortest-dependency-path-based bi-directional long short-term memory (Hochreiter and Schmidhuber, 1997) (Bi-LSTM) model to achieve state-of-the-art performance on the TimeBank-Dense corpus, which is adopted for the experiments in this paper.",
"There are two reasons to use this classifier: 1) inter-sentence temporal relations are handled well;",
"2) only word, part-of-speech, and dependency relation embeddings are required as input.",
"A related task, Cross-Document Event Ordering (Minard et al., 2015), aims to order the events involving a target entity in a timeline, given news written in English.",
"Compared to traditional TLINKs, annotating time anchors of events is intuitively more straightforward in such tasks.",
"Reimers et al. (2016) proposed an annotation scheme, which requires annotators to infer the exact time of each individual event.",
"They distinguished events that occur on a Single-Day from those that span over Multi-Day, setting the granularity to one day.",
"For Single-Day events, the event time is written in the format 'YYYY-MM-DD' when the precise event time can be determined.",
"Otherwise, they required annotators to narrow down the possible time as precisely as possible.",
"An imprecise Single-Day event can be annotated as a tuple (after, before), e.g., '(after 1998-10-02, )', '(, before 2000-01-31)' or '(after 1998-10-02, before 2000-01-31)'.",
"In the case of Multi-Day , an event is annotated as a tuple (begin, end) , where begin and end are represented with Single-Day .",
"Consider, for instance, the sentence: The economy created jobs at a surprisingly robust pace in January, the government reported on Friday, evidence that America's economic stamina has withstood any disruption caused so far by the financial tumult in Asia.",
"The Multi-Day event created is annotated as (begin=1998-01-01, end=1998-01-31) .",
"The Single-Day event reported is annotated as the same day as DCT (1998-02-06) .",
"The imprecise Multi-Day event disruption is annotated as (begin=(, before 1998-02-06), end=(, before 1998-02-06)), as the event must have occurred before the time of this news, but the precise begin and end dates cannot be inferred from the text. (Figure 1: Anchoring events in a timeline.)",
"Time anchors have the capability of anchoring all the events from a document onto the same timeline, as shown in Figure 1. They annotated the time anchors of a total of 1,498 events from the 36 documents of TimeBank-Dense.",
"In temporal information retrieval, Berberich et al. (2010) proposed a four-tuple representation ('earliest begin', 'latest begin', 'earliest end', 'latest end') for uncertain time expressions (e.g., '1990s') in order to integrate such temporal information into a language model.",
"In the time anchor annotation, an event in the '1990s' will be annotated as a Multi-Day event with imprecise begin and end points, i.e., (begin=(after 1990-01-01, before 1999-12-31), end=(after 1990-01-01, before 1999-12-31)), which is quite similar to their four-tuple representation.",
"TimeML states that TLINKs present a temporal relation between event to event, event to time expression, and event to DCT.",
"The sparse TLINK coverage in the majority of temporal information corpora is attributed to the unstable identification of salient pairs by human annotators.",
"Denser annotation schemata improved the sparseness somewhat, but the annotation work became very time-consuming.",
"These issues plague the development of temporal information extraction work.",
"Our temporal order (TORDER) proposal is designed with the goal of solving unstable recognition of salient pairs and reducing annotation effort.",
"We hypothesize that a temporal relation can be automatically computed by comparing the relative temporal order between two time anchors (e.g. YYYY-MM-DD ) in a time axis.",
"We propose a set of pre-defined generation rules, which have the capability to rigorously induce a TORDER by taking the two annotated time anchors as input.",
"Table 1: Definition of the temporal orders between two Single-Day events. Two precise S1 and S2: BEFORE if S1 < S2; AFTER if S1 > S2; SAME DAY if S1 = S2. A precise S1 and an imprecise S2 = (after2, before2): BEFORE if S1 <= after2; AFTER if S1 >= before2; VAGUE in other cases. Two imprecise S1 = (after1, before1) and S2 = (after2, before2): BEFORE if before1 <= after2; AFTER if after1 >= before2; PVAGUE if before1 = before2 and after1 = after2; VAGUE in other cases.",
"Annotating time anchors of individual mentions greatly reduces annotation effort, as it is linear in the number of mentions.",
"As long as time anchors are given, our pre-defined rules can induce the temporal relations for all the quadratic pairs, which skips the step of identifying salient pairs.",
"TimeBank contains the normalized dates 'YYYY-MM-DD' of time expressions and DCT, but does not include the times of events.",
"Our proposal of inducing a TORDER by comparing two time anchors requires the time anchor annotation of events in the same granularity as time expressions and DCT.",
"Therefore, annotating the events with 'YYYY-MM-DD' is a reasonable setting, and one day is used as the minimal granularity of annotation.",
"We choose the annotation (Reimers et al., 2016) of the day-level time anchors of events as the source of our automatic TORDER generator.",
"In the case that a corpus can provide more specific time information 'YYYY-MM-DD, hh-mm-ss' (e.g., this morning, three o'clock in the afternoon), our TORDER generator can flexibly handle this information as long as the time anchors of all mentions are annotated in the same granularity.",
"To clearly demonstrate the definition of the auto-generated temporal orders, we separately describe the generation for pairs of two Single-Day mentions and for pairs involving Multi-Day mentions.",
"In this paper, TORDER labels are written in the upper-case bold font to be distinguished from TLINK labels written in the lower-case italic font.",
"Table 1 introduces the definition of temporal orders between two Single-Day mentions S1 and S2.",
"PVAGUE (i.e. partially vague) denotes that two imprecise time anchors are equivalent.",
"Table 2: Definition of the temporal orders involving Multi-Day events M = (begin, end). A Single-Day S1 and a Multi-Day M2 = (begin2, end2): BEFORE if S1 BEFORE begin2; AFTER if S1 AFTER end2; IS INCLUDED if S1 AFTER/SAME DAY begin2 and S1 BEFORE/SAME DAY end2; VAGUE in other cases. Two Multi-Day M1 = (begin1, end1) and M2 = (begin2, end2): BEFORE if end1 BEFORE begin2; AFTER if begin1 AFTER end2; SAME SPAN if begin1 SAME DAY begin2 and end1 SAME DAY end2; IS INCLUDED if begin1 AFTER/SAME DAY begin2 and end1 BEFORE/SAME DAY end2 (*); INCLUDES if begin1 BEFORE/SAME DAY begin2 and end1 AFTER/SAME DAY end2 (*); PVAGUE if begin1 PVAGUE/SAME DAY begin2 and end1 PVAGUE/SAME DAY end2 (*); VAGUE in other cases.",
"For instance, we cannot induce a clear temporal relation between two events both occurring on (, before 1998-02-06), but nevertheless both events provide the partially equivalent date information '1998-02-06'.",
"It can possibly provide useful information for the future processes of classification or time inference.",
"PVAGUE in the Multi-Day definition takes the same consideration.",
"In order to introduce the temporal orders involving Multi-Day events, a Multi-Day event M is denoted as a tuple of two Single-Day dates ( begin, end ) .",
"A temporal order between a Single-Day S 1 and Multi-Day M 2 ( begin 2 , end 2 ) can be derived by computing the temporal order of two Single-Day S 1 and begin 2 , or S 1 and end 2 first.",
"All the types of temporal orders involving Multi-Day events are defined in Table 2. One additional INCLUDES relation, in which a Multi-Day event includes a Single-Day event, can be obtained by reversing the symmetric IS INCLUDED.",
"The example of automatically computing temporal orders can be demonstrated by using the events in Figure 1. Both Multi-Day created and disruption are clearly BEFORE the Single-Day reported , because reported is AFTER the end dates of created and disruption .",
"The relation between created and disruption is induced as VAGUE, as the imprecise begin , end of disruption cannot be determined with a relation to created .",
"In this paper, the definition adopts a similar relation set to TLINK for the purpose that we can perform fair comparison and evaluation in the next two sections.",
"However, our inducing proposal can be very scalable to introduce more temporal relations.",
"For instance, Allen's interval algebra (Allen, 1990) defines 'starts' and 'finishes' relations, which are not included in our current definition.",
"We can easily extend our definition by detecting whether two time anchors have the equivalent begin or end points.",
"Our inducing proposal takes human annotated time expressions and normalized values as inputs to generate TORDER relations as the training data of the next processes (e.g. classification).",
"In the case of processing raw texts, we can perform detection and normalization of time expressions by using existing temporal taggers, e.g., HeidelTime (Strötgen and Gertz, 2015), SUTime (Chang and Manning, 2012), etc.",
"4 Comparison of TORDERs and TLINKs",
"Fairly evaluating the TORDER's capability of encoding temporal order information compared to the existing data is difficult but necessary work.",
"This section provides empirical statistics of TORDER and TLINK annotations, and compare the performance of automatic recognition.",
"Additionally, we evaluate these two frameworks in a downstream task performance in Section 5.",
"Our new TORDERs are formally similar to the conventional TLINKs, as both state a temporal relation between two mentions.",
"BEFORE and AFTER represent that one mention occurs before or after in a timeline, which is close to before and after .",
"INCLUDES and IS INCLUDED are more clearly conditioned as a Single-Day or Multi-Day mention occurs during the other Multi-Day mention, compared to includes and is included .",
"SAME DAY and SAME SPAN are designed for the one-day minimal granularity.",
"Ideally, these two relations will include simultaneous and other TLINKs with two mentions occurring in the same day.",
"VAGUE and PVAGUE state that our generation rules cannot induce the relations, similar to vague (i.e. annotators cannot judge the relations).",
"The one-day minimal granularity is the main reason causing the difference between TORDER and TLINK types.",
"Consider the sentence: I went to sleep after taking a bath.",
"According to the TimeML specification, sleep is obviously after bath .",
"But in the one-day granularity, the relation is shifted to SAME DAY.",
"This brings the obstacle that we cannot measure whether the temporal information encoded in TORDERs is more informative than TLINKs by directly comparing the classification results.",
"Our TORDER definition shows the capability of capturing some relations which cannot be encoded by TLINK.",
"For instance: Stocks rose , pushing the Dow Jones industrial average up 72.24 points, to 8,189.49, leaving the index within 70 points of its record.",
"These TLINKs among the three events are annotated as vague in TimeBank-Dense, as the annotators cannot state their temporal orders.",
"However, we can easily obtain SAME DAY relations, since their day-level time anchors are the same.",
"Imprecisely represented time anchors (e.g. after YYYY-MM-DD ) are the major drawback of losing temporal order information.",
"For instance: America's economic stamina has withstood any disruption ...",
"The TLINK between withstood and disruption is annotated as after .",
"While both of them were annotated with the same time anchor (begin=(, before 1998-02-06), end=(, before 1998-02-06)), our TORDER generator induced a PVAGUE relation, and the temporal order information is lost.",
"We hypothesize that our proposal, by skipping the unstable manual identification of salient pairs, can reduce the VAGUE relations in the new data.",
"This can be measured by comparing the numbers of the TORDER and TLINK relations on the same mention pairs.",
"If a part of the vague TLINKs are induced as non-VAGUE TORDERs in the new data, this will be evidence for the hypothesis.",
"Depending on the text domain, TLINKs or TORDERs can be advantageous in different scenarios.",
"TLINKs can capture the temporal ordering information between events, when time expressions are often absent in the documents such as novels and narratives.",
"But the annotation work is time-consuming, and a part of the relations will be neglected due to the unstable human identification of salient pairs.",
"TORDERs have the capability of capturing more informative relations by skipping the salient pairs recognition and need less annotation effort.",
"But they require that the events can be anchored in a timeline from a document (e.g. often the case of news articles) and imprecise time anchors cause some information loss.",
"Investigating the quality of auto-generated TORDERs is important to demonstrate the value of this research.",
"In this section, we empirically compare the statistics of the auto-generated TORDERs and human-annotated TLINKs.",
"Theoretically, a TORDER between two mentions with any distance in a document can be automatically computed.",
"However, it is important to make the new data in a comparable manner to the existing data.",
"In this paper, we follow the process of TimeBank-Dense (Cassidy et al., 2014) to generate the complete graph of the 10,007 mention pairs in the same and adjacent sentences.",
"The TORDER data used in this paper are publicly available 3 and our scalable generation method can be easily applied for inducing relations of longer distance pairs.",
"Table 3 shows the comparison between the numbers of the TimeBank-Dense TLINKs and the new TORDERs.",
"One observation, as we expected, is that our approach captures new relations for a considerable part of the mention pairs that were judged as v (vague) in the human-annotated TLINKs. (Footnote 3: https://github.com/racerandom/temporalorder)",
"542 vague relations are induced as AFTER in the new TORDERs, as well as other relation types.",
"However, a part of the non-vague TLINKs are shifted to VAGUE TORDERs.",
"This matches our description of the imprecise time anchor issue.",
"It is a trade-off between the part of mention pairs obtaining richer temporal information and the part of pairs losing information.",
"That is the reason why we need a downstream task (i.e. Time Anchors Prediction in Section 5) to measure how much temporal order information is encoded in TORDERs and TLINKs.",
"The shift of TLINK relations to SAME DAY due to the one-day minimal granularity setting can also be clearly observed.",
"Figure 2 shows the label distributions of the auto-generated TORDERs and the TimeBank-Dense TLINKs.",
"We investigate the statistics of Event-Event, Event-Time, and Event-DCT pairs.",
"The TimeBank-Dense corpus is obviously sparser due to the high proportion of vague in all three types of pairs.",
"Our TORDERs show a more balanced distribution of labels, which suggests that this method possibly encodes more informative temporal orders compared to the traditional TLINKs.",
"In particular, TORDERs show extremely rare VAGUE labels in Event-DCT pairs.",
"When given the precise Single-Day DCT of a document, our proposal to compare the temporal order between the time anchor of a event and the DCT manages to avoid the most unstable judgments made by the human annotators in the Event-DCT pairs.",
"Although the different definition of TORDERs from TLINKs makes direct comparison difficult, the more balanced distribution of TORDERs can possibly provide more informative classification results to benefit the downstream tasks.",
"Although the classification results of TORDERs and TLINKs are not directly comparable, they can show some evidence whether TORDERs is functional to provide temporal order information.",
"Table 4 shows the Bi-LSTM classification results with the data split 4 (Chambers et al., 2014) (27 training/validation documents, 9 testing docu-ments).",
"The classification system achieves fairly high F1 0.631 in Event-DCT and 0.485 in Event-Time on the SAME DAY temporal orders, which are the main information source to predict the precise time of events.",
"The performance on AFTER, BEFORE temporal orders are close to the TLINKs in number, but not meaningfully comparable.",
"The high proportion of vague in the TLINKs results in biased predictions.",
"When we use a more meaningful evaluation Nonvague ' overall, the TLINKs performance drops sharply.",
"Generally, the classification results suggest that our proposal of auto-generated TORDERs has sufficient capability to encode temporal information, which can be well 4 https://github.com/nchambers/caevo/blob/master/src/mai n/java/caevo/Evaluate.java classified from the textual inputs.",
"In this section, we describe a two-step system trained with the existing TLINKs and our data to challenge a downstream time anchor prediction task.",
"The different performance can be seen as the evidence whether our auto-generated TORDERs can capture comparable temporal information to the human-annotated TLINKs.",
"Predicting the time of events from the news articles is an attractive goal, which is a necessary step towards automatic event timeline extraction.",
"Reimers et al. (2016) bring the task of time anchor prediction, which aims to predict the time anchor of each Single-Day event given a document.",
"They use a general two-step process to determine the event anchors as shown in Figure 3.",
"Given a set of documents with events and time expressions already annotated, the system first obtains a list of possible times for each event.",
"Then, the most precise time is selected for each event.",
"A serious issue is that their baseline system still depends on the TimeBank-Dense TLINK classifier and the time anchor annotation is only used for the final evaluation.",
"That leaves the space to consider a new method without relying on the human-annotated TLINKs.",
"Our auto-generated TORDERs are a natural alternative to TLINKs to provide the similar temporal order information of mention pairs, but with less annotation efforts.",
"The second-step selection rules just need a slight modification to replace the previous TLINK types with the new TORDER types.",
"In this work, we adopt a similar two-step architecture.",
"The first-step temporal order classifier is designed to provide the temporal relations of the mention pairs in a document.",
"The second-step selects the most precise time by taking all Event-Time and Event-DCT relations of a target event as input.",
"For instance in Figure 3, the second-step received a set of relations e.g. ( is included, DCT ) , ( is included, F riday ) and ( vague, January ) of reported .",
"For the system trained with the TimeBank-Dense TLINKs, we adopt the same selection algorithm as described in (Reimers et al., 2016).",
"When the system is trained 1839 Event Type Source TORDER Gold TORDER TLINK Gold TLINK Exact Partial Exact Partial Exact Partial Exact Partial Precise Event-DCT 0.586 0.866 0.739 0.866 0.387 0.570 0.525 0.545 Event-Time 0.384 0.555 0.577 0.619 0.216 0.288 0.412 0.447 All 0.660 0.870 0.835 0.930 0.444 0.611 0.595 0.617 Imprecise Event-DCT 0.351 0.631 0.530 0.647 0.234 0.395 0.364 0.449 Event-Time 0.074 0.217 0.119 0.184 0.051 0.133 0.200 0.227 All 0.299 0.642 0.509 0.686 0.252 0.429 0.444 0.517 Overall Event-DCT 0.482 0.762 0.619 0.769 0.319 0.493 0.454 0.503 Event-Time 0.259 0.419 0.393 0.444 0.149 0.255 0.326 0.358 All 0.501 0.769 0.646 0.822 0.360 0.530 0.528 0.573 Table 5: The comparison of the cross-validation performance in the time anchor prediction task.",
"with the TORDERs, we slightly modified the algorithm by replacing the TLINK relations with similar TORDER relations.",
"SAME DAY replaces simultaneous to predict precise dates, although their definition is quite different.",
"We perform a 6-fold cross-validation strategy to predict all the TORDERs and TLINKs of the mention pairs in the 36 documents of the TimeBank-Dense corpus.",
"In each run, we split 30 documents for training and validation to predict the other 6 test documents.",
"We define two evaluation metrics, i.e. Exact Match accuracy and Partial Match accuracy to measure the performance in this task as follows: exact match = # Number of the exact match predictions # Total number of the test samples partial match = # Number of the partial match predictions # Total number of the test samples We define two partial match cases: 1) a precise (1998-02-06) is partial match with an imprecise (after 1998-02-06) , if the date values are the same.",
"2) (after 1998-02-06) is partial match with (after 1998-02-06, before 1998-02-21) , if one is a part of the other.",
"Table 5 summarizes the main results of the two-step time anchor prediction system trained with TORDER and TLINK data.",
"Precise', Impre-cise' and Overall' denote the results of predicting time anchors of precise events, imprecise events, and overall performance.",
"Event-DCT' or Event-Time' denotes the second-step selection takes only Event-DCT or Event-Time pairs as input, which helps us to investigate how much information is provided by the different types of pairs for predicting the final time anchors.",
"The new TORDERs show significantly superior out-performance in all three settings (i.e. only Event-DCT pairs, only Event-Time pairs, or Event-DCT + Event-Time), compared to the TLINKs.",
"With both Event-DCT and Event-Time temporal order information, the system achieves the highest overall exact match and partial match accuracy.",
"The Event-DCT, Event-Time pairs are the source of temporal information for predicting time anchors .",
"The system only using the Event-DCT achieves surprisingly high accuracy, particularly on the TORDER partial match accuracy of the 1840 Exact Partial CAEVO 0.442 0.553 Bi-LSTM TLINK 0.437 0.550 Bi-LSTM TORDER 0.586 0.811 Table 6: The comparison to the state-of-the-art dense TLINK classifier precise events.",
"The reason is that most events reported in news articles usually occur in precisely the same day as DCT.",
"Therefore, the TORDER Event-DCT is benefited from the low proportion of vague relations, which sharply outperforms the TLINK Event-DCT by 16.3% overall exact match .",
"However, the contribution of the Event-Time to the overall might be underestimated in this task somehow.",
"The TORDER Event-Time still beats the TLINKs by 11% overall exact match and 16.4% overall partial match.",
"Furthermore, the Event-Time encoding the temporal information within 1-sentence window in our experiments can be easily strengthen by our TORDER proposal to introduce more inter-sentence pairs.",
"In this section, we perform an additional experiment to make a comparison to a system with the first-step replaced by a state-of-the-art dense TLINK classifier CAEVO (Chambers et al., 2014).",
"We adopt the data split setting in Section 4.3 for three classifiers: CAEVO, Bi-LSTM classifier trained with TLINKs and Bi-LSTM classifier trained with TORDERs.",
"The results are summarized in Table 6.",
"CAEVO achieves the exact match accuracy slightly better than the Bi-LSTM model trained with the TLINKs.",
"The Bi-LSTM model trained with the TORDERs sharply outperforms the other two systems by approximate 14% exact match accuracy and approximate 26% in partial match accuracy.",
"In this paper, we propose a new approach to obtain temporal relations based on time anchors (i.e. absolute time value) of mentions in news articles.",
"Our pre-defined generation rules can automatically induce TORDER relations by comparing the temporal order of two time anchors in a timeline.",
"The requirement of our proposal for annotating time anchors is much easier compared to conventional methods, as the annotation effort is linear with the number of mentions.",
"The TORDER data used in this paper are publicly available.",
"The analysis, empirical comparison and classification results of the new TORDERs and the TimeBank-Dense TLINKs show our new data achieve the low VAGUE proportion, the informative relation types and the balanced label distribution.",
"We perform the second evaluation of using the temporal relation classifier to complete the downstream task of time anchor prediction in news articles.",
"The main results show our TORDERs significantly outperform the TLINKs in this task, which suggests our proposal has the capability to encode informative temporal order information with less annotation effort.",
"The main limitation of TORDER is that events are required to be anchored in a timeline.",
"Strotgen and Gertz (2016) introduce the highly different characteristics of time expressions in four domains of text.",
"It suggests that our proposal is difficult to be applied in some domains.",
"One possible solution is to adopt a hybrid annotation method to annotate a target event towards the most relevant event (TLINK-style), when temporal information is absent in its context.",
"Although this work is motivated for contributing to timeline applications, evaluating this proposal in the temporal question answering is also valuable.",
"SAME DAY could be harmful because this task possibly requires to know the exact order between two events occurring on the same day.",
"It is worth conceiving a more general solution to improve the limitations of TORDER in the future work.",
"We would like to thank the anonymous reviewers for their valuable comments and thank Jason Bennett for useful discussions and proofreading."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"objective",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"other"
] |
[
"Masked language models have quickly become the de facto standard when processing text.",
"Recently, several approaches have been proposed to further enrich word representations with external knowledge sources such as knowledge graphs.",
"However, these models are devised and evaluated in a monolingual setting only.",
"In this work, we propose a language-independent entity prediction task as an intermediate training procedure to ground word representations on entity semantics and bridge the gap across different languages by means of a shared vocabulary of entities.",
"We show that our approach effectively injects new lexical-semantic knowledge into neural models, improving their performance on different semantic tasks in the zero-shot crosslingual setting.",
"As an additional advantage, our intermediate training does not require any supplementary input, allowing our models to be applied to new datasets right away.",
"In our experiments, we use Wikipedia articles in up to 100 languages and already observe consistent gains compared to strong baselines when predicting entities using only the English Wikipedia.",
"Further adding extra languages lead to improvements in most tasks up to a certain point, but overall we found it non-trivial to scale improvements in model transferability by training on ever increasing amounts of Wikipedia languages.",
"Pretrained Multilingual Masked Language Models (MMLMs) such as mBERT (Devlin et al., 2019), XLM-R (Conneau et al., 2020) and their variants have achieved state-of-the-art results across diverse natural language understanding tasks.",
"Typically, a MMLM model is pretrained on very large amounts of raw text in different languages using the masked language modelling (MLM) objective and is further finetuned on (usually limited amounts of) task data.",
"In the zero-shot crosslingual setting, which is our focus in this paper, a MMLM is finetuned on the target task using data in a single language (e.g., English) and is evaluated on the same task but in different languages (e.g., non-English languages).",
"We introduce the multilingual Wikipedia hyperlink prediction objective to contextualise words in a text with entities and concepts from an external knowledge source by using Wikipedia articles in up to 100 languages.",
"Hyperlink prediction is a knowledge-rich task designed to (1) inject semantic knowledge from Wikipedia entities and concepts into the MMLM token representations, and (2) with a similar motivation as the translated language modelling loss of Conneau and Lample (2019), i.e., to inject explicit language-independent knowledge into a model trained via self-supervised learning, but in our case without parallel data .",
"We devise a training procedure where we mask out hyperlinks in Wikipedia articles and train the MMLM to predict the hyperlink identifier similarly to standard MLM but using a hyperlink vocabulary of 250 k concepts shared across languages.",
"We use the state-of-the-art MMLM XLM-R-large (Conneau et al., 2020) and show that by adding an add-on training step using Wikipedia hyperlink prediction we consistently improve several zero-shot crosslingual natural language understanding tasks across a diverse array of languages: crosslingual Word Sense Disambiguation in 18 languages including English (XL-WSD; Pasini et al., 2021); the crosslingual Word-in-Context task (XL-WiC; Raganato et al., 2020) in 12 non-English languages; and in 7 tasks from the XTREME benchmark (Hu et al., 2020) in up to 40 languages.",
"Recently, Zhang et al. (2019, ERNIE) and Peters et al. (2019, KnowBERT) devised different methods to incorporate entities from external knowledge graphs into masked language model (LM) training.",
"Since then, several works followed (Wang et al., 2021; Sun et al., 2020; Xiong et al., 2020; Yamada T a sks MLE Target Task EN Target Task EN Loss T a sks MLE Target Task EN Target Task EN Loss bn:00021494n Computer science (EN), Informatica (IT), (ZH), ... bn:00002705n [CLS] ho anni studiato [ENT] per diversi .",
"et al., 2020) showing increasingly better performance than masked LMs that rely on information from raw text only.",
"Nevertheless, all these methods were proposed for a single language 1 and cannot be easily applied to transfer learning in a zero-shot crosslingual setting.",
"Notation Let x 1: m = MMLM( x 1: m ) be contextualised word representations for some input text x 1: m with m words, and computed with a pretrained MMLM.",
"Let x n : k ( n 1 , k m ) be a subsequence of contextualised word representations of a single hyperlink x n : k consisting of k n words.",
"In our working example we use a single hyperlink x n : k for simplicity, but in practice there may be multiple hyperlinks in the input x 1: m .",
"Data We download and preprocess Wikipedia articles in 100 languages, and extract all hyperlinks in the text.",
"We use BabelNet (Navigli and Ponzetto, 2010) a large multilingual knowledge base comprising WordNet, Wikipedia, and many other resources to map Wikipedia articles in different languages about the same subject onto unique iden-tifiers.",
"For instance, regardless of their language all computer science articles are mapped to the same identifier h t , in this case bn:00021494n .",
"2 After each article is mapped to a single identifier, we create prediction targets for every hyperlink by using the identifier of its referenced article.",
"See Appendix A for more details.",
"Wikipedia Hyperlink Prediction Our main goal is to use the rich semantic knowledge contained in the multilingual Wikipedias' structure to improve language model pretraining.",
"Our approach can be seen as intermediate-task training (Phang et al., 2018, 2020) where we use Wikipedias' hyperlinks as labelled data to further finetune a pretrained MMLM model before training it one last time in the actual target task of interest.",
"Motivated by recent studies on pretrained language encoders demonstrating that semantic features are highlighted in higher layers (Raganato and Tiede-mann, 2018; Jawahar et al., 2019; Cui et al., 2020; Rogers et al., 2021), we further train only the last two layers of the MMLM.",
"Moreover, similarly to the MLM procedure, we replace the hyperlink tokens x n : k by the [MASK] token or by a random token 80% and 10% of the time, respectively (De-vlin et al., 2019).",
"Since the number of Wikipedia articles is very large, we only consider the most frequent 250 k referenced articles h t as possible hyperlinks in our model and we use the adaptive softmax activation function to speed-up training (Grave et al., 2017).",
"Our objective allows us to consider text-entity alignments during training only.",
"At prediction time, instead, we simply feed the model with raw text with no need of precomputed alignments.",
"This makes our model easy to use and to adapt to many different scenarios.",
"For more details on the model architectures and objective, see Appendix B. 3 Experimental Setup We use XLM-R-large (Conneau et al., 2020) as our MMLM, which is pretrained on a large volume of raw multilingual corpora using MLM training.",
"We propose three different model architectures which differ in how the input to the hyperlink classification head is computed.",
"In Token we use the vector representation of each token in the hyperlink text x i , i [ n, k ] as input to the prediction head.",
"In Concat CLS we use the concatenation [ x i ; x CLS ] of the representation of each word in the hyperlink x i , i [ n, k ] with the [CLS] token representation as input to the prediction head.",
"Finally, in Replace CLS the input to the prediction head is the representation of each word in the hyperlink x i , i [ n, k ] with probability p r or the [CLS] token representation x CLS with probability 1 p r .",
"More details on the architectures in Appendix B.1.",
"We follow a sequential, three steps approach to training and evaluating our models.",
"We first finetune the pretrained MMLM on the Wikipedia hyperlink prediction task, then finetune again this time on the target-task training data in English, and fi-nally evaluate the model on non-English target-task evaluation data in a zero-shot crosslingual setting (see Figure 1).",
"We use Wikipedia articles in different sets of languages (Section 3.3) and experiment with many diverse target tasks (Section 3.4).",
"We experiment using only English ( Wiki EN ), 15 different languages ( Wiki 15 ), or 100 Wikipedia languages ( Wiki 100 ).",
"By doing that,",
"i) we include a monolingual albeit resource-rich baseline ( Wiki EN ),",
"ii) we investigate the impact of including a varied mixture of languages from different families ( Wiki 15 ), and",
"iii) we also experiment if going massively multilingual has a noticeable impact on crosslingual transferability ( Wiki 100 ).",
"Word Sense Disambiguation We follow the zero-shot crosslingual setting of Pasini et al. (2021, XL-WSD), which includes 17 languages plus English, i.e., we train on the English SemCor (Miller et al., 1993) dataset merged with the Princeton WordNet Gloss corpus 3 and test on all available languages (Miller et al., 1993; Raganato et al., 2017; Edmonds and Cotton, 2001; Snyder and Palmer,",
"3 http://wordnetcode.princeton.edu/ glosstag.shtml",
"2004; Pradhan et al., 2007; Navigli et al., 2007; Agirre et al., 2010; Navigli et al., 2013; Moro and Navigli, 2015; Pociello et al., 2008; Simov and Osenova, 2010; Bentez et al., 1998; Huang et al., 2010; Raffaelli et al., 2008; Pedersen et al., 2009; Postma et al., 2016; Vider and Orav, 2002; Guino-vart, 2011; Mihaltz et al., 2008; Isahara et al., 2008; Yoon et al., 2009; Fiser et al., 2012).",
"Word-in-Context We use the crosslingual Word-in-Context dataset (XL-WiC; Raganato et al., 2020) with data in 12 diverse languages.",
"The task is to predict whether an ambiguous word that appears in two different sentences share the same meaning.",
"We finetune the model on the English WiC (Pilehvar and Camacho-Collados, 2019) dataset and evaluate on the 12 XL-WiC languages.",
"XTREME The XTREME (Hu et al., 2020) evaluation suite contains diverse tasks in up to 40 different languages.",
"We perform crosslingual evaluation on: question answering (XQuAD; MLQA; TyDiQA; Artetxe et al., 2020; Lewis et al., 2020; Clark et al., 2020), natural language inference (XNLI; Conneau et al., 2018), paraphrase detection (PAWS-X; Yang et al., 2019), part-of-speech tagging (POS; Nivre et al., 2018), and named entity recognition (NER; Pan et al., 2017).",
"As standard in the two unsupervised sentence retrieval tasks, BUCC (Zweigenbaum et al., 2018), and Tatoeba (Artetxe and Schwenk, 2019), XLM-R is tested considering the output of its 14 -th layer, which, however, is not tuned during our intermediate task.",
"We therefore do not report results on these tasks.",
"4 Task Architectures Across all the tasks, we finetune transformer-based models by adding a classification head for each task.",
"Results on XL-WSD and XL-WiC tasks (Tables 1 and 2) suggest that our models have a better grasp of word-level semantics than XLM-R, which does not have explicit semantic signals during its pretraining.",
"This is consistent across languages and hyperlink prediction architectures, also when compared to the baseline XLM-R additionally finetuned using MLM training on in-domain Wikipedia data.",
"Our best models outperform the baselines in both tasks by several points.",
"Interestingly, training on 4 More details in Appendix B.2.",
"15 languages tends to slightly outperform training on all 100 languages on XL-WSD, but on XL-WiC results with our best models trained on 100 languages outperforms all other configurations most of the time by a reasonable margin.",
"These results corroborate our hunch that the intermediate task injects semantic knowledge within the neural model.",
"In Table 3, we confirm that our models preserve the sentence-level comprehension capabilities of the underlying XLM-R architecture and that it performs either comparably or favourably to the baselines in the XTREME benchmark, across target tasks and languages.",
"Training on the English Wikipedia only can be surprisingly effective at times (Tables 2 and 3), and training on 100 languages shows more consistent improvements only on XL-WiC but fails to lead to similar improvements on other tasks.",
"We note that performance on XL-WSD is similar when using 15 or 100 languages, while our evaluation using XTREME shows that performance is slightly worse when using 100 languages compared to using 15 languages only.",
"We conjecture this could be due to the fact we finetune only the last two layers of XLM-R (see Appendix B), so the model retains most of the multilingual knowledge it learned dur-XNLI PAWS-X POS NER XQuAD MLQA TyDiQA Avg.",
"ing pretraining (Liu et al., 2019; Hao et al., 2019).",
"We also hypothesise that the English Wikipedia size (in number of words) and quality (in coverage of our hyperlink vocabulary) may also be a reason why training solely on English already brings large gains in transfer to other tasks.",
"For comparison, the English Wikipedia is the one with the most data, i.e., about 73M hyperlinks, where the second highest resource language is German with only about 28M hyperlinks (see Table 4 in Appendix B).",
"Regarding the coverage of our hyperlink vocabulary with 250 k entries, the English Wikipedia covers over 249 k hyperlink types at least 10 times, whereas the second highest coverage is for the French Wikipedia, which covers over 142 k hyperlink types at least 10 times.",
"We plan on investigating the effect of the size and coverage of hyperlinks further in future work.",
"Limitations Finally, we highlight that: (1) We report results using single model runs, therefore we have no estimates of the variance of these models; (2) We lack a more thorough hyperparameter search to further consolidate our results.",
"In both cases, the reason we made such choices is because of the high cost of training large models such as XLM-R large.",
"We presented a multilingual Wikipedia hyperlink prediction intermediate task to improve the pretraining of contextualised word embedding models.",
"We trained three model variants on different sets of languages, finding that injecting multilingual semantic knowledge consistently improves performance on several zero-shot crosslingual tasks.",
"As future work, we plan to devise a solution to allow crosslingual transferability to scale more efficiently with the number of languages.",
"Finally, we will investigate the impact on resource-poor vs resource-rich languages, and the effect of the size and coverage of hyperlinks in model transferability.",
"We would like to thank Clara Vania and Sam Bowman for comments on early versions of this work, and our three anonymous reviewers for their helpful comments and feedback.",
"IC has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skodowska-Curie grant agreement No 838188.",
"TP and AR gratefully acknowledge the support of the ERC Consolidator Grants MOUSSE No. 726487, and FoTran No. 771113 under the European Union's Horizon 2020 research and innovation programme.",
"AR also thanks the CSC IT Center for Science (Finland) for the computational resources."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"result",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"objective",
"other",
"other",
"other",
"other"
] |
[
"We introduce deep inside-outside recursive autoencoders (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree.",
"Our approach predicts each word in an input sentence conditioned on the rest of the sentence and uses inside-outside dynamic programming to consider all possible binary trees over the sentence.",
"At test time the CKY algorithm extracts the highest scoring parse.",
"DIORA achieves a new state-of-the-art F1 in unsupervised binary constituency parsing (unlabeled) in two benchmark datasets, WSJ and MultiNLI.",
"Syntactic parse trees are useful for downstream tasks such as relation extraction (Gamallo et al., 2012), semantic role labeling (Sutton and McCallum, 2005; He et al., 2018), machine translation (Aharoni and Goldberg, 2017; Eriguchi et al., 2017; Zaremoodi and Haffari, 2018), and text classification (Li and Roth, 2006; Tai et al., 2015).",
"Traditionally, supervised parsers trained on datasets such as the Penn Treebank (Marcus et al., 1993) are used to obtain syntactic trees.",
"However, the treebanks used to train these supervised parsers are typically small and restricted to the newswire domain.",
"Unfortunately, models trained on newswire treebanks tend to perform considerably worse when applied to new types of data, and creating new domain specific treebanks with syntactic annotations is expensive and time-consuming.",
"Motivated by the desire to address the limitations of supervised parsing and by the success of large-scale unsupervised modeling such as ELMo and BERT (Peters et al., 2018a; Devlin et al., Equal contribution, randomly ordered. Under the current circumstances he says their scenario no longer seems unrealistic Figure 1: An unlabeled binary constituency parse from DIORA matching the ground truth. 2019), we propose a new deep learning method of unsupervised parser training that can extract both shallow parses (i.e., noun phrases or entities) and full syntactic trees from any domain or language automatically without requiring any labeled training data .",
"In addition to producing parses, our model simultaneously builds representations for internal constituents that reflect syntactic and semantic regularities which can be leveraged by downstream tasks.",
"Our model builds on existing work developing latent tree chart parsers (Socher et al., 2011b; Le and Zuidema, 2015; Yogatama et al., 2017; Mail-lard et al., 2017; Choi et al., 2018).",
"These methods produce representations for all internal nodes in the tree (cells in the chart), each generated as a soft weighting over all possible sub-trees ( 2).",
"Unfortunately, they still require sentence-level annotations during training, as they are all trained to optimize a downstream task, typically natural language inference.",
"To address these limitations, we present deep inside-outside recursive autoencoders (DIORA) which enable unsupervised discovery and representation of constituents without requiring any supervised training data.",
"DIORA incorporates the inside-outside algorithm (Baker, 1979; Lari and Young, 1990) into a latent tree chart parser.",
"The bottom-up inside step calculates a representation for all possible constituents within a binary tree over the input sentence.",
"This step is equivalent to the forward-pass of previous latent tree chart parsers (Maillard et al., 2017).",
"These inside representations only encode the current subtree, ignor-0.7 + 0.3 The cat drinks The cat drinks",
"ing all outside context.",
"Thus, we perform an additional top-down outside calculation for each node in the tree, providing external context into the subtree representations in each chart cell.",
"The model is then trained with the objective that the outside representations of the leaf cells should reconstruct the corresponding leaf input word, analogous to masked language model (Devlin et al., 2019) pretraining, except by using dynamic programming we predict every word from a completely unmasked context.",
"The single most likely tree can be recovered using the CKY algorithm and compatibility scores between constituents.",
"Previous work either predict trees that are not well aligned with known treebanks (Yogatama et al., 2017; Choi et al., 2018), or has no mechanism for explicitly modeling phrases, requiring a complex procedure to extract syntactic structures (Shen et al., 2018).",
"To probe different properties of our model, we run experiments on unsupervised parsing, segment recall, and phrase representations.",
"DIORA achieves multiple new state-of-the-art results for unsupervised constituency parsing (absolute improvements of 13.7% , 11.5% , and 7.8% on WSJ, WSJ-40, and MultiNLI), has a greater recall on more constituent types than a strong baseline, and produces meaningful phrase representations.",
"Our goal is to design a model and unsupervised training procedure that learns structure from raw text.",
"The design of DIORA is based on our hypothesis is that the most effective compression of a sentence will be derived from following the true syntactic structure of the underlying input.",
"Our approach builds on previous latent tree chart parsers which are augmented with the inside-outside algorithm ( Baker, 1979; Lari and Young, 1990) and trained to reproduce each input word from its outside context.",
"Based on our hypothesis, loosely inspired by the linguistic substitution principle (Frege, 1960), the model will best reconstruct the input by discovering and exploiting syntactic regularities of the text.",
"The inside pass of our method recursively compresses the input sequence, at each step inputting the vector representations of the two children into a composition function ( 2.1.1) that outputs an inside vector representation of the parent.",
"This process continues up to the root of the tree, eventually yielding a single vector representing the entire sentence (Figure 2a).",
"This is loosely analogous to the compression step of an autoencoder and equivalent to existing latent tree chart parsers forward pass (Maillard et al., 2017).",
"Following this, we initiate the outside pass of our algorithm with a generic (root) representation that is learned as a separate parameter.",
"As the outside step of the inside-outside algorithm (Figure 2b), we unfold until finally producing representations of the leaf nodes.",
"These leaves are then optimized to reconstruct the input sentence as done in an autoencoder-based deep neural network.",
"Each inside representation is the root of a particularly sub-tree, and that representation is generated by considering only the descendant constituents within that sub-tree, ignoring any outside context.",
"After the inside representations are calculated, we perform a top-down outside pass to compute outside representations.",
"The outside representations are encoded by looking at only the context of a given sub-tree.",
"Once the chart is filled, each constituent k (cell in the chart) is associated with an inside vector a ( k ) , an outside vector b ( k ) , inside compatibility score e ( k ) and outside compatibility score f ( k ) .",
"The input to our model is a sentence x made up of T tokens, x 0 , x 1 , ..., x T 1 .",
"Each token x i has a corresponding pre-trained embedded vector v i .",
"For each pair of neighboring constituents i and j 1 , we compute a compatibility score and a composition vector.",
"The score and vector that represent a particular span k are computed using a soft weighting over all possible pairs of constituents, that together fully cover the span (we refer to this set of constituent pairs as { k } ).",
"Vectors for spans of length 1 are initialized as a non-linear transformation 2 of the embedded input v i , and the scores associated with these spans are set to 0 : x o u = tanh ( U v k + b ) a ( k ) = o + tanh( x u ) e ( k ) = 0 Higher levels of the chart are computed as a weighted summation of constituent pairs: a ( k ) = X i,j { k } e ( i, j ) a ( i, j ) e ( k ) = X i,j { k } e ( i, j ) e ( i, j ) The compatibility function e is meant to produce a score for how likely a pair of neighboring cells are to be merged.",
"We implement this as a bilinear function of the vectors from neighboring spans, using a learned parameter matrix S .",
"We additionally add the individual scores from each two merging cells.",
"Intuitively, these individual scores correspond to how likely each of the cells would 1 The symbols i , j , and k are identifiers of spans from the input x .",
"exist in the final binary tree independently.",
"The formula for the compatibility function (and its normalized form e ) is defined as follows: e ( i, j ) = exp( e ( i, j )) P i, j { k } exp( e ( i, j )) e ( i, j ) = ( a ( i ) , a ( j ); S ) + e ( i ) + e ( j ) Where the bilinear projection is defined as: ( u, v ; W ) = u W v For the composition function a we used either a TreeLSTM (Tai et al., 2015) or a 2-layer MLP (see Appendix A.2 for more precise definitions on both methods).",
"In order for the remainder of equations to remain agnostic to the choice of composition function, we refer to the function as Compose , which produces a hidden state vector h and, in the case of TreeLSTM , a cell state vector c , resulting in: a ( i, j ) = Compose ( a ( i ) , a ( j )) 2.1.2 Outside Pass The outside computation is similar to the inside pass (depicted in Figure 2b).",
"The root node of the outside chart is learned as a bias.",
"Descendant cells are predicted using a disambiguation over the possible outside contexts.",
"Each component of the context consists of a sibling cell from the inside chart and a parent cell from the outside chart.",
"The function f is analogous to the function e .",
"It is normalized over constituent pairs i, j for the span k , and is used to disambiguate among the many outside contexts.",
"The function b generates a phrase representation for the missing sibling cell.",
"Equations for the outside computation follow: b ( k ) = X i,j { k } f ( i, j ) b ( i, j ) f ( k ) = X i,j { k } f ( i, j ) f ( i, j ) b ( i, j ) = Compose ( a ( i ) , b ( j )) f ( i, j ) = ( a ( i ) , b ( j ); S ) + e ( i ) + f ( j ) In the majority of our experiments, the Compose used in b shares parameters with a used in the inside pass, as do the compatibility functions e and f (see 3.4 for results on the effects of parameter sharing).",
"To train our model we use an autoencoder-like language modeling objective.",
"In a standard autoencoder, the entire input x is compressed into a single lower dimensional representation.",
"This representation, z , is then decompressed and trained to reconstruct x .",
"In our model, we never condition the reconstruction of x on a single z because the root's outside representation is initialized with a bias rather than the root's own inside vector.",
"Instead, we reconstruct x conditioned on the many sub-tree roots, each of which is only a compression of a subset of the input.",
"To approximate this reconstruction we use a max-margin loss considering a set { x } of N negative examples that are sampled according to their frequency from the vocabulary (further details in Appendix A.1).",
"The terminal outside vector b ( i ) is trained to predict its original input v i .",
"The per-instance loss function is described in Equation 1: L x = T 1 X i =0 N 1 X i =0 max(0 , 1 b ( i ) a ( i ) + b ( i ) a ( i )) (1) The max-margin loss does not provide a gradient if the predicted vector is closer to its ground truth than the negative example by a margin greater than 1 .",
"For that reason, we also experimented with an objective based on cross-entropy, described in Equation 2: Z = N 1 X i =0 exp( b ( i ) a ( i )) L x = T 1 X i =0 log exp( b ( i ) a ( i )) exp( b ( i ) a ( i )) + Z (2) 2.3 DIORA CKY Parsing To obtain a parse with DIORA, we populate an inside and outside chart using the input sentence.",
"We can extract the maximum scoring parse based on our single grammar rule using the CKY procedure (Kasami, 1966; Younger, 1967).",
"The steps for this procedure are described in Algorithm 1 and its runtime complexity in Appendix A.4.",
"vised segment recall, and phrase similarity.",
"The model has been implemented in PyTorch (Team, 2018) and the code is published online.",
"3 For training details, see Appendix A.1.",
"We first evaluate how well our model predicts a full unlabeled constituency parse.",
"We look at two data sets used in prior work (Htut et al., 2018), The Wall Street Journal (WSJ) section of Penn Treebank (Marcus et al., 1993), and the automatic parses from MultiNLI (Williams et al., 2018b).",
"WSJ has gold human-annotated parses and MultiNLI contains automatic parses derived from a supervised parser (Manning et al., 2014).",
"In addition to PRPN (Shen et al., 2018), 4 we compare our model to deterministically constructed left branching, right branching, balanced, and random trees.",
"We also compare to ON-LSTM (Shen et al., 2019), an extension of the PRPN model, RL-SPINN (Yogatama et al., 2017), an unsupervised shift-reduce parser, and ST-Gumbel (Choi et al., 2018), an unsupervised chart parser.",
"The latter two of these models are trained to predict the downstream task of natural language inference (NLI).",
"For the full WSJ test set and MultiNLI datasets we follow the experimental setup of previous work (Williams et al., 2018a).",
"We binarize target trees using Stanford CoreNLP (Manning et al., 2014) and do not remove punctuation (experiments in 3.1.2 do remove punctuation).",
"Latent tree models have been shown to perform particularly poorly on attachments at the beginning and end of the sequence (Williams et al., 2018a).",
"To address this, we incorporate a postprocessing heuristic (denoted as +PP in result tables) 5 .",
"This heuristic simply attaches trailing punctuation to the root of the tree, regardless of its predicted attachment.",
"In Table 1, we see that DIORA +PP achieves the highest average and maximum F1 from five random restarts.",
"This model achieves a mean F1 7 points higher than ON-LSTM and an increase of over 6.5 max F1 points.",
"We also see that DIORA exhibits much less variance between random seeds than ON-LSTM.",
"Additionally, we find that PRPN-UP and DIORA benefit much more from the +PP heuristic than PRPN-LM.",
"This is consistent with qualitative analysis showing that DIORA and PRPN-UP incorrectly attach trailing punctuation much more often than PRPN-LM.",
"On the MultiNLI dataset, PRPN-LM is the top performing model without using the +PP heuristic while DIORA matches PRPN-UP (Table",
"2. Using the heuristic, DIORA greatly surpasses both variants of PRPN.",
"However, it is worth noting that this is not a gold standard evaluation and instead evaluates a model's ability to replicate the output of a trained parser (Manning et al., 2014).",
"A second caveat is that SNLI (Bowman et al., 2015) and MultiNLI contain several non-newswire domains.",
"Syntactic parsers often suffer significant performance drops when predicting outside of the newswire domain that the models were trained on.",
"We also compare our models to two subsets of the WSJ dataset that were used in previous unsupervised parsing evaluations.",
"WSJ-10 and WSJ-40 contain sentences up to length 10 and 40 respectively after punctuation removal.",
"We do not binarize either of these two splits in order to compare to previous work (see Appendix A.3 for more 5 We did not have access to predictions or an implementation of the concurrent ON-LSTM model and therefore could not apply the +PP heuristic.",
"details on WSJ split differences).",
"Not binarizing the target trees sets an upper-bound on the performance of our models, denoted as UB in Table",
"3. We compare against previous notable models for this task: CCM ( Klein and Manning, 2002) uses the EM algorithm to learn probable nested bracketings over a sentence using gold or induced part-of-speech tags, and PRLG (Ponvert et al., 2011) performs constituent parsing through consecutive rounds of sentence chunking.",
"In Table 3, we see that DIORA outperforms the previous state of the art for WSJ-40, PRLG, in max F1.",
"The WSJ-10 split has been difficult for latent tree parsers such as DIORA, PRPN, and ON-LSTM, none of which (including our model) are able to improve upon previous non-neural methods.",
"However, when we compare trends between WSJ-10 and WSJ-40, we see that DIORA does a better job at extending to longer sequences.",
"In many scenarios, one is only concerned with extracting particular constituent phrases rather than a full parse.",
"Common use cases would be identifying entities, noun phrases, or verb phrases for downstream analysis.",
"To get an idea of how well our model can perform on phrase segmentation, we consider the maximum recall of spans in our predicted parse tree.",
"We leave methods for cutting the tree to future work and instead consider the maximum recall of our model which serves as an upper bound on its performance.",
"Recall here is the percentage of labeled constituents that appear in our predicted tree relative to the total number of constituents in the gold tree.",
"These scores are separated by type and presented in Table",
"4. In Table 4 we see the breakdown of constituent recall across the 10 most common types.",
"DIORA achieves the highest recall across the most types and is the only model to perform effectively on verb-phrases.",
"Interestingly, DIORA performs worse than PRPN-LM at prepositional phrases.",
"One of the goals of DIORA is to learn meaningful representations for spans of text.",
"Most language modeling methods focus only on explicitly modeling token representations and rely on ad-hoc postprocessing to generate representations for longer spans, typically relying on simple arithmetic functions of the individual tokens.",
"To evaluate our model's learned phrase representations, we look at the similarity between spans of the same type within labeled phrase datasets.",
"We look at two datasets.",
"CoNLL 2000 (Tjong Kim Sang and Buchholz , 2000) is a shallow parsing dataset containing spans of noun phrases, verb phrases, etc.",
"CoNLL 2012 ( Pradhan et al., 2012) WSJ-10 WSJ-40 Model F1 F1 max F1 F1 max UB 87.8 87.8 85.7 85.7 LB 28.7 28.7 12.0 12.0 RB 61.7 61.7 40.7 40.7 CCM -63.2 -CCM gold -71.9 -33.7 PRLG -72.1 -54.6 PRPNNLI 66.3 0 .",
"For each of the labeled spans with length greater than one, we first generate its phrase representation.",
"We then calculate its cosine similarity to all other labeled spans.",
"We then calculate if the label for that query span matches the labels for each of the K most similar other spans in the dataset.",
"In Table 5 we report precision@ K for both datasets and various values of K .",
"The first baseline we compare against produces phrase representations from averaging context-insensitive (CI) ELMo vectors of individual tokens with the span.",
"The second uses sentence-insensitive (SI) ELMo vectors, running the full ELMo over only the relevant tokens and ignoring the rest of the sentence.",
"We also look at ELMo's output when given the entire sentence.",
"When analyzing our baselines that run the full ELMo, we follow the procedure described in (Pe-ters et al., 2018b) and represent phrases as a function of its first and last hidden state.",
"We extract these states from the final ELMo layer (3rd BiL-STM) as these consistently gave the best performance among other options.",
"For DIORA, we use the concatenation of the inside and outside representations ( [ a ; b ] ).",
"This demonstrates DIORA's ability to capture and represent syntactic information within phrases.",
"For CoNLL 2012, we find that DIORA outperforms both ELMo CI and ELMo SI while ELMo performs best overall.",
"ELMo CI is surprisingly effective on this dataset even though it performed more poorly on CoNLL 2000.",
"These results indicate that DIORA is capturing syntax quite well, but still has room to improve on more fine-grained semantic representations.",
"To test the impact of our modeling choices, we compared the performance of two different losses and four different composition functions on the full WSJ validation set.",
"The losses were covered in Equations 1 (Margin) and 2 (Softmax).",
"The two primary methods of composition we considered were TreeLSTM (Tai et al., 2015) and MLP (a 2-hidden layer neural network).",
"In addition, we experimented with a simple kernel of the MLP input [ x ; y ; x y ; x y ] and with a setting where both the inside and outside parameters are shared .",
"The results are shown in Table 6.",
"We see that MLP composition consistently performs better than with TreeLSTM, that MLP benefits from the Softmax loss, and that the best performance comes from sharing parameters.",
"All other experimental results use this highly performant setting unless otherwise specified.",
"All examples of DIORA parses are already binary.",
"Some punctuation has been removed for easier readability.",
"Looking at our model's output, we see that some trees are an exact replication of the binarized ground truth (Fig. 3), or very close (Fig. 4).",
"For future work we intend to explore common patterns in DIORA's learned structure, although some patterns are already recognizable, such as the affinity to group particles and verbs (Fig. 5).",
"Latent Tree Learning A brief survey of neural latent tree learning models was covered in (Williams et al., 2018a).",
"The first positive result for neural latent tree parsing was shown in (Htut et al., 2018), which used a language modeling objective.",
"The model in (Liu et al., 2018) uses an inside chart and an outside procedure to calculate marginal probabilities in order to align spans between sentences in entailment.",
"Neural Inside-Outside Parsers The Inside-Outside Recursive Neural Network (IORNN) (Le and Zuidema, 2014) is closest to ours.",
"It is a graph-based dependency parser that uses beam search and can reliably find accurate parses when retaining a k -best list.",
"In contrast, our model produces the most likely parse given the learned compatibility of the constituents.",
"The Neural CRF Parser (Durrett and Klein, 2015), similar to DIORA, performs exact inference on the structure of a sentence, although requires a set of grammar rules and labeled parse trees during training.",
"DIORA, like Liu et al. (2018), has a single grammar rule that applies to any pair of constituents and does not use structural supervision.",
"Learning from Raw Text Unsupervised learning of syntactic structure has been an active research area (Brill et al., 1990), including for unsupervised segmentation (Ando and Lee, 2000; Goldwater et al., 2009; Ponvert et al., 2011) and unsupervised dependency parsing (Spitkovsky et al., 2013).",
"Some models exploit the availability of parallel corpora in multiple languages (Das and Petrov, 2011; Cohen et al., 2011).",
"Others have shown that dependency parsing can be used for unsupervised constituency parsing (Spitkovsky et al., 2013; Klein and Manning, 2004), or that it's effective to prune a random subset of possible trees (Bod, 2006).",
"These approaches aren't necessarily orthogonal to DIORA.",
"For instance, our model may benefit when combined with an unsupervised dependency parser.",
"In this work we presented DIORA, an unsupervised method for inducing syntactic trees and representations of constituent spans.",
"We showed inside-outside representations constructed with a latent tree chart parser and trained with an autoencoder language modeling objective learns syntactic structure of language effectively.",
"In experiments on unsupervised parsing, chunking, and phrase representations we show our model is comparable to or outperforms previous methods, achieving the state-of-the-art performance on unsupervised unlabeled constituency parsing for the full WSJ (with punctuation), WSJ-40, and NLI datasets.",
"We also show our model obtains higher segment recall than a comparable model and outperforms strong baselines on phrase representations on a chunking dataset.",
"While the current model seems to focus primarily on syntax, future work can improve the model's ability to capture fine-grained semantics.",
"Potential avenues include training larger models over much larger corpora, extra unsupervised or weakly-supervised phrase classification objectives, and other modeling enhancements.",
"We are also eager to apply DIORA to other domains and languages which do not have rich linguistically annotated training sets.",
"We are grateful to Carolyn Anderson, Adina Williams, Phu Mon Htut, and our colleagues at UMass for help and advice, and to the UMass NLP reading group and the anonymous reviewers for feedback on drafts of this work.",
"This work was supported in part by the Center for Intelligent Information Retrieval, in part by the National Science Foundation (NSF) grant numbers DMR-1534431, IIS-1514053 and CNS-0958392.",
"Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor."
] | [
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"result",
"abstain",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"Abstract Automatic code summarization, which aims to describe the source code in natural language, has become an essential task in software maintenance.",
"Our fellow researchers have attempted to achieve such a purpose through various machine learning-based approaches.",
"One key challenge keeping these approaches from being practical lies in the lacking of retaining the semantic structure of source code, which has unfortunately been overlooked by the state-of-the-art methods.",
"Existing approaches resort to representing the syntax structure of code by modeling the Abstract Syntax Trees (ASTs).",
"However, the hierarchical structures of ASTs have not been well explored.",
"In this paper, we propose CODESCRIBE to model the hierarchical syntax structure of code by introducing a novel triplet position for code summarization.",
"Specifically, CODESCRIBE leverages the graph neural network and Transformer to preserve the structural and sequential information of code, respectively.",
"In addition, we propose a pointer-generator network that pays attention to both the structure and sequential tokens of code for a better summary generation.",
"Experiments on two real-world datasets in Java and Python demonstrate the effectiveness of our proposed approach when compared with several state-of-the-art baselines 1 .",
"Code documentation in the form of code comments has been an integral component of software development, benefiting software maintenance (Iyer et al., 2016), code categorization (Nguyen and Nguyen, 2017) and retrieval (Gu et al., 2018).",
"However, few real-world software projects are well-documented with high-quality comments.",
"Many projects are either inadequately documented due to missing important code comments or inconsistently documented due to different naming conven-1 The source code of CODESCRIBE is available at https: //github.com/GJCEXP/CODESCRIBE tions by developers, e.g., when programming in legacy code bases, resulting in high maintenance costs (de Souza et al., 2005; Kajko-Mattsson, 2005).",
"Therefore, automatic code summarization, which aims to generate natural language texts (i.e., a short paragraph) to describe a code fragment by extracting its semantics, becomes critically important for program understanding and software maintenance.",
"Recently, various works have been proposed for code summarization based on the encoder-decoder paradigm, which first encodes the code into a distributed vector, and then decodes it into natural-language summary.",
"Similarly, several works (Iyer et al., 2016; Allamanis et al., 2016) proposed to tokenize the source code into sequential tokens, and design RNN and CNN to represent them.",
"One limitation of these approaches is that they only consider the sequential lexical information of code.",
"To represent the syntax of code, several structural neural networks are designed to represent the Abstract Syntax Trees (AST) of code, e.g., TreeLSTM (Wan et al., 2018), TBCNN (Mou et al., 2016), and Graph Neural Networks (GNNs) (LeClair et al., 2020).",
"To further improve the efficiency on AST representation, various works (Hu et al., 2018a; Alon et al., 2019) proposed to linearize the ASTs into a sequence of nodes or paths.",
"Despite much progress on code summarization, there are still some limitations in code comprehension for generating high-quality comments.",
"Particularly, when linearizing the ASTs of code into sequential nodes or paths, the relationships between connected nodes are generally discarded.",
"Although the GNN-based approaches can well preserve the syntax structure of code, they are insensitive to the order of nodes in AST.",
"For example, given the expressions a=b/c and a=c/b , current approaches cannot capture the orders of variables b and c .",
"However, these orders are critical to accurately preserve the semantics of code.",
"paper proposes to model the hierarchical syntax structure of code using triplet position, inspired by the positional encoding used in sequence modeling (Gehring et al., 2017; Vaswani et al., 2017), and incorporates the triplet position into current GNNs for better code summarization.",
"The triplet position records the depth, width position of its parent, and width position among its siblings for each node.",
"To utilize the triplet position in AST, this paper proposes CODESCRIBE , an encoder-decoder-based neural network for source code summarization.",
"Specially, we initialize the embedding of each AST node by incorporating the triplet positional embeddings, and then feed them into an improved GNN, i.e., GraphSAGE (Hamilton et al., 2017) to represent the syntax of code.",
"In addition, we also account for the sequential information of code by using a Transformer encoder (Vaswani et al., 2017).",
"In such a case, the decoding process is performed over the learned structural features of AST and sequential features of code tokens with two multi-head attention modules.",
"To generate summaries with higher quality, we further design a pointer-generator network based on multi-head attention (Vaswani et al., 2017), which allows the summary tokens to be generated from the vocabulary or copied from the input source code tokens and ASTs.",
"To validate the effectiveness of our proposed CODESCRIBE , we conduct experiments on two real-world datasets in Java and Python.",
"Overall, the primary contributions of this paper are as follows.",
"It is the first time that we put forward a simple yet effective approach of triplet position to preserve the hierarchical syntax structure of source code accurately.",
"We also incorporate the triplet position into an adapted GNN (i.e., GraphSAGE) for source code summarization.",
"We conduct comprehensive experiments on two real-world datasets in Java and Python to evaluate the effectiveness of our proposed CODESCRIBE .",
"Experimental results on both datasets demonstrate the superiority of CODESCRIBE when comparing with several state-of-the-art baselines.",
"For example, we get 3.70/5.10/4.77% absolute gain on BLEU/METEOR/ROUGE-L metrics on the Java dataset, when comparing with the most recent mAST+GCN (Choi et al., 2021).",
"Recent studies have showed promising results by using AST context for tasks based on code representation learning (Yao et al., 2019; Zhang et al., 2019; Choi et al., 2021).",
"Therefore, our work also relies on AST information besides source code tokens.",
"As a type of intermediate representation, AST represents the hierarchical syntactic structure for source code, which is an ordered tree with labeled nodes (cf. Figure 1).",
"In this work, we divide the nodes into two categories: (1) function node that controls the structure of AST and function realization, e.g., Module and Assign in Figure 1, and (2) attribute node that provides the value or name of its parent function node, which is always visualized as leaf node, such as a ' and b ' in dotted boxes of Figure",
"1. Due to the strict construction rules of AST, positions are crucial for AST nodes.",
"For example in Figure 1, the node BinOp has two children with the same label Name .",
"If the positions of the two siblings are swapped, the source code will become a=c/b , which is totally different from the intent of the code a=b/c .",
"However, GNNs are insensitive to the positions of neighbouring nodes when encoding such tree structures.",
"Based on this obser-487 Probabilities of Next Summary Tokens 6 Multi-Head Att (cid:17) with Res.",
"vation, we specify triplet positions for AST nodes to retain accurate structural information in AST learning.",
"The triplet position of a node includes: (1) the depth of the node in the AST, (2) the width position of its parent node in the layer, and (3) the node's width position among its siblings, which can also distinguish function node from attribute node.",
"That is, the width position of a function node is a non-negative integer starting from 0, while the width position of an attribute node is a negative integer counting from",
"-1. Note that, width positions are estimated in a breadth traversal from left to right.",
"With such triplet indices specified, all nodes can be marked with unique positions in a given AST.",
"Taking a Python code snippet a=b/c as an example, Figure 1 illustrates its AST structure with triplet positions of nodes.",
"Specifically, by traversing the tree, we can represent the function node (Name,{2,0,0}) as the first child node of node (Assign,{1,0,0}) : the depth position 2 means the third level (counting from the top to bottom starting with 0); the second width position 0 means that the parent node Assign is the first function node at this level (counting from the left to right); and the third position 0 indicates that the node is the first (counting from left to right) among its siblings (i.e., all children nodes of node Assign ).",
"Another example is the node ( a ' ,{3,0,-1}) .",
"The difference lies in the third position that represents it is an attribute node and it is the first among the siblings.",
"In particular, we set the position of root node Module to {0,0,0} as it has no parent node.",
"This triplet positioning is very precise and unique, allowing to track and discriminate among the Name nodes which also include (Name,{3,1,0}) and (Name,{3,1,2}) .",
"Given a code snippet with l c tokens T c = ( c 1 , c 2 , . . . , c l c ) and sequential positions P c = (1 , 2 , . . . , l c ) , and its AST with l n nodes T n = ( n 1 , n 2 , . . . , n l n ) and triplet positions P n =",
"( { x 1 , y 1 , z 1 } , { x 2 , y 2 , z 2 } , . . . , { x l n , y l n , z l n } ) , CODESCRIBE predicts the next summary token s m based on the existing tokens T s = ( </s> , s 1 , s 2 , . . . , s m 1 , . . . ) with the sequential positions P s = (1 , 2 , . . . , l s ) , where </s> is a special starting tag for summary input.",
"Note that T s is padded to a maximum length of l s with special padding tags (e.g., <pad> s).",
"Figure 2 illustrates the architecture of CODESCRIBE model, which is mainly composed of four modules: source code encoder, AST encoder, summary decoder and multi-source pointer-generator network (MPG) for output.",
"As shown in Figure 2, the source code, AST, and summary tokens are firstly mapped into embedding vectors E 0 c R l c d , E 0 n R l n d , and E 0 s R l s d where d is the embedding size.",
"In the encoding process, the embedded code and AST are fed into Transformer encoder (Vaswani et al., 2017) and GNN layers respectively for learning the source code representation E (cid:48) c R l c d and the AST representation E (cid:48) n R l n d .",
"Then, the decoding process is performed to yield the decoded vector e (cid:48) s R d for the predicted summary token by fusing the learned source code and AST features (i.e., E (cid:48) c and E (cid:48) n ) as an initial state for decoding E 0 s .",
"At the decoding stage, we build MPG stacked on the decoder and encoders to predict the next summary token s m by selecting from summary vocabulary or copying from the input source code and AST tokens.",
"The detailed process will be further described in the following sub-sections.",
"Before feeding code tokens, AST nodes, and summary tokens into neural networks, it is essential to embed them into dense numerical vectors.",
"In this work, the source code tokens T c , AST nodes T n , and summary tokens T s are all embedded into numeric vectors with their related positions P c , P n , and P s incorporated through learnable positional embeddings (Gehring et al., 2017).",
"In particular for AST, we take each triplet position { x i , y i , z i } in P n as an individual tuple, and directly map it into a positional embedding vector e i R d .",
"The embedded triplet positional information is then added to the node embeddings for initializing the AST representation.",
"The embedding processes are formulated as follows: E 0 c = CNEmb ( T c ) d + CPEmb ( P c ) , E 0 n = CNEmb ( T n ) d + NPEmb ( P n ) , E 0 s = SEmb ( T s ) d + SPEmb ( P s ) , (1) where CNEmb denotes the shared embedding operation for source code tokens and AST nodes; SEmb means the token embedding operation for summary text; CPEmb , NPEmb , and SPEmb are the corresponding positional embedding operations.",
"Afterwards, the initialized representations E 0 c , E 0 n , and E 0 s are fed into the encoders and decoder of CODESCRIBE for in-depth processing.",
"Source Code Encoder.",
"As shown in Figure 2, the code encoder is composed of two identical layers.",
"And each layer consists of two sub-layers: multi-head attention mechanism and fully connected position-wise feed-forward network (FFN).",
"In addition, residual connection (He et al., 2016) and layer normalization (Ba et al., 2016) are performed in the two sub-layers for the sake of vanishing gradient problem in multi-layer processing and high offset of vectors in residual connection.",
"For the k -th layer, the process can be formulated as: H kc = LayerNorm ( E k 1 c + Att ( E k 1 c , E k 1 c , E k 1 c )) , E kc = LayerNorm ( H kc + FFN ( H kc )) , (2) where E k 1 c R l c d is the output vectors from the ( k 1 )-th layer ; LayerNorm denotes layer normalization; and Att means the multi-head attention (Vaswani et al., 2017) that takes query, key, and value vectors as inputs.",
"AST Encoder.",
"Considering that AST is a kind of graph, it can be learned by GNNs.",
"Since GraphSAGE (Hamilton et al., 2017) shows high efficiency and performance dealing with graphs, we introduce the idea of GraphSAGE and improve it by adding residual connection for AST encoding, as shown in Figure",
"2. The encoding layer processes the AST by firstly aggregating the neighbors of the nodes with edge information and then updating the nodes with their aggregated neighborhood information.",
"For a node i and its neighbors in the k -th layer, the process can be formulated as follows: h ki = W 1 e k 1 i + W 2 Aggr ( { e k 1 j , j N ( i ) } ) , (3) where e k 1 i R d means the vector representation of i -th node from the ( k 1 )-th layer; N ( i ) is the neighbors of the node i ; e k 1 j R d denotes the j -th neighbor vector for node i ; W 1 , W 2 R d d are learnable weight matrices; Aggr represents aggregation function.",
"With the increase of the number of layers, a node aggregates the neighborhood information from a deeper depth.",
"In order to achieve strong capability of aggregation, the AST encoder is composed of six layers.",
"And to mitigate gradient vanishing and high offset caused by multi-layer processing, we adopt residual connection (He et al., 2016) and layer normalization (Ba et al., 2016) in each layer for improvement, which is formulated as follows: E kn = LayerNorm ( H kn + E k 1 n ) .",
"Note that, E k 1 n R l n d in this formula denotes the output vectors of nodes from the ( k 1 )-th layer.",
"The decoder of CODESCRIBE is designed with six stacks of modified Transformer decoding blocks.",
"Given the existing summary tokens, the k -th decoding block firstly encodes them by masked multihead attention with residual connection and layer normalization, which is formalized as: H ks = LayerNorm ( E k 1 s + MaskAtt ( E k 1 s , E k 1 s , E k 1 s )) , (6) where E k 1 s R l s d is the output vectors from the ( k 1 )-th layer and MaskAtt denotes the masked multi-head attention (Vaswani et al., 2017).",
"After that, we expand the Transformer block by leveraging two multi-head attention modules to interact with the two encoders for summary decoding.",
"One multi-head attention module is performed over the AST features to get the first-stage decoded information, which will then be fed into the other over the learned source code for the second-stage decoding.",
"Then the decoded summary vectors are put into FFN for non-linear transformation.",
"The process can be formalized as follows: H ks,n = LayerNorm ( H ks + Att ( H ks , E (cid:48) n , E (cid:48) n )) , H ks,c = LayerNorm ( H ks,n + Att ( H ks,n , E (cid:48) c , E (cid:48) c )) , E ks = LayerNorm ( H ks,c + FFN ( H ks,c )) , (7) where E (cid:48) n and E (cid:48) c are the learned features of AST nodes and code tokens, respectively.",
"We present a multi-source pointer-generator network (MPG) on top of the decoder and encoders to yield the final probability of the next summary token.",
"Considering that tokens such as function names and variable names appear both in code and summary text (Ahmad et al., 2020), MPG is designed to allow CODESCRIBE to generate summary tokens both from the summary vocabulary and from the AST and source code.",
"Taking the m -th output token as an example, three probability distributions p v , p c , and p n will be calculated from decoded summary, code, and AST and determine the probabilities for the token.",
"To get the first probability distribution p v , a Linear sub-layer with Softmax is applied over the decoded summary token vector e (cid:48) s R d , as follows: p v = Softmax ( Linear ( e (cid:48) s )) .",
"For a token w , p v ( w ) = 0 if w is an out-of-vocabulary word to the summary vocabulary.",
"As for the distributions p c and p n , we only describe p c since the two have the similar calculation process.",
"In detail, our model applies an additional multi-head attention layer stacked on the last code encoding block and summary decoding block.",
"It takes the decoded summary token vector e (cid:48) s R d as query and the encoded code information E (cid:48) c R l c d as key and value: c = Att ( e (cid:48) s , E (cid:48) c , E (cid:48) c ) , c = Softmax ( Mean ( a 1 , a 2 , . . . , a i , . . . )) , a i = Softmax (cid:32) e (cid:48) s W Qi ( E (cid:48) c W Ki ) T d (cid:33) ( E (cid:48) c W Vi ) , (9) where W Qi , W Ki , and W Vi are learnable parameters.",
"The context vector c R d will be used for the final distribution.",
"Through the function Mean and Softmax , the attention vectors ( a 1 , a 2 , . . . , a i , . . . ) of all heads are averaged as c R l c .",
"For the token w , its probability p c ( w ) is formulated as follows: p c ( w ) = (cid:80) i : w i = w ci , (10) where w i means the i -th token in the source code.",
"Similarly, we can get n and p n corresponding to the AST.",
"After that, the final probability p s ( w ) of the token w is defined as a mixture of the three probabilities: p s ( w ) = v p v ( w ) + c p c ( w ) + n p n ( w ) , [ v , c , n ] = Softmax ( Linear ([ e (cid:48) s , c , n ])) , (11) where v , c , and n are the weight values for p v ( w ) , p c ( w ) , and p n ( w ) .",
"The higher the probability p s ( w ) is, the more likely the token w is considered as the next summary token.",
"We conduct experiments to answer the following research questions: (1) How effective is CODESCRIBE compared with the state-of-the-art baselines?",
"(2) How effective is the structure design of CODESCRIBE ?",
"(3) What is the impact of model size on the performance of CODESCRIBE ?",
"We also perform a qualitative analysis of two detailed examples.",
"The experiments are conducted based on two benchmarks: (1) Java dataset (Hu et al., 2018b) and (2) Python dataset (Wan et al., 2018).",
"The two datasets are split into train/valid/test sets with 69,708/8,714/8,714 and 55,538/18,505/18,502 , respectively.",
"In the experiments, we follow the divisions for the fairness of the results.",
"In the data preprocessing, NLTK package (Bird, 2006) is utilized for the tokenization of source code and summary text.",
"And we apply javalang 2 and ast 3 packages to parsing Java and Python code into ASTs.",
"In addition, the tokens in forms of Cammel-Case , snake_case , and concatenatecase are split into sub-tokens as Cammel Case , snake case , and concatenate case .",
"We leverage PyTorch 1.9 for CODESCRIBE implementation.",
"The model runs under the development environment of Python 3.9 with NVIDIA 2080 Ti GPUs and CUDA 10.2 supported.",
"We follow the previous works (Ahmad et al., 2020; Choi et al., 2021) and set all the embedding sizes of code tokens, AST nodes, and summary tokens to 512, and the number of attention headers to 8.",
"As described in Section 3, the numbers of layers of code encoder, AST encoder, and summary decoder are 2, 6, and 6, respectively.",
"The model is trained with Adam optimizer (Kingma and Ba, 2015).",
"We initialize the learning rate as 5 e 4 that will be decreased by 5% after each training epoch until to 2 .",
"5 e 5 .",
"The dropout rate is set to 0 .",
"2 .",
"We set the batch size to 96 and 160 for the Java and Python datasets, respectively.",
"The training process will terminate after 100 epochs or stop early if the performance does not improve for 10 epochs.",
"In addition, we leverage beam search (Koehn, 2004) during the model inference and set the beam width to 5.",
"We introduce eight state-of-the-art works as baselines for comparison, including six RNN-based",
"models and two Transformer-based models.",
"RNN-based Models.",
"Among these baselines, CODE-NN (Iyer et al., 2016), API+CODE (Hu et al., 2018b), and Dual Model (Wei et al., 2019) learn source code for summarization.",
"Tree2Seq (Eriguchi et al., 2016) and DeepCom (Hu et al., 2018a) generate summaries from AST features.",
"RL+Hybrid2Seq (Wan et al., 2018) combines source code and AST based on LSTM.",
"baselines include CopyTrans (Ahmad et al., 2020) and mAST+GCN (Choi et al., 2021), both of which leverage Transformer for code summary generation.",
"The main difference is that CopyTrans learns sequential source code, and mAST+GCN is built based on AST.",
"For the model evaluation, three metrics are introduced: BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004).",
"All the scores are presented in percentage.",
"We first evaluate the performance of CODESCRIBE by comparing it with eight state-of-the-art baselines.",
"The results of baselines are all from Choi et al. (2021) and are shown in Table",
"1. The overall results in Table 1 illustrate that the recent Transformer-based models (Ahmad et al., 2020; Choi et al., 2021) are superior to the previous works based on RNNs (Iyer et al., 2016; Eriguchi et al., 2016; Wan et al., 2018; Hu et al., 2018a,b; Wei et al., 2019).",
"Although the two models CopyTrans and mAST+GCN have high performance in code summarization, our approach CODESCRIBE performs much better than them both on the two datasets.",
"Intuitively, CODESCRIBE improves the performance (i.e., BLEU/METEOR/ROUGE-L) by 4.46/5.84/4.83% on the Java dataset and 2.59/3.71/3.73% on the Python dataset compared to CopyTrans.",
"In comparison with mAST+GCN, the performance of CODESCRIBE improves by 3.70/5.10/4.77% on the Java dataset and 2.29/3.36/3.65% on the Python dataset.",
"RNN-based models in code summarization task; (2) AST information contributes significantly to code comprehension; and (3) by incorporating both AST and source code into CODESCRIBE based on GraphSAGE and Transformer, the performance can be greatly improved due to its more comprehensive learning capacity for code and better decoding for summary generation.",
"This section validates the effectiveness of CODESCRIBE 's structure to by performing an ablation study on the Java dataset.",
"We firstly design five models for comparison that remove one of important components in CODESCRIBE including: (1) the AST encoder (R-AST), (2) the source code encoder (R-Code), (3) the triplet positions (R-ASTPos), (4) the MPG (R-Copy), and (5) the residual connection in the AST encoder (R-ASTRes).",
"We further investigate the rationality of CODESCRIBE 's structure by comparison with five variants: (1) V-Copy that replaces MPG with the copying mechanism (See et al., 2017) used in Ahmad et al. (2020), (2) V-GCN that replaces GraphSAGE with GCN (Kipf and Welling, 2017), (3) V-GAT that replaces GraphSAGE with GAT (Kipf and Welling, 2017), (4) V-Emb that replaces the shared embedding layer for code tokens and AST nodes with two independent embedding layers, and (5) V-Dec that reverses the decoding order for the source code and AST features.",
"As shown in Table 2, the performance of CODESCRIBE is affected if the components are removed.",
"The results of R-AST and R-Code show that the two encoders are the most significant learning components to CODESCRIBE .",
"Moreover, the AST encoder is more important than the code encoder as R-Code performs better than R-AST.",
"The performances of R-ASTPos and R-Copy indicate that the triplet positions for nodes and copying mechanism (MPG) we proposed are effective for CODESCRIBE in code summarization.",
"In addition, we find that R-ASTRes suffers from under-fitting on the Java dataset, which indicates that the residual connection in AST encoder has a powerful influence on CODESCRIBE .",
"As illustrated in Table 2, CODESCRIBE improves the performance by 0.26/0.22/0.30% on the Java dataset compared with V-Copy.",
"It indicates that our proposed MPG is more effective than the copying mechanism in Ahmad et al. (2020).",
"As for the GNN module in AST encoding, it can be observed that CODESCRIBE still has the higher performance than V-GCN and V-GAT.",
"This demonstrates the superiority of GrahpSAGE for the architecture of CODESCRIBE compared to GCN and GAT.",
"Compared with V-Emb, it shows that the shared embedding layer works better than two separated embedding layers for AST and source code.",
"The result of V-Dec turns out that the performance will not be affected sinificantly if the order of decoding over AST and code features is reversed.",
"The results on the Python dataset are presented in Table 7 in Appendix A. 4.6 Study on the Model Size (RQ3) This section studies the performance of CODESCRIBE with the change of model size 4 on the Java dataset.",
"To that end, we modify the number of layers of the encoders and the decoder respectively for performance observation and comparison.",
"Table 3 presents the performance of CODESCRIBE when the number of AST encoding layers 4 This work considers the number of trainable parameters in the encoders and decoder of CODESCRIBE as the model size to facilitate observation.",
"varies from 2 to 12.",
"The results show that the performance improves as the number of AST encoding layers increases from 2 to 6.",
"With the increase of the number from 6 to 12, the performance does not improve any more and is even impacted slightly.",
"As illustrated in Table 4, CODESCRIBE has the best performance with 2 code encoding layers.",
"With the number of code layers growing from 4 to 12, there is a trend of gradual decrease of the performance.",
"For the model size concerned with summary decoding layers, as shown in Table 5, the performance is getting better when the number of layers ranges from 2 to 6, and can not be improved as the number continues to increase.",
"The overall results show that it the performance of CODESCRIBE will not be improved if the encoders and the decoder become too deep (i.e. with more layers), especially for the source code encoder.",
"More experimental results are provided in Table 8 11 in Appendix B. 4.7 Case Study Table 6 shows the qualitative examples of R-AST, R-Copy, V-GCN, V-Dec, and CODESCRIBE on the two datasets.",
"From the table, it can be observed that CODESCRIBE with the whole architecture generates better code summaries compared with the four variants.",
"In the case on the Java dataset, only R-Copy and CODESCRIBE get the right intent of the code.",
"The other variants miss out the key word history .",
"In the case on the Python dataset, CODE Code Model BLEU METEOR ROUGE-L Layers Size ( 10 6 ) 2 40.99 49.19 32.27 59.59 4 47.30 48.80 32.15 59.32 6 53.60 48.92 32.10 59.30 8 59.91 48.73 31.95 58.95 10 66.21 49.11 31.97 59.09 12 72.52 48.36 31.59 58.59 Table 4: Performance of CODESCRIBE with different numbers of code encoding layers on the Java dataset.",
"SCRIBE generates the most accurate summary compared to the other variants.",
"In contrast, although the four variants output the first half of the summary (i.e., create an image ), the rest information from the value dictionary . can not be generated correctly.",
"More qualitative examples are referred to Table 12 and 13 in Appendix C. 5 Related Work With the development of deep learning, most works have considered code summarization as a sequence generation task.",
"In many of the recent approaches, source code snippets are modeled as plain texts based on RNNs (Iyer et al., 2016; Hu et al., 2018b; Wei et al., 2019; Ye et al., 2020).",
"For example, Hu et al. (2018b) proposed an RNN-based model that learns API knowledge from a different but related task and incorporates the knowledge into code summarization.",
"Wei et al. (2019) presented a dual learning framework based on LSTMs to train code generation and code summarization and improve the performances of both tasks.",
"Ye et al. (2020) combined code summarization and code generation to train the code retrieval task via multi-task learning, which achieved competitive performance for the code summarization task.",
"Most recently, Ahmad et al. (2020) applied Transformer to encoding the source code sequence to improve the summarization performance.",
"Since considering source code as plain text ignores the structural information in code, recent works have explored the AST of code and modeled the tree-based structure for code summarization.",
"Typically, Hu et al. (2018a) proposed a structure-based traversal (SBT) method to traverse ASTs into node sequences and used a sequence-to-sequence model based on LSTMs to generate code comments.",
"Alon et al. (2019) represented a code snippet as a set of compositional paths in its AST and used LSTMs to encode these paths.",
"Shido et al. (2019) extended Tree-LSTM (Tai et al., 2015) to Multi-way Tree-LSTM to learn the representation of AST for code summary generation.",
"Liu et al. (2020) built code property graph (CPG) (Ya-maguchi et al., 2014) based on AST and combined retrieval method and GNNs for describing C programming language.",
"The latest work (Choi et al., 2021) performed graph convolutional networks (GCNs) (Kipf and Welling, 2017) before Transformer framework to learn AST representation for summary generation.",
"To represent the code comprehensively, more and more works have paid attention to both the source code and the AST for code summarization.",
"For example, Hu et al. (2020) integrated both AST node sequence and source code into a hybrid learning framework based on GRUs.",
"Wei et al. (2020) and Zhang et al. (2020) both utilized the information retrieval techniques to improve the quality of code summaries that are generated from the code snippets and ASTs.",
"Wan et al. (2018) incorporated AST as well as sequential content of code snippet into a deep reinforcement learning framework based on LSTM and AST-based LSTM.",
"LeClair et al. (2020) proposed a graph-based neural architecture for code summarization, which uses GRUs and GCN to encode AST and GRUs to learn source code sequence.",
"Wang et al. (2022) presented the first hierarchical-attention based learning approach for code summarization by integrating source code, type-augmented AST, and control-flow graphs.",
"Recently, several pre-trained models, e.g., CodeBERT (Feng et al., 2020), CodeT5 (Wang et al., 2021), PLBART (Ahmad et al., 2021) and Co-TexT (Phan et al., 2021), have been proposed to better represent the source code, and verified on code summarization.",
"For example, CodeBERT (Feng et al., 2020) is a pre-trained model based on ELEC-TRA (Clark et al., 2020), which has achieved promising performance on downstream tasks including code summarization.",
"CodeT5 (Wang et al., 2021) considers the token type information in code and builds on the T5 architecture (Raffel et al., 2020) that utilizes denoising sequence-to-sequence pre-training.",
"PLBART (Ahmad et al., 2021) is another start-of-the-art pre-trained model on an extensive collection of Python and Java functions, as well as their natural language summaries via denoising auto-encoding.",
"Note that, our work is aim to introduce an encoder network with a novel triplet position to better represent the hierarchical structure of programs, rather than pre-training a language model for source code.",
"We think that our introduced encoder can be easily incorporated into the pre-training models through masking and predicting code tokens or code graphs.",
"We leave the comparison between our model and those mentioned pre-trained code models to future work.",
"This paper has presented CODESCRIBE , an encoder-decoder-based neural network for source code summarization.",
"CODESCRIBE designs a triplet position to model the hierarchical syntax structure of code, which is then incorporated into the Transformer and GNN based framework for better representation of lexical and syntax information of code, respectively.",
"The performance of CODESCRIBE is further enhanced by the introduced multi-source pointer generator in decoding.",
"Experiments on two benchmarks reveal that the summaries generated by CODESCRIBE are of higher quality when compared with several recent state-of-the-art works.",
"This work is supported by the National Natural Science Foundation of China under Grants 61972290.",
"Yao Wan is partially supported by the National Natural Science Foundation of China under Grant No. 62102157.",
"We would like to thank all the anonymous reviewers for their constructive comments on improving this paper."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"The journey of reducing noise from distant supervision (DS) generated training data has been started since the DS was first introduced into the relation extraction (RE) task.",
"For the past decade, researchers apply the multi-instance learning (MIL) framework to find the most reliable feature from a bag of sentences.",
"Although the pattern of MIL bags can greatly reduce DS noise, it fails to represent many other useful sentence features in the datasets.",
"In many cases, these sentence features can only be acquired by extra sentence-level human annotation with heavy costs.",
"Therefore, the performance of distantly supervised RE models is bounded.",
"In this paper, we go beyond typical MIL framework and propose a novel C ontrastive Instance Learning (CIL) framework.",
"Specifically, we regard the initial MIL as the relational triple encoder and constraint positive pairs against negative pairs for each instance.",
"Experiments demonstrate the effectiveness of our proposed framework, with significant improvements over the previous methods on NYT10, GDS and KBP.",
"Training a robust and unbiased RE system under DS data Corresponding author (cid:1710) (cid:1876) (cid:2869) (cid:2009) (cid:2869) (cid:2009) (cid:2870) (cid:2009) (cid:3014) (cid:1876) (cid:2870) (cid:1876) (cid:3014) (cid:1710) (cid:1710) (cid:1876) (cid:2869) (cid:1876) (cid:2870) (cid:1876) (cid:3014) (cid:3560)(cid:1828) (cid:1860) (cid:2869) (cid:1860) (cid:3014) (cid:1860) (cid:2870) (cid:1870) (cid:1870) (cid:1710) (cid:1876) (cid:2869) (cid:2009) (cid:2869) (cid:2009) (cid:2870) (cid:2009) (cid:3014) (cid:1876) (cid:2870) (cid:1876) (cid:3014) Sentence Encoder (cid:1876)(cid:1876)(cid:1876)(cid:1876)(cid:1876)(cid:1876)(cid:1876)(cid:1876)(cid:1876) (cid:2869)(cid:2869)(cid:2869)(cid:2869)(cid:2869) (cid:1710) (cid:1876)(cid:1876)(cid:1876)(cid:1876)(cid:1876)(cid:1876)(cid:1876)(cid:1876) (cid:2870)(cid:2870)(cid:2870)(cid:2870)(cid:2870) (cid:1876) (cid:2869) (cid:1876) (cid:2870) (cid:1876) (cid:3014) (cid:3560)(cid:1828) (cid:1710) (cid:1860) (cid:2869) (cid:1860) (cid:3014) (cid:1860) (cid:2870) (cid:1870) KB Fact [ (cid:1857) (cid:2869) , (cid:1857) (cid:2870) , (cid:1870) ] (cid:1828) Figure 1: Classical MIL framework for DSRE.",
"Relation extraction (RE) aims at predicting the relation between entities based on their context.",
"Several studies have been carried out to handle this crucial and complicated task over decades as the extracted information can serve as a significant role for many downstream tasks.",
"Since the amount of training data generally limits traditional supervised RE systems, current RE systems usually resort to distant supervision (DS) to fetch abundant training data by aligning knowledge bases (KBs) and texts.",
"However, such a heuristic way inevitably introduces some noise to the generated data.",
"(Left)",
"A set of instances ( x 1 , x 2 , . . . , x m ) with the same KB fact [ e 1 , e 2 , r ] form a bag B ; (Right) The MIL framework trains the DSRE model at bag level( (cid:101) B : (cid:80) i h i ).",
"noise becomes the biggest challenge for distantly supervised relation extraction (DSRE).",
"With awareness of the existing DS noise, Zeng et al. (2015) introduces the multi-instance learning (MIL) framework to DSRE by dividing training instances into several bags and using bags as new data units.",
"Regarding the strategy for selecting instances inside the bag, the soft attention mechanism proposed by Lin et al. (2016) is widely used for its better performance than the hard selection method.",
"The ability to form accurate representations from noisy data makes the MIL framework soon become a paradigm of following-up works.",
"However, we argue that the MIL framework is effective to alleviate data noise for DSRE, but is not data-efficient indeed: As Figure 1 shows: The attention mechanism in the MIL can help select relatively informative instances ( e.g. h 1 , h 2 ) inside the bag, but may ignore the potential information of other abundant instances ( e.g. h m ).",
"In other words, no matter how many instances a bag contains, only the formed bag-level representation can be used for further training in the MIL, which is quite ineffi-cient.",
"Thus, our focus is on how to make the initial MIL framework efficient enough to leverage all instances while maintaining the ability to obtain an accurate model under DS data noise ?",
"Here, we propose a contrastive-based method to help the MIL framework learn efficiently.",
"In detail, we regard the initial MIL framework as the bag encoder, which provides relatively accurate representations for different relational triples.",
"Then we develop contrastive instance learning (CIL) to utilize each instance in an unsupervised manner: In short, the goal of our CIL is that the instances sharing the same relational triples ( i.e. positive pairs) ought to be close in the semantic space, while the representations of instances with different relational triples ( i.e. negative pairs) should be far away.",
"Experiments on three public DSRE benchmarks NYT10 (Riedel et al., 2010; Hoffmann et al., 2011), GDS (Jat et al., 2018) and KBP (Ling and Weld, 2012) demonstrate the effectiveness of our proposed framework CIL, with consistent improvements over several baseline models and far exceed the state-of-the-art (SOTA) systems.",
"Furthermore, the ablation study shows the rationality of our proposed positive/negative pair construction strategy.",
"We discuss the long-standing MIL framework and point out that it can not effectively utilize abundant instances inside MIL bags.",
"We propose a novel contrastive instance learning method to boost the DSRE model performances under the MIL framework.",
"Evaluation on held-out and human-annotated sets shows that CIL leads to significant improvements over the previous SOTA models.",
"In this paper, we argue that the MIL framework is effective to denoise but is not efficient enough, as the initial MIL framework only leverages the formed bag-level representations to train models and sacrifices the potential information of numerous instances inside bags.",
"Here, we go beyond the typical MIL framework and develop a novel contrastive instance learning framework to solve the above issue, which can prompt DSRE models to utilize each instance.",
"A formal description of our proposed CIL framework is illustrated as follows.",
"tokens: ( t 1 , t 2 , . . . e 1 . . . e 2 . . . t L ), where e 1 , e 2 are the tokens corresponding to the two entities, and L is the max length of all input sequences.",
"Following standard practices (Devlin et al., 2019), we add two special tokens to mark the beginning ([CLS]) and the end ([SEP]) of sentences.",
"In BERT, token [CLS] typically acts as a pooling token representing the whole sequence for downstream tasks.",
"However, this pooling representation considers entity tokens e 1 and e 2 as equivalent to other common word tokens t i , which has been proven (Baldini Soares et al., 2019) to be unsuitable for RE tasks.",
"To encode the sentence in an entity-aware manner, we add four extra special tokens ([H-CLS], [H-SEP]) and ([T-CLS], [T-SEP]) to mark the beginning and the end of two entities.",
"Position Embedding In the Transformer attention mechanism (Vaswani et al., 2017), positional encodings are injected to make use of the order of the sequence.",
"Precisely, the learned position embedding has the same dimension as the token embedding so that the two can be summed.",
"(cid:1876) Figure 2: BERT Encoder: N Transformer Blocks.",
"BERT Encoder (Transformer Blocks, see Figure 2) transforms the above embedding inputs (token embedding & position embedding) into hidden feature vectors: ( h 1 , h 2 , . . . h e 1 . . . h e 2 . . . h L ) , where h e 1 and h e 2 are the feature vectors corresponding to the entities e 1 and e 2 .",
"By concatenating the two entity hidden vectors, we can obtain the entity-aware sentence representation h = [ h e 1 ; h e 2 ] for the input sequence x .",
"We denote the sentence encoder H as: H ( x ) = [ h e 1 ; h e 2 ] = h 2.3 Bag Encoder Under the MIL framework, a couple of instances x with the same relational triple [ e 1 , e 2 , r ] form a bag B .",
"We aim to design a bag encoder F to obtain representation (cid:101) B for bag B , and the obtained bag representation is also a representative of the current relational triple [ e 1 , e 2 , r ] , which is defined as: F ( B ) = F ([ e 1 , e 2 , r ]) = (cid:101) B With the help of the sentence encoder described in section 2.2, each instance x i in bag B can be first encoded to its entity-aware sentence representation h i = H ( x i ) .",
"Then the bag representation (cid:101) B can be regarded as an aggregation of all instances' representations, which is further defined as: F ([ e 1 , e 2 , r ]) = (cid:101) B = K (cid:88) i =1 i h i where K is the bag size.",
"As for the choice of weight i , we follow the soft attention mechanism used in (Lin et al., 2016), where i is the normalized attention score calculated by a query-based function f i that measures how well the sentence representation h i and the predict relation r matches: i = e f i (cid:80) j e f j where f i = h i Aq r , A is a weighted diagonal matrix and q r is the query vector which indicates the representation of relation r (randomly initialized).",
"Then, to train such a bag encoder parameterized by , a simple fully-connected layer with activation function softmax is added to map the hidden feature vector (cid:101) B to a conditional probability distribution p ( r | (cid:101) B, ) , and this can be defined as: p ( r | (cid:101) B, ) = e o r (cid:80) n r i =1 e o i where o = M (cid:101) B + b is the score associated to all relation types, n r is the total number of relations, M is a projection matrix, and b is the bias term.",
"And we define the objective of bag encoder using cross-entropy function as follows: LB ( ) = (cid:88) i =1 log p ( r i | (cid:101) B i , ) 2.4 Contrastive Instance Learning As illustrated in section 1, the goal of our framework CIL is that the instances containing the same relational triples ( i.e. positive pairs) should be as close ( i.e. ) as possible in the hidden semantic space, and the instances containing different relational triples ( i.e. negative pairs) should be as far ( i.e. (cid:28) ) away as possible in the space.",
"A formal description is as follows.",
"Assume there is a batch bag input (with a batch size G ): ( B 1 , B 2 , . . . , BG ) , the relational triples of all bags are different from each other.",
"Each bag B in the batch is constructed by a certain relational triple [ e 1 , e 2 , r ] , and all instances x inside the bag satisfy this triple.",
"The representation of the triple can be obtained by bag encoder as (cid:101) B .",
"We pick any two bags B s and B t : t (cid:54) = s in the batch to further illustrate the process of contrastive instance learning.",
"B s is defined as the source bag constructed with relational triple [ e s 1 , e s 2 , r s ] while B t is the target bag constructed with triple [ e t 1 , e t 2 , r t ] .",
"And we discuss the positive pair instance and negative pair instances for any instance x s in bag B s .",
"It is worth noting that all bags are constructed automatically by the distantly supervised method, which extracts relational triples from instances in a heuristic manner and may introduce true/false positive label noise to the generated data.",
"In other words, though the instance x is included in the bag with relational triple [ e 1 , e 2 , r ] , it may be noisy and fail to express the relation r .",
"Instance x s Random Instance x s (cid:48) One intuitive choice of selecting positive pair instance for instance x s is just picking another instance x s (cid:48) (cid:54) = x s from the bag B randomly.",
"However, both of the instances x s and x s (cid:48) may suffer from data noise, and they are hard to express the same relational triple simultaneously.",
"Thus, taking instance x s and randomly selected instance x s (cid:48) as a positive pair is not an optimal option.",
"(cid:1876) (cid:1876) (cid:1876) Figure 3: Instance x s Random Instance x s (cid:48)",
"the relational triple representation (cid:101) B s of current bag B .",
"Though (cid:101) B s can be regarded as a de-noised representation, x s may be still noisy and express other relation r (cid:54) = r s .",
"Besides, the quality of constructed positive pairs heavily relies on the model performance of the bag encoder.",
"(cid:1710) (cid:1876) (cid:3046) (cid:1876) (cid:3046)(cid:4593) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1876) (cid:3046) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1876) (cid:3046) (cid:1876) (cid:3046)(cid:1499) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1710) (cid:1876) (cid:2869) (cid:2009) (cid:2869) (cid:2009) (cid:2870) (cid:2009) (cid:3014) (cid:1876) (cid:2870) (cid:1876) (cid:3014) (cid:1710) (cid:1710) (cid:1876) (cid:2869) (cid:1876) (cid:2870) (cid:1876) (cid:3014) (cid:1860) (cid:2869) (cid:1860) (cid:3014) (cid:1860) (cid:2870) (cid:1710) (cid:1876) (cid:3046) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:3560)(cid:1828) (cid:3047) (cid:1876) (cid:3047) (cid:1710) (cid:1876) (cid:3046) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:3560)(cid:1828) (cid:3047) (cid:1710) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:3560)(cid:1828) (cid:3047) (cid:1876) (cid:3046)(cid:1499) (cid:1710) (cid:1710) (cid:1870) (cid:3046) (cid:1870) (cid:3047) (cid:1876) (cid:3046) (cid:1876) (cid:3047) (cid:1876) (cid:3047)(cid:1499) Figure 4: Instance x s Relational Triple (cid:101) B s Instance x s Augmented Instance x s From the above analysis, we can see that the general positive pair construction methods often encounter the challenge of DS noise.",
"(cid:1876) (cid:3047)(cid:1499) (cid:3560)(cid:1828) (cid:3047) (cid:1876) (cid:3047) (cid:1710) (cid:1876) (cid:3046) (cid:1876) (cid:3046)(cid:4593) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1876) (cid:3046) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1876) (cid:3046) (cid:1876) (cid:3046)(cid:1499) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1710) (cid:1876) (cid:2869) (cid:2009) (cid:2869) (cid:2009) (cid:2870) (cid:2009) (cid:3014) (cid:1876) (cid:2870) (cid:1876) (cid:3014) (cid:1710) (cid:1710) (cid:1876) (cid:2869) (cid:1876) (cid:2870) (cid:1876) (cid:3014) (cid:1860) (cid:2869) (cid:1860) (cid:3014) (cid:1860) (cid:2870) (cid:1710) (cid:1876) (cid:3046) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1710) (cid:1876) (cid:3046) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:3560)(cid:1828) (cid:3047) (cid:1710) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:3560)(cid:1828) (cid:3047) (cid:1876) (cid:3046)(cid:1499) (cid:1710) (cid:1710) (cid:1870) (cid:3046) (cid:1870) (cid:3047) (cid:1876) (cid:3046) (cid:1876) (cid:3047) Figure 7: Instance x s (cid:28) Random Instance x t Instance x s (cid:28) Relational Triple (cid:101) B t Compared to the random selection strategy, using relational triple representation (cid:101) B t as the negative pair instance for x s is a better choice to reduce the im-pact of data noise.",
"Here, we propose a noise-free positive pair construction method based on TF-IDF data augmentation: If we only make small and controllable data augmentation to the original instance x s , the augmented instance x s should satisfy the same relational triple with instance x s .",
"(cid:1710) (cid:1876) (cid:3046) (cid:1876) (cid:3046)(cid:4593) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1876) (cid:3046) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1876) (cid:3046) (cid:1876) (cid:3046)(cid:1499) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1710) (cid:1876) (cid:2869) (cid:2009) (cid:2869) (cid:2009) (cid:2870) (cid:2009) (cid:3014) (cid:1876) (cid:2870) (cid:1876) (cid:3014) (cid:1710) (cid:1710) (cid:1876) (cid:2869) (cid:1876) (cid:2870) (cid:1876) (cid:3014) (cid:1860) (cid:2869) (cid:1860) (cid:3014) (cid:1860) (cid:2870) (cid:1710) (cid:1876) (cid:3046) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:3560)(cid:1828) (cid:3047) (cid:1876) (cid:3047) (cid:1710) (cid:1876) (cid:3046) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:3560)(cid:1828) (cid:3047) (cid:1710) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:3560)(cid:1828) (cid:3047) (cid:1876) (cid:3046)(cid:1499) (cid:1710) (cid:1710) (cid:1870) (cid:3046) (cid:1870) (cid:3047) (cid:1876) (cid:3046) (cid:1876) (cid:3047) (cid:1876) (cid:3047)(cid:1499) Figure 5: Instance x s Augmented Instance x s In detail: (1) We first view each instance as a document and view each word in the instances as a term, then we train a TF-IDF model on the total training corpus.",
"(cid:1876) (cid:3047)(cid:1499) (cid:3560)(cid:1828) (cid:3047) (cid:1876) (cid:3047) (cid:1710) (cid:1876) (cid:3046) (cid:1876) (cid:3046)(cid:4593) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1876) (cid:3046) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1876) (cid:3046) (cid:1876) (cid:3046)(cid:1499) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1710) (cid:1876) (cid:2869) (cid:2009) (cid:2869) (cid:2009) (cid:2870) (cid:2009) (cid:3014) (cid:1876) (cid:2870) (cid:1876) (cid:3014) (cid:1710) (cid:1710) (cid:1876) (cid:2869) (cid:1876) (cid:2870) (cid:1876) (cid:3014) (cid:1860) (cid:2869) (cid:1860) (cid:3014) (cid:1860) (cid:2870) (cid:1710) (cid:1876) (cid:3046) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:1710) (cid:1876) (cid:3046) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:3560)(cid:1828) (cid:3047) (cid:1710) (cid:3560)(cid:1828) (cid:3046) (cid:1710) (cid:3560)(cid:1828) (cid:3047) (cid:1876) (cid:3046)(cid:1499) (cid:1710) (cid:1710) (cid:1870) (cid:3046) (cid:1870) (cid:3047) (cid:1876) (cid:3046) (cid:1876) (cid:3047) Figure 8: Instance x s (cid:28) Relational Triple (cid:101) B t 2.5 Training Objective As discussed above, for any instance x s in the source bag B s : (1) The instance x s after controllable data augmentation based on x s is its positive pair instance.",
"2.4.2 Negative Pair Construction Instance x s (cid:28) Random Instance x t Similarly, for instance x s in bag B s , we can randomly select an instance x t from another different bag B t as its negative pair instance.",
"Under this strategy, x s is far away from the average representation (cid:80) Ki =1 i h i of the bag B t , where all i = 1 K approximately.",
"And the randomly selected instance x t may be too noisy to represent the relational triple of bag B t , so that the model performance may be influenced.",
"As the instance x i can be seen as be far away from a weighted representation (cid:80) Ki =1 i h i of the bag B t , where all i are learnable.",
"Though the instance x s may still be noisy, x s and (cid:101) B t can not belong to the same relational triple.",
"(2) The relational triple representations (cid:101) B t of other different ( t (cid:54) = s ) bags in the batch are its negative pair instances.",
"The overall schematic diagram of CIL is shown in Figure 9.",
"(2) Based on the trained TF-IDF model, we insert/substitute some unimportant (low TF-IDF score, see Figure 6) words to/in instance x s with a specific ratio, and can obtain its augmented instance x s .",
"Particularly, special masks are added to entity words to avoid them being substituted.",
"And we define the objective for instance x s in",
"bag B s using InfoNCE (Oord et al., 2018) loss:",
"where sim ( a, b ) is the function to measure the similarity between two representation vectors a, b , and h s = H ( x s ) , h s = H ( x s ) are the sentence representations of instances x s , x s .",
"Besides, to inherit the ability of language understanding from BERT and avoid catastrophic forgetting (McCloskey and Cohen, 1989), we also add the masked language modeling (MLM) objective to our framework.",
"Pre-text task MLM randomly masks some tokens in the inputs and allows the model to predict the masked tokens, which prompts the model to capture rich semantic information in the contexts.",
"And we denote this objective as LM ( ) .",
"Accordingly, the total training objective of our contrastive instance learning framework is: L ( ) = ( t ) N (cid:88) B (cid:88) x BLC ( x ; )+ LB ( )+ MLM ( ) where N = KG is the total number of instances in the batch, M is the weight of language model objective LM , and ( t ) [0 , 1] is an increasing function related to the relative training steps t : ( t ) = 2 1 + e t 1 At the beginning of our training, the value of ( t ) is relatively small, and our framework CIL focuses on obtaining an accurate bag encoder ( LB ).",
"The value of ( t ) gradually increases to 1 as the relative training steps t increases, and more attention is paid to the contrastive instance learning ( LC ).",
"Our experiments are designed to verify the effectiveness of our proposed framework CIL.",
"We evaluate our method on three popular DSRE benchmarks NYT10, GDS and KBP, and the dataset statistics are listed in Table",
"1. NYT10 (Riedel et al., 2010) aligns Freebase entity relations with New York Times corpus, and it has two test set versions: (1) NYT10-D employs held-out KB facts as the test set and is still under distantly supervised.",
"(2) NYT10-H is constructed manually by (Hoffmann et al., 2011), which contains 395 sentences with human annotations.",
"GDS (Jat et al., 2018) is created by extending the Google RE corpus with additional instances for each entity pair, and this dataset assures that the at-least-one assumption of MIL always holds.",
"KBP (Ling and Weld, 2012) uses Wikipedia articles annotated with Freebase entries as the training set, and employs manually-annotated sentences from 2013 KBP slot filling assessment results (Ellis et al., 2012) as the extra test set.",
"Following previous literature (Lin et al., 2016; Vashishth et al., 2018; Alt et al., 2019), we first conduct a held-out evaluation to measure model performances approximately on NYT10-D and GDS.",
"Besides, we also conduct an evaluation on two human-annotated datasets (NYT10-H & KBP) to further support our claims.",
"Specifically, Precision-Recall curves (PR-curve) are drawn to show the trade-off between model precision and recall, the Area Under Curve (AUC) metric is used to evaluate overall model performances, and the Precision at N (P@N) metric is also reported to consider the accuracy value for different cut-offs.",
"Mintz (Mintz et al., 2009) A multi-class logistic regression RE model under DS setting.",
"PCNN-ATT (Lin et al., 2016) A piece-wise CNN model with selective attention over instances.",
"MTB-MIL (Baldini Soares et al., 2019) A relation learning method based on distributional similarity, achieves amazing results for supervised RE 1 .",
"RESIDE (Vashishth et al., 2018) A NN model that makes use of relevant side information (entity types and relational phrases) and employs Graph CNN to capture syntactic information of instances.",
"1 For MTB-MIL, we firstly conduct MTB pre-training to learn relation representations on the entire training corpus and continually fine-tune the model by the MIL framework.",
"REDSandT (Christou and Tsoumakas, 2021) A transformer-based DSRE method that manages to capture highly informative instance and label embeddings by exploiting BERT pre-trained model.",
"DISTRE (Alt et al., 2019) A transformer-based model, GPT fine-tuned for DSRE under the MIL.",
"We summarize the model performances of our method and above-mentioned baseline models in Table",
"2. From the results, we can observe that: (1) On both two datasets, our proposed framework CIL achieves the best performance in all metrics.",
"(2) On NYT10-D, compared with the previous SOTA model DISTRE, CIL improves the metric AUC (42.2 50.8) by 20.4% and the metric P@Mean (66.8 86.0) by 28.7%.",
"(3) On GDS, though the metric of previous models is already high ( 90 . 0 ), our model still improves it by nearly 2 percentage points.",
"(89.9 91.6 & 92.8 94.1).",
"The overall PR-curve on NYT10-D is visualized in Figure 10.",
"From the curve, we can observe that: (1) Compared to PR-curves of other baseline models, our method shifts up the curve a lot.",
"(2) Previous SOTA model DISTRE performs worse than model RESIDE at the beginning of the curve and yields a better performance after a recall-level of approximately 0.25, and our method CIL surpasses previous two SOTA models in all ranges along the curve, and it is more balanced between precision and recall.",
"(3) Furthermore, as a SOTA scheme of relation learning, MTB fails to achieve competitive results for DSRE.",
"This is because MTB relies on label information for pre-training, and noisy labels in DSRE may influence its model performance.",
"The automated held-out evaluation may not reflect the actual performance of DSRE models, as it gives false positive/negative labels and incomplete KB information.",
"Thus, to further support our claims, we also evaluate our method on two human-annotated datasets, and the results 2 are listed in Table 3.",
"From the above result table, we can see that: (1) Our proposed framework CIL can still perform well under accurate human evaluation, with averagely 21.7% AUC improvement on NYT10-H and 36.2% on KBP, which means our method can generalize 2 Manual evaluation is performed for each test sentence.",
"to real scenarios well.",
"(2) On NYT10-H, DISTRE fails to surpass PCNN-ATT in metric P@Mean.",
"This indicates that DISTRE gives a high recall but a low precision, but our method CIL can boost the model precision (54.1 63.0) while continuously improving the model recall (37.8 46.0).",
"And the human evaluation results further confirm the observations in the held-out evaluation described above.",
"We also present the PR-curve on KBP in Figure 11.",
"Under accurate sentence-level evaluation on KBP, the advantage of our model is more obvious with averagely 36.2% improvement on AUC, 17.3% on F1 and 3.9% on P@Mean, respectively.",
"We firstly conduct an ablation experiment to verify that CIL has utilized abundant instances inside bags: (1) By removing our proposed contrastive instance learning, the framework degenerates into vanilla MIL framework, and we train the MIL on regular bags (MIL bag ).",
"(2) To prove the MIL can not make full use of sentences, we also train the MIL on sentence bags (MIL sent ), which repeats each sentence in the training corpus to form a bag 3 .",
"From Table 4 we can see that: (1) MIL bag only resorts to the accurate bag-level representations to train the model and fails to play the role of each instance inside bags; thus, it performs worse than our method CIL (50.8 40.3).",
"(2) Though MIL sent can access all training sentences, it loses the advantages of noise reduction in MIL bag (40.3 30.6).",
"The noisy label supervision may wrongly guide model training, and its model performance heavily suffers from DS data noise (86.0 63.3).",
"(3) Our framework CIL succeeds in leveraging abundant instances while retaining the ability to denoise.",
"To validate the rationality of our proposed pos-itive/negative pair construction strategy, we also conduct an ablation study on three variants of our framework CIL.",
"We denote these variants as: CIL randpos : Randomly select an instance x s (cid:48) also from bag B s as the positive pair instance for x s .",
"CIL bagpos : Just take the relational triple representation (cid:101) B s as the positive pair instance for x s .",
"CIL randneg : Randomly select an instance x t from another bag B t as the negative pair instance for x s .",
"As the previous analysis in section 2.4, the three variants of our CIL framework may suffer from DS noise: (1) Both variants CIL randpos and CIL bagpos may construct noisy positive pairs; therefore, their model performances have a little drop (50.8 49.2, 50.8 47.8).",
"Besides, the variant CIL bagpos also relies on the bag encoder, for which it performs worse than the variant CIL randpos (49.2 47.8).",
"(2) Though the constructed negative pairs need not be as accurate as positive pairs, the variant CIL randneg treats all instances equally, which gives up the advantage of formed accurate representations.",
"Thus, its model performance also declines (50.8 48.4).",
"We select a typical bag (see Table 6) from the training set to better illustrate the difference between MIL sent , MIL bag and our framework CIL.",
"Under MIL sent pattern, both S1, S2 are used for model training, and the noisy sentence S2 may confuse the model.",
"As for MIL bag pattern, S1 is assigned with a high attention score while S2 has a low attention score.",
"However, MIL bag only relies on the bag-level representations, and sentences like S2 can not be used efficiently.",
"Our framework CIL makes full use of all instances (S1, S2) and avoids the negative effect of DS data noise from S2.",
"Our work is related to DSRE, pre-trained language models, and recent contrastive learning methods.",
"DSRE Traditional supervised RE systems heavily rely on the large-scale human-annotated dataset, which is quite expensive and time-consuming.",
"Distant supervision is then introduced to the RE field, and it aligns training corpus with KB facts to generate data automatically.",
"However, such a heuristic process results in data noise and causes classical supervised RE models hard to train.",
"To solve this issue, Lin et al. (2016) applied the multi-instance learning framework with selective attention mechanism over all instances, and it helps RE models learn under DS data noise.",
"Following the MIL framework, recent works improve DSRE models from many different aspects: (1) Yuan et al. (2019) adopted relation-aware attention and constructed super bags to alleviate the problem of bag label error.",
"(2) Ye et al. (2019) analyzed the label distribution of dataset and found the shifted label problem that significantly influences the performance of DSRE models.",
"(3) Vashishth et al. (2018) employed Graph Convolution Networks (Defferrard et al., 2016) to encode syntactic information from the text and improves DSRE models with additional side information from KBs.",
"(4) Alt et al. (2019) extended the GPT to the DSRE, and fine-tuned it to achieve SOTA model performance.",
"Pre-trained LM Recently pre-trained language models achieved great success in the NLP field.",
"Vaswani et al. (2017) proposed a self-attention based architecture Transformer, and it soon becomes the backbone of many following LMs.",
"By pre-training on a large-scale corpus, BERT (Devlin et al., 2019) obtains the ability to capture a notable amount of common-sense knowledge and gains significant improvements on many tasks following the fine-tune scheme.",
"At the same time, GPT (Rad-ford et al., 2018), XL-Net (Yang et al., 2019) and GPT2 (Radford et al., 2019) are also well-known pre-trained representatives with excellent transfer learning ability.",
"Moreover, some works (Radford et al., 2019) found that considerably increasing the size of LM results in even better generalization to downstream tasks.",
"Contrastive Learning As a popular unsupervised method, contrastive learning aims to learn representations by contrasting positive pairs against negative pairs (Hadsell et al., 2006; Oord et al., 2018; Chen et al., 2020; He et al., 2020).",
"Wu et al. (2018) proposed to use the non-parametric instance-level discrimination to leverage more information in the data samples.",
"Our approach, however, achieves the goal of data-efficiency in a more complicated MIL setting: instead of contrasting the instance-level information during training, we find that instance-bag negative pair is the most effective method, which constitutes one of our main contributions.",
"In the NLP field, Dai and Lin (2017) proposed to use contrastive learning for image caption, and Clark et al. (2020) trained a discriminative model for language representation learning.",
"Recent literature (Peng et al., 2020) has also attempted to relate the contrastive pre-training scheme to classical supervised RE task.",
"Different from our work, Peng et al. (2020) aims to utilize abundant DS data and help classical supervised RE models learn a better relation representation, while our CIL focuses on learning an effective and efficient DSRE model under DS data noise.",
"In this work, we discuss the long-standing DSRE framework ( i.e. MIL) and argue the MIL is not efficient enough, as it aims to form accurate bag-level representations but sacrifices the potential information",
"information of abundant instances inside MIL bags.",
"Thus, we propose a contrastive instance learning method CIL to boost the MIL model performances.",
"Experiments have shown the effectiveness of our CIL with stable and significant improvements over several baseline models, including current SOTA systems.",
"This work has been supported in part by National Key Research and Development Program of China (2018AAA0101900), Zhejiang NSF (LR21F020004), Key Technologies and Systems of Humanoid Intelligence based on Big Data (Phase ii) (2018YFB1005100), Zhejiang University iFLY-TEK Joint Research Center, Funds from City Cloud Technology (China) Co.",
"Ltd., Zhejiang University-Tongdun Technology Joint Laboratory of Artificial Intelligence, Chinese Knowledge Center of Engineering Science and Technology (CKCEST)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"method",
"objective",
"abstain",
"objective",
"method",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"objective",
"abstain",
"objective",
"result",
"other",
"other"
] |
[
"Recent end-to-end task oriented dialog systems use memory architectures to incorporate external knowledge in their dialogs.",
"Current work makes simplifying assumptions about the structure of the knowledge base (such as the use of triples to represent knowledge) and combines dialog utterances (context), as well as, knowledge base (KB) results, as part of the same memory.",
"This causes an explosion in the memory size, and makes reasoning over memory, harder.",
"In addition, such a memory design forces hierarchical properties of the data to be fit into a triple structure of memory.",
"This requires the memory reader to learn how to infer relationships across otherwise connected attributes.",
"In this paper we relax the strong assumptions made by existing architectures and use separate memories for modeling dialog context and KB results.",
"Instead of using triples to store KB results, we introduce a novel multilevel memory architecture consisting of cells for each query and their corresponding results.",
"The multi-level memory first addresses queries, followed by results and finally each key-value pair within a result.",
"We conduct detailed experiments on three publicly available task oriented dialog data sets and we find that our method conclusively outperforms current state-of-the-art models.",
"We report a 15-25% increase in both entity F1 and BLEU scores.",
"Task oriented dialog systems are designed to complete a user specified goal, or service an information request using natural language exchanges.",
"Unlike open domain end-to-end neural dialog models, task oriented systems rely on external knowledge sources, outside of the current conversation context, to return a response (Henderson Work done during internship at IBM Research AI et al., 2014a; Su et al., 2016; Bordes and Weston, 2017a; Eric and Manning, 2017; El Asri et al., 2017).",
"For instance, in the example shown in Table 1, a dialog agent giving tour package recommendations needs to be able to first query an external knowledge source to determine packages that meet a user's requirement, and then respond accordingly.",
"In order to enable end-to-end goal oriented dialog tasks, current state of the art methods use neural memory architectures to incorporate external knowledge (Su et al., 2016; Eric and Manning, 2017; Madotto et al., 2018).",
"As can be seen in Table 1, agent responses may also include entity values present only in the dialog context (eg: Mu-nich in the Agent response in Turn 2).",
"In order to support such utterances, models also include tokens from the input dialog context in the same memory (Madotto et al., 2018).",
"Existing memory based architectures for task oriented dialog suffer from multiple limitations.",
"First, the creation of a shared memory for copying values from dialog context, as well as the knowledge base (KB) results, forces the use of a common memory reader for two different types of data.",
"This makes the task of reasoning over memory, harder not only does the memory reader need to determine the right entries from a large memory (since each word from context also occupies a memory cell), it also needs to learn to distinguish between the two forms of data (context words and KB results) stored in the same memory.",
"Second, all current neural memory architectures store results, returned by a knowledge source, in the form of triples (eg. subject relation object ).",
"This modeling choice makes it hard for the memory reader to infer relationships across otherwise connected attributes.",
"For instance, consider the example triple store in Table 2 showing results for a query executed for packages between Dallas and Mannheim.",
"If the user asks the dialog agent to check the price of stay at a 5 star hotel, the memory reader needs to infer that the correct answer is $2800 by learning that the price, category and hotel need to be linked inorder to return an answer (shown in blue).",
"Lastly, current models treat conversations as a sequential process, involving the use of KB results from only the most recent information re-quest/query.",
"In contrast, in real world dialogs such as the one shown in Table 1, the agent may have to refer to results ( to Mannheim ) from a previously executed query (see Turn 7).",
"Thus, at each turn, the system has to memorize all the information exchanged during the dialog, and infer the package being referred to, by the user.",
"In order to support such dialogs, the memory needs to store results of all queries executed during the course of the dialog.",
"The problem of inferring over such results (which may be from multiple queries) is exacerbated when memory is represented in the form of triples.",
"In this paper, we present our novel multi-level memory architecture that overcomes the limitations of existing methods:",
"(i) We separate the memory used to store tokens from the input context and the results from the knowledge base.",
"Thus, we learn different memory readers for context words as well for knowledge base entities",
"(ii) Instead of using a subj rel obj store, we develop a novel multi-level memory architecture which encodes the natural hierarchy exhibited in knowledge base results by storing queries and their corresponding results and values at each level.",
"We first attend on the queries, followed by the results in each query to identify the result being referred to, by the user.",
"We then attend on the individual entries in the result to determine which value to copy in the response.",
"Figure 1c shows our multi-level memory storing the results from queries executed as part of the dialog in Table 1. Our paper makes the following contributions: 1. We propose the use of separate memories for copying values from context and KB results.",
"Thus, the model learns separate memory readers for each type of data.",
"2. Our novel multi-level memory for KB results, models the queries, results and their values in their natural hierarchy.",
"As our experiments show, the separation of memory as well as our multi-level memory architecture, both, contribute to signifi-cant performance improvements.",
"3. We present detailed experiments demonstrating the benefit of our memory architecture along with model ablation studies.",
"Our experiments on three publicly available datasets ( CamRest676 (Su et al., 2016), InCar assistant (Eric and Manning, 2017), Maluuba Frames (El Asri et al., 2017)) show a substantial improvement of 15-25 % in both entity F1 scores, and BLEU scores as compared to existing state of the art architectures.",
"To the best of our knowledge, we are the first to attempt end-to-end modeling of task oriented dialogs with non-sequential references as well as multiple queries, as seen in the Maluuba Frames dataset.",
"A human evaluation on model outputs also shows our model is preferred by users over existing systems such as KVRet (Eric and Manning, 2017) and Mem2Seq (Madotto et al., 2018).",
"Recent methods, such as (Vinyals and Le, 2015; Serban et al., 2016, 2017), proposed for end-to-end learning of dialogs were aimed at modeling",
"open-domain dialogs.",
"While they can be used for learning task oriented dialogs, they are not well suited to interface with a structured KB.",
"To better adapt them to handle task oriented dialogs: 1) (Bordes and Weston, 2017b) proposed a memory network based architecture to better encode KB tuples and perform inferencing over them and 2) (Madotto et al., 2018) incorporated copy mechanism to enable copying of words from the past utterances and words from KB while generating responses.",
"All successful end-to-end task oriented dialog networks (Eric and Manning, 2017; Bordes and Weston, 2017b; Madotto et al., 2018) make assumptions while designing the architecture: 1) KB results are assumed to be a triple store, 2) KB triples and past utterances are forced to be represented in a shared memory to enable copying over them.",
"Both these assumptions make the task of inferencing much harder.",
"Any two fields linked directly in the KB tuple are now linked indirectly by the subject of the triples.",
"Further, placing the KB results and the past utterances in same memory forces the architecture to encode them using a single strategy.",
"In contrast, our work uses two different memories for past utterances and KB results.",
"The decoder is equipped with the ability to copy from both memories, while generating the response.",
"The KB results are represented using a multi-level memory which better reflects the natural hierarchy encoded by sets of queries and their corresponding result sets.",
"Memory architectures have also been found to be helpful in other tasks such as question answering.",
"Work such as (Xu et al., 2016) defines a hierarchical memory architecture consisting of sentence level memory followed by word memory for a QA task while (Chandar et al., 2016) defines a memory structure that speeds up loading and inferencing over large knowledge bases.",
"Recent work by (Chen et al., 2018) uses a variational memory block along with a hierarchical encoder to improve diversity of open domain dialog responses.",
"In this section, we describe our end-to-end model for task oriented dialogues.",
"Our model 1 (Figure 1a) consists of:",
"(i) a hierarchical encoder which encodes the current input context consisting of the user and agent utterances",
"(ii) a multi-level memory that maintains the queries and knowledge base re-1 Code is available at Multi-Level Memory sults seen so far in the course of the dialogue, and",
"(iii) copy augmented sequence decoder that uses separate context and multi-level memory.",
"The queries and their corresponding results are maintained in a multi-level memory.",
"The decoder uses a gating mechanism for memory selection while generating a response.",
"Our model uses a standard hierarchical encoder as proposed by (Sordoni et al., 2015).",
"The encoder takes a sequence of utterances as input.",
"For the t th turn, the dialogue context can be represented as ( c 1 , c 2 , ...c 2 t 1 ) , which consists of t user utterances and t 1 system utterances.",
"Each utterance c i is further a sequence of words ( w i 1 , w i 2 , ...w im ) .",
"We first embed each word w ij using a word embedding function emb that maps each word to a fixed-dimensional vector.",
"We then generate utterance representations, ( c i ) using a single layer bi-directional GRU.",
"h eij denotes the hidden state of word w ij in the bidirectional GRU.",
"The input representation c is generated by passing each utterance representation ( c i ) through another single layer GRU.",
"Motivation: Current approaches break down KB results by flattening them into ( subj-rel-obj ) triples.",
"However, converting KB results into triples leads to loss of relationship amongst attributes in the result set.",
"This makes the reasoning over memory difficult as model now has to infer relationships when retrieving values from memory.",
"Instead, we use a multi-level memory which keeps the natural hierarchy of results intact (without breaking them into triples).",
"We store the queries and their corresponding results and individual values at different levels .",
"We first attend on the queries and then on the results for each query to identify which result the user is referring to.",
"This also enables us to handle user requests that refer to results from a previously executed query.",
"We propose that a representation of all the values in the result, and not just one of the values (desig-nated as subj ), should be used while attending over a result in KB.",
"We attend on this compound representation of the result before attending on the individual key-value pairs in each result, to determine which value to copy into the generated response.",
"attention",
"Table 1 Figure 1: Model architecture",
"(a) along with schematic representation of context memory",
"(b) and multi-level KB memory",
"Let q 1 , q 2 , ...q k be the queries fired to the knowledge base till the current state of the dialogue.",
"Every query q i is a set of key-value pairs { k q i a : v q i a , 1 < a < n q i } , corresponding to the query's slot and argument where n q i is the number of slots in query q i .",
"For example, after the user utterance at Turn 3 in Table 1, the query fired by the system on the knowledge base would be { 'origin':'Dallas','destination':'Manheim','Start': 'Aug 26', 'end': 'Aug 31', 'Adults':1 } .",
"The execution of a query on an external knowledge base, returns a set of results.",
"Let r ij be the j th result of query q i .",
"Each result r ij is also a set of slot-value pairs { k r ij a : v r ij a , 1 < a < n r ij } where n r ij is the number of attributes in result r ij .",
"A visualization of the memory with queries and their corresponding results can be seen in Figure 1c.",
"The first level of memory contains the query representations.",
"Each query q i is represented by q vi = Bag of words over the word embeddings of values ( v q i a ) in q i .",
"The second level of memory contains the result representations.",
"Representation of each result r ij is given by r vij = Bag of words over the word embeddings of values ( v r ij a ) in r ij .",
"The third level of memory contains the result cells which have the key-value pairs ( k r ij a : v r ij a ) of the results.",
"The values ( v r ij a ) which are to be copied into the system response are thus present in the final level of memory .",
"We now describe how we apply attention over the context and multi-level memory.",
"The model generates the agent response word-by-word; a word at time step t is either generated from the decode vocabulary or is a value copied from one of the two memories (knowledge base or context memory).",
"A soft gate g 1 controls whether a value is generated from vocabulary or copied from memory.",
"Another gate g 2 determines which of the two memories is used to copy values.",
"Let the hidden state of the decoder at time t be h t",
"The hidden state h t is used to apply attention over the input context memory.",
"Attention is applied over the hidden states of the input bidirectional (BiDi) GRU encoder using the con-cat scheme as given in (Luong et al., 2015).",
"The attention for the j th word in the i th utterance is given by: a ij = exp ( w T 1 tanh ( W 2 tanh ( W 3 [ h t , h eij ]))) (cid:80) ij exp ( w T 1 tanh ( W 2 tanh ( W 3 [ h t , h eij ]))) (2) The attention scores a ij are combined to create an attended context representation d t , d t = (cid:88) i,j a i,j h eij (3) and similar to (Luong et al., 2015), the decoder word-generation distribution is given by : P g ( y t ) = softmax ( W 1 [ h t , d t ] + b 1 ) (4) 3.3.2 Copying words from context memory: The input context memory is represented using the hidden states h eij of the input Bi-Di GRU encoder.",
"Similar to (Gulcehre et al., 2016), the attention scores a ij , are used as the probability scores to form the copy distribution P con ( y t ) over the input context memory.",
"The context representation d t , along with the hidden state of decoder h t , is used to attend over the multi-level memory.",
"The first level attention, .",
", is applied over the queries q .",
".",
"The second level attention, i.",
", is the attention over the results r i.",
"of query q i .",
"The product of first level attention and second level attention is the attention over results of all the queries in the multi-level memory.",
"The weighted sum of the first level attention, second level attention and result representations gives us the attended memory representation, m t .",
"Each result is further composed of multiple result cells.",
"On the last level of memory, which contains the result cells, we apply key-value attention similar to (Eric and Manning, 2017).",
"The key of the result cell is the word embedding of the slot, k r ij a , in the result.",
"The attention scores, ij.",
", for the keys represent the attention over the result cells of each result r ij .",
"The product of first level attention i , second level attention ij and third level attention ijl gives the final attention score of the value v r ij l in the KB memory.",
"These final attention scores when combined (Eq. 10), form the copy distribution, P kb ( y t ) , over the values in KB memory.",
"Similar to (Gulcehre et al., 2016), we combine the generate and copy distributions we use gate g 2 (Eq. 11) to obtain the copy distribution P c ( y t ) (Eq. 12) by combining P kb ( y t ) and P con ( y t ) .",
"Finally, we use gate g 1 to obtain the final output distribution P ( y t ) , by combining generate distribution P g ( y t ) and copy distribution P c ( y t ) as shown below:",
"We present our experiments using three real world publicly available multi-turn task oriented dialogue datasets: the InCar assistant (Eric and Manning, 2017), CamRest (Su et al., 2016) and the Maluuba Frames dataset (El Asri et al., 2017).",
"All three datasets contain human-human task oriented dialogues which were collected in a Wizard-of-Oz (Wen et al., 2017) setting.",
"(i) InCar assistant dataset consists of 3031 multi-turn dialogues in three distinct domains: calendar scheduling, weather information retrieval, and point-of-interest navigation.",
"Each dialogue has it's own KB information provided and thus, the system does not have to make any queries.",
"(ii) CamRest dataset , consists of 676 human-to-human dialogues set in the restaurant reservation domain.",
"There are three queryable slots (food, price range, area) that users can specify.",
"This dataset has currently been used for evaluating slot-tracking systems.",
"Recent work by (Lei et al., 2018) uses an end-to-end network without a KB and substitutes slot values with placeholders bearing the slot names in agent responses.",
"However, we formatted the data to evaluate end-to-end systems by adding API call generation from the slot values so that restaurant suggestion task can proceed from the KB results.",
"(iii) Maluuba Frames dataset , consists of 1369 dialogues developed to study the role of memory in task oriented dialogue systems.",
"The dataset is set in the domain of booking travel packages which involves flights and hotels.",
"In contrast to the previous two datasets, this dataset contains dialogs that require the agent to remember all information presented previously as well as support results from multiple queries to the knowledge base.",
"A user's preferences may change as the dialogue proceeds, and can also refer to previously presented queries (non-sequential dialog).",
"Thus, to store multiple queries, we require 3 levels in our multi-level memory as compared to 2 levels in the other datasets, since they don't have more than one query.",
"We do not use the dialogue frame annotations and use only the raw text of the dialogues.",
"We map ground-truth queries to API calls that are also required to be generated by the model.",
"Recent work has used this dataset only for frame tracking (Schulz et al., 2017) and dialogue act prediction (Peng et al., 2017; Tang et al., 2018).",
"To the best of our knowledge we are the first to attempt the end-to-end dialog task using this dataset.",
"Table 3 summarizes the statistics of the datasets.",
"In this section, we briefly describe how the knowledge base queries are generated as API calls as part of the model response.",
"The InCar assistant dataset has a fixed KB for each dialogue whereas the CamRest and Maluuba datasets require queries to be fired on a global KB.",
"Queries in CamRest dataset can have 3 slots namely cuisine, area and pricerange, whereas those in Maluuba can have 8 slots, which are destination, origin, start date, end date, budget, duration, number of adults and children.",
"Any query that is to be fired on the KB is expected to be generated by the model as an API call, by considering a fixed ordering of slots in the generated response.",
"For",
"eg., in CamRest dataset, ApiCall(area=south, pricerange=cheap) would be generated by the model as api call dontcare south cheap , with dontcare meaning that the user does not have any preference for cuisine and, south , cheap being the user constraints for area and pricerange respectively.",
"Therefore, the task of API call generation typically involves copying relevant entities that are present in dialog context.",
"Our model is trained end-to-end using Adam optimizer (Kingma and Ba, 2014) with a learning rate of 2 .",
"5 e 4 .",
"The batch-size is sampled from [8,16].",
"We use pre-trained Glove vectors (Pen-nington et al., 2014) with an embedding size of 200.",
"The GRU hidden sizes are sampled from [128, 256].",
"We tuned the hyper-parameters with grid search over the validation set and selected the model which gives best entity F1.",
"We use the commonly used BLEU metric (Pap-ineni et al., 2002) to study the performance of our systems as it has been found to have strong correlation (Sharma et al., 2017) with human judgments in task-oriented dialogs.",
"To explicitly study the behaviour of different memory architectures, we use the entity F 1 to measure how effectively values from the knowledge base are used in the dialog.",
"To compute the entity F1, we micro-average the precision and recall over the entire set of system responses to compute the micro F1 2 .",
"For the InCar Assistant dataset, we compute a per-domain entity F1 as well as the aggregated entity F1.",
"Since our model does not have slot-tracking by design, we evaluate 2 We observe that (Madotto et al., 2018) reports the micro average of recall as the micro F1.",
"We experiment with the following baseline models for comparing the performance of our Multi-Level Memory architecture:",
"Ptr-UNK 3 (Gulcehre et al., 2016): The model augments a sequence-to-sequence architecture with attention-based copy mechanism over the encoder context.",
"KVRet (Eric and Manning, 2017): The model uses key value knowledge base in which the KB is represented as triples in the form of subject relation object .",
"This model does not support copying words from context.",
"The sum of word embeddings of subject , relation is used as the key of the corresponding object .",
"Mem2Seq 3 (Madotto et al., 2018): The model uses a memory networks based approach for attending over dialog history and KB triples.",
"During decoding, at each time step, the hidden state of the decoder is used to perform multiple hops over a single memory which contains both dialog history and the KB triples to get the pointer distribution used for generating the response.",
"Table 4 shows the performance of our model against our baselines.",
"We find that our multilevel memory architecture comprehensively beats all existing models, thereby establishing new state-of-theart benchmarks on all three datasets.",
"Our model outperforms each baseline on both BLEU and entity F1 metrics.",
"InCar: On this dataset, we show entity F1 scores for each of the scheduling, weather and navigation domains.",
"Our model has the highest F1 scores across all the domains.",
"It can be seen that our 3 We use the implementation provided by (Madotto et al., 2018) at https://github.com/HLTCHKUST/Mem2Seq model strongly outperforms Mem2Seq on each domain.",
"A detailed study reveals that the use of triples cannot handle cases when a user queries with non-subject entries or in cases when the response requires inferencing over multiple entries.",
"In contrast, our model is able to handle such cases since we use a compound representation of entire result (bag of words over values) while attending on that result.",
"CamRest: Our model achieves the highest BLEU and entity F1 scores on this dataset.",
"From Table 4, we see that simpler baselines like Ptr-UNK show competitive performance on this dataset because, as shown in Table 3, CamRest dataset has relatively fewer KB entries.",
"Thus, a simple mechanism for copying from context results in good entity F1 scores.",
"Maluuba Frames: The Maluuba Frames dataset was introduced for the frame tracking task.",
"Here, a dialog frame is a structured representation of the current dialog state.",
"Instead of explicitly modeling the dialog frames, we use the context representation d t to directly attend on the Multi-level memory.",
"As Table 3 shows, this dataset contains significantly longer contexts as well as larger number of entities, as compared to the other two datasets.",
"In addition, unlike other datasets, it also contains non-linear dialog flows where a user may refer to previously executed queries and results.",
"The complexity of this dataset is reflected in the relatively lower BLEU and F1 scores as compared to other datasets.",
"To further understand the effect of separating context memory from KB memory and using a multi-InCar",
"level memory for KB, Table 5 shows the percentage of ground-truth entities, according to their category, which were also present in the generated response.",
"For example, on the InCar dataset, out of the 930 entities in ground-truth response that were to be copied from the KB, our model was able to copy 37 .",
"5% of them into the generated response.",
"From Table 5, it can be seen that our model is able to correctly copy a significantly larger number of entities from both, KB and context, as compared to the recent Mem2Seq model in all datasets.",
"We report results from ablation studies on all three datasets.",
"Table 6 shows the incremental benefit obtained from individual components used in our model.",
"We investigate the gains made by",
"(i) Using separate memory for context and KB triples",
"(ii) Replacing KB triples with a Multi-level memory.",
"We use the recent Mem2Seq model for comparison with a unified context and KB memory model.",
"As can be seen from Table 6, the separation of context memory and KB memory leads to a significant improvement in BLEU and F1 scores on all datasets.",
"This validates our hypothesis that storing context words and KB results in a single memory confuses the memory reader.",
"The use of a multi-level memory instead of triples leads to further gains.",
"This suggests, better organization of KB result memory by keeping the natural hierarchy intact is beneficial.",
"We analyzed the errors made by our dialog model on 100 dialog samples in test set of Maluuba Frames.",
"We observed that the errors can be divided into five major classes:",
"(i) Model outputs wrong KB result entry due to incorrect attention (27%),",
"(ii) Model returns package details instead of asking for more information from the user (16%),",
"(iii) Model incorrectly captures user intent (13%),",
"(iv) Model makes an error due to nonsequential nature of dialog (22%).",
"In such errors, our model either generates an API call for a result already present in memory, or our model asks for a query-slot value that was already provided by the user,",
"(v) Data specific characteristics such as in-sufficient samples for certain classes of utterances (eg: more than one package returned) or returning different, but meaningful package attributes as compared to ground-truth data, contribute to 22% of the errors.",
"We also conducted a blind user study that compared outputs from our model, Mem2Seq and KVRet systems.",
"We used 96 randomly selected examples from each test split of Maluuba and CamRest datasets resulting in a total of 192 examples.",
"Our study was split across 8 users who were provided with results fetched from the KB, current dialog context, gold response and the outputs of each of the models.",
"Model outputs were shuffled in each example and users were asked to score each output between 1 (lowest) to 5 (highest) in terms of its accuracy of information in response and the quality of language.",
"The results of this study are presented in Table 7. We also report the MRR (mean-reciprocal rank) for model preference along with other scores.",
"It can be seen that our model consistently ranks high for both information accuracy and language quality as well as reports a higher MRR.",
"To further understand the quality of model performance, we asked the human evaluators whether their best ranked model output was a useful response.",
"We saw that the evaluators agreed in 76.04% and 58.33% of the cases for CamRest and Maluuba datasets respectively.",
"We observe that the results from human evaluation go hand-in-hand with automatic evaluation and reinforce our claim that separating context, KB memory and using a multilevel representation for the KB memory are useful for improving dialog modeling.",
"Role Turn Utterance Agent 1 hello! howcanihelpyoutoday?",
"Analyzing the attention weights is a useful way to understand how the model is inferencing over the memory to copy entities from it.",
"Table 8 shows an example of a dialog from the Maluuba Frames dataset and the outputs generated by different models.",
"Here, the user first wants to know about packages to Manas' and then requests for trips to Pittsburgh'.",
"Later, the user becomes interested in the 3.5 star hotel in Pittsburgh which was suggested by the agent and wants to know its guest rating.",
"It can be seen from Table 8 that our model outputs the correct guest rating (8.86) of the hotel.",
"Mem2Seq fails to understand the context and generates an irrelevant response.",
"KVRet generates a readable response but points to the guest rating of a different hotel.",
"(b) Decreasing order of attention scores over words in dialogue context Figure 2: Visualization of attention over memory while generating the word 8.86' for the example in Table 8. The attention over the memory while generating the word 8 . 86 ' for this example is shown in Fig 2. Fig 2a shows that the query with destination as Pittsburgh' gets the highest attention and among the results of this query, the package with the 3.5 star rated hotel gets highest attention.",
"Within this result, the model gives highest score to the result cell with guest rating as the key.",
"To further understand why the correct result hotel gets higher attention, Fig 2b shows the attention scores over the words in context memory.",
"The context representation d t captures the important words (3.5, guest, rating) in context which are in-turn used to apply attention over the multi-level memory.",
"Lastly, studying the values of the gates g 1 (prob. of generating from vocabulary) and g 2 (prob. of copying from KB), we found that gate g 1 had a probability value of 0.08 thereby driving the model to copy from memory instead of generating from output vocabulary and gate g 2 , with a probability value of 0.99 , was responsible for selecting KB memory over context memory.",
"In this paper, we presented an end-to-end trainable novel architecture with multi-level memory for task oriented dialogues.",
"Our model separates the context and KB memory and combines the attention on them using a gating mechanism.",
"The multi-level KB memory reflects the natural hierarchy present in KB results.",
"This also allows our model to support non-sequential dialogs where a user may refer to a previously suggested result.",
"We find that our model beats existing approaches by 15-25% on both entity F1 and BLEU scores, establishing state-of-the-art results on three publicly available real-world task oriented dialogue datasets.",
"In a user study comparing outputs from our system against recent models, we found that our model consistently scored higher for both language quality as well as correctness of information in the response.",
"We also present the ben-efits of each of our design choices by performing an ablation study.",
"In future work, we would like to incorporate better modeling of latent dialog frames so as to improve the attention signal on our multi-level memory.",
"As our error analysis suggests, nearly 22% of the errors could possibly be reduced by improved modeling of the dialog context.",
"We believe that model performance can also be improved by capturing user intent better in case of non-sequential dialog flow."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"objective",
"result",
"objective",
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"result",
"result",
"result",
"method",
"result",
"method",
"result"
] |
[
"Abstract",
"Generating image captions with user intention is an emerging need.",
"The recently published Localized Narratives dataset takes mouse traces as another input to the image captioning task, which is an intuitive and efficient way for a user to control what to describe in the image.",
"However, how to effectively employ traces to improve generation quality and controllability is still under exploration.",
"This paper aims to solve this problem by proposing a novel model called LoopCAG, which connects Contrastive constraints and Attention Guidance in a Loop manner, engaging explicit spatial and temporal constraints in the generating process.",
"Precisely, each generated sentence is temporally aligned to the corresponding trace sequence through a contrastive learning strategy.",
"Besides, each generated text token is supervised to attend to the correct visual objects under heuristic spatial attention guidance.",
"Comprehensive experimental results demonstrate that our LoopCAG model learns better correspondence among the three modalities (vision, language, and traces) and achieves SOTA performance on the trace-controlled image captioning task.",
"Moreover, the controllability and explainability of LoopCAG are validated by analyzing spatial and temporal sensitivity during the generation process.",
"Image captioning is a fundamental task to examine whether an intelligent system can understand the visual world by letting the system describe it with natural language.",
"Generating a reasonable caption requires the model to link linguistic tokens to objects, relationships, scenes of the visual world in the input image.",
"Thus, a great captioning model will help us better understand what characteristics promise a good joint visual-linguistic representation.",
"Most previous attempts aim to describe the image indicating the salient objects and relations without considering user intention.",
"To generate controllable and explainable captions, recent works have sought to establish a new controllable image captioning task that generates captions at will.",
"The captioning process can be controlled by POS tagging (Deshpande et al., 2018), sentiment (You et al., 2018), length (Deng et al., 2020), bounding boxes (Cornia et al., 2019), and mouse traces (Pont-Tuset et al., 2020).",
"In this paper, we mainly investigate trace-controlled image captioning, since it is not only a more natural and interactive paradigm for real web applications (e.g., automatic presentation, or helping people with visual difficulties) but also a new perspective for us to better understand how the long-pursued cross-modality alignment is performed in deep learning models.",
"Figure 1 presents a showcase of the scenario.",
"Given an image, users can easily draw a trace to ask the AI agent to describe the scene in the image along the trace automatically.",
"In the Localized Narratives dataset (Pont-Tuset et al., 2020), the annotators describe the image while drawing the traces of their attention movement, which presents a spatial alignment between visual objects and caption tokens as well as a temporal alignment between user intention (by trace) and caption sentences.",
"From Figure 1, we see that the caption tokens, e.g. person, horse, trees can be grounded to the visual objects spatially, and the order of caption sentences can be arranged to align to the order of traces temporally.",
"Although it is easy for humans to recognize which visual object is indicated by the traces, it is a challenge for the agent to recognize, emphasize and arrange visual semantics solely based on several tracepoints' coordinates.",
"Thereby, we mainly devote our effort to the spatial grounding and temporal controllability of image captioning.",
"Inspired by this observation, we design two novel approaches to tackle these challenges.",
"Specifically, we design sentence-level contrastive constraints to align the generated sentences to the corresponding trace sequences temporally.",
"Besides, we design a type of heuristic spatial attention guidance to supervise each generated text token to attend to the correct visual objects.",
"Composing the above together, we propose a novel trace-controlled image captioning model called LoopCAG and demonstrate its superior capability in captioning quality and flexible controllability.",
"Our contribution can be summarized as: 1) We propose a novel model, LoopCAG, which learns the caption tokens' spatial grounding through attention guidance, and the temporal localization between trace input and caption sentences through contrastive constraints, in an end-to-end loop manner among the three modalities (vision, language, and traces).",
"2) The quantitative results show that our LoopCAG model can generate better trace-controlled captions and achieve SOTA performance on automatic criteria.",
"The qualitative results present that our model can generate highly relevant captions given users' trace inputs.",
"3) We intensively study the controllability and explainability of trace-controlled image captioning.",
"For image captioning, the task is to generate a text description y given an image I .",
"We first apply a pre-trained visual object detector to the image and get an object-level visual feature set V = {v_1, ..., v_N}, in which v_i ∈ R^2048 is the i-th object visual feature and N is the number of visual objects.",
"The text description sequence is y = {y_1, ..., y_l}, in which y_j is the j-th token and l is the text sequence length.",
"The output is conditioned on model parameters θ, and the optimization process can be formulated in the following maximum-likelihood form: θ* = arg max_θ log p(y | V; θ). (1)",
"For trace-controlled image captioning, the raw trace input is a sequence of tracepoints coordinates with timestamps.",
"To reduce those tracepoints to an acceptable length given the limit of GPU memory, we segment the tracepoint sequence uniformly by the same time window, and then each trace segment is converted to its minimal bounding rectangle.",
"Every bounding rectangle can be represented by a 5D vector which contains normalized coordinates of the top-left and bottom-right corners, and the area ratio with respect to the whole image.",
"We denote the trace input as T = {t_1, ..., t_M}, where t_i ∈ R^5.",
"The trace-controlled captioning objective can be formulated as: θ* = arg max_θ log p(y | V, T; θ). (2) Our method consists of three components: the caption generation module with a transformer encoder-decoder backbone, the attention guidance for object-level spatial grounding, and the contrastive constraints for sentence-level temporal alignment.",
"The overall model structure is illustrated in Figure 2. The model is trained by jointly optimizing the three objectives listed in the following subsections.",
"The caption generation backbone is a transformer-based encoder-decoder proposed by Vaswani et al. (2017), which mainly employs a multi-head attention mechanism and achieves top-tier performance in many sequential related tasks.",
"Here, we highlight several task-oriented modifications.",
"Vision-Trace Encoder The visual embeddings V and traces embeddings T are encoded separately and then concatenated together as a single input sequence feeding into a transformer encoder.",
"Object visual embedding: We first represent the spatial information of each object proposal by a 5D vector (in the same way as the traces), then project it into a spatial embedding p_i ∈ R^d, where d is the embedding size across the model.",
"Each object visual feature v_i is projected into a lower-dimensional vector v̂_i ∈ R^d.",
"The final visual embedding of object i is v̂_i + p_i.",
"Trace Embedding: Each trace input item t_i is projected into t̂_i ∈ R^d.",
"We also generate sinusoidal positional embeddings (Vaswani et al., 2017) o_i to capture the temporal order of the traces.",
"The final trace embedding of segment i is t̂_i + o_i.",
"Caption Decoder The caption decoder combines vision and trace information using cross attention connected to the hidden states of the Vision-Trace Encoder's last layer.",
"Using a causal mask to encode generated tokens progressively, the transformer decoder ensures that the predictions for position i can depend only on the known outputs at positions less than i.",
"During training, the ground truth caption tokens are shifted right, and a special token ⟨BOS⟩ (begin of sentence) is inserted at the first position.",
"A cross-entropy generation loss L_gen is then computed with the logits transformed from the last decoder layer's hidden states and the un-shifted ground truth caption token ids with a special token ⟨EOS⟩ (end of sentence) appended.",
"Note that ŷ is the masked version of the ground-truth caption y.",
"To make a fair comparison with the baseline (Pont-Tuset et al., 2020), we apply the same setting and do not employ common techniques such as label smoothing (Szegedy et al., 2016) or self-critical training (Rennie et al., 2017).",
"Attention Supervision Construction To explicitly guide the attention for object-level spatial grounding, we align the semantic caption tokens with the visual objects by taking the trace as an intermediate bridge.",
"In this way, we construct a supervision matrix to guide the attention between the caption tokens and visual objects by the two following steps.",
"1) Language-trace temporal alignment.",
"In the Localized Narratives dataset, the caption utterances and mouse traces are highly temporally aligned, i.e., every utterance u has a corresponding time window. (We follow the naming tradition of Pont-Tuset et al. (2020), where an utterance means one or several adjacent tokens, not a whole sentence.)",
"To leverage this information, we first assign each tracepoint p to the unique utterance u whose time window contains the tracepoint's timestamp.",
"Thus, every utterance u is aligned to a series of tracepoints P_u = {p_1, ..., p_{k_u}}.",
"2) Language-vision spatial alignment.",
"Given the utterance u and corresponding P_u, we calculate the alignment score considering the spatial overlap between the tracepoints P_u and each visual object v_i.",
"Every visual object v_i has a corresponding spatial bounding box b_i = (x1_i, y1_i, x2_i, y2_i), where (x1_i, y1_i) and (x2_i, y2_i) are the top-left and bottom-right coordinates, respectively.",
"We set the alignment score s(u_j, b_i) between utterance u_j and bounding box b_i as s(u_j, b_i) = (Σ_{p ∈ P_{u_j}} I_{b_i}(p)) / |P_{u_j}|, (4) where I_{b_i}(p) indicates whether tracepoint p is inside the bounding box b_i: I_{b_i}(p) = 1 if x1_i < x_p < x2_i and y1_i < y_p < y2_i, and 0 otherwise, (5) with x_p and y_p the coordinates of tracepoint p.",
"An example of the alignment score calculation is illustrated in Figure 3. By calculating the alignment score, we establish the spatial grounding supervision between caption tokens and auto-detected visual objects.",
"For every word y_i in the same utterance u, s(y_i, b_j) = s(u, b_j).",
"Eventually, we get the supervision score matrix S ∈ [0, 1]^{N×T} with S_ij = s(y_i, b_j).",
"Attention-guided Grounding A cross-attention tensor of shape (N, T, L, H) is generated during the transformer's decoding steps.",
"Here N denotes the number of pre-detected visual objects, T denotes the number of tokens in a caption sentence after padding, L denotes the number of transformer layers, and H denotes the number of attention heads in transformer layers.",
"Two linear projections and layer normalization (Ba et al., 2016) are applied sequentially on dimension L and H , respectively reducing the dimension to 1.",
"Thus, for a single instance, we eventually obtain an attention matrix A ∈ R^{N×T}.",
"To train the model, we minimize the following attention guidance loss function L_att: L_att = −E_{a ∈ A, s ∈ S} [s log a + (1 − s) log(1 − a)], (6) which is a weighted binary cross entropy between A and S.",
"Note that we also choose to mask out some stop-word columns of the matrices A and S to avoid introducing too much annotation noise.",
"As illustrated on the left side of Figure 4, we first use a split by sentence procedure to build a sentence-level alignment between caption and traces, and then employ contrastive loss to constrain the temporal order of the generation process.",
"Split by Sentence An annotated instance consists of an image, a tracepoint list, and a caption paragraph consisting of a list of ordered caption sentences.",
"Here, we define a caption sentence as a series of utterances segmented out by a period ('.').",
"In Section 3.2, we already maintain an alignment between utterances and tracepoints.",
"Following this setting, we can unite a list of ordered utterances U = {u_1, ..., u_k} in the same caption sentence, and then orderly unite the tracepoints corresponding to U's elements into a so-called trace segment.",
"The alignment between caption sentences and trace segments can be established by simply uniting the association between utterances and tracepoints with respect to the above sentence split.",
"We call this procedure split by sentence.",
"Temporal Contrastive Constraints According to the split mentioned above, we aggregate the transformer's last-layer hidden states of trace segments and caption sentences respectively, and denote them as H_ts = {h^1_ts, ..., h^n_ts} and H_cs = {h^1_cs, ..., h^n_cs}.",
"Here n is the number of caption sentences.",
"We adopt the NCE loss to learn to discriminate the positive from negative trace-caption pairs.",
"Positives are defined as the temporally aligned corresponding caption-sentence and trace-segment pairs, i.e., those with the same order indices; pairs from the same image without temporal alignment serve as negatives.",
"This contrastive loss function L_cts is defined as L_cts = −E_{h^i_ts ∈ H_ts} log (exp(s(h^i_ts, h^i_cs)) / Z), (7) with Z = Σ_{j=1}^{n} exp(s(h^i_ts, h^j_cs)), (8) where s(·, ·) applies two linear layers and an L2 normalization to each element, followed by a dot product between them.",
"By minimizing the L cts , we force the model to learn a representation being aware of sentence-level temporal ordering, which leads to more precise captioning.",
"Finally, the model is trained with three losses: the caption generation loss L_gen, the spatial attention guidance loss L_att, and the temporal contrastive loss L_cts. We jointly optimize our model by minimizing their sum.",
"We use the annotated COCO subset of Localized Narratives to evaluate our method.",
"We call this dataset split LN-COCO for short.",
"Each image has one or several pairs of the captioning paragraph and corresponding mouse traces.",
"Every single pair is a so-called localized narrative.",
"The training and validation splits are identical to Pont-Tuset et al. (2020)'s setting.",
"There are 134,272 localized narratives in the training set and 8,573 in the validation set.",
"We train on the whole training set and evaluate our model performance against the identical validation set.",
"For the visual features, we adopt Faster R-CNN (Ren et al., 2015) to extract 100 bounding box proposals.",
"For the trace features, we use a time window of 0.4 s to extract trace segments for feature extraction.",
"The embedding size d, the number of transformer layers, and the hidden size of the transformer feed-forward layer are 768, 2, and 768, respectively.",
"The number of attention heads is 8, and the dropout rate is 0.1.",
"We adopt the AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate of 7e-4 (the best-performing setting of the baseline, adopted widely for other trials), and set the two momentum parameters β1 = 0.9 and β2 = 0.99.",
"We set the batch size to 256.",
"All models are trained on 4 Tesla V100 GPUs with 32GB memory for 10 to 12 hours.",
"This generation task adopts the traditional image captioning evaluation metrics using the open-source tool with a minor modification to suit LN-COCO, including BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE-L (Lin and Och, 2004), ROUGE-1-F1 (Pont-Tuset et al., 2020), and CIDEr-D (Vedantam et al., 2015).",
"Baseline and +Trace methods The Baseline and +Trace methods are our re-implementations following (Pont-Tuset et al., 2020)'s method description.",
"The Baseline method only takes image features as input, while the +Trace model takes both image features and traces as input. (The evaluation tool is https://github.com/tylin/coco-caption; we add an additional id to every trace-image-caption triplet and adjust some code of the standard evaluation tool to meet the 1-trace-vs-1-caption evaluation need.)",
"They employ the architecture in Changpinyo et al. (2019) with a few minor differences.",
"First, they set the number of Transformer layers for both the encoder and the decoder to 2 instead of 6.",
"Second, their projection layers also include layer normalization (Ba et al., 2016).",
"Third, they set the maximum number of iterations to 150k.",
"Finally, they allow the maximum number of target captions to be as long as 225 to account for the narration's longer nature.",
"LoopCAG methods Our model comprises four components: 1) the transformer encoder-decoder framework; 2) the trace input; 3) the Attention Guidance (+AG for short) grounding loss; and 4) the Contrastive constraints (+C for short).",
"Main Results Table 1 shows the overall performance comparison on the LN-COCO dataset.",
"To reduce the deviation caused by different implementation details, we first present our implementations' performance (marked with *), which score higher than Pont-Tuset et al. (2020) reported.",
"Thus, we have a stricter baseline against which to evaluate the improvement coming purely from our method.",
"Compared to the Baseline* method, performance on all metrics improves significantly when controlling captioning with the mouse trace (+Trace*), indicating that the mouse trace enables the system to better describe the user-intended parts of the image.",
"Most importantly, the results indicate that our LoopCAG method achieves state of the art on all automatic criteria, outperforming the previous state-of-the-art model by 2.4 and 7.5 points on BLEU-4 and CIDEr-D, respectively.",
"This demonstrates our proposed Attention Guidance method helps the model generate better spatially grounded and more precise captions.",
"Considering the 2.0-point rise in ROUGE-L, we can conclude that the Contrastive constraints help the model better align the order of generated sentences to the user intent, because ROUGE-L mainly employs an order-sensitive longest-common-subsequence F-measure.",
"Ablations We perform three ablations to verify that the main improvements indeed come from the Attention Guidance and Contrastive constraints.",
"Starting from standard captioning (Baseline*), we add the Attention Guidance to help the model better spatially ground visual objects and caption tokens (Table 2, + Ag).",
"This improves performance, suggesting that the model does benefit from knowing where to find the highly semantically related appearance features in the image.",
"Next, we add the trace feature (Table 2, + Trace).",
"This introduces user intention to the model.",
"We also use this line to fairly show the performance lift caused by the Contrastive constraints.",
"Then we add the contrastive module (Table 2, +C) and see a good improvement on almost all criteria.",
"Hence, we verify the significance of the positive influence of temporal contrastive constraints.",
"Moreover, the last line is our full LoopCAG model.",
"We can see the two proposed methods are not exclusive to each other.",
"Controllability Analysis on Temporal Order We also design an experiment to further demonstrate LoopCAG's superior controllability on the caption sentences' temporal order.",
"Specifically, we split each localized narrative input by sentence as described in Section 3.3 and reverse the sequential order of the splits, i.e., the last sentence of a caption paragraph becomes the first one; the same processing is applied to the trace segments.",
"We conduct an evaluation on this sentence- and segment-reversed dataset, and the performance comparison is shown in Table 3. With the help of the Contrastive constraints mechanism, the LoopCAG model is much more robust to trace-input reversal, even competitive with the model trained on reversed data.",
"In contrast, the base models all suffer a dramatic drop on almost all metrics when the input trace order is reversed.",
"This also implies there are some biased habits of human annotators.",
"For example, they always describe the salient objects first and end with a sentence about the background of the image.",
"Controllability Analysis on Temporal Frequency Then, we analyze the controllability of the temporal frequency to see whether coarse-grained or fine-grained tracepoints (the sampling rate, in other words) affect the generation performance.",
"As Table 4 shows, we change the time window from 0.4 to 1.2.",
"A notable performance drop occurs as the time window gets larger.",
"The purpose of this experiment with various time windows is to simulate the trace drawing speed of users in a real application scenario; a larger window is equivalent to a faster drawing speed.",
"As Deng et al. (2020) has demonstrated, the length is one of the critical factors that impact quantitative performance.",
"This result implies we can further decide to generate either a coarse-grained or a fine-grained caption by adjusting the time window. Table 1 (comparison with baseline methods; columns: ROUGE-L, ROUGE-1-F1, BLEU-1, BLEU-4, CIDEr-D, METEOR): Baseline (Pont-Tuset et al., 2020): 31.7, 47.9, 32.2, 8.1, 29.3, -; +Trace (Pont-Tuset et al., 2020): 48.3, 60.7, 52.2, 24.6, 106.5, -; Baseline*: 34.1, 54.0, 36.0, 10.3, 29.5, 16.4; +Trace*: 49.0, 68.1, 55.4, 25.0, 107.9, 25.2; LoopCAG (ours): 50.3, 69.8, 57.2, 27.0, 114.0, 26.0. Baseline means an encoder-decoder model without taking trace as input.",
"Controllability Analysis on Spatial Semantic Grounding One of our important purposes of using attention guidance is introducing more interpretability to the model while improving the caption performance.",
"When generating each token, the model is forced to show which visual elements most strongly drive the current generation.",
"And this effectiveness is supervised by our pseudo attention label.",
"In this way, we can hopefully obtain better visual-linguistic joint representation.",
"In appendix A, we showcase the attention values comparison of models w/wo attention guidance.",
"We find that the AG model has a more diverse distribution across all different types of tokens.",
"A neater activation is observed in Appendix A (a) compared with (c); e.g., activations of who, is, and on are clearly suppressed.",
"We observe that these suppressions happen on most function words, so we add this illustration for further discussion and exploration by the research community.",
"We present a showcase of a captioning result of different methods in Figure 7.",
"We can easily find that the Baseline captioning describes the image in random order while the +Trace Captioning and LoopCAG Captioning almost have the same order as Ground Truth Captioning.",
"It is also notable that the Baseline captioning and +Trace captioning both contain some implausible descriptions, highlighted in red.",
"In contrast, the LoopCAG captioning is entirely reasonable.",
"This is evidence of superior fact grounding advantage brought by our Attention Guidance Method.",
"In this picture there is a stand on a ground.",
"On the backside there is a person.",
"He is riding on a horse.",
"He is wearing a cap.",
"He is in between the fence.",
"There is a flags on a wall.",
"On the left side there is a score board on a table and flower plants.",
"We can see in the background sky.",
"trees.",
"Baseline Captioning In this image I can see a horse which is in white color, at left there is a person sitting on the horse, at the back ground there are some people standing, in the background there are few buildings, trees and sky.",
"This picture might be taken outside of the city.",
"in this image, in the middle there is a man sitting on horse and holding the collar rope of a horse.",
"on the right side, we can also see another horse and a person is riding it.",
"In the background, there are group of people, flags, trees, plants, metal fence, hoardings, trees.",
"on top there is s a sky, at the bottom there are some grass and a land.",
"Controllable Image Captioning is an emerging research direction.",
"Previous works aim to control the captioning by part-of-speech tagging (Deshpande et al., 2018), sentiment (You et al., 2018), length (Deng et al., 2020), bounding box (Cornia et al., 2019), etc.",
"Some of those works aimed at semantically guided captioning.",
"Other works relied on predefined categories, e.g., bounding box or sentiment classes.",
"Similar works (Yu et al., 2018; Cornia et al., 2019) control the caption by a sequence of ordered topics and bounding boxes.",
"However, those methods limit the captioning to the pre-defined or recognized objects in the bounding boxes and are hard to scale out.",
"Besides, a trace is a more natural input than a bounding box.",
"The most similar work (Pont-Tuset et al., 2020) proposed a trace-controlled image captioning task and designed a simple benchmark by directly concatenating the mouse trace coordinates and size into a self-attention module.",
"Although the mouse trace is flexible and interactive, its semantic representation is easy for humans to understand but hard for AI agents.",
"Unlike previous works, we propose a novel trace-controlled model for capturing the semantic representation of traces from both fine-grained and coarse-grained spatial and temporal characteristics.",
"Contrastive Learning Recently, contrastive learning has been widely studied in unsupervised representation learning for vision, (He et al., 2020; Chen et al., 2020; Grill et al., 2020; Caron et al., 2020; Chen and He, 2020), language (Mikolov et al., 2013; Saunshi et al., 2019; Chi et al., 2020; Fang and Xie, 2020; Giorgi et al., 2020; Kong et al., 2020; Gunel et al., 2021), or multi-modal (Sun et al., 2019; Luo et al., 2020).",
"The goal is to learn semantic representation between two views by allowing the positive sample to be similar (in semantic space) and negatives to be dissimilar semantically simultaneously.",
"CLIP (Radford et al.) and MIL-NCE (Miech et al., 2020) have demonstrated the effectiveness of learning the semantic mapping between vision and language.",
"Previous attempts mainly exploit the InfoNCE (Oord et al., 2018) objective to maximize a lower bound of the mutual information.",
"This paper extends multimodal contrastive learning to the trace in the image and the caption sentence.",
"In the same image, they correspond to each other semantically.",
"This motivates us to design a contrastive loss for better alignment between the trace and language.",
"References Jimmy Ba, J. Kiros, and Geoffrey E. Hinton. 2016. Layer normalization.",
"M. Cornia, L. Baraldi, and R. Cucchiara.",
"2019.",
"Show, control and tell: A framework for generating controllable and grounded captions.",
"2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8299–8308.",
"Chaorui Deng, Ning Ding, Mingkui Tan, and Qi Wu.",
"2020.",
"Length-controllable image captioning.",
"In Computer Vision – ECCV 2020, pages 712–729, Cham.",
"Springer International Publishing.",
"Aditya Deshpande, Jyoti Aneja, Liwei Wang, Alexander G Schwing, and David A Forsyth.",
"2018.",
"Diverse and controllable image captioning with part-of-speech guidance.",
"Hongchao Fang and Pengtao Xie.",
"2020.",
"CERT: Contrastive self-supervised learning for language understanding.",
"arXiv preprint arXiv:2005.12766 .",
"John M Giorgi, Osvald Nitski, Gary D. Bader, and Bo Wang.",
"2020.",
"DeCLUTR: Deep contrastive learning for unsupervised textual representations.",
"arXiv preprint arXiv:2006.03659 .",
"Beliz Gunel, Jingfei Du, Alexis Conneau, and Veselin Stoyanov.",
"2021.",
"Supervised contrastive learning for pre-trained language model fine-tuning.",
"In ICLR .",
"Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick.",
"2020.",
"Momentum contrast for unsupervised visual representation learning.",
"In CVPR, pages 9729–9738.",
"Lingpeng Kong, Cyprien de Masson d'Autume, Lei Yu, Wang Ling, Zihang Dai, and Dani Yogatama.",
"2020.",
"A mutual information maximization perspective of language representation learning.",
"In ICLR .",
"Chin-Yew Lin and Franz Josef Och.",
"2004.",
"Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics.",
"In ACL , page 605.",
"Ilya Loshchilov and Frank Hutter.",
"2019.",
"Decoupled weight decay regularization.",
"In ICLR .",
"Huaishao Luo, Lei Ji, Botian Shi, Haoyang Huang, Nan Duan, Tianrui Li, Jason Li, Taroon Bharti, and Ming Zhou.",
"2020.",
"Univl: A unified video and language pre-training model for multimodal understanding and generation.",
"arXiv preprint arXiv:2002.06353 .",
"Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman.",
"2020.",
"End-to-End Learning of Visual Representations from Uncurated Instructional Videos.",
"In CVPR .",
"In this paper, we focus on the controlled image captioning task and find mouse traces provide an intuitive and efficient way for a user to control the description.",
"We propose a novel caption generation model with contrastive constraints and attention guidance called LoopCAG to control the captioning process spatially and temporally.",
"The experimental results demonstrate our model's effectiveness, and we hope our work will inspire more future research on vision-linguistic understanding and generation.",
"We thank Botian Shi, Rongcheng Tu for helpful discussions.",
"This work is supported in part by National Key R&D Program of China 2018AAA0102301 and NSFC 61925203."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"method",
"method",
"objective",
"objective",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"method",
"objective",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"method",
"objective",
"other",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"other",
"method",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"result",
"objective",
"objective",
"other",
"other"
] |
[
"Automatic dialogue response evaluators have been proposed as an alternative to automated metrics and human evaluation.",
"However, existing automatic evaluators achieve only moderate correlation with human judgement and they are not robust.",
"In this work, we propose to build a reference-free evaluator and exploit the power of semi-supervised training and pretrained (masked) language models.",
"Experimental results demonstrate that the proposed evaluator achieves a strong correlation ( > 0.6) with human judgement and generalizes robustly to diverse responses and corpora.",
"We open-source the code and data at https://github.com/ZHAOTING/dialog-processing.",
"Evaluation of conversational systems has been one major obstacle in dialogue research.",
"Particularly for open-domain dialogues, automated metrics have been shown to correlate poorly with human judgement (Liu et al., 2016).",
"Although human evaluation provides the most accurate assessment, it is slow and expensive.",
"An alternative is to train an evaluator that learns to predict a human-like score.",
"Lowe et al. (2017) proposed ADEM, a supervised regression model, for automatic response evaluation and reported 0.436 Pearson's and 0.428 Spearman's correlations with human judgement.",
"Though better than automated metrics, the scores only indicate moderate correlations.",
"Sai et al. (2019) further pointed out that ADEM produces scores of low deviation and lacks robustness under adversarial attack.",
"An ideal evaluator should be precise such that its predictions have a strong correlation with human judgement.",
"It should also be robust such that it generalizes to new dialogues unseen during training.",
"We explored three methods to improve the precision and robustness of response evaluators.",
"1) We propose building a reference-free evaluator, since reference-dependent metrics cause the low-deviation problem described by Sai et al. (2019).",
"We also find that the reference-dependent evaluators' performance degrades significantly when we remove ground-truth responses from test data.",
"2) Tao et al. (2018) proposed an unsupervised model (RUBER) that outperforms supervised ADEM by training on a next sentence prediction (NSP) task.",
"We show that RUBER can be further improved by supervised training on a small amount of annotated data.",
"3) We make use of strong pretrained models such as RoBERTa (Liu et al., 2019) to obtain better text representations.",
"By combining the three methods, a reference-free, semi-supervised, RoBERTa-based evaluator has better correlation and robustness.",
"Experimental results also show that the model can maintain good performances in cross-domain and low-resource settings.",
"The automatic response evaluator was first proposed by Lowe et al. (2017) to mimic a human annotator's assessment of response appropriateness.",
"They collected human annotations of response quality for 4,104 context-response pairs and trained a regression network (ADEM) in a supervised manner by minimizing a squared error.",
"Tao et al. (2018) proposed an unsupervised method (RUBER) to train automatic evaluators, where a model is optimized to distinguish a ground-truth response and a negative-sampling response by minimizing a margin rank loss.",
"This process resembles the next sentence prediction (NSP) task applied in the training of BERT (Devlin et al., 2019).",
"It allows for exploiting a large amount of conversation data and has been shown to outperform ADEM.",
"Using ADEM and RUBER as the baselines of this work, we will analyze their shortcomings and develop solutions to build more precise and robust evaluators.",
"Next sentence prediction is to predict whether a sentence is a true continuation given a preceding context, where a positive sample is the ground-truth subsequent sentence and a negative sample is a different piece of text.",
"NSP benefits not only evaluation (Tao et al., 2018), but also language understanding (Devlin et al., 2019) and language generation (Bruni and Fernandez, 2017; Wolf et al., 2019).",
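The NSP data construction described above can be sketched as follows; the data format and helper names are illustrative, not taken from any of the cited systems.

```python
import random

def make_nsp_pairs(dialogs, rng=random.Random(0)):
    # Build NSP training pairs from (context, response) dialogs:
    # the true next utterance is the positive (label 1), and a response
    # sampled from a different dialog is the negative (label 0).
    pairs = []
    for i, (context, response) in enumerate(dialogs):
        pairs.append((context, response, 1))
        j = rng.choice([k for k in range(len(dialogs)) if k != i])
        pairs.append((context, dialogs[j][1], 0))
    return pairs

dialogs = [("how are you?", "fine, thanks."), ("what time is it?", "almost noon.")]
print(make_nsp_pairs(dialogs))
```

A model trained to score positives above negatives on such pairs needs no human quality annotations, which is what makes RUBER-style training unsupervised.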
"Dialogue response evaluation can also be improved with better automated metrics and approximation to response quality.",
"Examples of successful attempts to improve automated metrics include exploiting multiple references for comparison (Gupta et al., 2019) and combining human judgement with automated metrics (Hashimoto et al., 2019).",
"Li et al. (2019) demonstrated that single-turn human judgement is not as reliable as expected and proposed multi-turn human evaluation.",
"Ghandeharioun et al. (2019) approximated sentiment, semantic similarity, and engagement with new automated metrics and used a hybrid metric in a multi-turn evaluation setting.",
"Dziri et al. (2019) showed that entailment is also an option to approximate dialogue coherence and quality.",
"ADEM is a regression model that takes as inputs a dialogue context vector c, a hypothesis response vector r, and a reference response vector r̂.",
"Its output is the sum of a referenced metric and an unreferenced metric: ADEM_ref(r̂, r) = r̂^T N r, (1) ADEM_unref(c, r) = c^T M r, (2) where the encoding vectors are produced by pretrained RNN encoders.",
"M and N are trainable parameters.",
"RUBER also combines two metrics but computes them differently: RUBER_ref(r̂, r) = (r̂^T r) / (‖r̂‖ ‖r‖), (3) RUBER_unref(c, r) = MLP([c; r; c^T M r]; θ), (4) where [ ; ] denotes the concatenation of vectors and MLP is a multi-layer perceptron with nonlinear activation functions.",
"M and θ are trainable parameters.",
"Besides the differences in metric computation, they are different in training strategy.",
"ADEM uses supervised training to minimize the mean square error between predictions and human scores, while RUBER uses unsupervised training on an NSP task to minimize a margin ranking loss.",
"In Section 5, we combine their advantages to build a better response evaluator.",
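For concreteness, the four metrics above can be sketched in plain NumPy; the encoder outputs and the trainable parameters (M, N, and the MLP weights) are random stand-ins here, not trained values.

```python
import numpy as np

def adem_ref(r_hat, r, N):
    # ADEM referenced metric: bilinear similarity between reference and hypothesis.
    return r_hat @ N @ r

def adem_unref(c, r, M):
    # ADEM unreferenced metric: bilinear similarity between context and hypothesis.
    return c @ M @ r

def ruber_ref(r_hat, r):
    # RUBER referenced metric: cosine similarity of the two response vectors.
    return (r_hat @ r) / (np.linalg.norm(r_hat) * np.linalg.norm(r))

def ruber_unref(c, r, M, W1, b1, w2, b2):
    # RUBER unreferenced metric: an MLP over [c; r; c^T M r] with a tanh hidden layer.
    feats = np.concatenate([c, r, [c @ M @ r]])
    hidden = np.tanh(W1 @ feats + b1)
    return w2 @ hidden + b2

rng = np.random.default_rng(0)
d = 8
c, r, r_hat = rng.normal(size=(3, d))
M = rng.normal(size=(d, d))
N = rng.normal(size=(d, d))
W1, b1 = rng.normal(size=(4, 2 * d + 1)), rng.normal(size=4)
w2, b2 = rng.normal(size=4), 0.0

print(ruber_ref(r_hat, r))  # cosine similarity, so bounded by [-1, 1]
```

Note how only ruber_ref depends on the reference r̂ through a direct cosine comparison; this is the component the later analysis singles out as causing low score deviation.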
"For assessing dialogue response evaluators, we sample 100 dialogues from the test split of the DailyDialog corpus (Li et al., 2017) which contains 13,118 open-domain and human-written conversations.",
"We expand them with extra response hypotheses and collect human annotations of response quality.",
"Collection of Extra Responses.",
"Besides the ground-truth response, we add responses from different sources for each dialogue context, including 1) a negative-sampling response randomly selected from a different dialogue and 2) responses generated by generative models trained on the training split.",
"We combine 6 generative models (S2S (Sutskever et al., 2014), attentional S2S, HRED (Serban et al., 2016), VHRED (Serban et al., 2017), GPT2 -sm , and GPT2 -md (Wolf et al., 2019)) with 3 decoding methods (greedy decoding, ancestral sampling, and nucleus sampling (Holtzman et al., 2019)).",
"The resulting response pool for each dialogue context contains 20 responses of various qualities.",
"Collection of Human Annotations.",
"From the 2,000 dialogue-response pairs, we select 900 and ask Amazon Mechanical Turk workers to rate response appropriateness on a 5-point Likert scale.",
"Each pair is rated by four workers.",
"After removing annotation outliers for each pair (Leys et al., 2013), the remaining data reaches good reliability, with an inter-annotator agreement of Krippendorff's α > 0.8 (Krippendorff, 2018).",
"We make a 0.8:0.1:0.1 split of the annotated data for training, validation, and test.",
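Per-pair outlier removal can follow a median-absolute-deviation (MAD) rule in the spirit of Leys et al. (2013); the threshold of 2.5 and the 1.4826 consistency constant below are conventional choices, not values reported in the paper.

```python
import statistics

def remove_outliers(scores, threshold=2.5):
    # Flag a rating as an outlier when its distance to the median exceeds
    # `threshold` scaled median absolute deviations.
    med = statistics.median(scores)
    mad = statistics.median([abs(s - med) for s in scores])
    if mad == 0:
        # Degenerate case: most ratings agree exactly; keep only the median ratings.
        return [s for s in scores if s == med]
    return [s for s in scores if abs(s - med) / (1.4826 * mad) <= threshold]

print(remove_outliers([4, 5, 4, 1]))  # the dissenting rating 1 is dropped
```

MAD-based removal is preferred over mean/standard-deviation rules for small rating sets because one extreme rating cannot inflate the spread estimate used to judge it.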
"Figure 1(a) shows the overall distribution of the 900 human scores on response appropriateness (more details of inter-annotator agreement and outlier removal are provided in Appendix A).",
"Figure 1(b) shows box plots of human scores for different response sources (GT: ground-truth, NS: negative-sampling).",
"The distributions suggest that the created data consists of diverse responses.",
"Sai et al. (2019) proved theoretically that the comparison with the reference response in the referenced metric causes ADEM to make conservative predictions whose scores have a very low standard deviation.",
"To investigate the effect of removing reference from computation, we experiment with the full ADEM and RUBER as well as their referenced and unreferenced versions.",
"As shown in Table 1, the referenced metrics of ADEM and RUBER have much lower standard deviations than human scores.",
"ADEM's unreferenced metric has low scores in both correlation and standard deviation because the full ADEM model is heavily affected by its referenced metric while its unreferenced metric is not fully utilized, especially in the data set that includes ground-truth responses.",
"Another important finding is that the referenced metrics' correlations degrade significantly when we remove ground-truth responses from the test data.",
"It suggests that referenced metrics may help evaluators to distinguish a ground-truth response from a non-ground-truth response easily, but they cannot distinguish a good response from a bad one among non-ground-truth responses.",
"Based on the results, we propose to build reference-free evaluators that avoid direct comparison with reference responses, improving their robustness and score diversity.",
"ADEM is a supervised model that relies on human annotations.",
"However, it is expensive to collect large-scale annotated data; on the other hand, RUBER has been shown to reach reasonable correlation scores via only unsupervised training on an NSP task.",
"A natural idea is to apply unsupervised training first and then finetune an evaluator using a relatively small amount of annotated data.",
"Taking RUBER as an example, by finetuning RUBER on 720 annotated samples, we improve its Pearson's correlation from 0.37 to 0.45 and Spearman's correlation from 0.31 to 0.41.",
"All the metrics mentioned before are based on the encoding vectors r̂, r, and c, so a powerful text encoder is essential to building a good evaluator.",
"ADEM and RUBER are both initialized with pretrained RNN response generators.",
"As an alternative, pretrained (masked) language models such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) can be used as a powerful text encoder and have benefited most downstream tasks in natural language processing (Huang et al., 2019; Lan et al., 2020; Joshi et al., 2020; Shimanaka et al., 2019).",
"We choose RoBERTa -large to build our response evaluator.",
"A RoBERTa evaluator produces an encoding vector d given a context c and a response r, and then calculates its score via an MLP with a sigmoid output.",
"We rescale the score to match the annotators' scale of [1, 5]: d = RoBERTa([c; r]; θ), (5) RoBERTa-eval(c, r) = 4 · MLP(d; φ) + 1, (6) where RoBERTa's parameters θ and the MLP's parameters φ can both be optimized during training.",
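The scoring head can be sketched as follows, with a random vector standing in for the RoBERTa encoding of [c; r]; the hidden size is an arbitrary illustrative choice.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def roberta_eval(d, W, b, w_out, b_out):
    # d stands in for the RoBERTa encoding of the concatenated context
    # and response. The MLP head ends in a sigmoid, giving a score in
    # (0, 1), which is then rescaled to the annotators' (1, 5) range.
    hidden = np.tanh(W @ d + b)
    score01 = sigmoid(w_out @ hidden + b_out)
    return 4.0 * score01 + 1.0

rng = np.random.default_rng(1)
H = 16
d = rng.normal(size=H)
W, b = rng.normal(size=(8, H)), rng.normal(size=8)
w_out, b_out = rng.normal(size=8), 0.0
print(roberta_eval(d, W, b, w_out, b_out))  # always strictly between 1 and 5
```

The sigmoid-plus-rescaling design guarantees predictions stay on the annotators' Likert range without clipping, which keeps the regression target well-behaved during finetuning.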
"Table 3 shows the correlation scores and standard deviations of four metric groups.",
"The first group is automated metrics that are based on n-gram overlap (BLEU-2) or word embedding similarities (Average, Extrema, and Greedy).",
"The second group is the baseline ADEM and RUBER.",
"The third group is the semi-supervised full RUBER model, the semi-supervised unreferenced RUBER model, and the RoBERTa-based evaluator that combines the three proposed methods.",
"Human scores are given in the final group.",
"Semi-supervised training yields improvement in correlations, and abandoning referenced metrics makes predictions less conservative.",
"The RoBERTa evaluator outperforms the baselines by a large margin and has a much more human-like score diversity.",
"We are interested in applying a trained response evaluator to new data of different domains or styles.",
"Therefore, we carry out experiments to study the transferability of the RoBERTa evaluator.",
"In addition to the DailyDialog (DD) corpus, we further collect annotations on 900 responses from the PersonaChat (PC) corpus (Zhang et al., 2018), following the same procedure as in Section 4. According to the results in Table 4, the evaluator generalizes to a new corpus much better than the baseline RUBER; the evaluator trained on the DD corpus achieves even higher correlation scores when applied to the PC corpus.",
"However, performance degradation is observed when applying the evaluator trained on the PC corpus to the DD corpus.",
"It suggests that we should make a careful choice of training data when planning to evaluate our models on different corpora.",
"Although only 720 annotated samples are used in the experiments above, we explored the possibility of training with even less data.",
"Figure 2 shows that, with only around 100 samples, the RoBERTa evaluator can reach performance close to the result obtained using the entire 720 samples.",
"In this section, we address Sai et al. (2019)'s requirements towards a robust evaluator.",
"1. Not be heavily influenced by the reference response.",
"The proposed evaluator is entirely independent of references.",
"2. Generalizing to diverse responses.",
"1) After removing ground-truth from the test data, the RoBERTa evaluator still achieves 0.62 Pearson's correlation and 0.64 Spearman's correlation.",
"2) The evaluator achieves good performances on diverse responses (see Section 4) and different corpora (see Section 6.1).",
"3. Sensitivity to grammar and relevance of the response.",
"We also collected annotations for relevance and grammatical correctness .",
"The RoBERTa evaluator trained on appropriateness annotations can achieve 0.68 Pearson's and 0.67 Spearman's correlations with relevance annotations, while its correlation scores with grammatical correctness are only 0.09 and 0.15.",
"However, this is understandable, because responses with perfect grammar can still be inappropriate in a given context, and grammar itself is not highly correlated with appropriateness.",
"4. Robust against fooling attacks.",
"Unlike in Sai et al. (2019), we have not found any magic responses that can consistently fool the evaluators into outputting high scores.",
"Automatic dialogue response evaluators suffer from limited robustness and limited correlation with human judgement.",
"We investigated three methods to alleviate them: 1) using reference-free metrics, 2) applying semi-supervised training, and 3) exploiting powerful pretrained text encoders.",
"Experimental results demonstrated that our proposed evaluator achieved strong correlation ( > 0.6) with human judgement and showed robustness in dealing with diverse responses and a new domain.",
"It can also be trained efficiently with less than 100 annotated samples.",
"The authors would like to thank Shinsuke Mori from Kyoto University, Wei Wu from Microsoft, Graham Neubig from CMU, and the anonymous reviewers for their constructive comments.",
"This work was supported by JST ERATO Ishig-uro Symbiotic Human-Robot Interaction program (Grant Number JPMJER1401), Japan."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other"
] |
[
"Synthesizing QA pairs with a question generator (QG) on the target domain has become a popular approach for domain adaptation of question answering (QA) models.",
"Since synthetic questions are often noisy in practice, existing work adapts scores from a pretrained QA (or QG) model as criteria to select high-quality questions.",
"However, these scores do not directly serve the ultimate goal of improving QA performance on the target domain.",
"In this paper, we introduce a novel idea of training a question value estimator (QVE) that directly estimates the usefulness of synthetic questions for improving the target-domain QA performance.",
"By conducting comprehensive experiments, we show that the synthetic questions selected by QVE can help achieve better target-domain QA performance, in comparison with existing techniques.",
"We additionally show that by using such questions and only around 15% of the human annotations on the target domain, we can achieve comparable performance to the fully-supervised baselines.",
"Question answering (QA) systems based on pretrained language models such as BERT (Devlin et al., 2019) have recently achieved promising performance in machine reading comprehension.",
"However, neural QA systems trained on one domain may not generalize well to another, leaving it challenging to deploy such systems on new domains that lack large-scale QA training data.",
"In this paper, we are interested in semi-supervised domain adaptation : we aim to build a target QA model with source-domain data and a small number of target-domain annotated QA pairs.",
"Due to high annotation costs, existing work (Golub et al., 2017; Dong et al., 2019; Wang et al., 2019; Puri et al., 2020; Chen et al., 2020; Yue et al., 2021) proposes to synthesize target-domain QA pairs via neural question generation (QG) models.",
"The synthetic data are then used to train a QA model on the target domain.",
"In practice, however, the generated questions are often of low quality, such as being semantically mismatched with their paired answers or asking about simple facts (Fig-ure 1).",
"Including all such questions for QA training is less likely to bring substantial improvements.",
"This inspires us to study a crucial problem: given a set of target-domain synthetic QA pairs, how to select high-quality ones that are useful to improve target-domain QA training?",
"To address the problem, Alberti et al. (2019) propose the Roundtrip Consistency (RTC) method, which filters out questions that cannot be correctly answered by a pretrained QA model.",
"Other work (Shakeri et al., 2020) considers using the generation log likelihood by the QG model (LM Score) as a metric to filter noisy questions (Figure 1, top).",
"Although these filtering techniques have been shown to improve the question quality to some extent (Rennie et al., 2020), they are not directly optimized for selecting questions that can improve QA performance on the target domain .",
"For example, some useful but difficult questions (e.g., the last example in Figure 1) may be filtered out by the Roundtrip method, since they cannot be answered correctly by the pretrained QA model.",
"However, these questions are often crucial to further improving QA performance when added into training.",
"In this paper, we propose a question value estimator (QVE) (Figure 1, middle) to select questions that can improve QA performance on the target domain.",
"QVE takes in generated QA examples and outputs real-valued scores (i.e., question values), which are expected to represent the usefulness of generated questions in terms of improving target-domain QA performance.",
"However, training the QVE model towards this goal is challenging due to the lack of supervision (i.e., true question values).",
"To solve the problem, we propose to train the QVE with direct QA feedback from the target domain.",
"Intuitively, if a batch of synthetic questions (when used for training) leads to increasing accuracy of the target-domain QA model, QVE should assign high values to them; the more the accuracy increases, the higher the question values should be.",
"Thus, we optimize QVE with the target-domain QA performance gain after adding the selected questions into training.",
"More formally, since the question selection process is discrete and non-differentiable, we formulate the question selection of QVE as a reinforcement learning (Williams, 1992) problem (Figure 2).",
"The QVE receives a batch of synthetic samples each time and learns to select high-quality ones based on their estimated values.",
"The selected samples are then used to train the target-domain QA model, with the resulting performance gain (on the available target-domain annotations) as the reward.",
"Note that we use filter (noisy/low-quality questions) and select (useful/high-quality questions) interchangeably.",
"The reward guides the optimization of QVE such that it will eventually make proper question value estimation and selection.",
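The selection loop above can be sketched as REINFORCE over a Bernoulli selection mask. The toy linear QVE and the stand-in reward below replace the paper's BERT-based estimator and the real target-QA performance gain; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def qve_probs(feats, w):
    # Toy linear QVE: per-example probability of being selected.
    return 1.0 / (1.0 + np.exp(-(feats @ w)))

def reinforce_step(feats, w, reward_fn, lr=0.1):
    # One REINFORCE update: sample a binary selection mask, observe the
    # reward (standing in for the target-QA performance gain after
    # training on the selected batch), and move w along
    # reward * grad_w log pi(mask | w).
    p = qve_probs(feats, w)
    mask = (rng.random(len(p)) < p).astype(float)
    reward = reward_fn(mask)
    grad_log_p = feats.T @ (mask - p)  # gradient of the Bernoulli log-likelihood
    return w + lr * reward * grad_log_p

feats = rng.normal(size=(32, 4))

def reward_fn(mask):
    # Stand-in reward: pretend that examples with a positive first
    # feature improve the target QA model when selected.
    return float(feats[mask == 1, 0].mean()) if mask.sum() else -1.0

w = np.zeros(4)
for _ in range(200):
    w = reinforce_step(feats, w, reward_fn)
print(w)
```

Because only the scalar reward is needed, this scheme works even though the path from "questions selected" to "QA accuracy gain" is not differentiable.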
"To evaluate the QVE model, we instantiate the QG and the QA model based on the pretrained BART (Lewis et al., 2020) and BERT (Devlin et al., 2019), respectively.",
"By carrying out comprehensive experiments on four commonly-used reading comprehension datasets (Trischler et al., 2017; Joshi et al., 2017; Yang et al., 2018; Kwiatkowski et al., 2019), we show that: (1) our QVE model trained with the target-domain QA feedback substantially outperforms the question selection techniques trained without direct QA feedback (Alberti et al., 2019; Shakeri et al., 2020).",
"(2) When using our QVE model to select synthetic questions, QA models can achieve comparable performance to fully-supervised baselines while using only 15% of the full target-domain annotations, which indicates that our method can greatly alleviate human annotation effort in practice.",
"(3) To understand why QVE brings superior improvement, we conduct human evaluation and find that QVE can better identify semantically-matched and difficult questions.",
"Domain Adaptation of Question Answering.",
"In this field, some work (Wiese et al., 2017; Chung et al., 2018; Hazen et al., 2019; Cao et al., 2020) assumes that target-domain annotated questions are available; however, manually creating questions is costly.",
"Therefore, another line of research work (Golub et al., 2017; Wang et al., 2019; Lee et al., 2020; Shakeri et al., 2020) investigates a domain adaptation setting where annotated questions are not available on the target domain.",
"A commonly-adopted approach of this line is to leverage a neural question generation (QG) model (Du et al., 2017; Zhou et al., 2017; Sun et al., 2018; Zhao et al., 2018; Nema et al., 2019; Tuan et al., 2020) to automatically synthesize questions given unlabeled contexts (Du and Cardie, 2018; Zhang and Bansal, 2019; Wang et al., 2019; Liu et al., 2020; Golub et al., 2017; Wang et al., 2019; Lee et al., 2020; Shakeri et al., 2020; Yue et al., 2021); see more discussions in Section 3.",
"However, it is very challenging to achieve satisfying performance without any target annotations.",
"In our work, we study semi-supervised domain adaptation of QA and assume a small number of target annotations are available, which can greatly help models adapt to the target domain while requiring minimal human effort.",
"Unsupervised and Semi-supervised QA are two other research topics relevant to our work (Fabbri et al., 2020; Li et al., 2020; Lewis et al., 2019; Dhingra et al., 2018).",
"Unlike domain adaptation, these two settings do not assume the existence of the source domain and synthesize cloze-style questions via rule-based methods for building QA models.",
"Since rule-based QG methods typically have much worse performance than neural ones (pretrained on the source data), we do not compare with these two lines of research in experiments.",
"Data Selection methods aim to select a useful subset from the (noisy) training data.",
"Though (RL-based) data selection methods were explored in other NLP tasks (Ruder and Plank, 2017; Qu et al., 2019; Liu et al., 2019), none of them can be directly applied with trivial efforts to our QA scenario and semi-supervised setting.",
"For example, (Ruder and Plank, 2017) and (Liu et al., 2019) reward or measure the selection with the distribution distance between the selected data and target data, while we reward the selection by measuring how large the improvement the selected data can bring for target-domain QA training, which is more aligned with the end goal.",
"Our work is mostly inspired by recent research on data selection in machine learning community (Ghorbani and Zou, 2019; Jia et al., 2019), particularly (Yoon et al., 2020).",
"However, the significant differences between our work and (Yoon et al., 2020) are as follows:",
"1) we study a very challenging task, domain adaptation of question answering, which was not studied in (Yoon et al., 2020).",
"How to develop a method in a similar spirit for this task is unexplored.",
"2) In order to study the task, we begin our method by first proposing two data selection methods that are not covered in (Yoon et al., 2020) but achieve comparable results to existing baselines.",
"We then introduce our RL-based method with a carefully-designed reward, which is well connected to the end goal of improving target-QA performance.",
"We study the semi-supervised domain adaptation of extractive question answering, where the source-domain and a small number of target-domain QA annotations are provided.",
"Formally, we denote the source-domain QA dataset as D^s = {(c^s_i, q^s_i, a^s_i)}_{i=1}^N, where large-scale tuples of context c^s_i, question q^s_i, and answer a^s_i are available.",
"For the target domain, only a small set of annotated QA pairs D^t = {(c^t_j, q^t_j, a^t_j)}_{j=1}^M is available (M ≪ N).",
"Since unlabeled contexts are easy to collect, we assume that they are largely available: C^t = {c^t_l}_{l=1}^L (L ≫ M).",
"The task is to build a QA model that can accurately answer questions on the target domain, given D^s, D^t, and C^t.",
"Domain Adaptation via Question Generation.",
"Given the lack of large-scale target-domain annotations, an intuitive approach to domain adaptation is first synthesizing target-domain QA data D^t_syn = {(c^t_l, q^t_l, a^t_l)}_{l=1}^L automatically from the unlabeled contexts C^t, and then training a target-domain QA model on the synthetic (D^t_syn) and the small-size annotated (D^t) target-domain data.",
"In such an approach, a question generator (QG) g is first pretrained on the source training data and further finetuned on the available target-domain annotated QA pairs.",
"A well-trained QG model then takes target-domain context-answer pairs as input to generate a question: q^t_l = g(c^t_l, a^t_l).",
"Although this approach has been shown promising, in practice, its effectiveness is restricted by the quality of synthetic questions.",
"Thus, learning to select ones that can lead to a better target-domain QA model becomes a crucial problem.",
"With respect to how to obtain a^t_l for QG, in this paper, we assume an answer a^t_l (i.e., a text span in the context c^t_l) is given, following Du et al. (2017).",
"When the answer a^t_l is not given, it can be extracted from the given context by using an entity recognition tool (Du and Cardie, 2018), a classifier (Puri et al., 2020), or a seq2seq model (Shakeri et al., 2020).",
"Note that noise caused by such answer extraction tools will further lower the overall quality of the synthesized questions.",
"In this paper, we focus on how to select useful synthetic questions in general (i.e., those questions can be synthesized by any QG process) and assume answers are given for simplicity.",
"Given the synthetic target-domain QA data D^t_syn, the task is to select high-quality pairs from D^t_syn that are useful to improve target-domain QA training.",
"Such a selection decision is often made based on some scores that can indicate the quality of the pairs.",
"For example, Roundtrip filtering (Al-berti et al., 2019) selects questions based on the extracted answer's correctness by a pretrained QA model.",
"Similarly, LM filtering (Shakeri et al., 2020) selects questions with high log-likelihood scores in the generation.",
"However, these scores do not directly serve the goal of improving target-domain QA training.",
"Inspired by recent research on data selection in the machine learning community (Ghor-bani and Zou, 2019; Jia et al., 2019; Yoon et al., 2020), we propose a new idea of training a question value estimator , which predicts the usefulness of a synthetic question for target-domain QA.",
"Formally, we design a question value estimator (QVE), e, which takes in a synthetic QA example (c_l, q_l, a_l) (for simplicity, we omit the superscript t) and outputs a score indicating its value, i.e., v_l = e(c_l, q_l, a_l).",
"The value can imply the potential for improving the target-domain QA performance when being used as a training sample.",
"With this score, one can select most useful synthetic examples for the target-domain QA training.",
"We use a BERT model as the backbone of the QVE.",
"Specifically, we concatenate the context, question and answer as input to the QVE, and use BERT to encode the sequence (Devlin et al., 2019).",
"h = BERT([<CLS> q <ANS> a <SEP> c]), where q, a, c represent the question, answer, and context, respectively.",
"h ∈ R^H denotes the hidden representation of the input sequence derived from the <CLS> token.",
"<ANS> and <SEP> are two special tokens used as delimiters.",
"In our preliminary experiments, we find that adding the answer (start index and end index) probabilities (p_s, p_e) from a pretrained QA model as additional features to the hidden representation h can accelerate the QVE training convergence and lead to better performance.",
"Thus, we add these two features (p_s, p_e) after linear transformations of the original hidden representation, and then build a linear classifier to output the question value.",
"h' = σ(W_2 σ(W_1 h + b_1) + b_2), h'' = σ(W_3 (h' ⊕ p_s ⊕ p_e) + b_3), v_l = W_4 h'' + b_4, where W_1 ∈ R^{H_1×H}, W_2 ∈ R^{H_2×H_1}, W_3 ∈ R^{H_3×(H_2+2)}, W_4 ∈ R^{H_3}, b_1 ∈ R^{H_1}, b_2 ∈ R^{H_2}, b_3 ∈ R^{H_3}, and b_4 ∈ R are trainable parameters of linear layers.",
"σ is the activation function tanh.",
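Put together, the QVE head reads as the following NumPy sketch; h is a random stand-in for the BERT <CLS> vector, and the hidden sizes H, H_1, H_2, H_3 are illustrative choices.

```python
import numpy as np

def qve_head(h, p_s, p_e, W1, b1, W2, b2, W3, b3, W4, b4):
    # h: <CLS> representation from BERT; p_s, p_e: answer start/end
    # probabilities (scalars) from a pretrained QA model, appended as
    # extra features before the final hidden layer.
    h1 = np.tanh(W2 @ np.tanh(W1 @ h + b1) + b2)               # h'
    h2 = np.tanh(W3 @ np.concatenate([h1, [p_s, p_e]]) + b3)   # h''
    return W4 @ h2 + b4                                        # question value v_l

rng = np.random.default_rng(2)
H, H1, H2, H3 = 32, 16, 8, 8
h = rng.normal(size=H)
W1, b1 = rng.normal(size=(H1, H)), rng.normal(size=H1)
W2, b2 = rng.normal(size=(H2, H1)), rng.normal(size=H2)
W3, b3 = rng.normal(size=(H3, H2 + 2)), rng.normal(size=H3)
W4, b4 = rng.normal(size=H3), 0.0
print(qve_head(h, 0.9, 0.8, W1, b1, W2, b2, W3, b3, W4, b4))
```

Concatenating the two QA-model probabilities after the first MLP keeps them from being swamped by the much higher-dimensional text representation.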
"Learning such a question value estimator is challenging because we do not have direct supervision on the true value or usefulness of a synthetic question.",
"We discuss two straightforward baselines to train QVE in Section 4.1, and a more advanced one based on reinforcement learning in Section 4.2.",
"Binary Classifier : One straightforward solution is to treat QVE as a binary classifier and train it based on the human-annotated (positive) and the machine-synthesized (negative) QA pairs.",
"Given the scarcity of target-domain data, we first pretrain the classifier on the source domain and then finetune it on the target domain.",
"More specifically, we train a QG model on 70% of the source training data and generate synthetic questions on the remaining 30% of the source training contexts.",
"The generated questions and the source-domain annotated questions are used to train this binary classifier.",
"The classifier is then finetuned based on the small set of target-domain annotations (positive) and the samples synthesized on the same target-domain contexts (negative).",
"However, not all of the generated questions are bad.",
"Simply treating all synthetic samples as negatives may mislead the classifier.",
"Thus, we loose this assumption and introduce a ranking baseline.",
"Ranking Baseline : We assume that the quality of human-annotated questions is not inferior than that of machine-synthesized ones.",
"Thus, we train QVE based on a ranking triplet loss defined as follows: L r = (cid:88) max(0 , m + v s v h ) where v s , v h are the estimated question values of the machine-synthesized sample and human-annotated sample.",
"m is set to 0 .",
"15 as the margin.",
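This hinge-style objective can be sketched with numpy (a minimal sketch; the batching over pairs is our choice):

```python
import numpy as np

def ranking_loss(v_syn, v_human, margin=0.15):
    """Triplet ranking loss: penalize whenever a machine-synthesized value
    v_s exceeds a human-annotated value v_h by more than -margin, i.e.
    sum over pairs of max(0, m + v_s - v_h)."""
    v_syn, v_human = np.asarray(v_syn), np.asarray(v_human)
    return np.maximum(0.0, margin + v_syn - v_human).sum()

loss = ranking_loss([0.9, 0.2], [0.8, 0.7])  # 0.25 + 0.0 = 0.25
```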
"The two baseline methods have two obvious drawbacks: (1) they are trained to differentiate between human-annotated and machine-synthesized samples, which is mismatched with our goal of selecting high-quality samples among machine-synthesized data ; (2) similar as (Alberti et al., 2019; Shakeri et al., 2020), the two baselines are not trained with direct signals that can represent the usefulness of a synthetic question.",
"In the next section, we will introduce a task-specific training 1343 Question Value Estimator Target Annot.",
"Algorithm 1 QVE REINFORCED Training Input : pretrained QA model f ; target synthetic QA pairs D t syn ; small target annotations D t .",
"Hyperparameters : outer iterations I o , outer batch size B o , inner iterations I n , inner batch size B n , QVE learning rate o , QA learning rate n .",
"Output : QVE e .",
"A well-trained QVE is expected to assign high values to synthetic questions that can improve the target-domain QA performance.",
"Therefore, an intuitive way to measure the value of a synthetic question is to consider the downstream QA performance gain (on the available target annotations) before and after this question is included in the training set.",
"However, this leave-one-out formulation is computationally expensive and time-consuming, given that it can estimate the value of only one single synthetic question in each forward pass.",
"In light of this challenge, we instead estimate question values in a batch-wise fashion.",
"Algorithm 1 and Figure 2 describe the learning process.",
"Generally speaking, we frame the QVE model learning as a reinforcement learning problem (Williams, 1992), and stimulate QVE to assign higher values to more useful questions by using performance-driven rewards.",
"Specially, for a batch of synthetic examples D = { ( c l , q l , a l ) } B o l =1 in the outer training iteration (Line 4-5), the QVE model selects a subset of examples that are most likely to boost the QA performance on the target domain, based on its judgment on their values.",
"Mathematically, the decision-making outcome is represented by the selection vector S = ( s 1 , s 2 , ..., s B o ) , where s l { 0 , 1 } l = 1 , ..., B o (Line 6-9).",
"The whole batch-level decision making policy is described as follows: v l = e ( c l , q l , a l ) s l Bernoulli ( v l ) ( S|D ) = B o (cid:89) l =1 [ v s l l (1 v l ) 1 s l ] , where the selection of a certain example ( c l , q l , a l ) is formulated as sampling from a Bernoulli distribution of probability v l (i.e., its estimated question value).",
"We adopt the Bernoulli sampling based on the estimated value v l instead of setting a hard threshold to encourage the policy exploration.",
"The model is rewarded based on how much performance gain the selected examples could bring 1344 when they are used to train the target-domain QA model.",
"To this end, we finetune the QA model f on the selected batch samples based on L qa , which typically is a cross-entropy loss: L qa = B o (cid:88) l log P ( a l | q l , c l ; ) In practice, to stabilize the QVE training, we choose a large outer batch size B o in each outer training iteration.",
"For finetuning the QA model, we pick a relatively smaller inner batch size B n and repeat the training for I n times, such that the QVE-selected samples are fully utilized (Line 10-14).",
"The reward r qve is defined as the QA performance gain on the target-domain annotations D t before ( f 0 ) and after ( f ) finetuning (Line 15-16), r qve = reward_fn ( f 0 , f , D t ) where reward_fn is Exact Match (EM) gain 5 .",
"Given the discrete and non-differentiable question selection process, we update the QVE model using the REINFORCE algorithm (Williams, 1992).",
"Mathematically, we aim to minimize: L = E S ( |D ) [ r qve ] .",
"L = E S [ r qve log ( S|D )] = E S [ r qve B o (cid:88) l =1 log[ v s l l (1 v l ) 1 s l ]] .",
"(1) Notably, to mitigate the instability in reinforcement learning, we reset the QA model to its pretrained checkpoint at the end of each outer iteration (Line 19), and keep the pretrained QG model unchanged.",
"After training QVE, we can use it to calculate the question value for all the synthetic questions on the target domain.",
"Then we can select top K % synthetic QA pairs as the training corpus to train the target-domain QA model.",
"We use datasets in the MRQA 2019 Shared Task (Fisch et al., 2019), a popular challenge focusing on generalization in reading comprehension.",
"5 We also tried F1 gain and loss drop as the reward_fn and the EM gain is slightly better than the other two.",
"Specifically, following Shakeri et al. (2020), we use SQuAD 1.1 (Rajpurkar et al., 2016) as the source-domain dataset.",
"For the target-domain datasets, we consider NewsQA (Trischler et al., 2017), Natural Questions (NQ) (Kwiatkowski et al., 2019), HotpotQA (Yang et al., 2018) and TriviaQA (Joshi et al., 2017) as they are commonly used and have sufficient contexts for the QG model to generate synthetic samples.",
"Since there is no test set available for each dataset, we use the original dev set as the test set.",
"Detailed descriptions of each dataset are in Appendix A. For the target-domain datasets, we assume all the contexts and n annotated QA pairs in the original training sets are available for training.",
"We set n = 1000 (about 1%-1.5% of original training sets) as default and discuss the impact of n in Section 6.2.",
"We implement models using the Hugging Face transformers (Wolf et al., 2020) library.",
"We instantiate the QA model with BERT-base-uncased (Devlin et al., 2019), and the QG model with BART-base (Lewis et al., 2020).",
"For training QVE (Algorithm 1), we use BERT-base-uncased model and set H 1 = H 3 = H = 768 and H 2 = 64 for linear layers.",
"To enable a large batch size B o , we use gradient checkpointing (Chen et al., 2016), a technique used for reducing the memory footprint when training deep neural networks.",
"We set I o = 2000 , B o = 80 , I n = 20 , B n = 4 , and o = n = 3 e 5 .",
"To select the best QVE checkpoint, we pick the one that achieves the highest reward on the target annotations or the one that leads to the lowest QA training loss.",
"When training (finetuning) QA and QG models (either on source or target domain), we set training epochs as 2 and 3 respectively.",
"Other hyperparameters are set as default in the transformers library.",
"We evaluate the following QA models built on different training data: (1) Source Only Baseline : we train a QA model on the source-domain data.",
"(2) Source + Target Annotations Baseline : we further finetune the (1) Source Only Baseline on the available target annotated QA pairs.",
"(3) QG Baseline (no filtering) : we first pretrain a QG model on the source-domain data and finetune it on the available target annotations.",
"The 1345 Different Filtering Methods Dataset NoFilter RTC LM QVE NewsQA 74,160 33,756 44,485 44,485 NQ 104,071 62,888 62,443 62,443 HotpotQA 72,928 46,273 43,757 43,757 TriviaQA 61,688 26,361 37,013 37,013 Table 1: Number of synthetic examples selected by different methods.",
"QG model is then used to generate synthetic QA samples on the target contexts.",
"We finetune a QA model sequentially on all available data with the order of source target synthetic target annotated for all the datasets except TriviaQA 6 .",
"The same QA finetuning strategy will also be used for (4)-(8).",
"(4) RoundTrip Filtering (Alberti et al., 2019): we use the (2) Source + Target Annotation Baseline to extract answers for target synthetic questions and select the ones, whose extracted answers are correct, as the target synthetic training corpus.",
"(5) LM Filtering (Shakeri et al., 2020): we use the log likelihood scores of synthetic questions produced by the QG model in (3) as the filtering criterion.",
"We select top K% samples as the target synthetic training corpus.",
"(8) QVE (RL) : we train QVE based on the direct feedback from target annotations using RL (Sec-tion 4.2), and then use it to select top K% target synthetic samples.",
"(9) Fully-supervised Baseline : we train a QA model on the original target training data.",
"Note that we report the fully-supervised performance here only as the reference and (1)-(8) are not directly comparable to this.",
"The number of the selected synthetic examples of RoundTrip Filtering is determined by the QA model and varies for each dataset.",
"For LM Filtering and QVE, we select top K% (K=60) samples among all synthetic ones and discuss the impact of the synthetic dataset size in Appendix B. We show the statistics of filtered datasets in Table 1.",
"6 For the TriviaQA dataset, we merge the target synthetic and target annotated dataset into one training file since directly finetuning on the target annotated dataset would hurt the QA performance based on our preliminary experiments.",
"We first discuss the domain adaptation results on the 4 target-domain QA datasets under semi-supervised setting where n = 1 , 000 target-domain QA examples are available.",
"Table 2 shows the overall results of different methods.",
"We summarize key findings as follows: (1) Compared with RoundTrip and LM Filtering, our QVE (RL) achieves the best performance.",
"This is because both baselines are not specifically trained to select useful examples for improving QA performance on the target domain.",
"Our QVE, on the contrary, is trained with a signal that directly reflects the QA performance, which can more accurately estimate the question value and select useful pairs for target-domain QA.",
"(2) Two QVE baselines (binary classifier and ranking baseline) can select some useful questions and achieve comparable performance with RoundTrip and LM Filtering.",
"However, due to the lack of direct QA evaluation feedback, they underperform QVE (RL), which demonstrates the usefulness of the QA feedback during training QVE.",
"In Table 2, we showed that with n ( n =1,000) target annotated QA pairs and the selected high-quality synthetic QA pairs, we can finetune a better QA model on the target domain.",
"In this section, we discuss the influence of n on the target-domain QA performance.",
"The results are shown in Figure 3, and interesting findings include: (1) In general, the performance of all models improves as more target annotations are used.",
"This is intuitive as more annotated pairs can improve both QA and QG training.",
"With a better QG model, the quality of the synthetic questions is improved, which could also lead to better QA models.",
"(2) Our QVE model can often outperform the QG baseline and the filtering baselines.",
"With an optimization objective considering the downstream QA performance, QVE can select more useful questions for improving target-domain QA.",
"(3) The improvement of our QVE compared with baselines is usually larger when more annotated QA pairs are available.",
"This is because our QVE training (with RL) relies on the QA feedback based on the available annotated pairs.",
"With more annotated pairs, the feedback can be more accurate, thus 1346 No.",
"leading to a better QVE for selecting more useful synthetic questions.",
"(4) With 10,000 (around 15% of the original training set) target annotations and the synthetic questions selected by QVE, we can achieve comparable performance with the fully-supervised baseline.",
"This indicates that one can save more annotation budgets when building a target-domain QA model based on our QVE in practice.",
"The results presented in the previous sections are based on BERT-base and BART-base .",
"In this section, we test whether our QVE can still be effective when working with larger models, and select BERT-Large and BART-Large as QA and QG model respectively.",
"When changing the QA (QG) model to its larger alternative, we keep the other one as the base model to better show the difference.",
"We use NaturalQuestions (NQ) and HotpotQA as representative datasets, and show results on them (with 1,000 target annotations).",
"As shown in Table 3, our QVE model can still help improve the performance for larger instantiations of QG/QA.",
"In this section, we aim to gain a better understanding of why QVE helps QA and verify that QVE selects more semantically matched and non-trivial questions, thus benefiting downstream QA.",
"Since automatic metrics cannot often reflect the actual quality of the question selections, we sample 50 generated examples from each target-domain dataset (200 in total), and ask three human annotators to label whether a generated QA pair is semantically matched (i.e., can be selected to train QA) and (if yes) whether it asks about a simple fact.",
"To lower the annotation bias in determining 1347 Question ID in the dataset Context Question Human Labels Selected by models?",
"whether a generated question asks about a simple fact or not, we provide the ground-truth question (the question in the original dataset created by humans) as a reference.",
"If the generated question is simpler than the ground truth, then it would be marked as trivial; otherwise, it is a non-trivial one.",
"Three annotators work independently and we adopt the majority vote for deciding the final labels of a generated QA pair (if disagreement appears).",
"We calculate the precision, recall and F1 between predictions 7 by each filtering method and human labels (for both semantically matched and non-trivial).",
"As shown in Table 5, though three methods obtain a similar precision on all sampled questions, our method has a better recall, especially on the non-trivial questions.",
"This means that our method can select more semantically matched and non-trivial questions, which explains why it leads to better QA performance.",
"We also show some real cases in Figure 1 and Table 4 to further illustrate this point.",
"For example, our QVE selects What was the nickname given to the woman who allegedly provided call girls for prostitution? while the baselines do not pick this semantically matched and non-trivial question.",
"For another example, Who is the founder of CNN , both baselines select it while our QVE filters it out since such a simple question would probably not help further improve QA.",
"We propose a question value estimator to estimate the usefulness of synthetic questions and select useful ones for improving target-domain QA train-7",
"ing. We optimize QVE with the target-domain QA performance gain after adding the selected questions into training.",
"Our comprehensive experiments demonstrate the superiority of QVE compared with other question selection methods.",
"Additionally, using the synthetic questions selected by QVE and only around 15% of the human annotated data on each target domain, we can achieve comparable performance to the fully-supervised baselines.",
"The authors would thank all the anonymous reviewers and the entire OSU and GMU NLP Group.",
"This research was sponsored in part by NSF IIS-1815674, NSF CAREER #1942980, NSF OAC-2112606, and Ohio Supercomputer Center (OSC, 1987).",
"The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S.Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"method",
"result",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"objective",
"method",
"objective",
"result",
"other",
"other",
"other",
"other"
] |
[
"Abstract Higher-order methods for dependency parsing can partially but not fully address the issue that edges in dependency trees should be constructed at the text span/subtree level rather than word level.",
"In this paper, we propose a new method for dependency parsing to address this issue.",
"The proposed method constructs dependency trees by directly modeling span-span (in other words, subtree-subtree) relations.",
"It consists of two modules: the text span proposal module which proposes candidate text spans, each of which represents a subtree in the dependency tree denoted by (root, start, end); and the span linking module , which constructs links between proposed spans.",
"We use the machine reading comprehension (MRC) framework as the backbone to formalize the span linking module, where one span is used as query to extract the text span/subtree it should be linked to.",
"The proposed method has the following merits: (1) it addresses the fundamental problem that edges in a dependency tree should be constructed between subtrees; (2) the MRC framework allows the method to retrieve missing spans in the span proposal stage, which leads to higher recall for eligible spans.",
"Extensive experiments on the PTB, CTB and Universal Dependencies (UD) benchmarks demonstrate the effectiveness of the proposed method.",
"1 2 1 Introduction Dependency parsing is a basic and fundamental task in natural language processing (NLP) (Eis-ner, 2000; Nivre, 2003; McDonald et al., 2005b).",
"Among existing efforts for dependency parsers, graph-based models (McDonald et al., 2005a; Pei et al., 2015) are a widely used category of models, which cast the task as finding the optimal maximum spanning tree in the directed graph.",
"Graph-1 Chun Fan is the corresponding author.",
"With notations defined in the previous section, we now illustrate how to compute the score ( T w 0 ) in",
"Eq.(1).",
"Since we want to model the span-span relations inside a dependency tree, where the tree is composed by spans and the links between them, we formalize the scoring function as: score ( T w 0 ) = n (cid:88) i =1 score span ( T w i ) + (cid:88) ( w i w j ) T w 0 score link ( T w i , T w j ) (2) where score span ( T w i ) represents how likely the subtree rooted at w i covers the text span from T.s to T.e .",
"score link ( T w i , T w j ) represents how likely tree T w j is a subtree of T w i , i.e. there is an arc from w i to w j , and is a hyper-parameter to balance score span and score link .",
"We will illustrate the details how to compute score span ( T ) and score link ( T 1 , T 2 ) in the following sections.",
"Table 1 shows all the spans and links for the left tree in Figure 1.",
"In this section, we introduce the span proposal module.",
"This module gives each tree T w i a score score span ( T w i ) in",
"Eq.(2), which represents how likely the subtree rooted at w i covers the text span from T w i",
".s to T w i",
".e .",
"The score can be decomposed into two components the score for the left half span from w i to T w i",
".s , and the score for the right half span from w i to T w i",
".e , given by: score span ( T w i ) = score start ( T w i .s | w i ) + score end ( T w i .e | w i ) (3) We propose to formalize score start ( T w i .s | w i ) as the score for the text span starting at T w i",
".s , ending at w i , by transforming the task to a text span extraction problem.",
"Concretely, we use the biaffine function (Dozat and Manning, 2016) to score the text span by computing score start ( j | i ) the score of the tree rooted at at w i and staring at w j : score start ( j | i ) = x (cid:62) i U start x j + w (cid:62) start x j (4) where U R d d and w R d are trainable parameters, x i R d and x j R d are token representations of w i and w j respectively.",
"To obtain x i and x j , we pass the sentence s to pretrained models such as BERT (Devlin et al., 2018).",
"x i and x j are the last-layer representations output from BERT for w i and w j .",
"We use the following loss to optimize the left-half span proposal module: L startspan = n (cid:88) i =1 log exp( score start ( T w i .s | i )) (cid:80) nj =1 exp( score start ( j | i )) (5) This objective enforces the model to find the correct span start T w i",
".s for each word w i .",
"We ignore loss for w 0 , the dummy root token.",
"score end ( T w i .e | w i ) can be computed in the similar way, where the model extracts the text span rooted at index w i and ending at T w i",
".e : score end ( j | i ) = x (cid:62) i U end x j + w (cid:62) end x j (6) The loss to optimize the right-half span proposal module: L endspan = n (cid:88) i =1 log exp( score end ( T w i .e | i )) (cid:80) nj =1 exp( score end ( j | i )) (7) Using the left-half span score in",
"Eq.(4) and the right-half span score in",
"Eq.(6) to compute the full span score in",
"Eq.(3), we are able to compute the score for any subtree, with text span starting at T w i",
".s , ending at T w i",
".e and rooted at w i .",
"Given two subtrees T w i and T w j , the span linking module gives a score score link ( T w i , T w j ) to represent the probability of T w j being a subtree of T w i .",
"This means that T w i is the parent of T w j , and that the span associated with T w j , i.e., ( T w j .s, T w j .e ) is fully contained in the span associated with T w i , i.e., ( T w i .s, T w i .e ) .",
"We propose to use the machine reading comprehension framework as the backbone to compute this score.",
"It operates on the triplet {context ( X ), query ( q ) and answer ( a )}.",
"The context X is the original sentence s .",
"The query q is the child span ( T w j .s, T w j .e ) .",
"And we wish to extract the answer, which is the parent span ( T w i .s, T w i .e ) from the context input sentence s .",
"The basic idea here is 2429 that using the child span to query the full sentence gives direct cues for identifying the corresponding parent span, and this is more effective than simply feeding two extracted spans and then determining whether they have the parent-child relation.",
"Constructing Query Regarding the query, we should consider both the span and its root.",
"The query is thus formalized as follows: <sos> , T w j",
"where <sos> , <sor> , <eor> , and <eos> are special tokens, which respectively denote the start of span, the start of root, the end of root, and the end of span.",
"One issue with the way above to construct query is that the position information of T w j is not included in the query.",
"In practice, we turn to a more convenient strategy where the query is the original sentence, with special tokens <sos> , <sor> , <eor> , and <eos> used to denote the position of the child.",
"In this way, position information for child T w j can be naturally considered.",
"Answer Extraction The answer is the parent, with the span T w i",
".s, T w i",
".e rooted at T w i .",
"We can directly take the framework from the MRC model by identifying the start and end of the answer span, respectively denoted by score s parent ( T w i .s | T w j ) and score e parent ( T w i .e | T w j ) .",
"We also wish to identify the root T w i from the answer, which is characterized by the score of w i being the root of the span, denoted by score r parent ( w i | T w j ) .",
"Furthermore, since we also want to identify the relation category between the parent and the child, the score signifying the relation label l is needed to be added, which is denoted by score l parent ( l | T w j , w i ) .",
"For quadruple ( T w i .s, T w i .e, T w j , l ) , which denotes the span T w i",
".s, T w i",
".e rooted at w i , the final score for it being the answer to T w j , and the relation between the subtrees is l , is given by: score parent ( T w i | T w j ) = score r parent ( w i | T w j ) + score s parent ( T w i .s | T w j )+ score e parent ( T w i .e | T w j ) + score l parent ( l | T w j , w i ) (9) In the MRC setup, the input is the concatenation of the query and the context, denoted by { <cls> , query , <sep> , context } , where <cls> and <sep> are special tokens.",
"The input is fed to BERT, and we obtain representations for each input token.",
"Let h t denote the representation for the token with index t output from BERT.",
"The probability of t th token being the root of the answer, which is denoted by score r parent ( w t | T w j ) is the softmax function over all constituent tokens in the context: score r parent ( w t | T w j ) = exp ( h (cid:62) root h t ) (cid:80) t (cid:48) context exp( h (cid:62) root h t (cid:48) ) (10) where h (cid:62) root is trainable parameter.",
"score s parent and score e parent can be computed in the similar way: score s parent ( w t | T w j ) = exp ( h (cid:62) start h t ) (cid:80) t (cid:48) context exp( h (cid:62) start h t (cid:48) ) score e parent ( w t | T w j ) = exp ( h (cid:62) end h t ) (cid:80) t (cid:48) context exp( h (cid:62) end h t (cid:48) ) (11) For score l parent ( l | T w j , w i ) , which denotes the relation label between T w i and T w j , we can compute it in a simple way.",
"Since h w i already encodes information for h w j through self-attentions, the representation h w i for w i is directly fed to the softmax function over all labels in the label set L : score l parent ( l | T w j , w i ) = exp ( h (cid:62) l h w i ) (cid:80) l (cid:48) L exp( h (cid:62) l (cid:48) h w i ) (12) Mutual Dependency A closer look at",
"Eq.(9) reveals that it only models the uni-directional dependency relation that T w i is the parent of T w j .",
"This is suboptimal since if T w i is a parent answer of T w j , T w j should be a child answer of T w i .",
"We thus propose to use T w i as the query q and T w j as the answer a .",
"score child ( T w j | T w i ) = score r child ( w j | T w i ) + score s child ( T w j .s | T w i )+ score e child ( T w j .e | T w i ) + score l child ( l | T w i , T w j ) (13) The final score score link is thus given by: score link ( T w i , T w j ) = score child ( T w j | T w i ) + score parent ( T w i | w j ) (14) Since one tree may have multiple children but can only have one parent, we use the multi-label cross entropy loss L parent for score parent ( T w i | T w j ) and use the binary cross entropy loss L child for score child ( T w j | T w i ) .",
"We jointly optimize these two losses L link = L parent + L child for span linking.",
"Given an input sentence s = ( w 0 , w 1 , w 2 , ..., w n ) , the number of all possible subtree spans ( w i , T w i .s, T w i .e ) is O ( n 3 ) , and therefore running MRC procedure for every candidate span is computationally prohibitive.",
"A naive solution is to use the span proposal module to extract top-k scored spans rooted at each token.",
"This gives rise to a set of span candidates T with size 1 + n k (the root token w 0 produces only one span), where each candidate span is associated with its subtree span score score span ( ) .",
"Then we construct the optimal dependency tree based only on these extracted spans by linking them.",
"This strategy obtains a local optimum for",
"Eq.(2), because we want to compute the optimal solution for the first part (cid:80) ni =1 score span ( T w i ) depending on the second part of",
"Eq.(2), i.e., (cid:80) ( w i w j ) T w 0 score link ( T w i , T w j ) .",
"But in this naive strategy, the second part is computed after the first part.",
"It is worth noting that the naive solution of using only the top-k scored spans has another severe issue: spans left out at the span proposal stage can never be a part of the final prediction, since the span linking module only operates on the proposed spans.",
"This would not be a big issue if top-k is large enough to recall almost every span in ground-truth.",
"However, span proposal is intrinsically harder than span linking because the span proposal module lacks the triplet span information that is used by the span linking module.",
"Therefore, we propose to use the span linking module to retrieve more correct spans.",
"Concretely, for every span T w j proposed by the span proposal module, we use arg max score parent ( T w i | T w j ) to retrieve its parent with the highest score as additional span candidates.",
"Recall that span proposal proposed 1 + n k spans.",
"Added by spans proposed by the span linking module, the maximum number of candidate spans is 1+2 n k .",
"The MRC formalization behind the span linking module improves the recall rate as missed spans at the span proposal stage can still be retrieved at this stage.",
"Projective Decoding Given retrieved spans harvested in the proposal stage, we use a CKY-style bottom-up dynamic programming algorithm to find the projective tree with the highest score based on",
"Eq.(2).",
"The algorithm is present in Algorithm 1.",
"The key idea is that we can generalize the definition of score ( T w 0 ) in",
"Eq.(2) to any w by the following Algorithm 1: Projective Inference Input : Input sentence s , span candidates T , span scores score span ( T ) , T T Output: Highest score of every span score ( T ) , T T /* Compute linking scores based on",
"score ( T w ) = score span ( T w ) + (cid:88) T wj C ( T w ) [ score ( T w j ) + score link ( T w , T w j )] (16) where C ( T w ) = { T w i | ( w w i ) T w , i = 0 , 1 , ...n } is the set of all direct subtrees of T w .",
"Non-Projective Decoding It is noteworthy that effectively finding a set of subtrees composing a tree T requires trees to be projective (the projective property guarantees every subtree is a continuous span in text), and experiments in Section 4 show that this algorithm performs well on datasets where most trees are projective, but performs worse when 2431 a number of trees are non-projective.",
"To address this issue, we adapt the proposed strategy to the MST (Maximum Spanning Tree) algorithm (Mc-Donald et al., 2005b).",
"The key point of MST is to obtain the score for each pair of tokens w i and w j (rather than spans) , denoted by score edge ( w i , w j ) .",
"We propose that the score to link w i and w j is the highest score achieved by two spans respectively rooted at w i and w j : score edge ( w i , w j ) = max T wi ,T wj [ score span ( T w i ) + score span ( T w j ) + score link ( T w i , T w j )] (17) The final score for tree T is given by: score ( T ) = (cid:88) ( w i w j ) T score edge ( w i , w j ) (18) Here, MST can be readily used for decoding.",
"We carry out experiments on three widely used dependency parsing benchmarks: the English Penn Treebank v3.0 (PTB) dataset (Marcus et al., 1993), the Chinese Treebank v5.1 (CTB) dataset (Xue et al., 2002) and the Universal Dependency Tree-banks v2.2 (UD) (Nivre et al., 2016) where we select 12 languages for evaluation.",
"We follow Ma et al. (2018) to process all datasets.",
"The PTB dataset contains 39832 sentences for training and 2416 sentences for test.",
"The CTB dataset contains 16091 sentences for training and 1910 sentences for test.",
"The statistics for 12 languages in UD dataset are the same with Ma et al. (2018).",
"We use the unlabeled attachment score (UAS) and labeled attachment score (LAS) for evaluation.",
"Punctuations are ignored in all datasets during evaluation.",
"We compare the proposed model to the following baselines: (1) Biaffine , (2) StackPTR , (3) GNN , (4) MP2O , (5) CVT , (6) LRPTR , (7) HiePTR , (8) TreeCRF , (9) HPSG , (10) HPSG+LA , (11) MulPTR , (12) SynTr .",
"The details of these baselines are left to the supplementary materials due to page limitation.",
"We group experiments into three categories: without pretrained models, with BERT and with RoBERTa.",
"To implement a span-prediction parsing model without pretrained models, we use the QAnet (Yu et al., 2018) for span prediction.",
"To enable apple-to-apple comparisons, we implement our proposed model, the Biaffine model, MP2O (Wang and Tu, 2020) based on BERT large (Devlin et al., 2018) and RoBERTa large (Liu et al., 2019) for PTB, BERT and RoBERTa-wwm large (Cui et al., 2019) for CTB, BERTBase-Multilingual-Cased and XLM-RoBERTa large for UD.",
"We apply both projective decoding and nonprojective MST decoding for all datasets.",
"For all experiments, we concatenate 100d POS tag embedding with 1024d pretrained token embeddings, then project them to 1024d using a linear layer.",
"Following Mrini et al. (2020), we further add 1-3 additional encoder layers on top to let POS embed-dings well interact with pretrained token embed-dings.",
"POS tags are predicted using the Stanford NLP package (Manning et al., 2014).",
"We tried two different types of additional encoders: Bi-LSTM (Hochreiter and Schmidhuber, 1997) and Transformer (Vaswani et al., 2017a).",
"For Bi-LSTM, the number of hidden size is 1024d.",
"For Transformer, the number of attention heads and hidden size remain the same as pretrained models (16 for attention heads and 1024d for hidden size).",
"We use 0.1 dropout rate for pretrained models and 0.3 dropout rate for additional layers.",
"We use Adam (Kingma and Ba, 2014) as optimizer.",
"The weight parameter is tuned on the development set.",
"The code is implemented by PyTorch 1.6.0 and MindSpore.",
"Table 2 compares our model to existing state-of-the-art models on PTB/CTB test sets.",
"As can be seen, for models without pretrained LM, the proposed span-prediction model based on QAnet outperforms all baselines, illustrating the effectiveness of the proposed span-prediction framework for dependency parsing.",
"For BERT-based models, the proposed span-prediction models outperform Biaffine model based on BERT, along with other competitive baselines.",
"On PTB, performances already outperform all previous baselines, except on the LAS metric in comparison to HiePTR (95.46 vs. 95.47) on PTB, but underperform RoBERTa-based models.",
"On CTB, the proposed span-prediction model obtains a new SOTA performance of 93.14% UAS.",
"For RoBERTa-based models, the proposed model achieves a new SOTA performance of 97.24% UAS and 95.49% LAS on PTB.",
"As PTB and CTB contain almost only projective trees, the projective decoding strategy significantly outperforms the non-2432 PTB CTB UAS LAS UAS LAS with additional labelled constituency parsing data MulPTR (cid:91) 96.06 94.50 90.61 89.51 MulPTR+BERT (cid:91) 96.91 95.35 92.58 91.42 HPSG (cid:91) 97.20 95.72 -HPSG+LA (cid:91) 97.42 96.26 94.56 89.28 without Pretrained Models Biaffine 95.74 94.08 89.30 88.23 StackPTR 95.87 94.19 90.59 89.29 GNN 95.87 94.15 90.78 89.50 LRPTR 96.04 94.43 -HiePTR 96.18 94.59 90.76 89.67 TreeCRF 96.14 94.49 -Ours-Proj 96.42 94.71 91.15 89.68 Ours-Nproj 96.33 94.60 90.12 89.55 with Pretrained Models with BERT Biaffine 96.78 95.29 92.58 90.70 MP2O 96.91 95.34 92.55 90.69 SynTr+RNGTr 96.66 95.01 92.98 91.18 HiePTR 97.05 95.47 92.70 91.50 Ours-Proj 97.18 95.46 93.14 91.27 Ours-Nproj 97.09 95.35 93.06 91.21 with RoBERTa Biaffine 96.87 95.34 92.45 90.48 MP2O 96.94 95.37 92.37 90.40 Ours-Proj 97.24 95.49 92.68 90.91 Ours-Nproj 97.14 95.39 92.58 90.83 Table 2: Results for different models on PTB and CTB.",
"projective MST algorithm.",
"It is worth noting that, since MulPTR, HPSG and HPSG+LA rely on additional labeled data of constituency parsing, results for HPSG are not comparable to ours.",
"We list them here for reference purposes.",
"Table 3 compares our model with existing state-of-the-art methods on UD test sets.",
"Other than es, where the proposed model slightly underperforms the SOTA model by 0.02, the proposed model enhanced with XLM-RoBERTa achieves SOTA performances on all other 11 languages, with an average performance boost of 0.3.",
"As many languages in UD have a notable portion of non-projective trees, MST decoding significantly outperforms projective decoding, leading to new SOTA performances in almost all language sets.",
"We use PTB to understand behaviors of the proposed model.",
"As projective decoding works best for PTB, scores reported in this section are all from projective decoding.",
"We would like to study the effect of the number of candidate spans proposed by the span proposal module, i.e., the value of k .",
"We vary the value of k from 1 to 25.",
"As shown in Table 4, increasing values of k leads to higher UAS, and the performance stops increasing once k is large enough ( k > 15 ).",
"More interestingly, even though k is set to 1, which means that only one candidate span is proposed for each word, the final UAS score is 96.94, a score that is very close to the best result 97.24 and surpasses most existing methods as shown in Table 2.",
"These results verify that the proposed approach can accurately extract and link the dependency spans.",
"As shown in Table 5, span recall significantly improves with the presence of the span linking stage.",
"This is in line with our expectation, since spans missing at the proposal module can be retrieved by QA model in the span linking stage.",
"Recall boost narrows down when k becomes large, which is expected as more candidates are proposed at the proposal stage.",
"The span linking stage can improve computational efficiency by using a smaller number of proposed spans while achieving the same performance.",
"We study the effect of each part of the scoring functions used in the proposed model.",
"Table 6 shows the results.",
"We have the following observations: (1) token(query)-token(answer) : we simplify the model by only signifying root token in queries (child) and extract the root token in the context (parent).",
"The model actually degenerates into a model similar to Biaffine by working at the token-token level.",
"We observe significant performance decreases, 0.57 in UAS and 0.34 in LAS.",
"(3) span(query)-token(answer) : signifying spans in queries (child) but only extracting token in answers (parent) leads to a decrease of 0.07 and 0.05 respectively for UAS and LAS.",
"(1), (2) and (3) demonstrate the necessity of modeling span-span rather than token-token relations in dependency parsing: replacing span-based strategy with token-based strategy for either parent or child progressively leads to performance decrease.",
"(4) Removing the Mutual Dependency module which only uses child parent relation and ignores parent child relation also leads to performance decrease.",
"Following Ma et al. (2018); Ji et al. (2019), we analyze performances of the Biaffine parser and the proposed method with respect to sentence length, dependency length, and subtree span length.",
"Results are shown in Figure 2.",
"Sentence Length.",
"As shown in Figure",
"2(a), the proposed parser achieves better performances on long sentences compared with Biaffine.",
"Specially, when sentence length is greater than 50, the performance of the Biaffine parser decreases significantly, while the proposed parser has a much smaller drop (from 0.97 to 0.964).",
"Dependency Length.",
"Figure",
"2(b) shows the results with respect to dependency length.",
"The proposed parser shows its advantages on long-range dependencies.",
"We suppose span-level information is beneficial for long-range dependencies.",
"Subtree Span Length.",
"We further conduct experiments on subtree span length.",
"We divide the average lengths of the two spans in the span linking module into seven buckets.",
"We suppose our parser should show advantages on long subtree span, and the results in Figure",
"2(c) verify our conjecture.",
"In summary, the span-span strategy works significantly better than the token-token strategy, especially for long sequences.",
"This explanation is as follows: the token-token strategy can be viewed as a coarse simplification of the span-span strategy, where the root token in the token-token strategy can be viewed as the average of all spans covering it, while in the span-span strategy, it represents the exact span, rather than the average.",
"The deviation from the average is relatively small from the extract when sequences are short, but becomes larger as sequence length grows, since the number of spans covering the token exponentially grows with length.",
"This makes the token-token strategy work significantly worse for long sequences.",
"In this paper, we propose to construct dependency trees by directly modeling span-span instead of",
"token-token relations.",
"We use the machine reading comprehension framework to formalize the span linking module, where one span is used as a query to extract the text span/subtree it should be linked to.",
"Extensive experiments on the PTB, CTB and UD benchmarks show the effectiveness of the proposed method.",
"This work is supported by the Science and Technology Innovation 2030 New Generation Artificial Intelligence Major Project (No. 2021ZD0110201), the Key R & D Projects of the Ministry of Science and Technology (2020YFC0832500) and CAAI-Huawei MindSpore Open Fund.",
"We would like to thank anonymous reviewers for their comments and suggestions."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other"
] |
[
"Advanced machine learning techniques have boosted the performance of natural language processing.",
"Nevertheless, recent studies, e.g., Zhao et al. (2017) show that these techniques inadvertently capture the societal bias hidden in the corpus and further amplify it.",
"However, their analysis is conducted only on models' top predictions.",
"In this paper, we investigate the gender bias amplification issue from the distribution perspective and demonstrate that the bias is amplified in the view of predicted probability distribution over labels.",
"We further propose a bias mitigation approach based on posterior regularization.",
"With little performance loss, our method can almost remove the bias amplification in the distribution.",
"Our study sheds the light on understanding the bias amplification.",
"Data-driven machine learning models have achieved high performance in various applications.",
"Despite the impressive results, recent studies (e.g., Wang et al. (2019); Hendricks et al. (2018)) demonstrate that these models may carry societal biases exhibited in the dataset they trained on.",
"In particular, Zhao et al. (2017) show that a model trained on a biased dataset may amplify the bias.",
"For example, we can consider a task of labeling the activity and objects depicted in an image.",
"The training set contains 30% more images with woman cooking than man cooking.",
"However, when evaluating the top predictions of a trained model, the disparity between males and females is amplified to around 70%.",
"Based on this observation, Zhao et al. (2017) conduct a systematic study and propose to calibrate the top predictions of a learned model by injecting Both authors contributed equally to this work and are listed in alphabetical order.",
"However, when analyzing the top predictions, the models are forced to make one decision.",
"Therefore, even if the model assigns high scores to both labels of woman cooking and man cooking, it has to pick one as the prediction.",
"This process obviously has a risk to amplify the bias.",
"However, to our surprise, we observe that gender bias is also amplified when analyzing the posterior distribution of the predictions.",
"Since the model is trained with regularized maximal likelihood objective, the bias in distribution is a more fundamental perspective of analyzing the bias amplification issue.",
"In this paper, we conduct a systematic study to quantify the bias in the predicted distribution over labels.",
"Our analysis demonstrates that when evaluating the distribution, though not as significant as when evaluating top predictions, the bias amplification exists.",
"About half of activities show significant bias amplification in the posterior distribution, and on average, they amplify the bias by 3.2%.",
"We further propose a new bias mitigation technique based on posterior regularization because the approaches described in Zhao et al. (2017) can not be straightforwardly extended to calibrate bias amplification in distribution.",
"With the proposed technique, we successfully remove the bias amplification in the posterior distribution while maintain the performance of the model.",
"Besides, the bias amplification in the top predictions based on the calibrated distribution is also mitigated by around 30%.",
"These results suggest that the bias amplification in top predictions comes from both the requirement of making hard predictions and the bias amplification in the posterior distribution of the model predictions.",
"Our study advances the understanding of the bias amplification issue in natural language processing models.",
"The code and data are available at https://github.com/uclanlp/reducingbias .",
"Algorithmic Bias Machine learning models are becoming more and more prevalent in the real world, and algorithmic bias will have a great societal impact (Tonry, 2010; Buolamwini and Gebru, 2018).",
"Researchers have found societal bias in different applications such as coreference resolution (Rudinger et al., 2018; Zhao et al., 2018), machine translation (Stanovsky et al., 2019) and online advertisement (Sweeney, 2013).",
"Without appropriate adjustments, the model can amplify the bias (Zhao et al., 2017).",
"Different from the previous work, we aim at understanding the bias amplification from the posterior perspective instead of directly looking at the top predictions of the model.",
"Posterior Regularization The posterior regularization framework (Ganchev et al., 2010) is aiming to represent and enforce constraints on the posterior distribution.",
"It has been shown effective to inject domain knowledge for NLP applications.",
"For example, Ji et al. (2012); Gao et al. (2014) design constraints based on similarity to improve question answering and machine translation, respectively.",
"Yang and Cardie (2014) propose constraints based on lexical patterns in sentiment analysis.",
"Meng et al. (2019) apply corpus-level constraints to guide a dependency parser in the cross-lingual transfer setting.",
"In this paper we leverage corpus-level constraints to calibrate the output distribution.",
"Our study resembles to the confidence calibration (Guo et al., 2017; Naeini et al., 2015).",
"However, the temperature turning and binning methods proposed in these papers cannot straightforwardly be extended to calibrate the bias amplification.",
"We follow the settings in Zhao et al. (2017) to focus on the imSitu vSRL dataset (Yatskar et al., 2016), in which we are supposed to predict the activities and roles in given images and this can be regraded as a structure prediction task (see Fig. 1).",
"We apply the Conditional Random Field (CRF) model for the structure prediction task.",
"We denote y as a joint prediction result for all instances, and y i as a prediction result for instance i .",
"We use y v to denote the predicted activity, and y r to denote the predicted role.",
"An activity can have multiple roles and usually one of them conveys the gender information.",
"For an instance i , the CRF model predicts the scores for every activity and role, and Figure 1: An instance from the imSitu dataset.",
"the score for a prediction is the summation of all these scores.",
"Formally, f ( y i , i ) = s ( y iv , i ) + (cid:88) e y ir s ( y iv , e, i ) , where s ( y iv , i ) and s ( y iv , e, i ) are the scores for activity y iv of instance i , and the score for role e of instance i with activity y iv , respectively.",
"We can infer the top structure for instance i by: arg max y i Y i f ( y i , i ) , where Y i refers to all the possible assignments to the instance.",
"Zhao et al. (2017) demonstrate bias amplification in the top prediction and present a bias mitigation technique by inference with corpus-level constraints.",
"In the following, we extend their study to analyze the bias amplification in the posterior distribution by the CRF model and define the corresponding corpus-level constraints.",
"p ( y i , i ) exp( f ( y i , i )) , p ( y ) = (cid:89) i p ( y i , i ) ,",
"In this section, we will define how to quantify the bias and the bias amplification in the distribution, and introduce the corpus-level constraints towards restricting the bias in the distribution.",
"We focus on the gender bias on activities in the vSRL task.",
"To quantify the gender bias given a particular activity v , Zhao et al. (2017) uses the percentage that v is predicted together with male agents among all prediction with genders.",
"This evaluation focuses on the top prediction.",
"In the contrast, we define bias function B ( p, v , D ) w.r.t distribution p and activity v , evaluating the bias toward male in dataset D based on the conditional probability P ( X | Y ) , where event Y : given an instance, its activity is predicted to be v and its role is predicted to have a gender; event X : this instance is predicted to have gender male.",
"Formally, B ( p, v , D ) = P i D, y p ( y ir M | y iv = v y ir M W ) = (cid:80) i D (cid:80) y i : y iv = v , y ir M p ( y i , i ) (cid:80) i D (cid:80) y i : y iv = v , y ir M W p ( y i , i ) .",
"(2) This bias can come from the training set D tr .",
"Here we use b ( v , male ) to denote the dataset bias toward male in the training set, measured by the ratio of between male and female from the labels: b = (cid:80) i D tr 1 [ y iv = v , y ir M ] (cid:80) i D tr 1 [ y iv = v , y ir M W ] , where y i denotes the label of instance i .",
"Ideally, the bias in the distribution given by CRF model should be consistent with the bias in the training set, since CRF model is trained by maximum likelihood.",
"However, the amplification exists in practice.",
"Here we use the difference between the bias in the posterior distribution and in training set to quantify the bias amplification, and average it over all activities to quantify the amplification in the whole dataset: A ( p, v , D ) = sgn ( b 0 . 5)[ B ( p, v , D ) b ] , A ( p, D ) = 1 | V | (cid:88) v VA ( p, v , D ) .",
"Note that if we use the top prediction indicator function to replace p in A, A , it is the same as the definition of the bias amplification in top prediction in Zhao et al. (2017).",
"The corpus-level constraints aim at mitigating the bias amplification in test set D ts within a pre-defined margin , v , | A ( p, v , D ts ) | .",
"Posterior regularization (Ganchev et al., 2010) is an algorithm leveraging corpus-level constraints to",
"regularize the posterior distribution for a structure model.",
"Specifically, given corpus-level constraints and a distribution predicted by a model, we 1) define a feasible set of the distributions with respect to the constraints; 2) find the closest distribution in the feasible set from given distribution; 3) do maximum a posteriori (MAP) inference on the optimal feasible distribution.",
"The feasible distribution set Q is defined by the corpus-level constraints defined in Eq.",
"(3): Q = { q | v , | B ( q, v , D ts ) b | } , (4) where B ( ) is defined in Eq.",
"(2).",
"Given the feasible set Q and the model distribution p defined by Eq.",
"(1), we want to find the closest feasible distribution q : q = arg min q QKL ( q (cid:107) p ) .",
"(5) This is an optimization problem and our variable is the joint distribution q with constraints, which is intractable in general.",
"Luckily, according to the results in Ganchev et al. (2010), if the feasible set Q is defined in terms of constraints feature functions and their expectations: Q = { q | E y q [ ( y ) c ] } , (6) Eq.",
"(5) will have a close form solution q ( y ) = p ( y ) exp( ( y )) Z ( ) , (7) where is the solution of = arg max 0 c log Z ( ) .",
"Z ( ) = (cid:88) y p ( y ) exp( ( y )) .",
"(8) Actually, we can derive the constraints into the form we want.",
"We set c = 0 and ( y ) = (cid:88) i i ( y i ) .",
"We can choose a proper i ( y i ) to make Eq.",
"(4) equal to Eq.",
"(6).",
"The detailed derivation and the definition of i ( y i ) are shown in Appendix A. We can solve Eq.",
"(8) by gradient-based methods to get , and further compute the close form solution in Eq.",
"(7).",
"Actually, considering the relation between y and y i in Eq.",
"(1) and (9), we can factorize the solution in Eq.",
"(7) on instance level: q ( y i , i ) = p ( y i , i ) exp( i ( y i )) Z i ( ) , and the derivation details are in Appendix B. With this, we can reuse original inference algorithm to conduct MAP inference based on the distribution q for every instance seperately.",
"We conduct experiments on the vSRL task to analyze the bias amplification issue in the posterior distribution and demonstrate the effectiveness of the proposed bias mitigation technique.",
"Dataset Our experiment settings follow Zhao et al. (2017).",
"We evaluate on imSitu (Yatskar et al., 2016) that activities are selected from verbs, roles are from FrameNet (Baker et al., 1998) and nouns from WordNet (Fellbaum, 1998).",
"We filter out the non-human oriented verbs and images with labels that do not indicate the genders.",
"Model We analyze the model purposed together with the dataset.",
"The score functions we describe in Sec. 3 are modeled by VGG (Simonyan and Zisserman, 2015) with a feedforward layer on the top of it.",
"The scores are fed to CRF for inference.",
"Figures 2a and 2c demonstrate the bias amplification in both posterior distribution p and the top predictions y defined in Sec.4, respectively.",
"For most activities with the bias toward male (i.e., higher bias score) in the training set, both the top prediction and posterior distribution are even more biased toward male, vise versa.",
"If the bias is not amplified, the dots should be scattered around the reference line.",
"However, most dots are on the top-right or bottom-left, showing the bias is amplified.",
"The black regression line with slope > 1 also indicates the amplification.",
"Quantitatively, 109 and 173 constraints are violated when analyzing the bias in distribution an in top predictions.",
"Most recent models are trained by minimizing the cross-entropy loss which aims at fitting the model's predicted distribution with observed distribution on the training data.",
"In the inference time, 0 5 10 15 20 25 30 35 40 45 50 #Epoch 0.0 0.2 0.4 0.6 0.8 1.0 A cc u r a c y train_acc test_acc Amp.",
"the model outputs the top predictions based on the underlying prediction distribution.",
"Besides, in practice, the distribution has been used as an indicator of confidence in the prediction.",
"Therefore, understanding bias amplification in distribution provides a better view about this issue.",
"To analyze the cause of bias amplification, we further show the degree of amplification along with the learning curve of the model (see Fig. 3).",
"We observed that when the model is overfitted, the distribution of the model prediction becomes more peaky 1 .",
"We suspect this is one of the key reasons causes the bias amplification.",
"We set the margin = 0 .",
"05 for every constraint in evaluation.",
"However, we employ a stricter margin ( = 0 . 001 ) in performing posterior regularization to encourage the model to achieve a better feasible solution.",
"We use mini-batch to estimate the gradient w.r.t with Adam optimizer (Kingma and Ba, 2015) when solving Eq.",
"(5).",
"We set the batchsize to be 39 and train for 10 epochs.",
"The learning rate is initialized as 0 .",
"1 and decays after every mini-batch with the decay factor 0 .",
"998 .",
"Results We then apply the posterior regularization technique to mitigate the bias amplification in distribution.",
"Results are demonstrated in Figures 2b (distribution) and 2d (top predictions).",
"The posterior regularization effectively calibrates the bias in distribution and only 5 constraints are violated 1 This effect, called overconfident, has been also discussed in the literature (Guo et al., 2017).",
"after the calibration.",
"The average bias amplification is close to 0 ( A : 0 . 032 to 0 . 005 ).",
"By reducing the amplification of bias in distribution, the bias amplification in top predictions also reduced by 30.9% ( A : 0 . 097 to 0 . 067 ).",
"At the same time, the model's performance is kept (accuracy: 23 . 2% to 23 . 1% ).",
"Note that calibrating the bias in distribution cannot remove all bias amplification in the top predictions.",
"We posit that the requirement of making hard predictions (i.e., maximum a posteriori estimation) also amplifies the bias when evaluating the top predictions.",
"We analyzed the bias amplification from the posterior distribution perspective, which provides a better view to understanding the bias amplification issue in natural language models as these models are trained with the maximum likelihood objective.",
"We further proposed a bias mitigation technique based on posterior regularization and show that it effectively reduces the bias amplification in the distribution.",
"Due to the limitation of the data, we only analyze the bias over binary gender.",
"However, our analysis and the mitigation framework is general and can be adopted to other applications and other types of bias.",
"One remaining open question is why the gender bias in the posterior distribution is amplified.",
"We posit that the regularization and the over-fitting nature of deep learning models might contribute to the bias amplification.",
"However, a comprehensive study is required to prove the conjecture and we leave this as future work.",
"Acknowledgement This work was supported in part by National Science Foundation Grant IIS-1927554.",
"We thank anonymous reviewers and members of the UCLA-NLP lab for their feedback."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain"
] |
[
"Tracking entities throughout a procedure described in a text is challenging due to the dynamic nature of the world described in the process.",
"Firstly, we propose to formulate this task as a question answering problem.",
"This enables us to use transformer-based language models pre-trained on other QA benchmarks by adapting them to procedural text understanding.",
"Secondly, since transformer-based language models cannot encode the flow of events by themselves, we propose a TimeStamped Language Model (TSLM model) to encode event information in the LM architecture by introducing a timestamp encoding.",
"Our model, evaluated on the Propara dataset, shows improvements over the published state-of-the-art results,",
"with a 3.1% increase in F1 score.",
"Moreover, our model yields better results on the location prediction task on the NPN-Cooking dataset.",
"This result indicates that our approach is effective for procedural text understanding in general.",
"A procedural text such as a recipe or an instruction usually describes the interaction between multiple entities and their attribute changes at each step of a process.",
"For example, the photosynthesis procedure can contain steps such as",
"1. Roots absorb water from soil",
"; 2. The water flows to the leaf",
"; 3. Light from the sun and CO2 enter the leaf",
"; 4. The water, light, and CO2 combine into a mixture",
"; 5. Mixture forms sugar .",
"Procedural text understanding is a machine reading comprehension task defined on procedural texts.",
"Answering questions such as \"what is the location of the mixture at step 4\", in the above example, requires tracking entities' interactions to predict their attributes at each step (Dalvi et al., 2018; Bosselut et al., 2018).",
"This is quite challenging due to the dynamic nature of the entities' attributes in the context.",
"Transformer-based language models have shown promising results on multi-hop and single-hop question answering benchmarks such as HotpotQA (Yang et al., 2018), SQuAD (Rajpurkar et al., 2016), and DROP (Dua et al., 2019).",
"However, it is hard to expect LMs to understand the flow of events and pay attention to the time in the procedure (e.g., step 4) without extra modeling efforts.",
"In recent research, different approaches are taken to address procedural reasoning based on language models using QA formulations.",
"Following the intuition that attributes of entities can be retrieved based on the current and previous steps, DynaPro (Amini et al., 2020) modifies the input at each step to contain only the sentences up to that step.",
"This will provide a different input to the model based on each question to help it detect changes after adding each step.",
"KG-MRC (Das et al., 2018) also generates a dynamic knowledge graph at each step to answer the questions.",
"However, this intuition is contradicted in some scenarios such as detecting inputs of the process.",
"For instance, the answer to the question \"Where is light at step 0?\" is \"Sun\", even if it is not mentioned in the first sentence of the process.",
"Inputs are entities that are not created in the process.",
"The architecture of the QA transformer-based LMs is very similar to the traditional attention mechanism.",
"Other methods such as ProLocal (Dalvi et al., 2018) and ProGlobal (Dalvi et al., 2018) have structured this task by finding the attention of each entity to the text at each step.",
"To be sensitive to the changes at each step, ProLocal manually changes the model's input by removing all steps except the one related to the question.",
"ProGlobal computes attention over the whole context while adding a distance value.",
"The distance value is computed for each token based on its distance to the direct mention of the entity at each step.",
"Pre-trained language models have shown promising results in solving various NLP tasks (Liu et al., 2019; Devlin et al., 2019; Yang et al., 2019).",
"That is why most of the state-of-the-art models on procedural reasoning are also built based on current language models (Amini et al., 2020; Gupta and Durrett, 2019).",
"Following the same idea, we investigate the challenges that current models face in dealing with procedural text, and we propose a new approach for feeding procedural information into LMs so that LM-based QA models are aware of the steps taken so far and can answer questions about each specific step in the procedure.",
"We propose the Time-Stamped Language model (TSLM model), which uses timestamp embedding to encode past, current, and future time of events as a part of the input to the model.",
"TSLM utilizes timestamp embedding to answer differently to the same question and context based on different steps of the process.",
"As we do not change the portion of the input manually, our approach enables us to benefit from the pre-trained LMs on other QA benchmarks by using their parameters to initialize our model and adapt their architecture by introducing a new embedding type.",
"Here, we use RoBERTa (Liu et al., 2019) as our baseline language model.",
"We evaluate our model on two benchmarks, Propara (Dalvi et al., 2018) and NPN-Cooking (Bosselut et al., 2018).",
"Propara contains procedural paragraphs describing a series of events with detailed annotations of the entities along with their status and location.",
"NPN-Cooking contains cooking recipes annotated with their ingredients and their changes after each step in criteria such as location, cleanliness, and temperature.",
"TSLM differs from previous research as its primary focus is on using pre-trained QA models and integrating the flow of events in the global representation of the text rather than manually changing the part of the input fed to the model at each step.",
"TSLM outperforms the state-of-the-art models in nearly all metrics of two different evaluations defined on the Propara dataset.",
"Results show a 3.1% F1 score improvement",
"and a 10.4% improvement in recall.",
"TSLM also achieves the state-of-the-art result on location accuracy",
"on the NPN-Cooking location change prediction task,",
"by a margin of 1.55%.",
"In summary, our contributions are as follows: We propose the Time-Stamped Language Model (TSLM model) to encode the meaning of past, present, and future steps when processing a procedural text with language models.",
"Our proposal enables procedural text understanding models to benefit from pre-trained LM-based QA models on general-domain QA benchmarks.",
"TSLM outperforms the state-of-the-art models on the Propara benchmark on both document-level and sentence-level evaluations.",
"TSLM improves the performance of state-of-the-art models on the location prediction task of the NPN-Cooking (Bosselut et al., 2018) benchmark.",
"Improving over two different procedural text understanding benchmarks suggests that our approach is effective, in general, for solving the problems that require the integration of the flow of events in a process.",
"An example of a procedural text is shown in Table 1.",
"The example is taken from the Propara (Dalvi et al., 2018) dataset and shows the photosynthesis procedure.",
"In this table, the first column lists the sentences, each of which forms one step of the procedure.",
"The second column contains the number of the step in the process and the rest are the entities interacting in the process and their location at each step.",
"The location of entities at step 0 is their initial location, which is not affected by this process.",
"If an entity has a known or unknown location (specified by ?) at step 0, we call it an input.",
"The procedural text understanding task is defined as follows.",
"Given a procedure p containing a list of n sentences P = {s_1, ..., s_n}, an entity e, and a time step t_i, we find L, the location of that entity, and specify S, its status.",
"Status S is one value in the predefined set {non-existence, unknown-location, known-location}.",
"Location L is a span of text in the procedure, specified by its beginning and end tokens.",
"We formulate the task as finding a function F that maps each triplet of entity, procedure, and time step to a pair of entity status and location: (S, L) = F(e, P, t_i). To predict the status and the location of entities at each step, we model F with a question answering setting.",
"For each entity e, we form the input Q_e. [Table 1: the photosynthesis procedure with participants Water, Light, CO2, Mixture, and Sugar and their locations at each state; before the process starts (State 0), Water is in Soil, Light in Sun, and CO2 at an unknown location (?).]",
"Although Q_e is not a step-dependent representation and does not incorporate any different information for each step, our mapping function needs to generate different answers to the question \"Where is entity e?\" based on each step of the procedure.",
"For instance, consider the example in Table 1 and the question \"where is water?\", our model should generate different answers at four different steps.",
"The answer will be root, leaf, leaf, non-existence for steps 1 to 4, respectively.",
"To model this, we create pairs (Q_e, t_i) for each i ∈ {0, 1, ..., n}.",
"For each pair, Q_e is timestamped according to t_i using the Timestamp(.) function described in Sec. 3.2 and mapped to an updated step-dependent representation, Q_e^{t_i} = Timestamp(Q_e, t_i).",
"The updated input representation is fed to a language model (here RoBERTa) to obtain the step-dependent entity representation, R_e^{t_i}, as shown in Equation 2.",
"We discuss the special case of i = 0 in more detail in Sec. 3.2.",
"We forward the step-dependent entity representation, R_e^{t_i}, to another mapping function g(.) to obtain the location and status of entity e in the output.",
"In particular, the output includes the following three vectors: a vector representing the predictions of entity status S, a vector with each token's probability of being the start of the location span L, and a third vector carrying the probability of each token being the last token of the location span.",
"The outputs of the model are computed according to Equation 3:",
"(status, Start_prob, End_prob) = g(R_e^{t_i}) (3), where R_e is the token-representation output of RoBERTa (Liu et al., 2019) and g(.) is a function we apply on the token representations to get the final predictions.",
"We will discuss each part of the model separately in the following sections.",
"The timestamp embedding adds the step information to the input Q_e to be considered in the attention mechanism.",
"The step attention is designed to distinguish between current (what is happening now), past (what has happened before), and future (what has not yet happened) information.",
"We use the mapping function Timestamp(.) on the pair (Q_e, t_i) to add a number along with each token in Q_e and retrieve the step-dependent input Q_e^{t_i}, as shown in Figure 1.",
"The mapping function Timestamp(.) assigns past, current, and future representations to all of the tokens related to each part.",
"The Timestamp(.) function assigns the number 1 to past, 2 to current, and 3 to future tokens in the paragraph, considering one step of the process as the current event.",
"These values are used to compute an embedding vector for each token, which is added to its initial representation, as shown in Figure 2.",
"The special number 0 is assigned to the question tokens, which are not part of the process timeline.",
"For predicting State 0 (the inputs of the process), we set all the paragraph information as the current step.",
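As a concrete illustration, the 0/1/2/3 timestamp assignment described above can be sketched as a small function. This is our own minimal sketch, not the released TSLM code; the function and argument names are assumptions.

```python
# Sketch of the Timestamp(.) id assignment (our own names, not from the TSLM release):
# 0 = question token, 1 = past step, 2 = current step, 3 = future step.
from typing import List

def timestamp_ids(num_question_tokens: int,
                  step_token_counts: List[int],
                  current_step: int) -> List[int]:
    """Return one timestamp id per input token for the pair (Q_e, t_i)."""
    ids = [0] * num_question_tokens          # question tokens sit outside the timeline
    for step_index, count in enumerate(step_token_counts, start=1):
        if current_step == 0:                # State 0: treat the whole paragraph as current
            tag = 2
        elif step_index < current_step:
            tag = 1                          # past step
        elif step_index == current_step:
            tag = 2                          # current step
        else:
            tag = 3                          # future step
        ids.extend([tag] * count)
    return ids
```

In the full model, each id would index a learned embedding that is added to the token's input representation, analogous to positional embeddings.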
"The status prediction is computed from R_e[C], the representation of the [CLS] token, which is the first token in R_e.",
"We predict a location span for each entity at each step of the process as shown in Equation 5; we follow the popular approach of selecting start/end tokens to detect a span of the text as the final answer.",
"We compute the probability of each token being the start or the end of the answer span.",
"If the index with the highest probability of being the start token is token_start, and for the end token it is token_end, the answer location will be Location = P[token_start : token_end].",
"Start_prob = Softmax(W_start^T R_e^{t_i}); End_prob = Softmax(W_end^T R_e^{t_i}); token_start = argmax_i(Start_prob); token_end = argmax_i(End_prob) (5). We use the cross-entropy loss function to train the model.",
"At each prediction for entity e at timestamp t_i, we compute one loss value, loss_attribute, for the status prediction and one loss value, loss_location, for the span selection.",
"The variable loss_location is the sum of the losses of the start-token and end-token predictions: loss_location = loss_location_start + loss_location_end.",
"The final loss of entity e at time t_i is computed as in Equation 6.",
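The start/end span selection of Equation 5 can be sketched as follows. This is a minimal NumPy sketch under assumed shapes; `w_start` and `w_end` stand in for the learned vectors W_start and W_end from the text.

```python
# Minimal sketch (our own, assumed shapes) of the span selection in Eq. 5:
# two learned vectors score every token representation as a span start or end,
# and the predicted location is the text between the two argmax positions.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def select_location_span(token_reprs: np.ndarray,
                         w_start: np.ndarray,
                         w_end: np.ndarray):
    """token_reprs: (seq_len, hidden); w_start/w_end: (hidden,)."""
    start_prob = softmax(token_reprs @ w_start)   # Start_prob in Eq. 5
    end_prob = softmax(token_reprs @ w_end)       # End_prob in Eq. 5
    token_start = int(start_prob.argmax())
    token_end = int(end_prob.argmax())
    return token_start, token_end
```

At training time, cross-entropy would be computed against the gold start/end indices instead of taking the argmax, matching the loss decomposition described above.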
"At inference time, we apply two different postprocessing rules on the outputs of the model.",
"First, we impose that the final selected location answer should be a noun phrase in the original procedure.",
"Considering that a location span is a noun phrase, we limit the model to a softmax over tokens of noun phrases in the paragraph when selecting the start and end tokens.",
"Second, we apply consistency rules to make sure that our predicted status of entities are consistent.",
"We define the two following rules: An entity cannot be created if it has already been destroyed: if S_e^{t_i} is \"non-existence\" and S_e^{t_{i+1}} is an unknown or known location (e is created at step i+1), then for every step j such that S_e^{t_j} is an unknown or known location and S_e^{t_{j+1}} is \"non-existence\" (e is destroyed at step j+1), we must have i < j.",
"An entity cannot be created or destroyed twice in a process: if S_e^{t_j} and S_e^{t_i} are both \"-\" (non-existence) and S_e^{t_{j+1}} and S_e^{t_{i+1}} are both either known or unknown locations, then i = j.",
"Here, S_e^{t_i} is the status of entity e at step t_i of the process.",
"We do not apply an optimization/search algorithm to find the best assignment over the predictions according to the defined constraints.",
"The constraints are only applied based on the order of the steps to ensure that the later predictions are consistent with the ones made before.",
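A minimal sketch of checking the two consistency rules on one entity's per-step status sequence. This is our own formulation for illustration; the model applies the rules in step order during inference rather than as a batch check. Here `"-"` marks non-existence and any other value a known or unknown location.

```python
# Illustrative check (our own formulation) of the two consistency rules:
# (1) an entity cannot be created after it has been destroyed, and
# (2) an entity cannot be created or destroyed twice in a process.
def creations_and_destructions(statuses):
    """Return (creation_steps, destruction_steps) as lists of step indices."""
    creations, destructions = [], []
    for i in range(len(statuses) - 1):
        if statuses[i] == "-" and statuses[i + 1] != "-":
            creations.append(i)          # e appears at step i+1
        if statuses[i] != "-" and statuses[i + 1] == "-":
            destructions.append(i)       # e disappears at step i+1
    return creations, destructions

def is_consistent(statuses):
    creations, destructions = creations_and_destructions(statuses)
    # Rule 2: at most one creation and one destruction per process.
    if len(creations) > 1 or len(destructions) > 1:
        return False
    # Rule 1: the (single) creation must come before the (single) destruction.
    if creations and destructions and creations[0] > destructions[0]:
        return False
    return True
```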
"Propara (Dalvi et al., 2018): This dataset was created as a benchmark for procedural text understanding to track entities at each step of a process.",
"Propara contains 488 paragraphs and 3,300 sentences with annotations that are provided by crowd-workers.",
"The annotations (~81,000 in total) specify the location of entities at each step of the process.",
"The location can be either the name of the location, unknown location, or specified as non-existence.",
"NPN-Cooking (Bosselut et al., 2018): This is a benchmark containing textual cooking instructions.",
"Annotators have specified ingredients of these recipes and explained the recipe using different changes happening on each ingredient at each step of the instructions.",
"These changes are reported in categories such as location, temperature, cleanliness, and shape.",
"We evaluate our model on the location prediction task of this benchmark, which is the hardest task due to having more than 260 candidate answers.",
"We do not use the candidates to find the locations in our setting; instead, we find a span of the text as the final location answer.",
"This is a relatively harder setting but more flexible and generalizable than the classification setting.",
"We use the SGD optimizer implemented in PyTorch (Paszke et al., 2017) to update the model parameters.",
"The learning rate for the Propara implementation is set to 3e-4",
"and is updated by a scheduler with a 0.5 coefficient every 50 steps.",
"We use 1e-6 as the learning rate, with a scheduler with a 0.5 coefficient",
"that updates the parameters every ten steps, in the NPN-Cooking implementation.",
"The implementation code is publicly available at https://github.com/HLR/TSLM. [Table 2: a sample prediction table for the Propara document-level task, with columns Step, Entity, Action, Before, After: (1, Water, Move, Root, Leaf); (2, Water, Destroy, Leaf, -); (1, Sugar, Create, -, Leaf); (2, Sugar, None, Leaf, Leaf).]",
"We use the RoBERTa (Liu et al., 2019) question answering architecture provided by HuggingFace (Wolf et al., 2019).",
"RoBERTa is pretrained with SQuAD (Rajpurkar et al., 2016) and used as our base language model to compute the token representations.",
"Our model executes batches containing an entity at every step and makes updates based on the average loss of entities per procedure.",
"The network parameters are updated after executing one whole example.",
"The implementation code will be publicly available on GitHub after acceptance.",
"Sentence-level evaluation is introduced in (Dalvi et al., 2018) for Propara dataset.",
"This evaluation focuses on the following three categories.",
"Cat1 Is e created (destroyed/moved) during the process?",
"Cat2 When is e created (destroyed/moved) during the process?",
"Cat3 Where is e created (destroyed/moved from or to) during the process?",
"Document-level evaluation is a more comprehensive evaluation process, introduced later in (Tandon et al., 2018) for the Propara benchmark.",
"Currently, this is the default evaluation in the Propara leaderboard, containing four criteria: What are the Inputs?",
"Which entities existed before the process began and do not exist after the process ends.",
"What are the Outputs?",
"Which entities got created during the process?",
"What are the Conversions?",
"Which entities got converted to other entities?",
"What are the Moves?",
"Which entities moved from one location to another?",
"The document-level evaluation requires models to reformat their predictions into a tabular format, as shown in Table 2.",
"(Footnote 1: https://github.com/HLR/TSLM.) [Table 3: sentence-level (Cat1 / Cat2 / Cat3 / Macro Avg / Micro Avg) and document-level (P / R / F1) results on Propara: ProLocal (Dalvi et al., 2018) 62.7 / 30.5 / 10.4 / 34.5 / 34.0 and 77.4 / 22.9 / 35.3; ProGlobal (Dalvi et al., 2018) 63.0 / 36.4 / 35.9 / 45.1 / 45.4 and 46.7 / 52.4 / 49.4; EntNet (Henaff et al., 2017) 51.6 / 18.8 / 7.8 / 26.1 / 26.0 and 50.2 / 33.5 / 40.2; QRN (Seo et al., 2017) 52.4 / 15.5 / 10.9 / 26.3 / 26.5 and 55.5 / 31.3 / 40.0; KG-MRC (Das et al., 2018) 62.9 / 40.0 / 38.2 / 47.0 / 46.6 and 64.5 / 50.7 / 56.8; NCET (Gupta and Durrett, 2019) 73.7 / 47.1 / 41.0 / 53.9 / 54.0 and 67.1 / 58.5 / 62.5; XPAD (Dalvi et al., 2019), document-level only: 70.5 / 45.3 / 55.2; ProStruct (Tandon et al., 2018), document-level only: 74.3 / 43.0 / 54.5; DYNAPRO (Amini et al., 2020) 72.4 / 49.3 / 44.5 / 55.4 / 55.5 and 75.2 / 58.0 / 65.5; TSLM (Our Model) 78.81 / 56.8 / 40.9 / 58.83 / 58.37 and 68.4 / 68.9 / 68.6.] At each row of this table, for each entity at a specific step, we can see the action",
"applied on that entity, the location of that entity before that step, and the location of the entity after that step.",
"Action takes values from a predefined set including None, Create, Move, and Destroy.",
"The exact action can be specified based on the before and after locations.",
"We have to process our (status S, location L) predictions at each step to generate a similar tabular format as in Table 2.",
"We define r_e^i as a row in this table which stores the predictions related to entity e at step t_i.",
"To fill this row, we first process the status predictions.",
"If the status prediction S is either \"-\" or \"?\", we fill that value directly into the after-location column.",
"The before-location column value of r_e^i is always equal to the after-location column value of r_e^{i-1}.",
"If the status is predicted to be a known location, we fill the predicted location span L into the after-location column of r_e^i.",
"The action column is filled based on the data provided in the before- and after-location columns.",
"If the before location is \"-\" and the after location is not \"-\", then the action is \"Create\"; if the before location is not \"-\" and the after location is \"-\", then the action is \"Destroy\".",
"If the before and after locations are equal, then the action is \"None\"; if the before and after locations are both spans and differ from each other, the action is \"Move\".",
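The action-filling logic above maps directly to a small function. This is a sketch of the stated rules, with "-" marking non-existence as in Table 2; the function name is ours.

```python
# Sketch of the rules for filling the action column from the before/after
# location columns ("-" = non-existence), as described in the text.
def derive_action(before: str, after: str) -> str:
    if before == "-" and after != "-":
        return "Create"   # entity comes into existence
    if before != "-" and after == "-":
        return "Destroy"  # entity ceases to exist
    if before == after:
        return "None"     # no change (includes "-" -> "-")
    return "Move"         # two different location spans
```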
"NPN-Cooking location change: We evaluate our model on the NPN-Cooking benchmark by computing the accuracy of the predicted locations at steps where the locations of ingredients change.",
"We use the portion of the data that has been annotated by the location changes to train and evaluate our model.",
"In this evaluation, we do not use the status prediction part of our proposed TSLM model.",
"Since training our model on the whole training set takes a very long time (around 20 hours per iteration), we use a reduced number of samples for training.",
"This is a practice that is also used in other prior work (Das et al., 2018).",
"The performance of our model on the Propara dataset (Dalvi et al., 2018)",
"is quantified in Table 3.",
"Results show that our model improves the SOTA by a 3.1% margin in the F1 score",
"and improves the Recall metric by 10.4% on the document-level evaluation.",
"On the sentence-level evaluation, we outperform SOTA models",
"by 5.11% in Cat1",
"and 7.49% in Cat2,",
"and by a 3.4% margin in the macro-average.",
"We report Table 3 without considering the consistency rules and evaluate the effect of those in the ablation study in Sec. 4.5.",
"In Table 5, we report a more detailed quantified analysis of TSLM model's performance based on each different criteria defined in the document-level evaluation.",
"Table 5 shows that our model performs best on detecting the procedure's outputs and performs worst on detecting the moves.",
"Detecting moves is especially hard for TSLM, as it predicts outputs based on the whole paragraph at once.",
"Outperforming SOTA results on the input and output detection suggests that TSLM model can understand the interactions between entities and detect the entities which exist before the process begins.",
"The detection of input entities is one of the weak aspects of the previous research that we improve here.",
"A recent unpublished study (Zhang et al., 2021) reports better results than our model.",
"However, their primary focus is on common-sense reasoning, [Table 4: results on the NPN-Cooking benchmark (Model / Accuracy / Training Samples / Prediction task): NPN-Cooking (Bosselut et al., 2018) 51.3, 83,000 (all data), Classification; KG-MRC (Das et al., 2018) 51.6, 10,000, Span Prediction; DynaPro (Amini et al., 2020) 62.9, 83,000 (all data), Classification; TSLM (Our Model) 63.73, 10,000, Span Prediction; TSLM (Our Model) 64.45, 15,000, Span Prediction.]",
"and their goal is orthogonal to our main focus in proposing the TSLM model.",
"Such approaches can be later integrated with TSLM to benefit from common-sense knowledge on solving the Propara dataset.",
"The reason that TSLM performs better at recall and worse at precision is that our model looks at the global context, which increases the recall and lowers the precision when local information is strongly important.",
"The same phenomenon (better recall) is observed in ProGlobal, which also considers global information as we do, compared to ProLocal.",
"Table 4 shows our results on the NPN-Cooking benchmark for the location prediction task.",
"Results are computed by only considering the steps that contain a location change and are reported by computing the accuracy of predicting those changes.",
"Our results show that TSLM outperforms the SOTA models",
"by a 1.55% margin on accuracy, even after training on only 15,000 training samples.",
"To be comparable with the KG-MRC (Das et al., 2018) experiment on NPN-Cooking, which is only trained on 10k samples, we report the performance of our model trained on the same number of samples,",
"where TSLM gets a 12.1% improvement over the performance of KG-MRC (Das et al., 2018).",
"To evaluate the importance of each module, one at a time, we report the performance of TSLM",
"after removing the noun-phrase filtering at inference, the consistency rules, the timestamp embedding, and SQuAD (Rajpurkar et al., 2016) pre-training, and after replacing RoBERTa (Liu et al., 2019) with BERT (Devlin et al., 2019).",
"These variations are evaluated on the development set of the Propara dataset and reported in Table 6.",
"As stated before and shown in Table 6, the timestamp embedding cannot be removed without breaking the model, as it is the only part of the model enabling the answer to change at each step.",
"Hence, removing it prevents the model from converging and yields a 25% decrease in the F1 score.",
"The simple consistency and span-filtering rules are relatively easy for the model to learn from the available data; therefore, adding them does not affect the final performance of the model.",
"The TSLM-BERT experiment is designed to ensure a fair comparison with previous research (Amini et al., 2020), which used BERT as its base language model.",
"The comparison of TSLM-BERT to the -SQuAD Pre-training and -Timestamp Embedding ablations in Table 6 indicates that using RoBERTa instead of BERT is less important than our main proposal (the timestamp encoding) in the TSLM model.",
"Also, TSLM-BERT achieves a 66.7% F1 score on the Propara test set,",
"which is 1.2% better than the current SOTA performance.",
"By removing the SQuAD pre-training phase,",
"the model performance drops by 10.6%",
"in the F1 score.",
"This indicates that despite the difference between the procedural text understanding and the general MRC tasks, it is quite beneficial to design methods that can transfer knowledge from other QA data sources to help with procedural reasoning.",
"This is crucial as annotating procedural texts is relatively more expensive and time-consuming.",
"We provide more samples to support our hypothesis in solving the procedural reasoning task and answer some of the main questions about the ideas presented in TSLM model.",
"Why is the whole context important?",
"The main intuition behind TSLM is that the whole context, not just previous information, matters in reasoning over a process.",
"Here, we provide some samples from Propara to show why this intuition is correct.",
"Consider this partial paragraph, \"Step i : With enough time the pressure builds up greatly. Step i + 1 : The resulting volcano may explode.\".",
"Looking at the annotated status and location, the \"volcano\" is being created at Step i without even being mentioned in that step.",
"This is only detectable if we look at the next step saying \"The resulting Volcano...\".",
"As another example, consider this partial paragraph: \"Step i : Dead plants form layers called peat. ... Step i + 3 : Pressure squeezes water out of the peat.\".",
"The annotation indicates that the location of \"water\" is being changed to \"peat\" at step i , which is only possible to detect if the model is aware of the following steps indicating that the water comes out of the peat.",
"Positional embedding vs. timestamp encoding: As mentioned before, the whole context (future and past events) is essential for procedural reasoning at a specific step.",
"However, the reasoning should focus on one step at a time, given the whole context.",
"While positional encoding encodes the order of information at the token level for reasoning over the entire text, we need another level of encoding to specify the steps' positions (boundaries) and, more importantly, to indicate the step that the model should focus on when answering a question.",
"Advantages/Disadvantages of TSLM model : TSLM integrates higher-level information into the token representations.",
"This higher-level information can come from event-sequence (time of events), sentence-level, or any other higher source than the token-level information.",
"The first advantage of TSLM is that it enables designing a model which is aware of the whole context, while previous methods had to customize the input at each step to only contain the information of earlier steps.",
"Furthermore, using TSLM enables us to use pretrained QA models on other datasets without requiring us to retrain them with the added time-stamped encoding.",
"One main disadvantage of TSLM model, which is natural due to the larger context setting in this model, is not being sensitive to local changes, which is consistent with the observation in the comparison between ProGlobal and ProLocal models.",
"ScoNe (Long et al., 2016), NPN-Cooking (Bosselut et al., 2018), bAbI (Weston et al., 2015), ProcessBank (Berant et al., 2014), and Propara (Dalvi et al., 2018) are benchmarks proposed to evaluate models on procedural text understanding.",
"ProcessBank (Berant et al., 2014) contains procedural paragraphs, mainly concentrating on extracting arguments and relations for the events rather than tracking the states of entities.",
"ScoNe (Long et al., 2016) aims to handle co-reference in a procedural text expressed about a simulated environment.",
"bAbI (Weston et al., 2015) is a simpler machine-generated textual dataset containing multiple procedural tasks such as motion tracking, which has encouraged the community to develop neural network models supporting explicit modeling of memories (Sukhbaatar et al., 2015; Santoro et al., 2018) and gated recurrent models (Cho et al., 2014; Henaff et al., 2017).",
"NPN-Cooking (Bosselut et al., 2018) contains recipes annotated with the state changes of ingredients on criteria such as location, temperature, and composition.",
"Propara (Dalvi et al., 2018) provides procedural paragraphs and detailed annotations of entity locations and the status of their existence at each step of a process.",
"Inspired by Propara and NPN-Cooking benchmarks, recent research has focused on tracking entities in a procedural text.",
"Query Reduction Networks (QRN) (Seo et al., 2017) performs gated propagation of a hidden state vector at each step.",
"Neural Process Network (NPN) (Bosselut et al., 2018) computes the state changes at each step by looking at the predicted actions and involved entities.",
"Prolocal (Dalvi et al., 2018) predicts locations and status changes locally based on each sentence and then globally propagates the predictions using a persistence rule.",
"Proglobal (Dalvi et al., 2018) predicts the status changes and locations over the whole paragraph using distance values at each step and predicts current status based on current representation and the predictions of the previous step.",
"ProStruct (Tandon et al., 2018) aims to integrate manually extracted rules or knowledge-base information on VerbNet (Schuler, 2005) as constraints to inject common-sense into the model.",
"KG-MRC (Das et al., 2018) uses a dynamic knowledge graph of entities over time and predicts locations with spans of the text by utilizing reading comprehension models.",
"NCET (Gupta and Durrett, 2019) updates entity representations based on each sentence and connects sentences together with an LSTM.",
"To ensure the consistency of predictions, NCET uses a neural CRF over the changing entity representations.",
"XPAD (Dalvi et al., 2019) is also proposed to make dependency graphs on the Propara dataset to explain the dependencies of events over time.",
"Most recently, DynaPro (Amini et al., 2020) feeds an incremental input to pretrained LMs' question answering architecture to predict entity status and transitions jointly.",
"TSLM differs from recent research, as we propose a simple, straightforward, and effective technique to make our model benefit from pre-trained LMs on general MRC tasks and yet enhance their ability to operate on procedural text understanding.",
"We explicitly inject past, current, and future timestamps into the language model's input and implicitly train the model to understand the flow of events, rather than manually feeding different portions of the context at each step.",
"Procedural reasoning has also been pursued within the multi-modality domain (Yagcioglu et al., 2018; Rajaby Faghihi et al., 2020; Amac et al., 2019) which has additional challenges of aligning the representation spaces of different modalities.",
"We proposed the Time-Stamped Language Model (TSLM model), a novel approach based",
"on a simple and effective idea, which enables pre-trained QA models to process procedural texts and produce different outputs based on each step to track entities and their changes.",
"TSLM utilizes a timestamp function that causes the attention modules in the transformer-based LM architecture to incorporate past, current, and future information by computing a timestamp embedding for each input token.",
"Our experiments show a 3.1% improvement in the F1 score",
"and a 10.4% improvement in the Recall metric",
"on the Propara dataset.",
"Our model further outperforms the state-of-the-art models",
"by a 1.55% margin in the NPN-Cooking dataset accuracy for the location prediction task.",
"As a future direction, it is worth investigating how common-sense knowledge can be integrated with the TSLM setting by augmenting the process context using external sources of related domain knowledge.",
"We also intend to investigate the effectiveness of our approach on similar tasks on other domains and benchmarks.",
"As another future direction, it can be effective to apply an inference algorithm to impose the global consistency constraints over joint predictions in procedural reasoning instead of using naive post-processing rules.",
"This project is partially funded by National Science Foundation (NSF) CAREER Award # 2028626 and the Office of Naval Research (ONR) grant # N00014-20-1-2005."
] | [
"abstain",
"objective",
"method",
"objective",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"other"
] |
[
"Knowledge Graph (KG) completion research usually focuses on densely connected benchmark datasets that are not representative of real KGs.",
"We curate two KG datasets that include biomedical and encyclopedic knowledge and use an existing commonsense KG dataset to explore KG completion in the more realistic setting where dense connectivity is not guaranteed.",
"We develop a deep convolutional network that utilizes textual entity representations and demonstrate that our model outperforms recent KG completion methods in this challenging setting.",
"We find that our model's performance improvements stem primarily from its robustness to sparsity.",
"We then distill the knowledge from the convolutional network into a student network that re-ranks promising candidate entities.",
"This re-ranking stage leads to further improvements in performance and demonstrates the effectiveness of entity re-ranking for KG completion.",
"Knowledge graphs (KGs) have been shown to be useful for a wide range of NLP tasks, such as question answering (Bordes et al., 2014a,b), dialog systems (Ma et al., 2015), relation extraction (Mintz et al., 2009; Vashishth et al., 2018), and recommender systems (Zhang et al., 2016).",
"However, because scaling the collection of facts to provide coverage for all the true relations that hold between entities is difficult, most existing KGs are incomplete (Dong et al., 2014), limiting their utility for downstream applications.",
"Because of this problem, KG completion (KGC) has come to be a widely studied task (Yang et al., 2015; Trouillon et al., 2016; Shang et al., 2018; Dettmers et al., 2018; Sun et al., 2019; Balazevic et al., 2019; Malaviya et al., 2020; Vashishth et al., 2020a) (code: https://github.com/justinlovelace/robust-kg-completion; work performed while at Carnegie Mellon University).",
"The increased interest in KGC has led to the curation of a number of benchmark datasets such as FB15K (Bordes et al., 2013), WN18 (Bordes et al., 2013), FB15k-237 (Toutanova and Chen, 2015), and YAGO3-10 (Rebele et al., 2016) that have been the focus of most of the work in this area.",
"However, these benchmark datasets are often curated in such a way as to produce densely connected networks that simplify the task and are not representative of real KGs.",
"For instance, FB15K includes only entities with at least 100 links in Freebase, while YAGO3-10 is limited to only include entities in YAGO3 (Rebele et al., 2016) that have at least 10 relations.",
"Real KGs are not as uniformly dense as these benchmark datasets and have many sparsely connected entities (Pujara et al., 2017).",
"This can pose a challenge to typical KGC methods that learn entity representations solely from the knowledge that already exists in the graph.",
"Textual entity identifiers can be used to develop entity embeddings that are more robust to sparsity (Malaviya et al., 2020).",
"It has also been shown that textual triplet representations can be used with BERT for triplet classification (Yao et al., 2019).",
"Such an approach can be extended to the more common ranking paradigm through the exhaustive evaluation of candidate triples, but that does not scale to large KG datasets.",
"In our work, we found that existing neural KGC models lack the complexity to effectively fit the training data when used with the pre-trained textual embeddings that are necessary for representing sparsely connected entities.",
"We develop an expressive deep convolutional model that utilizes textual entity representations more effectively and improves sparse KGC.",
"We also develop a student reranking model that is trained using knowledge distilled from our original ranking model and demonstrate that the re-ranking procedure is particularly effective for sparsely connected entities.",
"Through these innovations, we develop a KGC pipeline that is more robust to the realities of real KGs.",
"Our contributions can be summarized as follows.",
"We develop a deep convolutional architecture that utilizes textual embeddings more effectively than existing neural KGC models and significantly improves performance for sparse KGC.",
"We develop a re-ranking procedure that distills knowledge from our ranking model into a student network that re-ranks promising candidate entities.",
"We curate two sparse KG datasets containing biomedical and encyclopedic knowledge to study KGC in the setting where dense connectivity is not guaranteed.",
"We release the encyclopedic dataset and the code to derive the biomedical dataset to encourage future work.",
"Knowledge Graph Completion: KGC models typically learn entity and relation embeddings based on known facts (Nickel et al., 2011; Bordes et al., 2013; Yang et al., 2015) and use the learned embeddings to score potential candidate triples.",
"Recent work includes both non-neural (Nickel et al., 2016; Trouillon et al., 2016; Liu et al., 2017; Sun et al., 2019) and neural (Socher et al., 2013; Dong et al., 2014; Dettmers et al., 2018; Vashishth et al., 2020b) approaches for embedding KGs.",
"However, most of them only demonstrate their efficacy on artificially dense benchmark datasets.",
"Pujara et al. (2017) show that the performance of such methods varies drastically with sparse, unreliable data.",
"We compare our proposed method against the existing approaches in a realistic setting where the KG is not uniformly dense.",
"Prior work has effectively utilized entity names or descriptions to aid KGC (Socher et al., 2013; Xie et al., 2016; Xiao et al., 2016).",
"In more recent work, Malaviya et al. (2020) explore the problem of KGC using commonsense KGs, which are much sparser than standard benchmark datasets.",
"They adapt an existing KGC model to utilize BERT (Devlin et al., 2019) embeddings.",
"In this paper, we develop a deep convolutional architecture that is more effective than adapting existing shallow models, which we find to be underpowered for large KG datasets.",
"Yao et al. (2019) developed a triplet classification model by directly fine-tuning BERT with textual entity representations and reported strong classification results.",
"They also adapted their triplet classification model to the ranking paradigm by exhaustively evaluating all possible triples for a given query, ( e 1 , r, ?) .",
"However, the ranking performance was not competitive, and such an approach is not scalable to large KG datasets like those explored in this work.",
"Exhaustively applying BERT to compute all rankings for the test set for our largest dataset would take over two months.",
"In our re-ranking setting, we reduce the number of triples that need to be evaluated by over 7,700×, reducing the evaluation time to less than 15 minutes.",
"BERT as a Knowledge Base: Recent work (Petroni et al., 2019; Jiang et al., 2020; Rogers et al., 2020) has utilized the masked-language-modeling (MLM) objective to probe the knowledge contained within pre-trained models using fill-in-the-blank prompts (e.g. Dante was born in [MASK] ).",
"This body of work has found that pre-trained language models such as BERT capture some of the relational knowledge contained within their pre-training corpora.",
"This motivates us to utilize these models to develop entity representations that are well-suited for KGC.",
"Re-Ranking: Wang et al. (2011) introduced cascade re-ranking for document retrieval.",
"This approach applies inexpensive models to develop an initial ranking and utilizes expensive models to improve the ranking of the top-k candidates.",
"Reranking has since been successfully applied across many retrieval tasks (Matsubara et al., 2020; Pei et al., 2019; Nogueira and Cho, 2019).",
"Despite re-ranking's widespread success, recent KGC work utilizes a single ranking model.",
"We develop an entity re-ranking procedure and demonstrate the effectiveness of the re-ranking paradigm for KGC.",
"Knowledge Distillation: Knowledge distillation is a popular technique that is often used for model compression where a large, high-capacity teacher is used to train a simpler student network (Hinton et al., 2015).",
"However, knowledge distillation has since been shown to be useful for improving model performance beyond the original setting of model compression.",
"Li et al. (2017) demonstrated that knowledge distillation improved image classification performance in a setting with noisy labels.",
"(The ranking results reported by Yao et al. (2019) included a Hits@10 of .420 on FB15k-237, which is lower than all of the models evaluated in this work.)",
"The incompleteness of KGs leads to noisy training labels which motivates us to use knowledge distillation to train a student re-ranking model that is more robust to the label noise.",
"We examine KGC in the realistic setting where KGs have many sparsely connected entities.",
"We utilize a commonsense KG dataset that has been used in past work and curate two additional sparse KG datasets containing biomedical and encyclopedic knowledge.",
"We release the encyclopedic dataset and the code to derive the biomedical dataset to encourage future work in this challenging setting.",
"The summary statistics for all datasets are presented in Table 1, and we visualize the connectivity of the datasets in Figure 1.",
"For constructing SNOMED CT Core, we use the knowledge graph defined by SNOMED CT (Donnelly, 2006), which is contained within the Unified Medical Language System (UMLS) (Bodenreider, 2004).",
"SNOMED CT is well-maintained and is one of the most comprehensive knowledge bases contained within the UMLS (Jimenez-Ruiz et al., 2011; Jiang and Chute, 2009).",
"We first extract the UMLS concepts found in the CORE Problem List Subset of the SNOMED CT knowledge base.",
"This subset is intended to contain the concepts most useful for documenting clinical information.",
"We work with the 2020AA release of the UMLS.",
"We then expand the graph to include all concepts that are directly linked to those in the CORE Problem List Subset according to the relations defined by the SNOMED CT KG.",
"Our final KG consists of this set of concepts and the SNOMED CT relations connecting them.",
"Importantly, we do not filter out rare entities from the KG, as is commonly done during the curation of benchmark datasets.",
"To avoid leaking data from inverse, or otherwise informative, relations, we divide the facts into training, validation, and testing sets based on unordered tuples of entities { e 1 , e 2 } so that all relations between any two entities are confined to a single split.",
"Unlike some other KG datasets that filter out inverse relations, we divide our dataset in such a way that this is not necessary; our dataset already includes inverse relations, and they do not need to be manually added for training and evaluation as is standard practice (Dettmers et al., 2018; Malaviya et al., 2020).",
"Because we represent entities using textual descriptions in this work, we also mine the entities' preferred concept names (e.g. Traumatic hematoma of left kidney ) from the UMLS.",
"The FB15k-237 (Toutanova and Chen, 2015) dataset contains encyclopedic knowledge about the world, e.g. (Barack Obama, placeOfBirth, Honolulu) .",
"Although the dataset is very densely connected, that density is artificial.",
"FB15K (Bordes et al., 2013), the precursor to FB15k-237, was curated to only include entities with at least 100 links in Freebase (Bollacker et al., 2008).",
"The dense connectivity of FB15k-237 does allow us to ablate the effect of this density.",
"We utilize the FB15k-237 dataset and also develop a new dataset, denoted FB15k-237-Sparse, by randomly downsampling the facts in the training set of FB15k-237 to match the average in-degree of the ConceptNet-100K dataset.",
"We use this to directly evaluate the effect of increased sparsity.",
"For the FB15k-237 dataset, we use the textual identifiers released by Xie et al. (2016).",
"They released both entity names (e.g. Jason Frederick Kidd) as well as brief textual descriptions (e.g. Jason Frederick Kidd is a retired American professional basketball player. . . ) for most entities.",
"We utilize the textual descriptions when available.",
"ConceptNet (Speer and Havasi, 2013) is a KG that contains commonsense knowledge about the world such as the fact (go to dentist, motivatedBy, prevent tooth decay) .",
"We utilize ConceptNet-100k (CN-100K) (Li et al., 2016) which consists of the Open Mind Common Sense entries in the ConceptNet dataset.",
"This KG is much sparser than benchmark datasets like FB15k-237, which makes it well-suited for our purpose.",
"We use the training, validation, and testing splits of Malaviya et al. (2020) to allow for direct comparison.",
"We also use the textual descriptions released by Malaviya et al. (2020) to represent the KG entities.",
"We provide an overview of our model architecture in Figure 2.",
"We first extract feature representations from BERT (Devlin et al., 2019) to develop textual entity embeddings.",
"Motivated by our observation that existing neural KG architectures are underpowered in our setting, we develop a deep convolutional network utilizing architectural innovations from deep convolutional vision models.",
"Our model's design improves its ability to fit complex relationships in the training data which leads to downstream performance improvements.",
"Finally, we distill our ranking model's knowledge into a student re-ranking network that adjusts the rankings of promising candidates.",
"In doing so, we demonstrate the effectiveness of the re-ranking paradigm for KGC and develop a KGC pipeline with greater robustness to the sparsity of real KGs.",
"We follow the standard formulation for KGC.",
"We represent a KG as a set of entity-relation-entity facts ( e 1 , r, e 2 ) .",
"Given an incomplete fact, ( e 1 , r, ?) , our model computes a score for all candidate entities e i that exist in the graph.",
"An effective KGC model should assign greater scores to correct entities than incorrect ones.",
"We follow recent work (Dettmers et al., 2018; Malaviya et al., 2020) and consider both forward and inverse relations (e.g. treats and treated by) in this work.",
"For the datasets that do not already include inverse relations, we introduce an inverse fact, ( e_2 , r^{-1} , e_1 ), for every fact, ( e_1 , r , e_2 ), in the dataset.",
"We utilize BERT (Devlin et al., 2019) to develop entity embeddings that are invariant to the connectivity of the KG.",
"We follow the work of Malaviya et al. (2020) and adapt BERT to each KG's naming style by fine-tuning BERT using the MLM objective with the set of entity identifiers in the KG.",
"For CN-100K and FB15k-237, we utilize the BERT-base uncased model.",
"For SNOMED CT Core KG, we utilize PubMedBERT (Gu et al., 2020) which is better suited for the biomedical terminology in the UMLS.",
"We apply BERT to the textual entity identifiers and mean-pool across the token representations from all BERT layers to obtain a summary feature vector for the concept name.",
"We fix these embeddings during training because we must compute scores for a large number of potential candidate entities for each training example.",
"This makes fine-tuning BERT prohibitively expensive.",
"Inspired by the success of deep convolutional models in computer vision (Krizhevsky et al., 2012; Simonyan and Zisserman, 2015; He et al., 2016; Huang et al., 2019, 2017), we develop a knowledge base completion model based on the seminal ResNet architecture (He et al., 2016) that is sufficiently expressive to model complex interactions between the BERT feature space and the relation embeddings.",
"Given an incomplete triple ( e_i , r_j , ? ), we begin by stacking the precomputed entity embedding e ∈ R^{1×d} with the learned relation embedding of the same dimension r ∈ R^{1×d} to produce a feature vector of length d with two channels, q ∈ R^{2×d}.",
"We then apply a one-dimensional convolution with a kernel of width 1 along the length of the feature vector to project each position i to a two-dimensional spatial feature map x_i ∈ R^{f×f}, where the convolution has f·f filters.",
"Thus the convolution produces a two-dimensional spatial feature map X ∈ R^{f×f×d} with d channels, representing the incomplete query triple ( e_i , r_j , ? ).",
"The spatial feature map X ∈ R^{f×f×d} is analogous to a square image with a side length of f and d channels, allowing for the straightforward application of deep convolutional models such as ResNet.",
"We apply a sequence of 3N bottleneck blocks to the spatial feature map, where N is a hyperparameter that controls the depth of the network.",
"A bottleneck block consists of three consecutive convolutions: a 1×1 convolution, a 3×3 convolution, and then another 1×1 convolution.",
"The first 1×1 convolution reduces the feature map dimensionality by a factor of 4, and then the second 1×1 convolution restores the feature map dimensionality.",
"This design reduces the dimensionality of the expensive 3×3 convolutions and allows us to increase the depth of our model without dramatically increasing its parameterization.",
"We double the feature dimensionality of the bottleneck blocks after N and 2N blocks, so the dimensionality of the final feature map produced by the sequence of convolutions is 4d.",
"We add residual connections to each bottleneck block which improves training for deep networks (He et al., 2016).",
"If we let F ( X ) represent the application of the bottleneck convolutions, then the output of the bottleneck block is Y = F ( X ) + X .",
"We apply batch normalization followed by a ReLU nonlinearity (Nair and Hinton, 2010) before each convolutional layer (He et al., 2016) .",
"We utilize circular padding (Wang et al., 2018; Vashishth et al., 2020a) with the 3×3 convolutions to maintain the spatial size of the feature map and use a stride of 1 for all convolutions.",
"For the bottleneck blocks that double the dimensionality of the feature map, we utilize a projection shortcut for the residual connection (He et al., 2016).",
"Given an incomplete fact ( e_i , r_j , ? ), our convolutional architecture produces a feature map X ∈ R^{f×f×4d}.",
"We average pool this feature representation over the spatial dimensions, which produces a summary feature vector x ∈ R^{4d}.",
"We then apply a fully connected layer followed by a PReLU nonlinearity (He et al., 2015) to project the feature vector back to the original embedding dimensionality d .",
"We denote this final vector e' and compute scores for candidate entities using the dot product with candidate entity embeddings.",
"The scores can be efficiently computed for all entities simultaneously using a matrix–vector product with the embedding matrix, y = e'E^T, where E ∈ R^{m×d} stores the embeddings for all m entities in the KG.",
"Adopting the terminology used by Ruffinelli et al. (2020), we utilize a 1vsAll training strategy with the binary cross-entropy loss function.",
"We treat every fact in our dataset, ( e i , r j , e k ) , as a training sample where ( e i , r j , ?) is the input to the model.",
"We compute scores for all entities as described previously and apply a sigmoid operator to induce a probability for each entity.",
"We treat all entities other than e k as negative candidates and then compute the binary cross-entropy loss.",
"We train our model using the Adam optimizer (Kingma and Ba, 2015) with decoupled weight decay regularization (Loshchilov and Hutter, 2019) and label smoothing.",
"We train our models for a maximum of 200 epochs and terminate training early if the validation Mean Reciprocal Rank (MRR) has not improved for 20 epochs.",
"We trained all of the models used in this work using a single NVIDIA GeForce GTX 1080 Ti.",
"We use our convolutional network to extract the top-k entities for every unique training query and then train a re-ranking network to rank these entities.",
"We design our student re-ranking network as a triplet classification model that utilizes the full candidate fact, ( e i , r j , e k ) , instead of an incomplete fact, ( e i , r j , ?) .",
"This allows the network to model interactions between all elements of the triple.",
"The re-ranking setting also enables us to directly fine-tune BERT, which often improves performance (Peters et al., 2019).",
"We introduce relation tokens for each relation in the knowledge graph and construct the textual input by prepending the head and tail entities with the relation token and then concatenating the two sequences.",
"Thus the triple (head name, r i , tail name) would be represented as [CLS] [REL i] head name [SEP] [REL i] tail name [SEP] .",
"We use a learned linear combination of the [CLS] embedding from each layer as the final feature representation for the prediction.",
"A sufficiently performant ranking model can provide an informative prior that can be used to smooth the noisy training labels and improve our re-ranking model.",
"For each training query i, we normalize the logits produced by our teacher ranking model, f_T(x_i), for the k candidate triples as s_{ik:(i+1)k} = softmax(f_T(x_i)_{0:k} / T), where T is the temperature (Hinton et al., 2015).",
"Our training objective for our student model, f_S(x_i), is a weighted average of the binary cross-entropy loss, L_bce, computed using the teacher's normalized logits, s, and the noisy training labels, y.",
"We use relation tokens instead of free-text relation representations because the relation identifiers for our datasets are not all well-formed natural language, and the different styles would introduce a confounding factor that would complicate our evaluation.",
"Utilizing appropriate free-text relation identifiers may improve performance, but we leave that to future work.",
"For our experiments, we extract the top k = 10 candidates produced by our ranking model for every query in the training set.",
"We train our student network using the Adam optimizer (Kingma and Ba, 2015) with decoupled weight decay regularization (Loshchilov and Hutter, 2019).",
"We fine-tune BERT for a maximum of 10 epochs and terminate training early if the Mean Reciprocal Rank (MRR) on validation data has not improved for 3 epochs.",
"For every query, we apply our re-ranking network to the top k = 10 triples and compute the final ranking using an ensemble of the teacher and student networks.",
"The final rankings are computed as s_{ik:(i+1)k} = λ · softmax(f_S(x_{ik:(i+1)k})) + (1 − λ) · softmax(f_T(x_i)_{0:k}), where 0 ≤ λ ≤ 1 controls the impact of the student re-ranker.",
"The cost of computing s_{ik:(i+1)k} is negligible, so we sweep over λ ∈ [0, 1] in increments of 0.01 and select the λ that achieves the best validation MRR.",
"We utilize the same representative selection of KG models from Malaviya et al. (2020) as baselines: DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2016), ConvE (Dettmers et al., 2018), and ConvTransE (Shang et al., 2018).",
"This is not an exhaustive selection of all recent KG methods, but a recent replication study by Ruffinelli et al. (2020) found that the baselines that we use are competitive with the state-of-the-art and often outperform more recent models when trained appropriately.",
"We develop additional baselines by adapting the shallow convolutional KGC models to use BERT embeddings to evaluate the benefits of utilizing our proposed convolutional architecture instead of simply repurposing existing KGC models.",
"We refer to these models as BERT-ConvE and BERT-ConvTransE.",
"Malaviya et al. (2020) used BERT embeddings in conjunction with ConvTransE for commonsense KGC, but their model was prohibitively large to reproduce.",
"We refer to their model as BERT-Large-ConvTransE and compare directly against their reported results.",
"We also develop a deep convolutional baseline, termed BERT-DeepConv, to evaluate the effect of the architectural innovations used in our model.",
"BERT-DeepConv transforms the input embeddings to a spatial feature map like our proposed model, but it then applies a stack of 3 3 convolutions instead of a sequence of bottleneck blocks with residual connections.",
"We select hyperparameters (detailed in the Appendix) for all of our BERT baselines so that they have a comparable number of trainable parameters to our proposed model.",
"We discuss the size of these models in detail in Section 6.4.",
"To evaluate the impact of our re-ranking stage, we ablate the use of knowledge distillation and ensembling.",
"Thus we conduct experiments where our re-ranker uses only knowledge distillation, uses only ensembling, and uses neither.",
"This means that in the most naive setting, we train the re-ranker using the hard training labels and re-rank the candidates using only the re-ranker.",
"We report standard ranking metrics: Mean Rank (MR), Mean Reciprocal Rank (MRR), Hits at 1 (H@1), Hits at 3 (H@3), and Hits at 10 (H@10).",
"We follow past work and use the filtered setting (Bordes et al., 2013), removing all positive entities other than the target entity before calculating the target entity's rank.",
"We utilize paired bootstrap significance testing (Berg-Kirkpatrick et al., 2012) with the MRR to validate the statistical significance of improvements.",
"To account for the large number of comparisons being performed, we apply the HolmBonferroni method (Holm, 1979) to correct for multiple hypothesis testing.",
"We define families for the three primary hypotheses that we tested with our experiments.",
"They are as follows: (1) The deep convolutional BERT models outperform the shallow convolutional BERT models.",
"(2) BERT-ResNet improves upon our BERT-DeepConv baseline.",
"(3) The re-ranking procedure improves the original rankings.",
"This selection has the benefit of allowing for a more granular analysis of each conclusion while significantly reducing the number of hypotheses.",
"The first family includes all pairwise comparisons between the two deep convolutional models and the two shallow convolutional models.",
"The second family involves all comparisons between BERT-ResNet and BERT-DeepConv.",
"The third family includes comparisons between all re-ranking configurations and the original rankings.",
"We note that the p-value for each family bounds the strict condition that we report any spurious finding within the family.",
"We report results across all of our datasets in Table 2.",
"Our ranking model, BERT-ResNet, outperforms the previously published models and our baselines across all of the sparse datasets.",
"We find that for all sparse datasets, the models that use free text entity representations outperform the models that learn the entity embeddings during training.",
"Among the models utilizing textual information, the deep convolutional methods generally outperform the adaptations of existing neural KG models.",
"BERT-ResNet outperforms BERT-DeepConv across all datasets, demonstrating that the architectural innovations do improve downstream performance.",
"On the full FB15k-237 dataset, our proposed model is able to achieve competitive results compared to strong baselines.",
"However, the focus of this work is not to achieve state-of-the-art performance on densely connected benchmark datasets such as FB15k-237.",
"These results do, however, allow us to observe the outsized impact of sparsity on models that do not utilize textual information.",
"Re-ranking entities without knowledge distillation or ensembling leads to poor results, degrading the MRR across most datasets.",
"We note that the performance of our re-ranking model could be limited by our use of a pointwise loss function.",
"Further exploration of pairwise or listwise learning-to-rank methods is a promising direction for future work that could lead to further improvements (Guo et al., 2020).",
"The inclusion of either knowledge distillation or ensembling improves performance.",
"Ensembling is particularly important, achieving a statistically significant improvement over the initial rankings across most datasets.",
"Our final setting using both knowledge distillation and ensembling is the only setting to achieve a statistically significant improvement across all four datasets, although using both does not consistently improve performance over ensembling alone.",
"A plausible explanation for this is that knowledge distillation improves performance by reducing the divergence between the re-ranker and the teacher, but ensembling can already achieve a similar effect by simply increasing the weight of the teacher in the final prediction.",
"We observe that the weight of the teacher is reduced across all four datasets when knowledge distillation is used which would be consistent with this explanation.",
"Knowledge distillation has also been shown to be useful in situations with noisy labels (Li et al., 2017) which may explain why it was particularly effective for our sparsest dataset, CN-100K, where training with the hard labels led to particularly poor performance.",
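One way to picture the overlap between the two techniques (a hedged sketch; the weighting scheme and loss here are illustrative, not the exact formulation used in our experiments):

```python
import math

def softmax(xs, temperature=1.0):
    """Numerically stable softmax with a temperature."""
    m = max(x / temperature for x in xs)
    exps = [math.exp(x / temperature - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def ensemble(student_scores, teacher_scores, teacher_weight):
    """Ensembling: convex combination of re-ranker (student) and
    first-stage (teacher) candidate scores."""
    return [(1 - teacher_weight) * s + teacher_weight * t
            for s, t in zip(student_scores, teacher_scores)]

def distillation_loss(student_scores, teacher_scores, temperature=2.0):
    """Knowledge distillation: cross-entropy of the student's candidate
    distribution against the softened teacher distribution."""
    p_t = softmax(teacher_scores, temperature)
    p_s = softmax(student_scores, temperature)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_t, p_s))
```

Both mechanisms pull the final prediction toward the teacher: distillation during training, by shrinking the divergence between the two score distributions, and ensembling at prediction time, by increasing `teacher_weight`.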
"We bin test examples by the in-degree of the tail nodes and compute the MRR within these bins for our model before and after re-ranking.",
"We report this breakdown for the SNOMED CT Core dataset in Figure 3.",
"Our re-ranking stage improves performance uniformly across all levels of sparsity, but it is particularly useful for entities that are rarely seen during training.",
"This is also consistent with the comparatively smaller topline improvement for the densely connected FB15k-237 dataset.",
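The binned analysis can be sketched as follows (hypothetical helper names, assuming the rank of the gold tail entity is available for each test triple):

```python
def mrr(ranks):
    """Mean reciprocal rank of the gold entities (1-indexed ranks)."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def mrr_by_in_degree(examples, bin_edges):
    """Bin test triples by the in-degree of the tail node and compute
    MRR within each bin.  examples: (tail_in_degree, gold_rank) pairs;
    bins are half-open intervals [bin_edges[i], bin_edges[i+1])."""
    bins = [[] for _ in range(len(bin_edges) - 1)]
    for degree, rank in examples:
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= degree < bin_edges[i + 1]:
                bins[i].append(rank)
                break
    return [mrr(b) if b else None for b in bins]
```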
"We report the number of trainable parameters for the models that use textual representations, along with the train and test set MRR for SNOMED CT Core, in Table 3.",
"(Figure 3: Effect of re-ranking on performance for SNOMED CT Core across varying levels of sparsity.)",
"We observe a monotonic relationship between training and testing performance and note that the shallow models fail to achieve our model's test performance on the training set.",
"This demonstrates that the shallow models lack the complexity to adequately fit the training data.",
"A similar trend held for all datasets except for FB15k-237-Sparse whose smaller size reduces the risk of underfitting.",
"This explains the smaller performance improvement for that dataset.",
"Malaviya et al. (2020) scaled up BERT-Large-ConvTransE to use over 524M trainable parameters, and their model did outperform our smaller BERT-ConvTransE baseline.",
"However, their model still fails to match the performance of either of our deep convolutional models despite using over 15× the number of trainable parameters.",
"KGs often include many sparsely connected entities where the use of textual entity embeddings is necessary for strong performance.",
"We develop a deep convolutional network that is better-suited for this setting than existing neural models developed on artificially dense benchmark KGs.",
"We also introduce a re-ranking procedure to distill the knowledge from our convolutional model into a student re-ranking network and demonstrate that our procedure is particularly effective at improving the ranking of sparse candidates.",
"We utilize these innovations to develop a KGC pipeline with greater robustness to the realities of KGs and demonstrate the generalizability of our improvements across biomedical, commonsense, and encyclopedic KGs.",
"This work was supported by the National Science Foundation grant IIS 1917955 and the National Library of Medicine of the National Institutes of Health under award number T15 LM007059."
] | [
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"other",
"other",
"objective",
"method",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"other",
"objective",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"other"
] |
[
"We introduce MemSum (Multi-step Episodic Markov decision process extractive SUMmarizer), a reinforcement-learning-based extractive summarizer enriched at each step with information on the current extraction history.",
"When MemSum iteratively selects sentences into the summary, it considers a broad information set that would intuitively also be used by humans in this task: 1) the text content of the sentence, 2) the global text context of the rest of the document, and 3) the extraction history consisting of the set of sentences that have already been extracted.",
"With a lightweight architecture, MemSum obtains state-of-the-art test-set performance (ROUGE) in summarizing long documents taken from PubMed, arXiv, and GovReport.",
"Ablation studies demonstrate the importance of local, global, and history information.",
"A human evaluation confirms the high quality and low redundancy of the generated summaries, stemming from MemSum's awareness of extraction history.",
"Automatic text summarization is the task of automatically summarizing a long document into a relatively short text while preserving most of the information (Tas and Kiyani, 2007).",
"Text summarization methods can be categorized into abstractive and extractive summarization (Gambhir and Gupta, 2017; Nenkova and McKeown, 2012).",
"Given a document $d$ consisting of an ordered list of $N$ sentences, extractive summarization aims to pick $M$ ($M \ll N$) sentences as the summary of the document.",
"The extracted summaries tend to be both grammatically and semantically more reliable than abstractive summaries (Liu* et al., 2018; Liu and Lapata, 2019a; Luo et al., 2019; Liao et al., 2020), as they are directly selected from the source text.",
"(Figure 1: At each step, the agent scores the remaining sentences and either selects one into the extracted summary or stops the extraction.)",
"In the sentence scoring phase, an affinity score is computed for each sentence by neural networks such as bidirectional RNNs (Dong et al., 2018; Narayan et al., 2018; Luo et al., 2019; Xiao and Carenini, 2019) or BERT (Zhang et al., 2019; Liu and Lapata, 2019b).",
"In the sentence selection phase, sentences are selected by either",
"i) predicting a label (1 or 0) for each sentence based on its score, and selecting sentences with label 1 (Zhang et al., 2019; Liu and Lapata, 2019b; Xiao and Carenini, 2019), or",
"ii) ranking sentences based on their scores and selecting the top K sentences as the summary (Narayan et al., 2018), or",
"iii) sequentially sampling sentences without replacement, where the normalized scores of the remaining sentences are used as sampling likelihoods (Dong et al., 2018; Luo et al., 2019).",
"In these approaches, sentence scores are generally not updated based on the current partial summary of previously selected sentences, indicating a lack of knowledge of extraction history .",
"We deem extractive summarizers that are not aware of the extraction history to be susceptible to redundancy in a document, because they will repeatedly add sentences with high scores to a summary, regardless of whether similar sentences have been selected before.",
"Redundancy, in turn, decreases performance as evaluated by ROUGE F1.",
"In this paper, we propose to model extractive summarization as a multi-step episodic Markov Decision Process (MDP).",
"As shown in Figure 1, at each time step in an episode, we define a sentence state composed of three sub-states: 1) the local content of the sentence, 2) the global context of the sentence within the document, and 3) information on the extraction history, including the previously selected set of unordered sentences and the remaining sentences.",
"At each time step, the policy network (agent) takes the current sentence state as input and produces scores used to select an action of either stopping the extraction process or selecting one of the remaining sentences into the candidate summary.",
"Unlike one-step episodic MDP-based models (Narayan et al., 2018; Dong et al., 2018; Luo et al., 2019) that encode the state information only once at the beginning of the episode, in our multi-step policy, the agent updates at each time step the extraction history before selecting an action.",
"Such a step-wise state-updating strategy enables the agent to consider the content of the partial summary when selecting a sentence.",
"To efficiently encode local and global sentence states, we design an extraction agent based on LSTM networks (Hochreiter and Schmidhuber, 1997).",
"To encode the extraction history and to select actions, we use a reduced number of attention layers (Vaswani et al., 2017) of relatively low dimensionality.",
"These choices enable our model to be easily trainable and to summarize long documents such as scientific papers (Cohan et al., 2018; Huang et al., 2021) or reports (Huang et al., 2021).",
"The contributions of our work are as follows: 1) We propose to treat extractive summarization as a multi-step episodic MDP that is aware of the extraction history.",
"2) We show that extraction-history awareness allows our model to extract more compact summaries than models without history awareness and behave more robustly to redundancies in documents.",
"3) Our model outperforms both extractive and abstractive summarization models on PubMed, arXiv (Cohan et al., 2018), and GovReport (Huang et al., 2021) datasets.",
"4) Finally, human evaluators rate the MemSum summaries to be of higher quality than those from a competitive approach, especially by virtue of lower redundancy.",
"Extraction history awareness was previously considered in NeuSum (Zhou et al., 2018), where a GRU encoded previously selected sentences into a hidden vector that was then used to update the scores of the remaining sentences to bias the next selection.",
"NeuSum contains no stopping mechanism and can therefore only extract a fixed number of sentences, which is likely sub-optimal.",
"Also, the potential benefits of extraction history have not been quantified and so the idea remains unexplored to a large extent.",
"Recently, BERT-based extractors such as MatchSum (Zhong et al., 2020) achieved SOTA performance in extractive summarization of relatively short documents from the CNN/DM (Hermann et al., 2015) dataset.",
"However, the quadratic computational and memory complexities (Huang et al., 2021) of such models limit their scalability for summarizing long documents with thousands of tokens, which is common for scientific papers and government reports.",
"Although large pre-trained transformers with efficient attention (Huang et al., 2021) have been adapted for abstractive summarization of long documents, we believe that extractive summarization is more faithful in general, which is why we chose an extractive approach.",
"This section outlines the multi-step episodic MDP policy for extractive summarization.",
"In an episodic task with a terminal state (i.e., end of summary), policy gradient methods aim to maximize the objective function $J(\theta) = \mathbb{E}[R_0]$, where the return $R_t = \sum_{k=t+1}^{T} r_k$ is the cumulative reward from time $t+1$ until the end of the episode when the summary is complete.",
"In applications of RL to extractive summarization, the instantaneous reward $r_t$ is zero except at the end of the episode when the final reward $r$ is computed according to Equation (1), so $R_t \equiv R_0 = r$.",
"The reward $r$ is usually expressed as (Dong et al., 2018): $r = \frac{1}{3}(\text{ROUGE-1}_f + \text{ROUGE-2}_f + \text{ROUGE-L}_f)$ (1).",
"According to the REINFORCE algorithm (Williams, 1992), the policy gradient is defined as $\nabla_\theta J(\theta) = \mathbb{E}[R_t \nabla_\theta \log \pi(A_t \mid S_t, \theta)]$ (2), where $\pi(A_t \mid S_t, \theta)$ denotes the likelihood that at time step $t$ the policy selects action $A_t$ given the state $S_t$.",
"With $\alpha$ as the learning rate, the parameter update rule is (Sutton and Barto, 2018): $\theta_{t+1} \leftarrow \theta_t + \alpha R_t \nabla_\theta \log \pi(A_t \mid S_t, \theta_t)$ (3).",
"Different from one-step episodic MDP policies (Narayan et al., 2018; Dong et al., 2018; Luo et al., 2019) that extract the entire summary via a single action, we define an episode, i.e., the generation of a summary, as consisting of multiple time steps.",
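A toy, single-state version of this update rule can be sketched as follows (illustrative only: the real policy conditions on rich sentence states, and all names here are hypothetical):

```python
import math

def softmax(logits):
    """Numerically stable softmax."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_update(theta, episode, reward, lr=0.5):
    """Toy REINFORCE update (cf. Eq. 3) for a single-state policy over
    len(theta) actions.  The terminal reward is shared by every step,
    since R_t = R_0 = r when all intermediate rewards are zero."""
    for a in episode:
        probs = softmax(theta)
        for j in range(len(theta)):
            # gradient of log softmax(theta)[a] w.r.t. theta[j]
            grad = (1.0 if j == a else 0.0) - probs[j]
            theta[j] += lr * reward * grad
    return theta
```

Replaying an episode that received a positive reward shifts probability mass toward the actions it contained.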
"At each time step t , corresponding to extracting sentence number t , the action A t is either to stop extraction or to select a sentence s a t from the remaining sentences.",
"The agent's policy is: $\pi(A_t \mid S_t, \theta_t) = p(\text{stop} \mid S_t, \theta_t)\, p(a_t \mid \text{stop}, S_t, \theta_t)$, where $p(a_t \mid \text{stop}, S_t, \theta_t) = u_{a_t}(S_t, \theta_t) / \sum_{j \in I_t} u_j(S_t, \theta_t)$ if stop = false, and $1/|I_t|$ if stop = true (4); here $I_t$ denotes the index set of remaining sentences at time step $t$.",
"If the agent does not stop, it first computes a score $u_j$ for each remaining sentence and samples a sentence $s_{a_t}$ according to the probability distribution of normalized scores.",
"When the agent stops the extraction, no sentence is selected and the conditional likelihood $p(a_t \mid \text{stop} = \text{true}, S_t, \theta_t)$ is set to $1/|I_t|$ (where $|I_t|$ represents the number of remaining sentences at time $t$), which is independent of the policy parameters, to prevent the gradient from flowing into the policy parameters via the conditional likelihood.",
"After calculating the reward according to Equation (1), the policy parameters are updated according to Equation (3) (for all time steps).",
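One extraction step of this policy can be sketched as follows (simplified; the score and stop-probability computations by the policy network are abstracted away):

```python
import random

def sample_action(p_stop, scores, remaining, rng=random):
    """One step of the multi-step MDP policy (cf. Eq. 4, simplified):
    with probability p_stop the episode ends (returns None); otherwise
    a remaining sentence index is sampled in proportion to its score."""
    if rng.random() < p_stop:
        return None  # stop extraction
    total = sum(scores[i] for i in remaining)
    threshold = rng.random() * total
    acc = 0.0
    for i in remaining:
        acc += scores[i]
        if threshold <= acc:
            return i
    return remaining[-1]  # numerical safety for float round-off
```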
"The state S t in Equation (4) is designed to be informative on: 1) the local content of the sentence, 2) the global context of the sentence within the document, and 3) the current extraction history.",
"To encode these three properties in the state, we use a local sentence encoder, a global context encoder, and an extraction history encoder, respectively.",
"Subsequently, the state is mapped by an extractor to an output score for each of the remaining sentences and the extraction stop signal.",
"The overall framework of our model is depicted in Figure 2.",
"In the Local Sentence Encoder (LSE), ordered words $(w_1, w_2, \ldots, w_M)$ in a sentence $s_i$ are first mapped onto word embeddings using a word embedding matrix.",
"Subsequently, an $N_l$-layer bidirectional LSTM (Hochreiter and Schmidhuber, 1997) transforms the word embeddings and maps them onto sentence embeddings $l_{s_i}$ via a multi-head pooling layer (MHP) (Liu and Lapata, 2019a).",
"The Global Context Encoder (GCE) consists of an $N_g$-layer bi-LSTM that takes the $L$ local sentence embeddings $(l_{s_1}, l_{s_2}, \ldots, l_{s_L})$ as inputs and produces for each sentence $s_i$ an embedding $g_{s_i}$ that encodes global contextual information such as the sentence's position in the document and information on neighboring sentences.",
"The Extraction History Encoder (EHE) encodes the extraction history information and produces an extraction history embedding $h_{s_{r_i}}$ for each remaining sentence $s_{r_i}$.",
"The EHE is composed of a stack of $N_h$ identical layers.",
"Within one layer, there are two multi-head attention sublayers, as contained in the transformer decoder in Vaswani et al. (2017).",
"One sublayer is used to perform multi-head self-attention (MHA) among the local embeddings of the remaining sentences, so that each remaining sentence can capture the context provided by the other remaining sentences.",
"The other attention sublayer is used to perform multi-head attention over the embeddings of extracted sentences to enable each remaining sentence to attend to all the extracted sentences.",
"The output of the two attention sublayers, one for each remaining sentence, captures the contextual information of both extracted and remaining sentences.",
"The final output of the $N_h$-th layer of the EHE constitutes the extraction history embedding, one for each remaining sentence.",
"There is no positional encoding and the EHE produces the extraction history embeddings non-autoregressively by attending to both precedent and subsequent positions.",
"Consequently, the extraction history embeddings $h_{s_{r_i}}$ for the remaining sentences are invariant to the order of the previously selected sentences.",
"We believe that the sequential information of previously selected sentences is not crucial for reducing redundancy and for deciding whether to stop extraction or not.",
"The Extractor computes the score of each remaining sentence and outputs an extraction stop signal.",
"As input to the extractor, we form for each remaining sentence $s_{r_i}$ an aggregated embedding by concatenating the local sentence embedding $l_{s_{r_i}}$, the global context embedding $g_{s_{r_i}}$, and the extraction history embedding $h_{s_{r_i}}$.",
"As shown in Figure 2, to produce the score $u_{s_{r_i}}$, the concatenated embedding of remaining sentence $s_{r_i}$ is passed through fully connected layers with ReLU activation and then projected to a scalar by a Linear-1 layer followed by a sigmoid function.",
"Note that the same fully connected layers are applied identically to all remaining sentences.",
"We deem that the extractor can learn to stop extraction based on the remaining sentences' states.",
"Therefore, we apply an MHP to the last hidden vectors of all remaining sentences to output a single vector.",
"This vector is then passed to a linear layer with a sigmoid function, producing a stopping probability $p_{\text{stop}}$.",
"We train the parameterized policy network according to the update rule in Equation (3).",
"At each training iteration, an episode is sampled to compute the final return $r$ and the action probabilities $\pi(A_t \mid S_t, \theta_t)$ for all time steps $t$.",
"An example episode with $T$ extracted sentences looks like: $(S_0, s_{a_0}, \ldots, S_{T-1}, s_{a_{T-1}}, S_T, A_{\text{stop}}, r)$, where $S_t$ represents the concatenated state information introduced in Section 3.3, $s_{a_t}$ represents the selection of sentence $a_t$, $A_{\text{stop}}$ represents stopping the extraction at the final time step $T$, and $r$ is the reward as defined in Equation (1).",
"To encourage the agent to select compact summaries, we multiply the final reward $r$ by a length penalty term $1/(T+1)$ (Luo et al., 2019).",
"Consequently, the return $R_t \equiv \frac{r}{T+1}$.",
"Algorithm 1 summarizes the training procedure of MemSum:",
"1: for each document-summary pair $(D_i, G_i)$ do; 2: the LSE outputs local sentence embeddings $l_{s_1}, \ldots, l_{s_L}$; 3: the GCE outputs global context embeddings $g_{s_1}, \ldots, g_{s_L}$; 4: sample an episode $(S_0, s_{a_0}, \ldots, S_{T-1}, s_{a_{T-1}}, S_T, A_{\text{stop}}, r)$ from the high-ROUGE episode set $E_p$ of document $D_i$; 5: for each time step $t = 0, 1, \ldots, T$ do; 6-9: if $t > 0$, the EHE outputs extraction history embeddings $h_{s_{r_1}}, \ldots, h_{s_{r_{L-E_t}}}$ for the remaining sentences, else initialize them to $0$; 10: the extractor outputs scores $u_{s_{r_1}}, \ldots, u_{s_{r_{L-E_t}}}$ for the remaining sentences and outputs $p_{\text{stop}}$; 11: compute the action probability $\pi(A_t \mid S_t, \theta)$ according to Equation (4); 12: $\theta \leftarrow \theta + \alpha \frac{r}{T+1} \nabla_\theta \log \pi(A_t \mid S_t, \theta)$.",
"We initialize the extraction history embeddings to $0$, because at $t = 0$ no sentences have been extracted.",
"$E_t$ represents the number of sentences that have been extracted into the summary up to time step $t$.",
"Following the strategy in Narayan et al. (2018) and Mohsen et al. (2020), instead of sampling an episode following the current policy $\pi(\cdot \mid \cdot, \theta_t)$, we sample an episode from a set $E_p$ of episodes with high ROUGE scores, which enables the agent to quickly learn from optimal policies and to rapidly converge.",
"Details on creating a set of high-ROUGE episodes for training are described in Appendix E.",
"In this section, we report implementation details of our model and describe the datasets used for training and for evaluation.",
"Datasets.",
"The documents to be summarized in the PubMed and arXiv datasets (Cohan et al., 2018) are the full bodies of scientific papers, and the gold summaries are the corresponding abstracts.",
"Zhong et al. (2020) proposed a truncated version of the PubMed dataset (PubMed trunc for simplicity) by defining a document as the introduction section of a paper.",
"The GovReport dataset (Huang et al., 2021) contains U.S. government reports with gold summaries written by experts.",
"Except for PubMed trunc, all the other datasets contain significantly longer documents than the popular CNN/DM dataset (Table 1).",
"Baselines.",
"Extractive baselines include Lead (directly using the first several sentences as the summary) (Gidiotis and Tsoumakas, 2020), SummaRuNNer (Nallapati et al., 2017), Atten-Cont (Xiao and Carenini, 2019), Sent-CLF and Sent-PTR (Pilault et al., 2020), MatchSum (Zhong et al., 2020), and the NeuSum model (Zhou et al., 2018) that we trained on our datasets.",
"Abstractive summarization models include PEGASUS (Zhang et al., 2020), BigBird (Zaheer et al., 2020), Dancer (Gidiotis and Tsoumakas, 2020), and Hepos (Huang et al., 2021) that achieved the state-of-the-art in long document summarization using a large-scale pretrained BART model (Lewis et al., 2020) with memory-efficient attention encoding schemes including Locality Sensitive Hashing (Kitaev et al., 2020) (Hepos-LSH) and Sinkhorn attention (Hepos-Sinkhorn).",
"We also present the performance of the oracle extraction model based on the greedy approach (Nallapati et al., 2017) which sequentially selects from the document the sentence that maximally improves the average of R-1 and R-2 of selected sentences.",
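This greedy oracle can be sketched as follows (a simplified unigram/bigram F1 stands in for the actual ROUGE implementation; function names are hypothetical):

```python
from collections import Counter

def ngram_f1(candidate, reference, n):
    """F1 overlap of n-grams between two token lists (simplified ROUGE-n)."""
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    if not cand or not ref:
        return 0.0
    overlap = sum((cand & ref).values())
    p, r = overlap / sum(cand.values()), overlap / sum(ref.values())
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def greedy_oracle(sentences, reference):
    """Greedily add the sentence that most improves the average of
    (simplified) R-1 and R-2 of the selected set; stop when no sentence
    yields a strict improvement."""
    ref = reference.split()
    chosen, best = [], 0.0
    while len(chosen) < len(sentences):
        gains = []
        for i in range(len(sentences)):
            if i in chosen:
                continue
            cand = " ".join(sentences[j] for j in chosen + [i]).split()
            score = (ngram_f1(cand, ref, 1) + ngram_f1(cand, ref, 2)) / 2
            gains.append((score, i))
        score, i = max(gains)
        if score <= best:
            break
        chosen.append(i)
        best = score
    return sorted(chosen)
```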
"Implementation Details.",
"We computed local sentence embeddings using pretrained Glove word embeddings (Pennington et al., 2014) of dimension d = 200 , keeping the word embeddings fixed during training.",
"For the LSE, we used $N_l = 2$ bi-LSTM layers, and for the GCE, $N_g = 2$.",
"For the EHE, we used $N_h = 3$ attention layers, and we set the number of attention heads to 8 and the dimension of the feed-forward hidden layer to 1024; during training we set the dropout rate to 0.1.",
"The extractor consisted of 2 fully-connected hidden layers with output dimensions 2 d and d , respectively.",
"We trained our model using the Adam optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.999$ (Kingma and Ba, 2015), a fixed learning rate $\alpha = 10^{-4}$, and weight decay $10^{-6}$.",
"The training was stopped when the validation performance started to degrade.",
"During validation and testing, the agent extracted sentences in a deterministic way: after computing the scores $u_{s_{r_i}}$ for the remaining sentences and the stop likelihood $p_{\text{stop}}$, the agent stopped the extraction if $p_{\text{stop}} \ge p_{\text{thres}}$ or if the maximum admissible number $N_{\max}$ of extracted sentences was reached; otherwise, the agent selected the sentence with the largest score.",
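The deterministic inference rule can be sketched as follows (a minimal sketch with a stand-in scoring function; not the actual implementation):

```python
def extract_summary(score_fn, n_sentences, p_thres, n_max):
    """Greedy inference loop: score_fn(extracted, remaining) stands in
    for the trained policy network and must return (scores, p_stop) for
    the current step.  Stop when p_stop >= p_thres or n_max sentences
    have been extracted; otherwise take the highest-scoring sentence."""
    extracted, remaining = [], list(range(n_sentences))
    while remaining and len(extracted) < n_max:
        scores, p_stop = score_fn(extracted, remaining)
        if p_stop >= p_thres:
            break
        best = max(remaining, key=lambda i: scores[i])
        extracted.append(best)
        remaining.remove(best)
    return extracted
```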
"The model was trained on eight RTX 2080 Ti GPUs.",
"On the validation datasets we selected the best checkpoint of each model and determined the optimal $N_{\max}$ and stopping threshold $p_{\text{thres}}$.",
"For PubMed, arXiv, PubMed trunc, and GovReport, $N_{\max}$ was set to 7, 5, 7, and 22, and $p_{\text{thres}}$ was set to 0.6, 0.5, 0.8, and 0.6, respectively.",
"For the detailed selection procedure of the optimal stopping threshold, see Appendix D. Information on reproducibility is available in Appendix I. Evaluation.",
"We evaluated the performance of our model using F1 ROUGE (Lin, 2004), including ROUGE-1, ROUGE-2, and ROUGE-L for measuring unigram, bigram, and longest-common-subsequence overlap.",
"We also conducted human evaluation in Section 5.4.",
"Here we present the results on various extractive summarization tasks and analyze the contribution of different modules via ablation studies.",
"By comparing with extractive baselines on the PubMed and arXiv datasets, we observed that models utilizing extraction history, such as NeuSum and our MemSum, perform significantly better than other models, revealing the effectiveness of the extraction history.",
"MemSum also significantly outperformed NeuSum, suggesting a better utilization of extraction history , which we ascribed to the following factors: 1) In MemSum, we treat stopping extraction also as an action and train the policy network to output a stopping probability.",
"(Table 2: Results on the PubMed and arXiv test sets, R-1/R-2/R-L. ORACLE: 61.99/34.95/56.76 and 60.00/30.60/53.03. Extractive baselines: Lead-10 37.45/14.19/34.07 and 35.52/10.33/31.44; SummaRuNNer 43.89/18.78/30.36 and 42.81/16.52/28.23; Atten-Cont 44.85/19.70/31.43 and 43.62/17.36/29.14; Sent-CLF 45.01/19.91/41.16 and 34.01/8.71/30.41; Sent-PTR 43.30/17.92/39.47 and 42.32/15.63/38.06; NeuSum 47.46/21.92/42.87 and 47.49/21.56/41.58. Abstractive baselines: PEGASUS 45.97/20.15/41.34 and 44.21/16.95/38.83; BigBird 46.32/20.65/42.33 and 46.63/19.02/41.77; Dancer 46.34/19.97/42.42 and 45.01/17.60/40.56; Hepos-Sinkhorn 47.93/20.74/42.58 and 47.87/20.00/41.50; Hepos-LSH 48.12/21.06/42.72 and 48.24/20.26/41.78. MemSum (ours): 49.25*/22.94*/44.42* and 48.42/20.30/42.54*.)",
"Therefore, MemSum is able to automatically stop extracting at an optimal time step based on extraction history, while NeuSum can only extract a predefined number of sentences; 2) with the policy gradient method REINFORCE we can train MemSum to maximize the ROUGE score directly, while in NeuSum the loss was set to the KL-divergence between the model-computed sentence scores and the ROUGE score gains at each step, which is less intuitive.",
"We further compare MemSum with NeuSum via human evaluation in Section 5.4.",
"We observed that the ROUGE performance on the PubMed trunc dataset is significantly lower than that on the PubMed dataset, with a 16.87 drop in R-1 for the extractive oracle and a 6.23 drop in R-1 for MemSum, indicating that the introduction section is not sufficient to generate summaries close to the ground truth (abstracts).",
"Even so, our model still significantly outperformed MatchSum on PubMed trunc, and we attribute this improvement to the fact that MatchSum truncates the introduction section further to 512 tokens because it needs to compute document embeddings using BERT.",
"(Footnote 2: https://pypi.org/project/rouge-score/)",
"(Figure 3: The position distribution of extracted sentences in the PubMed trunc dataset.)",
"(Table 4: Example summaries from the GovReport dataset.)",
"Human-written Summary: (...) While CMS is generally required to disallow, or recoup, federal funds from states for eligibility-related improper payments if the state's eligibility error rate exceeds 3 percent, it has not done so for decades, (...) CMS issued revised procedures through which it can recoup funds for eligibility errors, beginning in fiscal year 2022. (...)",
"Hepos-Sinkhorn (abstractive): (...) The selected states also reported that they did not have adequate processes to address these issues. CMS has taken steps to improve its oversight of the Medicaid program, including issuing guidance to states on the use of MAGI-exempt bases for determining eligibility, but these efforts have not been fully implemented. (...)",
"MemSum (ours, extractive): (...) implemented its statutory requirement to recoup funds associated with Medicaid eligibility-related improper payments for states with an eligibility error rate above 3 percent through its MEQC program. (...) However, the agency has introduced new procedures through which it can, under certain circumstances, begin to recoup funds based on eligibility errors in fiscal year 2022. (...)",
"Consequently, MatchSum extracts sentences mainly from the first 15 sentences of the document, while our MemSum produces a similar distribution of extracted sentence positions as the extractive oracle (Figure 3).",
"Thus, summarizing long documents is a non-trivial task, and models that work well on summarizing short documents (e.g., CNN/DM) may fail to generalize to long documents.",
"MemSum also significantly outperformed the state-of-the-art abstractive summarization model Hepos as measured by ROUGE scores, especially on the GovReport dataset.",
"A comparison of an exemplary MemSum-extracted summary and the corresponding Hepos-Sinkhorn-generated summary from the GovReport dataset (Table 4) is consistent with the ROUGE comparison, showing that the MemSum-extracted summary is more accurate than the Hepos-Sinkhorn-generated summary and has higher overlap with the gold summary.",
"(Table 5: Ablation study on the PubMed dataset, R-1/R-2/R-L: MemSum 49.25/22.94/44.42; w/o LSE 48.12/22.04/43.36; w/o GCE 46.85/20.31/41.95; w/o EHE 48.08/22.77/43.55; with GRU-EHE 49.11/22.86/44.28; w/o auto-stop 48.25/22.63/43.70; with STOP 47.18/21.81/42.20.)",
"We deem that this particularly good extraction performance on the GovReport dataset results from the higher extractiveness of the gold summaries in the GovReport dataset compared to other datasets, which may be due in part to technical language being difficult to abstractively summarize without a change in meaning.",
"This is evidenced by the fact that the ROUGE scores of the extractive oracle on the GovReport dataset (Table 3) are higher than those of the PubMed and arXiv datasets (Table 2).",
"Therefore, extractive summarization may be more appropriate than abstractive summarization given the requirement of stringent faithfulness for government report summaries.",
"We conduct ablation studies by comparing the full MemSum model with the following structural variations: 1) MemSum w/o LSE, where we obtain local sentence embeddings by replacing the bi-LSTM-based LSE with simple averages of word embeddings; 2) MemSum w/o GCE, where we remove the GCE; 3) MemSum w/o EHE, where we remove the EHE, compute the scores for all sentences in one step, and sample sentences following the BanditSum policy (Dong et al., 2018); 4) MemSum with GRU-EHE, where we use a GRU to encode previously extracted sentences at each time step and use the last hidden state as the extraction history embedding for all remaining sentences, following Zhou et al. (2018).",
"Meanwhile, we also tested two variations that adopted different stopping mechanisms: 1) MemSum w/o auto-stop, which does not stop extraction automatically based on $p_{\text{stop}}$ but instead extracts a fixed number of sentences; 2) MemSum with STOP, which inserts a special stop sentence (e.g., \"STOP\") into the document and stops extraction once the agent selects this sentence.",
"(Table 6: Performance on the redundant PubMed dataset, R-1/R-2/R-L and duplicate percentage: MemSum 49.16/22.78/44.39, 0%; MemSum w/o auto-stop 48.21/22.59/43.76, 0%; MemSum w/o EHE 42.82/18.18/36.68, 41%; MemSum w/o EHE + 3-gram blocking 46.85/19.93/42.40, 0%.)",
"Contribution of Modules.",
"Removing the GCE has a greater impact on performance than removing the LSE (Table 5), suggesting that modeling global contextual information is more critical than modeling local sentence information in our MemSum framework, which contrasts with the result that modeling local sentence information is more important in the Atten-Cont (Xiao and Carenini, 2019) framework.",
"Furthermore, we observed a significant performance degradation when removing the EHE, but no significant difference between MemSum and MemSum with GRU-EHE, indicating that the EHE is necessary, but our MemSum policy is not strongly dependent on the specific structure of this module (e.g., attention-based or RNN-based).",
"Influence of Stopping Mechanisms.",
"MemSum w/o auto-stop achieves lower ROUGE scores than MemSum, revealing the necessity of auto-stopping in our MemSum architecture.",
"Meanwhile, MemSum with STOP produced summaries with fewer extracted sentences (3.9 vs. 6.0 sentences on average) and significantly lower ROUGE scores.",
"We attribute this reduction to the predictable positive reward obtained from selecting the special stop sentence that ends an episode, which leads to a preference for this final action and increases the likelihood of taking this action prematurely.",
"We hypothesized that the extraction history allows MemSum to avoid sentences that are similar to existing sentences in the current partial summary, intuitively mimicking what humans do when extractively summarizing documents.",
"To verify this, we created a redundant PubMed dataset in which we repeated each sentence in the document, with the replicated sentences immediately following the originals.",
"On this dataset, we trained and tested MemSum and MemSum w/o EHE (no history awareness), and we compared the models in terms of ROUGE scores and average duplicate percentage, defined as the average percentage of duplicated sentences among all extracted sentences in a summary.",
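The duplicate-percentage metric can be sketched in a few lines (a minimal illustration of the definition above, not the authors' code; the function name is ours):

```python
def duplicate_percentage(extracted):
    """Fraction of extracted sentences that repeat an earlier extracted sentence."""
    if not extracted:
        return 0.0
    seen, duplicates = set(), 0
    for sentence in extracted:
        if sentence in seen:
            duplicates += 1
        seen.add(sentence)
    return duplicates / len(extracted)
```

On the redundant dataset, a history-free extractor that picks a sentence and then its replica would score 0.5 on a two-sentence summary.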
"As reported in Table 6, for MemSum w/o EHE, on average 41% of sentences in the extracted summaries were duplicated.",
"Along with the high duplicate ratio came a significant decrease in ROUGE score.",
"By contrast, the performance of the full MemSum model with history awareness was only slightly affected when comparing the results of MemSum on the PubMed dataset (Table 2) and on the redundant PubMed dataset (Table 6).",
"Meanwhile, the Trigram Blocking method, which skips a sentence if it has a trigram that overlaps with the current summary (Liu and Lapata, 2019b), is also successful in avoiding repetitive sentences.",
"However, the ROUGE scores associated with Trigram Blocking were significantly lower than those of MemSum with awareness of the extraction history.",
"In summary, the history-aware MemSum model spontaneously learns an optimized strategy to avoid redundant sentences without explicit human guidance or crude rules, and thus shows better performance.",
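The Trigram Blocking rule discussed above admits a compact sketch (helper names are ours; tokenization is simplified to whitespace splitting):

```python
def _trigrams(tokens):
    """All consecutive token triples of a token list."""
    return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

def select_with_trigram_blocking(ranked_sentences, k):
    """Greedily take top-ranked sentences, skipping any candidate that
    shares a trigram with the summary built so far."""
    summary, summary_tokens = [], []
    for sentence in ranked_sentences:
        tokens = sentence.split()
        if _trigrams(tokens) & _trigrams(summary_tokens):
            continue  # blocked: overlapping trigram with current summary
        summary.append(sentence)
        summary_tokens += tokens
        if len(summary) == k:
            break
    return summary
```

A near-duplicate of an already selected sentence is blocked, while unrelated sentences pass through, which is why the rule removes duplicates but can also discard sentences with merely incidental overlap.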
"Case Study: How does MemSum Avoid Redundancy?",
"We let MemSum summarize a document sampled from the test set of the redundant PubMed dataset and monitored the sentence scores produced by the Extractor during each extraction step.",
"The results are shown in Figure 4.",
"At time step 0, the 10th sentence obtained the maximum score and was thus selected into the summary.",
"At time step 1, we noticed that the 11th sentence, which is a replica of the 10th sentence, had a score close to zero.",
"The same was also true for the other selected sentences and the sentences following them, revealing competent repetition avoidance of the Extractor.",
"[Table 7 (human evaluation; average rank, lower is better; Criteria / Experiment I: NeuSum, MemSum / Experiment II: NeuSum, MemSum w/o auto-stop): overall 1.58, 1.37 / 1.57, 1.38; coverage 1.46, 1.49 / 1.44, 1.51; non-redundancy 1.67, 1.28* / 1.65, 1.30*.]",
"Because the EHE is insensitive to the extraction order and to sentence position information, as described in Section 3.3, we can conclude that the full MemSum avoids redundancy by evaluating the similarity between selected and remaining sentences, rather than by \"remembering\" the selected sentences' positions.",
"We conducted human evaluation following Wu and Hu (2018); Dong et al. (2018); Luo et al. (2019).",
"For each document sampled from the test set of the PubMed dataset, we provided a reference summary, and volunteers were asked to rank a pair of randomly ordered summaries produced by two models according to three criteria: non-redundancy, coverage, and overall quality.",
"The better model is ranked #1 and the other #2; if both models extract the same summary, both receive rank #1.",
"In experiment 1, we compared NeuSum, which always extracts 7 sentences, and MemSum, which extracts a flexible number of sentences thanks to automatic stopping.",
"In experiment 2, we discounted for differences in the number of extracted sentences by having MemSum w/o auto-stop also extract 7 sentences.",
"A user-friendly interactive web interface was implemented to assist the evaluation process, with details in Appendix G. Table 7 reports the human evaluation results for both experiments.",
"Both MemSum and MemSum w/o auto-stop ranked significantly higher (p<0.005) than NeuSum in terms of non-redundancy and achieved a better average overall quality.",
"In terms of word count, MemSum produces shorter summaries than NeuSum in both experiments, even though both models extract the same number of sentences in experiment 2.",
"These results show that the redundancy avoidance of MemSum is particularly good, even without the auto-stop mechanism.",
"The slightly better performance of NeuSum in terms of coverage needs to be weighed against the fact that it extracts significantly longer summaries.",
"Note that neither NeuSum nor our model is trained to optimize the order of the extracted sentences.",
"Therefore, we did not use fluency, which depends on sentence order, as a metric for human evaluation.",
"Improving the fluency of the extracted summaries will be the subject of our future research.",
"Extractive summarization can be achieved effectively with a multi-step episodic Markov decision process with history awareness.",
"Using encoders of local sentences, global context, and extraction history, MemSum is given information that humans intuitively also use when they summarize a document.",
"Awareness of the extraction history helps MemSum to produce compact summaries and to be robust against redundancy in the document.",
"As a lightweight model (Appendix C), MemSum outperforms both extractive and abstractive baselines on diverse long document summarization tasks.",
"Given that MemSum achieves SOTA performance on these tasks, MDP-based approaches are promising design choices for further research.",
"We acknowledge support from the Swiss National Science Foundation (grant 31003A_182638) and the NCCR Evolving Language, Swiss National Science Foundation Agreement No. 51NF40_180888.",
"We also thank the anonymous reviewers for their useful comments."
] | [
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"objective",
"result",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"result",
"other",
"other",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"result",
"other",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Learned self-attention functions in state-of-the-art NLP models often correlate with human attention.",
"We investigate whether self-attention in large-scale pre-trained language models is as predictive of human eye fixation patterns during task-reading as classical cognitive models of human attention.",
"We compare attention functions across two task-specific reading datasets for sentiment analysis and relation extraction.",
"We find that the predictiveness of large-scale pretrained self-attention for human attention depends on what is in the 'tail', e.g., the syntactic nature of rare contexts.",
"Further, we observe that task-specific fine-tuning does not increase the correlation with human task-specific reading.",
"Through an input reduction experiment we give complementary insights on the sparsity and fidelity trade-off, showing that lower-entropy attention vectors are more faithful.",
"The usefulness of learned self-attention functions often correlates with how well they align with human attention (Das et al., 2016; Klerke et al., 2016; Barrett et al., 2018; Zhang and Zhang, 2019; Klerke and Plank, 2019).",
"In this paper, we evaluate how well attention flow (Abnar and Zuidema, 2020) in large language models, namely BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and T5 (Raffel et al., 2020), aligns with human eye fixations during task-specific reading, compared to other shallow sequence labeling models (Lecun and Bengio, 1995; Vaswani et al., 2017) and a classic, heuristic model of human reading (Reichle et al., 2003).",
"We compare the learned attention functions and the heuristic model across two task-specific English reading tasks, namely sentiment analysis (SST movie reviews) and relation extraction (Wikipedia), as well as natural reading, using a publicly available dataset with eye-tracking recordings of native speakers of English (Hollenstein et al., 2018).",
"Contributions. We compare human and model attention patterns on both sentiment reading and relation extraction tasks.",
"In our analysis, we compare human attention to pre-trained Transformers (BERT, RoBERTa and T5), to from-scratch training of two shallow sequence labeling architectures (Lecun and Bengio, 1995; Vaswani et al., 2017), as well as to a frequency baseline and a heuristic, cognitively inspired model of human reading called the E-Z Reader (Reichle et al., 2003).",
"We find that the heuristic model correlates well with human reading, as has been reported in Sood et al. (2020b).",
"However, when we apply attention flow (Abnar and Zuidema, 2020), the pre-trained Transformer models also reach comparable levels of correlation strength.",
"Further fine-tuning experiments on BERT did not result in increased correlation to human fixations.",
"To understand what drives the differences between models, we perform an in-depth analysis of the effect of word predictability and POS tags on correlation strength.",
"It reveals that Transformer models do not accurately capture tail phenomena for hard-to-predict words (in contrast to the E-Z Reader) and that Transformer attention flow shows comparably weak correlation on (proper) nouns while the E-Z Reader predicts importance of these more accurately, especially on the sentiment reading task.",
"In addition, we investigate a subset of the ZuCo corpus for which aligned task-specific and natural reading data is available and find that Transformers correlate stronger to natural reading patterns.",
"We test faithfulness of these different attention patterns to produce the correct classification via an input reduction experiment on task-tuned BERT models.",
"Our results highlight the trade-off between model faithfulness and sparsity when comparing importance scores to human attention, i.e., less sparse (higher entropy) attention vectors tend to be less faithful with respect to model predictions.",
"Our code is available at github.com/oeberle/task_gaze_transformers .",
"Church and Liberman (2021) discuss how NLP has historically benefited from rationalist and empiricist methodologies, something that holds for cognitive modeling in general.",
"The vast majority of application-oriented work in NLP today relies on pre-trained language models or other large-scale data-driven models, but in cognitive modeling, most approaches remain heuristic and rule-based, or hybrid, e.g., relying on probabilistic language models to quantify surprisal (Rayner and Reichle, 2010; Milledge and Blythe, 2019).",
"This is for good reasons: Cognitive modeling values interpretability (even) more, often suffers from data scarcity, and is less concerned with model reusability across different contexts.",
"This paper presents a head-to-head comparison of the E-Z Reader and pre-trained Transformer-based language models.",
"We are not the first to evaluate pre-trained language models and large-scale data-driven models as if they were cognitive models.",
"Chrupała and Alishahi (2019), for example, use representational similarity analysis to correlate sentence encodings in pre-trained language models with fMRI signals; Abdou et al. (2019) correlate sentence encodings with gaze-derived representations.",
"More generally, it has been argued that cognitive evaluations are in some cases practically superior to standard evaluation methodologies in NLP (Søgaard, 2016; Hollenstein et al., 2019).",
"We return to this in the Discussion and Conclusion (Section 6).",
"Commonly, pre-trained language models are disregarded as cognitive models, since they are most often implemented as computationally demanding batch learning algorithms, processing all data at once.",
"Günther et al. (2019) point out that this is an artefact of their implementation: online learning of pre-trained language models is possible, yet impractical.",
"Generally, several researchers have argued for taking pre-trained language models seriously as cognitive models (Rogers and Wolmetz, 2016; Mandera et al., 2017; Günther et al., 2019).",
"In the last section (Section 6), we discuss some of the implications of comparisons of pre-trained language models and cognitive models, for cognitive modeling as well as for NLP.",
"In our experiments, we focus on Transformer architectures, which are currently the dominant pre-trained language models and a de facto baseline for modern NLP research.",
"The ZuCo dataset (Hollenstein et al., 2018) contains eye-tracking data for 12 participants (all English native speakers) performing natural reading and relation extraction on 300 and 407 English sentences from the Wikipedia relation extraction corpus (Culotta et al., 2006) respectively and sentiment reading on 400 samples of the Stanford Sentiment Treebank (SST) (Socher et al., 2013).",
"For our analysis, we extract and average word-based total fixation times across participants and focus on the task-specific relation extraction and sentiment reading samples.",
"Below we briefly describe the models we used and refer to Appendix A for more details.",
"Transformers. The superior performance of Transformer architectures across broad sets of NLP tasks raises the question of how task-related their attention patterns really are.",
"In our experiments, we focus on comparing task-modulated human fixations to attention patterns extracted from the following commonly used models:",
"(a) We use both pre-trained uncased BERT-base and large models (Devlin et al., 2019) as well as fine-tuned BERT models on the respective tasks.",
"BERT was originally pre-trained on the English Wikipedia and the BookCorpus.",
"(b) The RoBERTa model has the same architecture as BERT and demonstrates better performance on downstream tasks using an improved pre-training scheme and the use of additional news article data (Liu et al., 2019).",
"(c) The Text-to-Text Transfer Transformer (T5) uses an encoder-decoder structure to enable parallel task-training and has demonstrated state-of-the-art performance over several transfer tasks including sentiment analysis and natural language inference (Raffel et al., 2020).",
"We evaluate different ways of extracting token-level importance scores: We collect attention representations and compute the mean attention vector over the final layer heads to capture the mixing of information in Transformer self-attention modules as in Hollenstein and Beinborn (2021) and present this as mean for all aforementioned Transformers.",
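The head-averaging step can be illustrated with stand-in attention tensors (a sketch, not the authors' code; real attentions would come from a Transformer run with attention outputs enabled, and the random values here are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a final layer's attention: (heads, seq_len, seq_len),
# row-normalised so each query position distributes weight over keys.
heads, seq_len = 12, 8
last_layer = rng.random((heads, seq_len, seq_len))
last_layer /= last_layer.sum(axis=-1, keepdims=True)

head_mean = last_layer.mean(axis=0)          # average over heads: (seq_len, seq_len)
token_importance = head_mean.mean(axis=0)    # attention each token receives on average
```

Each row of `head_mean` still sums to one, so `token_importance` is itself a distribution over tokens and can be correlated with per-token fixation durations.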
"To capture the layer-wise structure of deep Transformer models we compute attention flow (Abnar and Zuidema, 2020).",
"[Figure 1: Spearman correlation analysis between human attention and different models for two task settings (Sentiment Reading, SST; Relation Extraction, Wikipedia).]",
"This approach considers the attention matrices as a graph, where tokens are represented as nodes and attention scores as edges between consecutive layers.",
"The edge values define the maximal flow possible between a pair of nodes.",
"Flow between edges is thus (i) limited to the maximal attention between any two consecutive layers for this token and (ii) conserved such that the sum of incoming flow must be equal to the sum of outgoing flow.",
"We denote the attention flow propagated back from layer L as flow L .",
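Attention flow is a maximum-flow computation on this layered graph. The toy sketch below (ours, not the authors' implementation; the attention weights are made up) computes the flow from one final-layer token back to one input token through a single intermediate layer, using a plain Edmonds-Karp routine:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow; `capacity` maps directed edges (u, v) to weights."""
    residual = dict(capacity)
    total = 0.0
    while True:
        parent, queue = {source: None}, deque([source])
        while queue and sink not in parent:        # BFS for an augmenting path
            u = queue.popleft()
            for (a, b), c in list(residual.items()):
                if a == u and b not in parent and c > 1e-12:
                    parent[b] = u
                    queue.append(b)
        if sink not in parent:
            return total
        path, v = [], sink                         # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for u, v in path:                          # push flow, update residual graph
            residual[(u, v)] -= bottleneck
            residual[(v, u)] = residual.get((v, u), 0.0) + bottleneck
        total += bottleneck

# Two layers of attention over two tokens (made-up weights):
A1 = [[0.3, 0.7], [0.5, 0.5]]  # layer-1 rows attend over input tokens
A2 = [[0.6, 0.4], [0.2, 0.8]]  # layer-2 rows attend over layer-1 tokens
capacity = {}
for k in range(2):
    capacity[("out0", f"mid{k}")] = A2[0][k]
    for i in range(2):
        capacity[(f"mid{k}", f"in{i}")] = A1[k][i]

# Flow from the final-layer token 0 down to input token 0:
flow_to_token0 = max_flow(capacity, "out0", "in0")
```

With one intermediate layer this reduces to summing, over intermediate tokens, the minimum of the two attention weights on each path, which is exactly the bottleneck behaviour described in (i) above.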
"Shallow Models. We ground our analysis of Transformers by comparing them to relatively shallow models trained from scratch, and we evaluate how well these models coincide with human fixations.",
"We train a standard CNN (Kim, 2014) network with multiple filter sizes on pre-trained GloVe embeddings (Pennington et al., 2014).",
"Importance scores over tokens are extracted using Layerwise Relevance Propagation (LRP) (Arras et al., 2016, 2017) which has been demonstrated to produce robust explanations by iterating over layers and redistributing relevance from outer layers towards the input (Bach et al., 2015; Samek et al., 2021).",
"In parallel, we use a shallow multi-head self-attention network (Lin et al., 2017) on GloVe vectors with a linear read-out layer for which we compute token relevance scores using LRP.",
"E-Z Reader. As a cognitive model of human reading, we compute task-neutral fixation times using the E-Z Reader model (Reichle et al., 1998).",
"The E-Z Reader is a multi-stage, hybrid model, which relies on an n-gram model and several heuristics, based, for example, on theoretical assumptions about the role of predictability and average saccade length.",
"Additionally, we compare to a frequency baseline using word statistics of the BNC (British National Corpus; Kilgarriff, 1995), as proposed by Barrett et al. (2018).",
"For training models on the different tasks we remove all sentences that overlap between ZuCo and the original SST and Wikipedia datasets.",
"Models are then trained on the remaining train-split data until early stopping is reached and we report results over five runs.",
"We provide further details on the optimization and model task performance in Appendix A.",
"Metric. To compare models with human attention, we compute Spearman correlation between human and model-based importance vectors after concatenation of individual sentences, as well as on a token level; see Hollenstein and Beinborn (2021).",
"This enables us to distinguish unrelated effects caused by varying sentence length from token-level importance.",
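The rank correlation itself can be sketched in pure Python (an illustration of the metric, not the authors' code): rank both importance vectors, then compute Pearson correlation of the ranks.

```python
def average_ranks(values):
    """Ranks starting at 1; ties receive the mean of their rank positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because only ranks enter the computation, any monotonic rescaling of fixation times or attention scores leaves the correlation unchanged.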
"As described before, we extract human attention from gaze (ZuCo), simulated gaze (E-Z Reader), raw attentions (BERT, RoBERTa, T5), relevance scores (CNN, self-attention) and inverse token probability scores (BNC).",
"We use ZuCo to… [Footnote 1: We compute the negative log-transformed probability of each lower-cased token, reflecting the inverse relation between word frequency and human gaze duration (Rayner and Duffy, 1986). Footnote 2: First and last token bins of each sentence are ignored, to avoid sentence border effects in Transformers (Clark et al., 2019) and because the E-Z Reader does not compute fixations for them.]",
"To evaluate how well model and human attention patterns for sentiment reading and relation extraction align, we compute pair-wise correlation scores as displayed in Figure 1. Reported correlations are statistically significant with p < 0.01 if not indicated otherwise (ns: not significant).",
"After ranking based on the correlations on sentence-level, we observe clear differences between sentiment reading on SST and relation extraction on Wikipedia for the different models.",
"For sentiment reading, the E-Z Reader and BNC show the highest correlations, followed by the Transformer attention flow values (the ranking between E-Z/BNC and Transformer flows is significant at p < 0.05).",
"For relation extraction, we see the highest correlation for BERT-base attention flows (with and without fine-tuning) and BERT-large, followed by the E-Z Reader (ranking is significant at p < 0.05).",
"On the lower end, computing means over BERT attentions across the last layer shows weak to no correlations for both tasks.",
"The shallow architectures result in low to moderate correlations, with a distinctive gap to attention flow.",
"Focusing on flow values for Transformers, BNC and E-Z Reader, correlations are stable across word and sentence length.",
"Correlations grouped by sentence length show stable values around 0.6 (SST) and 0.4–0.6 (Wikipedia), except for shorter sentences, where correlations fluctuate.",
"To check the linear relationship between human and model attention patterns, we additionally compute token- and sentence-level Pearson correlations, which can be found in Appendix B. The results confirm that Spearman and Pearson correlation coefficients, as well as rankings, hardly differ, which suggests a linear relationship; correlation strength is in line with Hollenstein and Beinborn (2021).",
"In addition to our main result that pre-trained language models are competitive with heuristic cognitive models in predicting human eye fixations during reading, we present a detailed analysis investigating what our main results depend on, where",
"[Footnote 3: We have experimented with oracle analyses selecting the maximally correlating attention head in the last layer for each sentence and find that correlations are generally weaker than with attention flow.]",
"Fine-tuning BERT does not change correlations with human attention. We find that fine-tuning base and large BERT models on either task does not significantly change correlations; they are of similar strength to those of untuned models.",
"This observation fits with findings that Transformers are equipped with overcomplete sets of attention functions that hardly change during fine-tuning until the later layers, if at all, and that this change also depends on the tuning task itself (Kovaleva et al., 2019; Zhao and Bethard, 2020).",
"In addition, we observe that Transformer flows propagated back from early, medium and final layers do not considerably change correlations to human attention.",
"This can be explained by attention flow filtering the path of minimal value at each layer as discussed in Abnar and Zuidema (2020).",
"Attention flow is important. The correlation analysis emphasizes that we need to capture the layered propagation structure in Transformer models, e.g., by using attention flow, in order to extract importance scores that are competitive with cognitive models.",
"Interestingly, selecting the highest correlating head for the last attention layer produces generally weaker correlation than attention flows.",
"3 This offers additional evidence that raw attention weights do not reliably correspond to token relevance (Serrano and Smith, 2019; Abnar and Zuidema, 2020) and, thus, are of limited use to compare task attention to human gaze.",
"Differences between language models. BERT, RoBERTa and T5 are large-scale pretrained language models based on Transformers, but they also differ in various ways.",
"One key difference is that BERT and RoBERTa use absolute position encodings, while T5 uses relative encodings.",
"BERT and RoBERTa differ in that (i) BERT has a next-sentence-prediction auxiliary objective; (ii) RoBERTa and T5 were trained on more data; (iii) RoBERTa uses dynamic masking and trains with larger mini-batches and learning rates, while T5 uses multi-word masking; and (iv) RoBERTa uses byte-pair encoding for subword segmentation.",
"We leave it as an open question whether the superior attention flows of BERT, compared to RoBERTa and T5, have to do with training data, next-sentence prediction, or fortunate hyper-parameter settings, but note that BERT is also known to have higher alignment with human-generated explanations than other large-scale pre-trained language models (Prasad et al., 2021).",
"[Figure 2, upper: Correlations between human fixation and different models for SST (left) and Relation Extraction (right) for the six most common POS tags.]",
"The E-Z Reader is less sensitive to hard-to-predict words and POS. We compare correlations to human fixations with attention flow values for Transformer models in the last layer, the E-Z Reader, and the BNC baseline, for different word predictability scores computed with a 5-gram Kneser-Ney language model (Kneser and Ney, 1995; Chelba et al., 2013).",
"Figure 3 shows the results on SST and Wikipedia for equally sized bins of word predictability scores.",
"We can see that the Transformer models correlate better for more predictable words on both datasets, whereas the E-Z Reader is less influenced by word predictability and already shows medium correlation on the most hard-to-predict words (0.3–0.4 for both SST and Wikipedia).",
"In fact, on SST, Transformers only surpass the E-Z Reader on the most predictable tokens (word predictability > 0.03).",
"…based on the top-6 (most tokens) part-of-speech (POS) tags.",
"On SST, correlations with the E-Z Reader are very consistent across POS tags, whereas attention flow shows weak correlations on proper nouns (0.12), nouns (0.16) and verbs (0.16), as presented in Figure 2. The BNC frequency baseline correlates well with human fixations on adpositions (ADP), to which both assign comparably low values.",
"Proper nouns (PROPN) are overestimated in BNC as a result of their infrequent occurrence.",
"Input reduction. When comparing machines to humans, we typically regard the psychophysical data as the gold standard.",
"We will now take the model perspective and test fidelity of both human and model attention patterns in task-tuned models.",
"By this we aim to test how effective the exact token ranking based on attention scores is at producing the correct output probability.",
"We perform such an input reduction analysis (Feng et al., 2018) using fine-tuned BERT models for both sentiment classification and relation extraction as the reference models and present results in Figure 4.",
"In our analysis, we observe, as is to be expected, that adding tokens according to token probability (BNC prob) performs even worse than randomly adding tokens.",
"From-scratch trained models (CNN and self-attention) are most effective in selecting task-relevant tokens, and even more so than using any Transformer attention flow.",
"Adding tokens based on human attention is as effective for the sentiment task as the E-Z Reader.",
"Interestingly, for the relation extraction task, human attention vectors provide the most effective flipping order after the relevance-based shallow methods.",
"All Transformer-based flows perform comparably in both tasks.",
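The input reduction protocol can be sketched as follows (our illustration; `predict_proba` is a hypothetical stand-in for the fine-tuned classifier, and `[PAD]` is a placeholder masking token):

```python
def input_reduction_curve(tokens, scores, predict_proba, gold_label, pad="[PAD]"):
    """Reveal tokens from most to least important and record the model's
    probability for the gold label after each addition."""
    order = sorted(range(len(tokens)), key=lambda i: -scores[i])
    revealed = [pad] * len(tokens)
    curve = []
    for i in order:
        revealed[i] = tokens[i]
        curve.append(predict_proba(revealed, gold_label))
    return curve
```

A faithful importance ranking drives the gold-label probability up early; comparing such curves across rankings (human gaze, E-Z Reader, attention flow, LRP) yields the fidelity comparison described above.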
"To better understand what drives these effects we extract the fraction of POS tags for the first added token (see Figure 4 and full results in the Appendix Figure 5).",
"For sentiment reading, the flipping according to CNN relevances puts more emphasis on adjectives (ADJ) whereas the other methods tend to flip nouns (NOUN) first.",
"Across the Transformer models RoBERTa relies much less on adjectives than any other model.",
"In the relation extraction task, we observe that proper nouns (PROPN) are dominant (and adjectives play almost no role) in all model systems which highlights the role of task nature on the importance assignment.",
"In addition, we see that the E-Z Reader overestimates the importance of punctuation, whereas proper nouns are least dominant in comparison to the other models.",
"[Table 1 (mean entropy over all sentences for each task setting; columns: TSR (ZuCo), E-Z Reader, BNC inv prob, CNN (LRP), self-attention (LRP), BERT flow11, RoBERTa flow11, T5 flow11, BERT mean, RoBERTa mean, T5 mean): SR 3.44, 3.44, 3.40, 2.93, 2.16, 3.57, 3.61, 3.61, 2.37, 2.65, 2.45; TSR 3.38, 3.46, 3.39, 2.98, 1.81, 3.54, 3.60, 3.63, 2.48, 2.56, 2.29.]",
"Entropy levels of Transformer flow are similar to those of human attention. Averaged sentence-level entropy values on both datasets reveal that BERT, RoBERTa and T5 attention flow, the E-Z Reader and BNC obtain similar levels of sparsity as human attention, around 3.4–3.6 bits, as summarized in Table 1. Entropies are lower for the shallow networks, with self-attention (LRP) at 1.8–2.2 bits and CNN (LRP) at around 2.9 bits.",
"This difference in sparsity levels might explain the advantage of the CNN and shallow self-attention in the input reduction analysis: early addition of a few but very relevant words has a strong effect on the model's decision, compared to less sparse scoring as, e.g., in Transformers.",
"The shallow models were also trained from-scratch for the respective tasks whereas all other models (including human attention) are heavily influenced by a more general modeling of language which could explain attention to be distributed more broadly over all tokens.",
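The sparsity measure used here is the Shannon entropy, in bits, of the normalised importance vector (a minimal sketch of the standard formula, not the authors' code):

```python
import math

def entropy_bits(importance, eps=1e-12):
    """Shannon entropy (bits) of a non-negative importance vector."""
    total = sum(importance)
    probs = [x / total for x in importance if x / total > eps]
    return -sum(p * math.log2(p) for p in probs)
```

A uniform vector over n tokens gives log2(n) bits, so the roughly 3.4–3.6 bits reported for humans and Transformer flow corresponds to weight spread over about 2^3.5 ≈ 11 effective tokens, while the sharper LRP scores concentrate on fewer.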
"Natural reading versus task-specific reading. A unique feature of the ZuCo dataset is that it contains a subset of sentences that were presented to participants both in a task-specific (relation extraction) and a natural reading setting.",
"This allows for a direct comparison of how correlation strength is influenced by the task.",
"In Table 2 correlations of human gaze-based attention with model attentions are shown.",
"The highest correlation can be observed when comparing human attention for task-specific and natural reading (0.72).",
"The remaining model correlations correspond to the ranking and correlation strength observed in the main result (see Figure 1).",
"We observe lower correlation scores for the task-specific reading as compared to normal reading among attention flow, the E-Z Reader and BNC.",
"This suggests that these models capture the statistics of natural reading, as is expected for a cognitive model designed for the natural reading paradigm, and that task-related changes in human fixation patterns are not reflected in Transformer attention flows.",
"Interestingly, averaged last layer attention heads show a reverse effect (but at much weaker correlation strength).",
"This might suggest that pre-training in Transformer models induces specificity of later layer attention heads to task-solving instead of general natural reading patterns.",
"Saliency modeling. Early computational models of visual attention have used bottom-up approaches to model the neural circuitry representing pre-attentive selection processes from visual input (Koch and Ullman, 1985); later, the central idea of a saliency map was introduced (Niebur and Koch, 1996).",
"A central hypothesis in the study of eye movements under task conditions is known as the Yarbus theorem, stating that a task can be directly decoded from fixation patterns (Yarbus, 1967); it has found varying support (Greene et al., 2012; Henderson et al., 2013; Borji and Itti, 2014).",
"More recently, extracting features from deep pre-trained filters in combination with readout networks has boosted performance on the saliency task (Kümmerer et al., 2016).",
"This progress has enabled modeling of more complex gaze patterns, e.g. vision-language tasks such as image captioning (Sugano and Bulling, 2016), visual question answering (Das et al., 2016) or text-guided object detection (Vasudevan et al., 2018).",
"Predicting text gaze patterns has been studied extensively, often in the context of probabilistic (Feng, 2006; Hara et al., 2012; Matthies and Søgaard, 2013; Hahn and Keller, 2016) or token transition models (Nilsson and Nivre, 2009; Haji-Abolhassani and Clark, 2014; Coutrot et al., 2017).",
"More recently deep language features have been used as feature extractors in modeling text saliency (Sood et al., 2020a; Hollenstein et al., 2021) opening the question of their cognitive plausibility.",
"Eye-tracking signals for NLP Augmenting machine learning models using human gaze information has been shown to improve performance for a number of different settings: Human attention patterns as regularization during model training have resulted in comparable or improved task performance in tagging part-of-speech (Barrett and Sgaard, 2015a,b; Barrett et al., 2018), sentence compression (Klerke et al., 2016), detecting sentiment (Mishra et al., 2016, 2017) or reading comprehension (Malmaud et al., 2020).",
"In these works, general free-viewing gaze data is used without consideration of the specific training task which opens the question of task-modulation in human reading.",
"From natural to task-specific reading Recent work on reading often analyses eye-tracking data in combination with neuroimaging techniques such as EEG (Wenzel et al., 2017) and f-MRI (Hillen et al., 2013; Choi et al., 2014).",
"Research questions thereby focus either on detecting relevant parts in text (Loboda et al., 2011; Wenzel et al., 2017) or the difference between natural and pseudo-reading, i.e., text without syntax/semantics (Hillen et al., 2013) or pseudo-words (Choi et al., 2014).",
"To the best of our knowledge there has not been any work on comparing fixations between natural reading and task-specific reading on classical NLP tasks such as relation extraction or sentiment classification.",
"In this paper, we have compared attention and relevance mechanisms of a wide range of models to human gaze patterns when solving sentiment classification on SST movie reviews and relation extraction on Wikipedia articles.",
"We generally found that Transformer architectures are competitive with the E-Z Reader, but only when computing attention flow scores.",
"We generally saw weaker correlations for relation extraction on Wikpedia, presumably due to simpler sentence structures and the occurrence of polarity words.",
"In the following, we discuss implications of our findings on NLP and Cognitive Science in more detail.",
"flow in our experiments: Using human gaze to regularize or supervise attention weights has proven effective in previous work (5), but we observed that correlations with task-specific human attention increase significantly by using layer-dependent attention flow compared to using raw attention weights.",
"This insight motivates going beyond regularizing raw attention weights or directly injecting human attention vectors during training, to instead optimize for correlation between attention flow and human attention.",
"Jointly modeling language and human gaze has recently shown to yield competitive performance on paraphrase generation and sentence compression while resulting in more task-specific attention heads (Sood et al., 2020b).",
"For this study natural gaze patterns were also simulated using the E-Z Reader.",
"Another potential implication concerns interpretability.",
"It remains an open problem how best to interpret self-attention modules (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019), and whether they provide meaningful explanations for model predictions.",
"Including gradient information to explain Transformers has recently been considered to improve their interpretability (Chefer et al., 2021b,a; Ali et al., 2022).",
"A successful explanation of a machine learning model should be faithful, human-interpretable and practical to apply (Samek et al., 2021).",
"Faithfulness and practicality is often evaluated using automated procedures such as input reduction experiments or measuring time and model complexity.",
"By contrast, judging human-interpretability typically requires costly experiments in well-controlled settings and obtaining human gold-standards for interpretability remain difficult (Miller, 2019; Schmidt and Bie-mann, 2019).",
"Using gaze data to evaluate the faithfulness and trustworthiness of machine learning models is a promising approach to increase model transparency.",
"Lessons for Cognitive Science Attention flow in Transformers, especially for BERT models, correlates surprisingly well with human task-specific reading, but what does this tell us about the shortcomings of our cognitive models?",
"We know that word frequency and semantic relationships between words influence word fixation times (Rayner, 1998).",
"In our experiments, we see relatively high correlation between human fixations and the inverse word probability baseline which raises the question to what extent reading gaze is driven by low-level patterns such as word frequency or syntactic structure in contrast to more high-level semantic context or wrap-up effects.",
"In computer vision, cognitively inspired bottom-up models, e.g., using intensity and contrast features, are able to explain at most half of the gaze fixation information in comparison to the human gold standard (Kmmerer et al., 2017).",
"The robustness of the E-Z Reader on movie reviews is likely due to its explicit modeling of low-level properties such as word frequency or sentence length.",
"BERT was recently shown to be primarily modeling higher-order word co-occurrence statistics (Sinha et al., 2021).",
"We argue that while Transformers are limited, e.g., in not capturing the dependency of human gaze on word length (Kliegl et al., 2004), cognitive models seem to underestimate the role of word co-occurrence statistics.",
"During reading, humans are faced with a tradeoff between the precision of reading comprehension and reading speed, by avoiding unnecessary fixations (Hahn and Keller, 2016).",
"This trade-off is related to the input reduction experiments performed in Section 4. Here, we observe that shallow methods score well at being sparse and effective in changing model output towards the correct class, but produce only weak correlation to human reading patterns when compared to layered language models.",
"In comparison, extracted attention flow from pre-trained Transformer models correlates much better with human attention, but offers less sparse token attention.",
"In other words, our results show that task-specific reading is sub-optimal relative to solving tasks and heavily regularized by natural reading patterns (see also our comparison of task-specific and natural reading in Section 4).",
"Conclusion In our experiments, we first and foremost found that Transformers, and especially BERT models, are competitive to the E-Z Reader in terms of explaining human attention in task-specific reading.",
"For this to be the case, computing attention flow scores (rather than raw attention weights) is important.",
"Even so, the E-Z Reader remains better at hard-to-predict words and is less sensitive to part of speech.",
"While Transformers thus have some limitations compared to the E-Z Reader, our results indicate that cognitive models have placed too little weight on high-level word co-occurrence statistics.",
"Generally, Transformers and the E-Z Reader correlate much better with human attention than other, shallow from-scratch trained 4302 sequence labeling architectures.",
"Our input reduction experiments suggest that in a sense, both pretrained language models and humans have suboptimal, i.e., less sparse, task-solving strategies, and are heavily regularized by what is optimal in natural reading contexts.",
"This work was partially funded by the German Ministry for Education and Research as BIFOLD Berlin Institute for the Foundations of Learning and Data (ref. 01IS18025A and ref. 01IS18037A), as well as by the Platform Intelligence in News project, which is supported by Innovation Fund Denmark via the Grand Solutions program.",
"We thank Mostafa Abdou for fruitful discussions and Heather Lent, Miryam de Lhoneux and Vinit Rav-ishankar for proof-reading and valuable inputs on the manuscript."
] | [
"abstain",
"objective",
"method",
"result",
"result",
"result",
"abstain",
"method",
"method",
"objective",
"method",
"result",
"method",
"abstain",
"method",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"other",
"other"
] |
[
"Self-attention networks (SANs) have drawn increasing interest due to their high parallelization in computation and flexibility in modeling dependencies.",
"SANs can be further enhanced with multi-head attention by allowing the model to attend to information from different representation subspaces.",
"In this work, we propose novel convolutional self-attention networks , which offer SANs the abilities to 1) strengthen dependencies among neighboring elements, and 2) model the interaction between features extracted by multiple attention heads.",
"Experimental results of machine translation on different language pairs and model settings show that our approach outperforms both the strong Transformer baseline and other existing models on enhancing the locality of SANs.",
"Comparing with prior studies, the proposed model is parameter free in terms of introducing no more parameters.",
"Self-attention networks (SANs) (Parikh et al., 2016; Lin et al., 2017) have shown promising empirical results in various natural language processing (NLP) tasks, such as machine translation (Vaswani et al., 2017), natural language inference (Shen et al., 2018a), and acoustic modeling (Sperber et al., 2018).",
"One appealing strength of SANs lies in their ability to capture dependencies regardless of distance by explicitly attending to all the elements.",
"In addition, the performance of SANs can be improved by multi-head attention (Vaswani et al., 2017), which projects the input sequence into multiple subspaces and applies attention to the representation in each subspace.",
"count all the elements, which disperses the attention distribution and thus overlooks the relation of neighboring elements and phrasal patterns (Yang et al., 2018; Wu et al., 2018; Guo et al., 2019).",
"Second, multi-head attention extracts distinct linguistic properties from each subspace in a parallel fashion (Raganato and Tiedemann, 2018), which fails to exploit useful interactions across different heads.",
"Recent work shows that better features can be learned if different sets of representations are present at feature learning time (Ngiam et al., 2011; Lin et al., 2014).",
"To this end, we propose novel convolutional self-attention networks (CSAN s), which model locality for self-attention model and interactions between features learned by different attention heads in an unified framework.",
"Specifically, in order to pay more attention to a local part of the input sequence, we restrict the attention scope to a window of neighboring elements.",
"The localness is therefore enhanced via a parameter-free 1-dimensional convolution.",
"Moreover, we extend the convolution to a 2-dimensional area with the axis of attention head.",
"Thus, the proposed model allows each head to interact local features with its adjacent subspaces at attention time.",
"We expect that the interaction across different subspaces can further improve the performance of SANs.",
"We evaluate the effectiveness of the proposed model on three widely-used translation tasks: WMT14 English-to-German, WMT17 Chinese-to-English, and WAT17 Japanese-to-English.",
"Experimental results demonstrate that our approach consistently improves performance over the strong TRANSFORMER model (Vaswani et al., 2017) across language pairs.",
"Comparing with previous work on modeling locality for SANs (e.g. Shaw et al., 2018; Yang et al., 2018; Sperber et al., 2018), our model boosts performance on both translation quality and training efficiency.",
"Bush held a talk with Sharon",
"2 Multi-Head Self-Attention Networks SANs produce representations by applying attention to each pair of tokens from the input sequence, regardless of their distance.",
"Vaswani et al. (2017) found it is beneficial to capture different contextual features with multiple individual attention functions.",
"Given an input sequence X = { x 1 , . . . , x I } RI d , the model first transforms it into queries Q , keys K , and values V : Q , K , V = XWQ , XWK , XWV RI d (1) where { WQ , WK , WV } R d d are trainable parameters and d indicates the hidden size.",
"The three types of representations are split into H different subspaces, e.g., [ Q 1 , . . . , QH ] = Q with Q h RI dH .",
"In each subspace h , the element o hi in the output sequence O h = { o h 1 , . . . , o hI } is calculated by o hi = ATT ( q hi , K h ) V h R dH (2) where ATT ( ) is an attention model (Bahdanau et al., 2015; Vaswani et al., 2017) that retrieves the keys K h with the query q hi .",
"The final output representation O is the concatenation of outputs generated by multiple attention models: O = [ O 1 , . . . , OH ] RI d (3) 3 Approach As shown in Figure",
"1(a), the vanilla SANs use the query q hi to compute a categorical distribution over all elements from K h (Equation 2).",
"It may inherit the attention to neighboring information (Yu et al., 2018; Yang et al., 2018; Guo et al., 2019).",
"In this work, we propose to model locality for SANs by restricting the model to attend to a local region via convolution operations (1D-CS AN s, Figure",
"1(b)).",
"Accordingly, it provides distance-aware information (e.g. phrasal patterns), which is complementary to the distance-agnostic dependencies modeled by the standard SANs (Section 3.1).",
"Moreover, the calculation of output o h are restricted to the a single individual subspace, overlooking the richness of contexts and the dependencies among groups of features, which have proven beneficial to the feature learning (Ngiam et al., 2011; Wu and He, 2018).",
"We thus propose to convolute the items in adjacent heads (2D-CS AN s, Figure",
"1(c)).",
"The proposed model is expected to improve performance through interacting linguistic properties across heads (Section 3.2).",
"For each query q hi , we restrict its attention region (e.g., K h = { k h 1 , . . . , k hi , . . . , k hI } ) to a local scope with a fixed size M + 1 ( M I ) centered at the position i :",
"Accordingly, the calculation of corresponding output in Equation (2) is modified as:",
"As seen, SANs are only allowed to attend to the neighboring tokens (e.g., (cid:98) K h , (cid:98) V h ), instead of all the tokens in the sequence (e.g., K h , V h ).",
"The SAN-based models are generally implemented as multiple layers, in which higher layers tend to learn semantic information while lower layers capture surface and lexical information (Pe-ters et al., 2018; Raganato and Tiedemann, 2018).",
"Therefore, we merely apply locality modeling to the lower layers, which same to the configuration in Yu et al. (2018) and Yang et al. (2018).",
"In this way, the representations are learned in a hierarchical fashion (Yang et al., 2017).",
"That is, the distance-aware and local information extracted by the lower SAN layers, is expected to complement distance-agnostic and global information captured by the higher SAN layers.",
"Mutli-head mechanism allows different heads to capture distinct linguistic properties (Raganato and Tiedemann, 2018; Li et al., 2018), especially in diverse local contexts (Yang et al., 2018).",
"We hypothesis that exploiting local properties across heads can further improve the performance of SANs.",
"To this end, we expand the 1-dimensional window to a 2-dimensional area with the new dimension being the index of attention head.",
"Suppose that the area size is ( N + 1) ( M + 1) ( N H ), the keys and values in the area are: (cid:101) K h = (cid:91) [ (cid:98) K h N 2 , . . . , (cid:98) K h , . . . , (cid:98) K h + N 2 ] (7) (cid:101) V h = (cid:91) [ (cid:98) V h N 2 , . . . , (cid:98) V h , . . . , (cid:98) V h + N 2 ] (8) where (cid:98) K h , (cid:98) V h are elements in the h -th subspace, which are calculated by Equations 4 and 5 respectively.",
"The union operation (cid:83) means combining the keys and values in different subspaces.",
"The corresponding output is calculated as: o hi = ATT ( q hi , (cid:101) K h ) (cid:101) V h (9) The 2D convolution allows SANs to build relevance between elements across adjacent heads, thus flexibly extract local features from different subspaces rather than merely from an unique head.",
"The vanilla SAN models linearly aggregate features from different heads, and this procedure limits the extent of abstraction (Fukui et al., 2016; Li et al., 2019).",
"Multiple sets of representations presented at feature learning time can further improve the expressivity of the learned features (Ngiam et al., 2011; Wu and He, 2018).",
"Self-Attention Networks Recent studies have shown that SAN s can be further improved by capturing complementary information.",
"For example, Hao et al. (2019) complemented SAN s with recurrence modeling, while Yang et al. (2019) modeled contextual information for SAN s.",
"2014) to fuse local information, the output of which is fed to the subsequent SAN layer.",
"Several researches proposed to revise the attention distribution with a parametric localness bias, and succeed on machine translation (Yang et al., 2018) and natural language inference (Guo et al., 2019).",
"While both models introduce additional parameters, our approach is a more lightweight solution without introducing any new parameters.",
"Closely related to this work, Shen et al. (2018a) applied a positional mask to encode temporal order, which only allows SANs to attend to the previous or following tokens in the sequence.",
"In contrast, we employ a positional mask (i.e. the tokens outside the local window is masked as 0 ) to encode the distance-aware local information.",
"In the context of distance-aware SANs, Shaw et al. (2018) introduced relative position encoding to consider the relative distances between sequence elements.",
"While they modeled locality from position embedding, we improve locality modeling from revising attention scope.",
"To make a fair comparison, we re-implemented the above approaches under a same framework.",
"Empirical results on machine translation tasks show the superiority of our approach in both translation quality and training efficiency.",
"Multi-Head Attention Multi-head attention mechanism (Vaswani et al., 2017) employs different attention heads to capture distinct features (Raganato and Tiedemann, 2018).",
"Along this direction, Shen et al. (2018a) explicitly used multiple attention heads to model different dependencies of the same word pair, and Strubell et al. (2018) employed different attention heads to capture different linguistic features.",
"Li et al. (2018) introduced disagreement regularizations to encourage the diversity among attention heads.",
"Inspired by recent successes on fusing information across layers (Dou et al., 2018, 2019), Li et al. (2019) proposed to aggregate information captured by different attention heads.",
"Based on these findings, we model interactions among attention heads to exploit the richness of local properties distributed in different heads.",
"We conducted experiments with the Transformer model (Vaswani et al., 2017) on English German (En De), Chinese English (Zh En) and Japanese English (Ja En) translation tasks.",
"For the En De and Zh En tasks, the models were trained on widely-used WMT14 and WMT17 corpora, consisting of around 4 .",
"5 and 20 .",
"62 million sentence pairs, respectively.",
"Concerning Ja En, we used the first two sections of WAT17 corpus as the training data, which consists of 2M sentence pairs.",
"To reduce the vocabulary size, all the data were tokenized and segmented into subword symbols using byte-pair encoding (Sennrich et al., 2016) with 32K merge operations.",
"Following Shaw et al. (2018), we incorporated the proposed model into the encoder, which is a stack of 6 SAN layers.",
"Prior studies revealed that modeling locality in lower layers can achieve better performance (Shen et al., 2018b; Yu et al., 2018; Yang et al., 2018), we applied our approach to the lowest three layers of the encoder.",
"About configurations of NMT models, we used the Base and Big settings same as Vaswani et al. (2017), and all models were trained on 8 NVIDIA P40 GPUs with a batch of 4096 tokens.",
"We first investigated the effects of window size (1D-CS AN s) and area size (2D-CS AN s) on En De validation set, as plotted in Figure",
"2. For 1D-CS AN s, the local size with 11 is superior to other settings.",
"This is consistent with Luong et al. (2015) who found that 10 is the best window size in their local attention experiments.",
"Then, we fixed the number of neighboring tokens being 11 and varied the number of heads.",
"As seen, by considering the features across heads (i.e. > 1 ), 2D-CS AN s further improve the translation quality.",
"However, when the number of heads in attention goes up, the translation quality inversely drops.",
"One possible reason is that the model still has the flexibility of learning a different distribution for each head with few interactions, while a large amount of interactions assumes more heads make similar contributions (Wu and He, 2018).",
"One intuition of our approach is to capture useful phrasal patterns via modeling locality.",
"To evaluate the accuracy of phrase translations, we calculate the improvement of the proposed approaches over multiple granularities of n-grams, as shown in Figure",
"3. Both the two model variations consistently outperform the baseline on larger granularities, indicating that modeling locality can raise the ability of self-attention model on capturing the",
"phrasal information.",
"Furthermore, the dependencies among heads can be complementary to the localness modeling, which reveals the necessity of the interaction of features in different subspaces.",
"We re-implemented and compared several exiting works (Section 4) upon the same framework.",
"Table 1 lists the results on the En De translation task.",
"As seen, all the models improve translation quality, reconfirming the necessity of modeling locality and distance information.",
"Besides, our models outperform all the existing works, indicating the superiority of the proposed approaches.",
"In particular, CSAN s achieve better performance than Model Parameter Speed BLEU (cid:52) TRANSFORMER-BASE (Vaswani et al., 2017) 88.0M 1.28 27.31 -+ BIDIRECT (Shen et al., 2018a) +0.0M -0.00 27.58 +0.27 + RELPOS (Shaw et al., 2018) +0.1M -0.11 27.63 +0.32 + NEIGHBOR (Sperber et al., 2018) +0.4M -0.06 27.60 +0.29 + LOCALHARD (Luong et al., 2015) +0.4M -0.06 27.73 +0.42 + LOCALSOFT (Yang et al., 2018) +0.8M -0.09 27.81 +0.50 + BLOCK (Shen et al., 2018b) +6.0M -0.33 27.59 +0.28 + CNN s (Yu et al., 2018) +42.6M -0.54 27.70 +0.39 + 1D-CS AN s +0.0M -0.00 27.86 +0.55 + 2D-CS AN s +0.0M -0.06 28.18 +0.87 Table 1: Comparing with the existing approaches on WMT14 En De translation task.",
"CNN s, revealing that extracting local features with dynamic weights is superior to assigning fixed parameters.",
"Moreover, while most of the existing approaches (except for Shen et al. (2018a)) introduce new parameters, our methods are parameter-free and thus only marginally affect training efficiency.",
"To validate the universality of our approach on MT tasks, we evaluated the proposed approach on different language pairs and model settings.",
"Table 2 lists the results on En De, Zh En and Ja En translation tasks.",
"As seen, our model consistently improves translation quality across language pairs, which demonstrates the effectiveness and universality of the proposed approach.",
"It is encouraging to see that CSAN s with base setting yields comparable performance with TRANSFORMER-BIG .",
"In this paper, we propose a parameter-free convolutional self-attention model to enhance the feature extraction of neighboring elements across",
"multiple heads.",
"Empirical results of machine translation task on a variety of language pairs demonstrate the effectiveness and universality of the proposed methods.",
"The extensive analyses suggest that: 1) modeling locality is beneficial to SANs; 2) interacting features across multiple heads at attention time can further improve the performance; and 3) to some extent, the dynamic weights are superior to their fixed counterpart (i.e. CSAN s vs. CNN s) on local feature extraction.",
"Moreover, it is interesting to validate the proposed model in other sequence modeling tasks.",
"The work was partly supported by the National Natural Science Foundation of China (Grant No. 61672555), the Joint Project of Macao Science and Technology Development Fund and National Natural Science Foundation of China (Grant No. 045/2017/AFJ) and the Multiyear Research Grant from the University of Macau (Grant No. MYRG2017-00087-FST).",
"We thank the anonymous reviewers for their insightful comments."
] | [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"method",
"objective",
"other",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Context gates are effective to control the contributions from the source and target contexts in the recurrent neural network (RNN) based neural machine translation (NMT).",
"However, it is challenging to extend them into the advanced Transformer architecture, which is more complicated than RNN.",
"This paper first provides a method to identify source and target contexts and then introduce a gate mechanism to control the source and target contributions in Transformer.",
"In addition, to further reduce the bias problem in the gate mechanism, this paper proposes a regularization method to guide the learning of the gates with supervision automatically generated using pointwise mutual information.",
"Extensive experiments on 4 translation datasets demonstrate that the proposed model obtains an averaged gain of 1.0 BLEU score over a strong Transformer baseline.",
"An essence to modeling translation is how to learn an effective context from a sentence pair.",
"Statistical machine translation (SMT) models the source context from the source-side of a translation model and models the target context from a target-side language model (Koehn et al., 2003; Koehn, 2009; Chiang, 2005).",
"These two models are trained independently.",
"On the contrary, neural machine translation (NMT) advocates a unified manner to jointly learn source and target context using an encoder-decoder framework with an attention mechanism, leading to substantial gains over SMT in translation quality (Sutskever et al., 2014; Bahdanau et al., 2014; Gehring et al., 2017; Vaswani et al., 2017).",
"Prior work on attention mechanism (Luong et al., 2015; Liu et al., 2016; Mi et al., 2016; Chen et al., 2018; Li et al., 2018; Elbayad et al., 2018; Yang et al., 2020) have shown a better context representation is helpful to translation performance.",
"However, a standard NMT system is incapable of effectively controlling the contributions from source and target contexts (He et al., 2018) to deliver highly adequate translations as shown in Figure 1.",
"As a result, Tu et al. (2017) carefully designed context gates to dynamically control the influence from source and target contexts and observed significant improvements in the recurrent neural network (RNN) based NMT.",
"Although Transformer (Vaswani et al., 2017) delivers significant gains over RNN for translation, there are still one third translation errors related to context control problem as described in Section 3.3.",
"Obviously, it is feasible to extend the context gates in RNN based NMT into Transformer, but an obstacle to accomplishing this goal is the complicated architecture in Transformer, where the source and target words are tightly coupled.",
"Thus, it is challenging to put context gates into practice in Transformer.",
"In this paper, under the Transformer architecture, we firstly provide a way to define the source and target contexts and then obtain our model by combining both source and target contexts with context gates, which actually induces a probabilistic model indicating whether the next generated word is contributed from the source or target sentence (Li et al., 2019).",
"In our preliminary experiments, this model only achieves modest gains over Transformer because the context selection error reduction is very limited as described in Section 3.3.",
"To further address this issue, we propose a probabilistic model whose loss function is derived from external supervision as regularization for the context gates.",
"This probabilistic model is jointly trained with the context gates in NMT.",
"As it is too costly to annotate this supervision for a large-scale training corpus manually, we instead propose a simple yet effective method to automatically generate supervision using pointwise mutual information, inspired by word collocation (Bouma, 2009).",
"In this way, the resulting NMT model is capable of controlling the contributions from source and target contexts effectively.",
"We conduct extensive experiments on 4 benchmark datasets, and experimental results demonstrate that the proposed gated model obtains an averaged improvement of 1.0 BLEU point over corresponding strong Transformer baselines.",
"In addition, we design a novel analysis to show that the improvement of translation performance is indeed caused by relieving the problem of wrongly focusing on the source or target context.",
"Given a source sentence x = (cid:104) x 1 , , x | x | (cid:105) and a target sentence y = (cid:104) y 1 , , y | y | (cid:105) , our proposed model is defined by the following conditional probability under the Transformer architecture: 1",
"Throughout this paper, a variable in bold font such as x denotes a sequence while regular font such as x denotes an element which may be a scalar x , vector x or matrix X .",
"context in the decoder with L layers which is obtained from the representation of y <i and h L , i.e., the top layer hidden representation of x , similar to the original Transformer.",
"To finish the overall definition of our model in equation 1, we will expand the definition c Li based on context gates in the following subsections.",
"To develop context gates for our model, it is necessary to define the source and target contexts at first.",
"Unlike the case in RNN, the source sentence x and the target prefix y <i are tightly coupled in our model, and thus it is not trivial to define the source and target contexts.",
"Suppose the source and target contexts at each layer l are denoted by s li and t li .",
"We recursively define them from c l 1 <i as follows.",
"2 t li = rn ln att (cid:16) c l 1 i , c l 1 <i (cid:17) , s li = ln att (cid:0) t li , h L (cid:1) , (2) where is functional composition, att ( q , kv ) denotes multiple head attention with q as query, k as key, v as value, and rn as a residual network (He et al., 2016), ln is layer normalization (Ba et al., 2016), and all parameters are removed for simplicity.",
"In order to control the contributions from source or target side, we define c li by introducing a context gate z li to combine s li and t li as following: c li = rn ln (cid:0) ( 1 z li ) t li + z li s li (cid:1) (3) with z li = (cid:0) (cid:0) t li (cid:107) s li (cid:1)(cid:1) , (4) where ff denotes a feedforward neural network, (cid:107) denotes concatenation, ( ) denotes a sigmoid function, and denotes an element-wise multiplication.",
"z li is a vector (Tu et al. (2017) reported that a gating vector is better than a gating scalar).",
"Note that each component in z li actually induces a probabilistic model indicating whether the next generated word y i is mainly contributed from the source ( x ) or target sentence ( y <i ) , as shown in Figure 1.",
"It is worth mentioning that our proposed model is similar to the standard Transformer with boiling down to replacing a residual connection",
"2 For the base case, c 0 <i is word embedding of y <i .",
"with a high way connection (Srivastava et al., 2015; Zhang et al., 2018): if we replace ( 1 z li ) t li + z li s li in equation 3 by t li + s li , the proposed model is reduced to Transformer.",
"In our preliminary experiments, we found learning context gates from scratch cannot effectively reduce the context selection errors as described in Section 3.3.",
"To address this issue, we propose a regularization method to guide the learning of context gates by external supervision z i which is a binary number representing whether y i is contributed from either source ( z i = 1 ) or target sentence ( z i = 0 ).",
"Formally, the training objective is defined as follows: (cid:96) = log P ( y | x )+ (cid:88) l,i (cid:18) z i max( 0 . 5 z li , 0 ) + (1 z i ) max( z li 0 . 5 , 0 ) (cid:19) , (5) where z li is a context gate defined in equation 4 and is a hyperparameter to be tuned in experiments.",
"Note that we only regularize the gates during the training, but we skip the regularization during inference.",
"Because golden z i are inaccessible for each word y i in the training corpus, we ideally have to annotate it manually.",
"However, it is costly for human to label such a large scale dataset.",
"Instead, we propose an automatic method to generate its value in practice in the next subsection.",
"To decide whether y i is contributed from the source ( x ) or target sentence ( y <i ) (Li et al., 2019), a metric to measure the correlation between a pair of words ( (cid:104) y i , x j (cid:105) or (cid:104) y i , y k (cid:105) for k < i ) is first required.",
"This is closely related to a well-studied problem, i.e., word collocation (Liu et al., 2009), and we simply employ the pointwise mutual information (PMI) to measure the correlation between a word pair (cid:104) , (cid:105) following Bouma (2009): pmi ( , ) = log P ( , ) P ( ) P ( ) = log Z + log C ( , ) C ( ) C ( ) , (6) where C ( ) and C ( ) are word counts, C ( , ) is the co-occurrence count of words and , and Z is the normalizer, i.e., the total number of all possible ( , ) pairs.",
"To obtain the context gates, we define two types of PMI according to different C ( , ) including two scenarios as follows.",
"PMI in the Monolingual Scenario In the translation scenario, only the words in the preceding context of a target word should be considered.",
"So for any target sentence y in the training set, C ( y i , y k ) is added by one if both y i y and y k y <i .",
"Given the two kinds of PMI for a bilingual sentence (cid:104) x , y (cid:105) , each z i for each y i is defined as follows, z i = 1 max j pmi( y i , x j ) > max k<i pmi( y i , y k ) , (7) where 1 b is a binary function valued by 1 if b is true and 0 otherwise.",
"In equation 7, we employ max strategy to measure the correlation between y i and a sentence ( x or y <i ).",
"Indeed, it is similar to use the average strategy, but we did not find its gains over max in our experiments.",
"The proposed methods are evaluated on NIST ZH EN 3 , WMT14 EN DE 4 , IWSLT14 DE EN 5 and IWSLT17 FR EN 6 tasks.",
"To make our NMT models capable of open-vocabulary translation, all datasets are preprocessed with Byte Pair Encoding (Sennrich et al., 2015).",
"All proposed methods are implemented on top of Transformer (Vaswani et al., 2017) which is the state-of-the-art NMT system.",
"Case-insensitive BLEU score (Pa-pineni et al., 2002) is used to evaluate translation quality of ZH EN, DE EN and FR EN.",
"For the fair comparison with the related work, EN DE is evaluated with case-sensitive BLEU score.",
"Setup details are described in Appendix A. 3.1 Tuning Regularization Coefficient In the beginning of our experiments, we tune the regularization coefficient on the DE EN task.",
"Table 2 shows the robustness of , because the translation performance only fluctuates slightly over various .",
"In particular, the best performance 3 LDC2000T50, LDC2002L27, LDC2002T01, LDC2002E18, LDC2003E07, LDC2003E14, LDC2003T17, LDC2004T07 4 WMT14: http://www.statmt.org/wmt14/ 5 IWSLT14: http://workshop2014.iwslt.org/ 6 IWSLT17: http://workshop2017.iwslt.org/ Models params 10 6 ZH EN EN DE DE EN FR EN MT05 MT06 MT08 RNN based NMT 84 30.6 31.1 23.2 Tu et al. (2017) 88 34.1 34.8 26.2 Vaswani et al. (2017) 65 27.3 Ma et al. (2018) 36.8 35.9 27.6 Zhao et al. (2018) 43.9 44.0 33.3 Cheng et al. (2018) 44.0 44.4 34.9 Transformer 74 46.9 47.4 38.3 27.4 32.2 36.8 This Work Context Gates 92 47.1 47.6 39.1 27.9 32.5 37.7 Regularized Context Gates 92 47.7 48.3 39.7 28.1 33.0 38.3 Table 1: Translation performances (BLEU).",
"Results are measured on DE EN task.",
"Table 2: Translation performance over different regularization coefficient .",
"is achieved when = 1 , which is the default setting throughout this paper.",
"Table 1 shows the translation quality of our methods in BLEU.",
"Our observations are as follows:",
"1) The performance of our implementation of the Transformer is slightly higher than Vaswani et al. (2017), which indicates we are in a fair comparison.",
"2) The proposed Context Gates achieves modest improvement over the baseline.",
"As we mentioned in Section 2.1, the structure of RNN based NMT is quite different from the Transformer.",
"Therefore, naively introducing the gate mechanism to the Transformer without adaptation does not obtain similar gains as it does in RNN based NMT.",
"3) The proposed Regularized Context Gates improves nearly 1.0 BLEU score over the baseline and outperforms all existing related work.",
"This indicates that the regularization can make context gates more effective in relieving the context control problem as discussed following.",
"To explain the success of Regularized Context Gates, we analyze the error rates of translation and context selection.",
"Given a sentence pair x and y , the forced decoding translation error is defined as P ( y i | y <i , x ) < P ( y i | y <i , x ) , where y i (cid:44) arg max v P ( v | y <i , x ) and v denotes any token in the vocabulary.",
"The context selection error is defined as z i ( y i ) (cid:54) = z i ( y i ) , where z i is defined in equation 7.",
"Note that a context selection error must be a translation error but the opposite is not true.",
"The example shown in Figure 1 also demonstrates a context selection error indicating the translation error is related with the bad context selection.",
"As shown in Table 3, the Regularized Context Gates significantly reduce the translation error by avoiding the context selection error.",
"The Context Gates are also able to avoid few context selection error but cannot make a notable improvement in translation performance.",
"It is worth to note that there is approximately one third translation error is related to context selection error.",
"The Regularized Context Gates indeed alleviate this severe problem by effectively rebalancing of source and target context for translation.",
"Table 4 summarizes the mean and variance of each context gate (every dimension of the context gate vectors) over the MT08 test set.",
"It shows that learning context gates freely from scratch tends to pay more attention to target context (0.38 < 0.5), which Models Mean Variance Context Gates 0.38 0.10 Regularized Context Gates 0.51 0.13 * Results are measured on MT08 of ZH EN task.",
"Specifically, this bias will make the translation unfaithful for some source tokens.",
"As shown in Table 4, the Regularized Context Gates demonstrates more balanced behavior (0.51 0.5) over the source and target context with similar variance.",
"To investigate the sensitivity of choosing different layers for regularization, we only regularize the context gate in every single layer.",
"Table 5 shows that there is no significant performance difference, but all single layer regularized context gate models are slightly inferior to the model, which regularizes all the gates.",
"Moreover, since nearly no computation overhead is introduced and for design simplicity, we adopt regularizing all the layers.",
"In Tu et al. (2017), context gates alleviate the problem of long sentence translation of attentional RNN based system (Bahdanau et al., 2014).",
"We follow Tu et al. (2017) and compare the translation performances according to different lengths of the sentences.",
"As shown in Figure 2, we find Context Gates does not improve the translation of long sentences but translate short sentences better.",
"Fortunately, the Regularized Context Gates indeed significantly improves the translation for both short sentences and long sentences.",
"This paper transplants context gates from the RNN based NMT to the Transformer to control the source and target context for translation.",
"We find [0,10) [10,20) [20,30) [30,40) [40,50) [50,60) [60,130) Length of Source Sentence 34 36 38 40 42 BLEU s c o r e Transformer Context Gates Regularized Context Gates Figure 2: Translation performance on MT08 test set with respect to different lengths of source sentence.",
"that context gates only modestly improve the translation quality of the Transformer, because learning context gates freely from scratch is more challenging for the Transformer with the complicated structure than for RNN.",
"Based on this observation, we propose a regularization method to guide the learning of context gates with an effective way to generate supervision from training data.",
"Experimental results show the regularized context gates can significantly improve translation performances over different translation tasks even though the context control problem is only slightly relieved.",
"In the future, we believe more work on alleviating context control problem has the potential to improve translation performance as quantified in Table 3."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"result",
"abstain",
"objective",
"abstain",
"result"
] |
[
"Implicit arguments are not syntactically connected to their predicates, and are therefore hard to extract.",
"Previous work has used models with large numbers of features, evaluated on very small datasets.",
"We propose to train models for implicit argument prediction on a simple cloze task, for which data can be generated automatically at scale.",
"This allows us to use a neural model, which draws on narrative coherence and entity salience for predictions.",
"We show that our model has superior performance on both synthetic and natural data.",
"1 1 Introduction When parts of an event description in a text are missing, this event cannot be easily extracted, and it cannot easily be found as the answer to a question.",
"This is the case with implicit arguments , as in this example from the reading comprehension dataset of Hermann et al. (2015): Text: More than 2,600 people have been infected by Ebola in Liberia, Guinea, Sierra Leone and Nigeria since the outbreak began in December, according to the World Health Organization.",
"Nearly 1,500 have died .",
"Question: The X outbreak has killed nearly 1,500.",
"In this example, it is Ebola that broke out, and Ebola was also the cause of nearly 1,500 people dying, but the text does not state this explicitly.",
"Ebola is an implicit argument of both outbreak and die , which is crucial to answering the question.",
"We are particularly interested in implicit arguments that, like Ebola in this case, do appear in the text, but not as syntactic arguments of their 1 Our code is available at https://github.com/ pxch/event_imp_arg .",
"predicates.",
"Event knowledge is key to determining implicit arguments.",
"In our example, diseases are maybe the single most typical things to break out , and diseases also typically kill people.",
"The task of identifying implicit arguments was first addressed by Gerber and Chai (2010) and Ruppenhofer et al. (2010).",
"However, the datasets for the task were very small, and to our knowledge there has been very little further development on the task since then.",
"In this paper, we address the data issue by training models for implicit argument prediction on a simple cloze task, similar to the narrative cloze task (Chambers and Jurafsky, 2008), for which data can be generated automatically at scale.",
"This allows us to train a neural network to perform the task, building on two insights.",
"First, event knowledge is crucial for implicit argument detection.",
"Therefore we build on models for narrative event prediction (Granroth-Wilding and Clark, 2016; Pichotta and Mooney, 2016a), using them to judge how coherent the narrative would be when we fill in a particular entity as the missing (implicit) argument.",
"Second, the omitted arguments tend to be salient, as Ebola is in the text from which the above example is taken.",
"So in addition to narrative coherence, our model takes into account entity salience (Dunietz and Gillick, 2014).",
"In an evaluation on a large automatically generated dataset, our model clearly outperforms even strong baselines, and we find salience features to be important to the success of the model.",
"We also evaluate against a variant of the Gerber and Chai (2012) model that does not rely on gold features, finding that our simple neural model outperforms their much more complex model.",
"Our paper thus makes two major contributions.",
"1) We propose an argument cloze task to generate synthetic training data at scale for implicit argument prediction.",
"2) We show that neural event 831 models for narrative schema prediction can be used on implicit argument prediction, and that a straightforward combination of event knowledge and entity salience can do well on the task.",
"While dependency parsing and semantic role labeling only deal with arguments that are available in the syntactic context of the predicate, implicit argument labeling seeks to find argument that are not syntactically connected to their predicates, like",
"Ebola in our introductory example.",
"The most relevant work on implicit argument prediction came from Gerber and Chai (2010), who built an implicit arguments dataset by selecting 10 nominal predicates from NomBank (Mey-ers et al., 2004) and manually annotating implicit arguments for all occurrences of these predicates.",
"In an analysis of their data they found implicit arguments to be very frequent, as their annotation added 65% more arguments to NomBank.",
"Gerber and Chai (2012) also trained a linear classifier for the task relying on many hand-crafted features, including gold features from FrameNet (Baker et al., 1998), PropBank (Palmer et al., 2005) and NomBank.",
"This classifier has, to the best of our knowledge, not been outperformed by follow-up work (Laparra and Rigau, 2013; Schenk and Chiarcos, 2016; Do et al., 2017).",
"We evaluate on the Gerber and Chai dataset below.",
"Ruppenhofer et al. (2010) also introduced an implicit argument dataset, but we do not evaluate on it as it is even smaller and much more complex than Gerber and Chai (2010).",
"More recently, Modi et al. (2017) introduced the referent cloze task, in which they predicted a manually removed discourse referent from a human annotated narrative text.",
"This task is closely related to our argument cloze task.",
"Since we intend to exploit event knowledge in predicting implicit arguments, we here refer to recent work on statistical script learning, started by Chambers and Jurafsky (2008, 2009).",
"They introduced the idea of using statistical information on coreference chains to induce prototypical sequences of narrative events and participants, which is related to the classical notion of a script (Schank and Abelson, 1977).",
"They also proposed the narrative cloze evaluation, in which one event is removed at random from a sequence of narrative events, then the missing event is predicted given all context events.",
"We use a similar trick to define a cloze task for implicit argument prediction, discussed in Section 3. Many follow-up papers on script learning have used neural networks.",
"Rudinger et al. (2015) showed that sequences of events can be efficiently modeled by a log-bilinear language model.",
"Pichotta and Mooney (2016a,b) used an LSTM to model a sequence of events.",
"Granroth-Wilding and Clark (2016) built a network that produces an event representation by composing its components.",
"To do the cloze task, they select the most probable event based on pairwise event coherence scores.",
"For our task we want to do something similar: We want to predict how coherent a narrative would be with a particular entity candidate filling the implicit argument position.",
"So we take the model of Granroth-Wilding and Clark (2016) as our starting point.",
"The Hermann et al. (2015) reading comprehension task, like our cloze task, requires systems to guess a removed entity.",
"However in their case the entity is removed in a summary, not in the main text.",
"In their case, the task typically amounts to finding a main text passage that paraphrases the sentence with the removed entity; this is not the case in our cloze task.",
"We present the argument cloze task, which allows us to automatically generate large scale data for",
"training (Section 6.1) and evaluation (Section 5.1).",
"In this task, we randomly remove an entity from an argument position of one event in the text.",
"The entity in question needs to appear in at least one other place in the text.",
"The task is then for the model to pick, from all entities appearing in the text, the one that has been removed.",
"We first define what we mean by an event, then what we mean by an entity.",
"Like Pichotta and Mooney (2016a); Granroth-Wilding and Clark (2016), we define an event e as consisting of a verbal predicate v , a subject s , a direct object o , and a prepositional object p (along with the preposition).",
"Here we only allow one prepositional argument in the structure, to avoid variable length input in the event composition model.",
"2 By an entity , we mean a coreference chain with a length of at least two that is, the entity needs to appear at least twice in the text.",
"Manville Corp. said it will build a $ 24 million power plant to provide electricity to its Igaras pulp and paper mill in Brazil .",
"The company said the plant will ensure that it has adequate energy for the mill and will reduce the mill's energy costs .",
"(a) A piece of raw text from OntoNotes corpus.",
"x 0 = The company x 1 = mill x 2 = power plant e 0 : ( build-pred, x 0 -subj, x 2 -dobj, ) e 1 : ( provide-pred, , electricity-dobj, x 1 -prep_to ) e 2 : ( ensure-pred, x 2 -subj, , ) e 3 : ( has-pred, x 0 -subj, energy-dobj, x 1 -prep_for ) e 4 : ( reduce-pred, x 2 -subj, cost-dobj, )",
"(b) Extracted events ( e 0 ~ e 4 ) and entities ( x 0 ~ x 2 ), using gold annotations from OntoNotes.",
"(c) Example of an argument cloze task for prep to of e 1 .",
"Figure 1 : Example of automatically extracted events and entities and an argument cloze task.",
"1a), we automatically extract a sequence of events from a dependency parse, and a list of entities from coreference chains.",
"In Figure 1b, e 0 ~ e 4 are events, x 0 ~ x 2 are entities.",
"The arguments electricity-dobj and energy-dobj are not in coreference chains and are thus not candidates for removal.",
"An example of the argument cloze task is shown in Figure 1c.",
"Here the prep to argument of e 1 has been removed.",
"Coreference resolution is very noisy.",
"Therefore we use gold coreference annotation for creating evaluation data, but automatically generated coreference chains for creating training data.",
"We model implicit argument prediction as selecting the entity that, when filled in as the implicit argument, makes the overall most coherent narrative.",
"Suppose we are trying to predict the direct object argument of some target event e t .",
"Then we complete e t by putting an entity candidate into the direct object argument position, and check the coherence of the resulting event with the rest of the narrative.",
"Say we have a sequence of events e 1 , e 2 , . . . , e n in a narrative, and a list of entity candidates x 1 , x 2 , . . . , x m .",
"Then for any candidate x j , we first complete the target event to be e t ( j ) = ( v t , s t , x j , p t ) , j = 1 , . . . , m (1) where v t , s t , and p t are the predicate, subject, and prepositional object of e t respectively, and x j is filled as the direct object.",
"(Event completion for omitted subjects and prepositional objects is anal-ogous.)",
"Then we compute the narrative coherence score S j of the candidate x j by 3 S j = n max c =1 , c 6 = t coh (cid:16) ~ e t ( j ) , ~e c (cid:17) , j = 1 , . . . , m (2) where ~ e t ( j ) and ~e c are representations for the completed target event e t ( j ) and one context event e c , and coh is a function computing a coherence score between two events, both depending on the model being used.",
"The candidate x j with the highest score S j is then selected as our prediction.",
"To model coherence ( coh ) between a context event and a target event, we build an event composition model consisting of three parts, as shown in Figure 2: event components are representated through event-based word embeddings , which encode event knowledge in word representations; the argument composition network combines the components to produce event representations; and the pair composition network compute a coherence score for two event representations.",
"This basic architecture is as in the model of Granroth-Wilding and Clark (2016).",
"However our model is designed for a different task, argument cloze rather than narrative cloze, and for our task entity-specific information is more important.",
"We therefore create the training data in a different way, as described in Section 4.2.1.",
"We now discuss the three parts of the model in more detail.",
"3 We have also tried using the sum instead of the maximum, but it did not perform as well across different models and datasets.",
"Figure 2 : Diagram for event composition model.",
"Input : a context event and a target event.",
"Event-Based Word Embeddings : embeddings for components of both events that encodes event knowledge.",
"Argument Composition Network : produces an event representation from its components.",
"Pair Composition Network : computes a coherence score coh from two event representations.",
"Extra Features : argument index and entity salience features as additional input to the pair composition network.",
"arguments as input to compute event representations.",
"To better encode event knowledge in word level, we train an SGNS (skip-gram with negative sampling) word2vec model (Mikolov et al., 2013) with event-specific information.",
"For each extracted event sequence, we create a sentence with the predicates and arguments of all events in the sequence.",
"An example of such a training sentence is given in Figure 3. build-pred company-subj plant-dobj provide-pred electricity-dobj mill-prep_to ensure-pred plant -subj has-pred company-subj energy-dobj mill-prep_for reduce-pred plant-subj cost-dobj Figure 3 : Event-based word2vec training sentence, constructed from events and entities in Figure 1b.",
"Argument Composition Network The argument composition network (dark blue area in Figure 2) is a two-layer feedforward neural network that composes an event representation from the embeddings of its components.",
"Non-existent argument positions are filled with zeros.",
"Pair Composition Network The pair composition network (light blue area in Figure 2) computes a coherence score coh between 0 and 1, given the vector representations of a context event and a target event.",
"The coherence score should be high when the target event contains the correct argument, and low otherwise.",
"So we construct the training objective function to distinguish the correct argument from wrong ones, as described in Equation 3. 4.2.1 Training for Argument Prediction To train the model to pick the correct candidate, we automatically construct training samples as event triples consisting of a context event e c , a positive event e p , and a negative event e n .",
"The context event and positive event are randomly sampled from an observed sequence of events, while the negative event is generated by replacing one argument of positive event by a random entity in the narrative, as shown in Figure 4. x 0 = The company x 1 = mill x 2 = power plant Context: ( build-pred, x 0 -subj, x 2 -dobj, ) Positive: ( reduce-pred, x 2 -subj , cost-dobj, ) Negative: ( reduce-pred, x 1 -subj , cost-dobj, ) Figure 4 : Example of an event triple constructed from events and entities in Figure 1b.",
"We want the coherence score between e c and e p to be close to 1 , while the score for e c and e n should be close to 0 .",
"Therefore, we train the model to minimize cross-entropy as follows: 1 m m X i =1 log( coh ( e ci , e pi )) log(1 coh ( e ci , e ni )) (3) 834 where e ci , e pi , and e ni are the context, positive, and negative events of the i th training sample respectively.",
"Implicit arguments tend to be salient entities in the document.",
"So we extend our model by entity salience features, building on recent work by Dunietz and Gillick (2014), who introduced a simple model with several surface level features for entity salience detection.",
"Among the features they used, we discard those that require external resources, and only use the remaining three features, as illustrated in Table 1. Dunietz and Gillick found mentions to be the most powerful indicator for entity salience among all features.",
"We expect similar results in our experiments, however we include all three features in our event composition model for now, and conduct an ablation test afterwards.",
"Table 1 : Entity salience features from Dunietz and Gillick (2014).",
"The entity salience features are directly passed into the pair composition network as additional input.",
"We also add an extra feature for argument position index (encoding whether the missing argument is a subject, direct object, or prepositional object), as shown in the red area in Figure 2. 5 Evaluation Datasets 5.1 Argument Cloze Evaluation Previous implicit argument datasets were very small.",
"To overcome that limitation, we automatically create a large and comprehensive evaluation dataset, following the argument cloze task setting in Section 3. Since the events and entities are extracted from dependency labels and coreference chains, we do not want to introduce systematic error into the evaluation from imperfect parsing and coreference algorithms.",
"Therefore, we create the evaluation set from OntoNotes (Hovy et al., 2006), which contains human-labeled dependency and coreference annotation for a large corpus.",
"So the extracted events and entities in the evaluation set are gold.",
"Note that this is only for evaluation; in training we do not rely on any gold annotations (Section 6.1).",
"There are four English sub-corpora in OntoNotes Release 5.0 4 that are annotated with dependency labels and coreference chains.",
"Three of them, which are mainly from broadcast news, share similar statistics in document length, so we combine them into a single dataset and name it ON-SHORT as it consists mostly of short documents.",
"The fourth subcorpus is from the Wall Street Journal and has significantly longer documents.",
"We call this subcorpus ON-LONG and evaluate on it separately.",
"Some statistics are shown in Table 2. ON-SHORTON-LONG # doc 1027 597 # test cases 13018 18208 Avg # entities 12.06 36.95 Table 2 : Statistics on argument cloze datasets.",
"The implicit argument dataset from Gerber and Chai (2010) (referred as G&C henceforth) consists of 966 human-annotated implicit argument instances on 10 nominal predicates.",
"To evaluate our model on G&C, we convert the annotations to the input format of our model as follows: We map nominal predicates to their verbal form, and semantic role labels to syntactic argument types based on the NomBank frame defini-tions.",
"One of the examples (after mapping semantic role labels) is as follows: [Participants] subj will be able to transfer [money] dobj to [other investment funds] prep to .",
"The [investment] pred choices are limited to [a stock fund and a money-market fund] prep to .",
"For the nominal predicate investment , there are three arguments missing ( subj , dobj , prep to ).",
"The model first needs to determine that each of those argument positions in fact has an implicit filler.",
"Then, from a list of candidates (not shown here), it 4 LDC Catalog No.",
"LDC2013T19 835 needs to select Participants as the implicit subj argument, money as the implicit dobj argument, and either other investment funds or a stock fund and a money-market fund as the implicit prep to .",
"We train our neural model using synthetic data as described in Section 3. For creating the training data, we do not use gold parses or gold coreference chains.",
"We use the 20160901 dump of English Wikipedia, with 5,228,621 documents in total.",
"For each document, we extract plain text and break it into paragraphs, while discarding all structured data like lists and tables.",
"We construct a sequence of events and entities from each paragraph, by running Stanford CoreNLP (Manning et al., 2014) to obtain dependency parses and coreference chains.",
"We lemmatize all verbs and arguments.",
"We incorporate negation and particles in verbs, and normalize passive constructions.",
"We represent each argument by the corresponding entity's representative mention if it is linked to an entity, otherwise by its head lemma.",
"We keep verbs and arguments with counts over 500, together with the 50 most frequent prepositions, leading to a vocabulary of 53,345 tokens; all other words are replaced with an out-of-vocabulary token.",
"The most frequent verbs (with counts over 100,000) are down-sampled.",
"For training the event-based word embeddings, we create pseudo-sentences (Section 4.2) from all events of all sequences (approximately 87 million events) as training samples.",
"We train an SGNS word2vec model with embedding size 300, window size 10, subsampling threshold 10^-4, and 10 negative samples, using the Gensim package (Řehůřek and Sojka, 2010).",
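As a rough illustration of this training setup, the sketch below builds word2vec-style pseudo-sentences from extracted events. The role-lemma token format and helper names are assumptions based on the description above, not the authors' code, and the Gensim call is shown only in a comment.

```python
# Sketch of building pseudo-sentences from extracted events for
# word2vec training. The role-lemma token format is an assumption
# based on the paper's description; the real preprocessing differs.

def event_to_pseudo_sentence(event):
    """Turn one event into a list of role-lemma tokens."""
    tokens = [event["verb"]]  # predicate lemma
    for role in ("subj", "dobj", "pobj"):
        arg = event.get(role)
        if arg is not None:
            tokens.append(f"{arg}-{role}")  # role-lemma pair
    return tokens

events = [
    {"verb": "transfer", "subj": "participant", "dobj": "money", "pobj": "fund"},
    {"verb": "invest", "subj": "participant", "dobj": None, "pobj": "fund"},
]
corpus = [event_to_pseudo_sentence(e) for e in events]
# The corpus can then be fed to gensim with the hyperparameters above, e.g.:
#   Word2Vec(corpus, vector_size=300, window=10, sample=1e-4, negative=10, sg=1)
print(corpus[0])
```

In this sketch, role-marked tokens keep the subject and object uses of the same lemma distinct, which matches the observation elsewhere in the text that embeddings are computed for role-lemma pairs.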
"For training the event composition model, we follow the procedure described in Section 4.2.1, and extract approximately 40 million event triples as training samples.",
"We use a two-layer feedforward neural network with layer sizes 600 and 300 for the argument composition network, and another two-layer network with layer sizes 400 and 200 for the pair composition network.",
"We use cross-entropy loss with L2 regularization of 0.01.",
"(For text extraction we use the WikiExtractor tool, github.com/attardi/wikiextractor.)",
"(We only sample one negative event for each pair of context and positive events for fast training, though more training samples could easily be generated.)",
"We train the model using stochastic gradient descent (SGD) with a learning rate of 0.01 and a batch size of 100 for 20 epochs.",
"To study how the size of the training set affects performance, we downsample the 40 million training samples to another set of 8 million training samples.",
"We refer to the resulting models as EVENTCOMP-8M and EVENTCOMP-40M.",
"RANDOM: randomly select one entity from the candidate list. MOSTFREQ: select the entity with the largest number of mentions.",
"EVENTWORD2VEC: use the event-based word embeddings described in Section 4.2 for predicates and arguments.",
"The representation of an event e is the sum of the embeddings of its components, i.e., e = v + s + o + p, where v, s, o, p are the embeddings of the verb, subject, object, and prepositional object, respectively.",
"The coherence score of two events in this baseline model is their cosine similarity.",
"Like in our main model, the coherence score of the candidate is then the maximum pairwise coherence score, as described in Section 4.1.",
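A minimal sketch of this baseline scoring, with made-up toy embeddings; the function names are illustrative, not from the paper's code:

```python
import numpy as np

# Sketch of the EVENTWORD2VEC baseline: an event vector is the sum of
# its component embeddings, coherence between two events is their
# cosine similarity, and a candidate's score is its maximum pairwise
# coherence with any context event. The toy embeddings are made up.

def event_vector(emb, verb, subj=None, obj=None, pobj=None):
    vec = emb[verb].copy()
    for arg in (subj, obj, pobj):
        if arg is not None:
            vec += emb[arg]
    return vec

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def candidate_score(candidate_event, context_events):
    # maximum pairwise coherence with the context, as in Section 4.1
    return max(cosine(candidate_event, c) for c in context_events)

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ["eat", "cook", "dog", "bone", "meal"]}
context = [event_vector(emb, "cook", subj="dog", obj="meal")]
score = candidate_score(event_vector(emb, "eat", subj="dog", obj="bone"), context)
print(round(score, 3))
```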
"The evaluation results on the ON-SHORT dataset are shown in Table 3. The EVENTWORD2VEC baseline is much stronger than the other two, achieving an accuracy of 38.40%.",
"In fact, EVENTCOMP-8M by itself does not do better than EVENTWORD2VEC, but adding entity salience greatly boosts performance.",
"Using more training data (EVENTCOMP-40M) helps by a substantial margin both with and without entity salience features.",
"To see which of the entity salience features are important, we conduct an ablation test with the EVENTCOMP-8M model on ON-SHORT.",
"From the results in Table 4, we can see that in our task, as in Dunietz and Gillick (2014), the entity mentions features, i.e., the numbers of named, nominal, pronominal, and total mentions of the entity, are most helpful.",
"In fact, the other two features even decrease performance slightly.",
"Figure 5: Performance of EVENTCOMP (with and without entity salience) and two baseline models by (a) argument type, (b) part-of-speech tag of the head word of the entity, and (c) entity frequency.",
"Table 3: Evaluation on ON-SHORT.",
"Table 4: Ablation test on entity salience features (using EVENTCOMP-8M on ON-SHORT).",
"We take a closer look at several of the models in Figure 5.",
"Figure 5a breaks down the results by the argument type of the removed argument.",
"On subjects, the EVENTWORD2VEC baseline matches the performance of EVENTCOMP, but not on direct objects and prepositional objects.",
"Subjects are semantically much less diverse than the other argument types, as they are very often animate.",
"A similar pattern is apparent in Figure 5b, which has results by the part-of-speech tag of the head word of the removed entity.",
"Note that an entity is a coreference chain, not a single mention; so when the head word is a pronoun, this is an entity which has only pronoun mentions.",
"A pronoun entity provides little semantic content beyond, again, animacy.",
"And again, EVENTWORD2VEC performs well on pronoun entities, but less so on entities described by a noun.",
"It seems that EVENTWORD2VEC can pick up on a coarse-grained pattern such as animate/inanimate, but not on the more fine-grained distinctions needed to select the right noun, or to select a fitting direct object or prepositional object.",
"This matches the fact that EVENTWORD2VEC gets a less clear signal on the task, in two respects: It gets much less information than EVENTCOMP on the distinction between argument positions (as shown in Figure 3, the words for which embeddings are computed are role-lemma pairs), and it only looks at overall event similarity while EVENTCOMP is trained to detect narrative coherence.",
"Entity salience contributes greatly across all argument types and parts of speech, but more strongly on subjects and pronouns.",
"This is again because subjects, and pronouns, are semantically less distinct, so they can only be distinguished by relative salience.",
"Figure 5c analyzes results by the frequency of the removed entity, that is, by its number of mentions.",
"The MOSTFREQ baseline, unsurprisingly, only does well when the removed entity is a highly frequent one.",
"The EVENTCOMP model is much better than MOSTFREQ at picking out the right entity when it is a rare one, as it can look at the semantic content of the entity as well as its frequency.",
"Entity salience boosts the performance of EVENTCOMP in particular for frequent entities.",
"The ON-LONG dataset, as discussed in Section 5.1, consists of OntoNotes data with much longer documents than found in ON-SHORT.",
"Evaluation results on ON-LONG are shown in Table 5.",
"Although the overall numbers are lower than those for ON-SHORT, we are selecting from 36.95 candidates on average, more than 3 times as many as for ON-SHORT.",
"Considering that the accuracy of randomly selecting an entity is as low as 2.71%, the performance of our best performing model, with an accuracy of 27.87%, is quite good.",
"Table 5: Evaluation on ON-LONG.",
"The G&C data differs from the Argument Cloze data in two respects.",
"First, not every argument position that seems to be open needs to be filled: The model must additionally make a fill / no-fill decision .",
"Whether a particular argument position is typically filled is highly predicate-specific.",
"As the small G&C dataset does not provide enough data to train our neural model on this task, we instead train a simple logistic classifier, the fill / no-fill classifier , with a small subset of shallow lexical features used in Gerber and Chai (2012), to make the decision.",
"These features describe the syntactic context of the predicate.",
"We use only 14 features; the original Gerber and Chai model had more than 80 features, and our re-implementation, described below, has around 60.",
"The second difference is that in G&C, an event may have multiple open argument positions.",
"In that case, the task is not just to select a candidate entity, but also to determine which of the open argument positions it should fill.",
"So the model must do multi implicit argument prediction .",
"We can flexibly adapt our method for training data generation to this case.",
"In particular, we create extra negative training events, in which an argument of the positive event has been moved to another argument position in the same event, as shown in Figure 6.",
"We can then simply train our EVENTCOMP model on this extended training data.",
"We refer to the extra training process as multi-arg training .",
"Figure 6: Event triples for training multi implicit argument prediction.",
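The argument-moving negatives described above can be sketched as follows; the event dictionary format, the function name, and the rule of moving an argument only into an empty position are assumptions for illustration, not the authors' exact procedure:

```python
# Sketch of generating extra negative events for multi-arg training:
# each negative moves one argument of the positive event to a different
# (empty) argument position of the same event.

ROLES = ("subj", "dobj", "pobj")

def argument_moving_negatives(event):
    negatives = []
    for src in ROLES:
        arg = event.get(src)
        if arg is None:
            continue
        for tgt in ROLES:
            if tgt == src or event.get(tgt) is not None:
                continue  # only move into an empty slot (an assumption)
            neg = dict(event)
            neg[src], neg[tgt] = None, arg
            negatives.append(neg)
    return negatives

pos = {"verb": "transfer", "subj": "participant", "dobj": "money", "pobj": None}
negs = argument_moving_negatives(pos)
# each negative keeps the same words but assigns one of them to the
# wrong argument position, so the model must learn position, not just content
```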
"We compare our models to that of Gerber and Chai (2012).",
"However, their original logistic regression model used many features based on gold annotation from FrameNet, PropBank and NomBank.",
"To create a more realistic evaluation setup, we re-implement a variant of their original model by removing gold features, and name it GCAUTO .",
"Results from GCAUTO are directly comparable to our models, as both are trained on automatically generated features.",
"Table 6: Evaluation on the G&C dataset (P / R / F1): Gerber and Chai (2012) 57.9 / 44.5 / 50.3; GCAUTO 49.9 / 40.1 / 44.5; EVENTCOMP-8M 8.9 / 27.9 / 13.5, + fill/no-fill classifier 22.0 / 22.3 / 22.1, + multi-arg training 43.5 / 44.1 / 43.8, + entity salience 45.7 / 46.4 / 46.1; EVENTCOMP-40M 9.4 / 30.3 / 14.3, + fill/no-fill classifier 23.7 / 24.0 / 23.9, + multi-arg training 46.7 / 47.3 / 47.0, + entity salience 49.3 / 49.9 / 49.6.",
"We present the evaluation results in Table 6.",
"The original EVENTCOMP models do not perform well, which is as expected since the model is not designed to do the fill / no-fill decision and multi implicit argument prediction tasks as described above.",
"With the fill / no-fill classifier, precision rises by around 13 points because this classifier prevents many false positives.",
"With additional multi-arg training, the F1 score improves by another 22-23 points.",
"At this point, our model achieves a performance comparable to the much more complex G&C reimplementation GCAUTO.",
"(To be fair, we also tested adding the fill/no-fill classifier to GCAUTO; however, the classifier only increases precision at the cost of reducing recall, and GCAUTO already has higher precision than recall, so the resulting F1 score is actually worse and thus is not reported here.)",
"Adding entity salience features further boosts both precision and recall, showing that implicit arguments do tend to be filled by salient entities, as we had hypothesized.",
"Again, more training data substantially benefits the task.",
"Our best performing model, at 49.6 F1, clearly outperforms GCAUTO, and is comparable with the original Gerber and Chai (2012) model trained with gold features.",
"In this paper, we have addressed the task of implicit argument prediction.",
"To support training at scale, we have introduced a simple cloze task for which data can be generated automatically.",
"We have introduced a neural model, which frames implicit argument prediction as the task of selecting the textual entity that completes the event in a maximally narratively coherent way.",
"The model prefers salient entities, where salience is mainly defined through the number of mentions.",
"Evaluating on synthetic data from OntoNotes, we find that our model clearly outperforms even strong baselines, that salience is important throughout for performance, and that event knowledge is particularly useful for the (more verb-specific) object and prepositional object arguments.",
"Evaluating on the naturally occurring data from Gerber and Chai, we find that in a comparison without gold features, our model clearly outperforms the previous state-of-the-art model, where again salience information is important.",
"The current paper takes a first step towards predicting implicit arguments based on narrative coherence.",
"We currently use a relatively simple model for local narrative coherence; in the future we will turn to models that can test global coherence for an implicit argument candidate.",
"We also plan to investigate how the extracted implicit arguments can be integrated into a downstream task that makes use of event information, in particular we would like to experiment with reading comprehension.",
"This research was supported by NSF grant IIS 1523637.",
"(We also tried fine-tuning our model on the G&C dataset with cross-validation, but the model severely overfit, possibly due to the very small size of the dataset.)",
"We also acknowledge the Texas Advanced Computing Center for providing grid resources that contributed to these results, and we would like to thank the anonymous reviewers for their valuable feedback."
] | [
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"result",
"result",
"objective",
"objective",
"result",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Predicting the political bias and the factuality of reporting of entire news outlets are critical elements of media profiling, which is an understudied but increasingly important research direction.",
"The present level of proliferation of fake, biased, and propagandistic content online has made it impossible to fact-check every single suspicious claim, either manually or automatically.",
"Alternatively, we can profile entire news outlets and look for those that are likely to publish fake or biased content.",
"This approach makes it possible to detect likely fake news the moment it is published, by simply checking the reliability of its source.",
"From a practical perspective, political bias and factuality of reporting have a linguistic aspect but also a social context.",
"Here, we study the impact of both, namely ( i ) what was written (i.e., what was published by the target medium, and how it describes itself on Twitter) vs. ( ii ) who read it (i.e., analyzing the readers of the target medium on Facebook, Twitter, and YouTube).",
"We further study ( iii ) what was written about the target medium on Wikipedia.",
"The evaluation results show that what was written matters most, and that putting all information sources together yields huge improvements over the current state-of-the-art.",
"The rise of the Web has made it possible for anybody to create a website or a blog and to become a news medium .",
"Undoubtedly, this was a hugely positive development as it elevated freedom of expression to a whole new level, thus allowing anybody to make their voice heard online.",
"With the subsequent rise of social media, anybody could potentially reach out to a vast audience, something that until recently was only possible for major news outlets.",
"One of the consequences was a trust crisis: with traditional news media stripped of their gatekeeping role, society was left unprotected against potential manipulation.",
"The issue became a general concern in 2016, a year marked by micro-targeted online disinformation and misinformation at an unprecedented scale, primarily in connection to Brexit and the US Presidential campaign.",
"These developments gave rise to the term fake news, which can be defined as false, often sensational, information disseminated under the guise of news reporting. It was declared Word of the Year 2016 by the Macquarie Dictionary and Word of the Year 2017 by the Collins English Dictionary.",
"In an attempt to solve the trust problem, several initiatives such as Politifact, Snopes, FactCheck, and Full Fact, have been launched to fact-check suspicious claims manually.",
"However, given the scale of the proliferation of false information online, it became clear that it was unfeasible to fact-check every single suspicious claim, even when this was done automatically, not only due to computational challenges but also due to timing.",
"In order to fact-check a claim, be it manually or automatically, one often needs to verify the stance of mainstream media concerning that claim and the reaction of users on social media.",
"Accumulating this kind of evidence takes time, and any delay means more potential sharing of the malicious content on social media.",
"A study has shown that for some very viral claims, more than 50% of the sharing happens within the first ten minutes after posting the micro-post on social media (Zaman et al., 2014), and thus timing is of utmost importance.",
"Moreover, an extensive recent study has found that fake news spreads six times faster and reaches much farther than real news (Vosoughi et al., 2018).",
"A much more promising alternative is to focus on the source and to profile the medium that initially published the news article.",
"The idea is that media that have published fake or biased content in the past are more likely to do so in the future.",
"Thus, profiling media in advance makes it possible to detect likely fake news the moment it is published, by simply checking the reliability of its source.",
"From a practical perspective, political bias and factuality of reporting have not only a linguistic aspect but also a social context.",
"Here, we study the impact of both, namely ( i ) what was written (the text of the articles published by the target medium, the text and the audio signal in the videos of its YouTube channel, as well as how the medium describes itself on Twitter) vs. ( ii ) who read it (by analyzing the media readers on Facebook, Twitter, and YouTube).",
"We further study ( iii ) what was written about the target medium on Wikipedia.",
"Our contributions can be summarized as follows: We model the leading political ideology (left, center or right bias) and the factuality of reporting (high, mixed, or low) of news media by modeling the textual content of what they publish vs. who reads it in social media (Twitter, Facebook, and YouTube).",
"The latter is novel for these tasks.",
"We combine a variety of information sources about the target medium, many of which have not been explored for our tasks, e.g., YouTube video channels, political bias estimates of their Facebook audience, and information from the profiles of the media followers on Twitter.",
"We use features from different data modalities: text, metadata, and speech.",
"The latter two are novel for these tasks.",
"We achieve sizeable improvements over the current state-of-the-art for both tasks.",
"We propose various ensembles to combine the different types of features, achieving further improvements, especially for bias detection.",
"We release the data, the features, and the code necessary to replicate our results.",
"In the rest of this paper, we discuss some related work, followed by a description of our sys-tem's architecture and the information sources we use.",
"Then, we present the dataset, the experimental setup, and the evaluation results.",
"Finally, we conclude with possible directions for future work.",
"While leveraging social information and temporal structure to predict the factuality of reporting of a news medium is not new (Canini et al., 2011; Castillo et al., 2011; Ma et al., 2015, 2016; Zubiaga et al., 2016), modeling this at the medium level is a mostly unexplored problem.",
"A popular approach to predict the factuality of a medium is to check the general stance of that medium concerning already fact-checked claims (Mukherjee and Weikum, 2015; Popat et al., 2017, 2018).",
"Therefore, stance detection became an essential component in fact-checking systems (Baly et al., 2018b).",
"In political science, media profiling is essential for understanding media choice (Iyengar and Hahn, 2009), voting behavior (DellaVigna and Kaplan, 2007), and polarization (Graber and Dunaway, 2017).",
"Outlet-level bias has been measured as the similarity of the language used in news media to the political speeches of congressional Republicans or Democrats, a measure that has also been used to quantify media slant (Gentzkow and Shapiro, 2006).",
"Article-level bias was also measured via crowd-sourcing (Budak et al., 2016).",
"Nevertheless, public awareness of media bias is limited (Elejalde et al., 2018).",
"Political bias was traditionally used as a feature for fact verification (Horne et al., 2018b).",
"In terms of modeling, Horne et al. (2018a) focused on predicting whether an article is biased or not.",
"Political bias prediction was explored by Potthast et al. (2018) and Saleh et al. (2019), where news articles were modeled as left vs. right, or as hyperpartisan vs. mainstream.",
"Similarly, Kulkarni et al. (2018) explored the left vs. right bias at the article level, modeling both textual and URL contents of articles.",
"In our earlier research (Baly et al., 2018a), we analyzed both the political bias and the factuality of news media.",
"We extracted features from several sources of information, including articles published by each medium, what is said about it on Wikipedia, metadata from its Twitter profile, in addition to some web features (URL structure and traffic information).",
"The experiments on the Media Bias/Fact Check (MBFC) dataset showed that combining features from these different sources of information was beneficial for the final classification.",
"Here, we expand this work by extracting new features from the existing sources of information, as well as by introducing new sources, mostly related to the social media context, thus achieving sizable improvements on the same dataset.",
"In follow-up work (Baly et al., 2019), we showed that jointly predicting the political bias and the factuality is beneficial, compared to predicting each of them independently.",
"We used the same sources of information as in (Baly et al., 2018a), but the results were slightly lower.",
"While here we focus on analyzing political bias and factuality separately, future work may analyze how the newly proposed features and sources affect the joint prediction.",
"In this section, we present our system.",
"For each target medium, it extracts a variety of features to model ( i ) what was written by the medium, ( ii ) the audience of the medium on social media, and ( iii ) what was written about the medium in Wikipedia.",
"This results in a multi-modal (text, speech, and metadata) feature set, which we use to train a classifier to predict the political bias and the factuality of reporting of news media.",
"Figure 1 illustrates the system architecture.",
"We describe the features that we used to model the content generated by the news media, analyzing both the articles they publish on their website as well as relevant activity on social media.",
"Given a target news medium, we first collect a number of articles it has published.",
"Then, we extract various types of features from the text of these articles.",
"Below we describe these features in more detail.",
"Linguistic Features: These features focus on language use, and they model text structure, topic, sentiment, subjectivity, complexity, bias, and morality.",
"They have proved useful for detecting fake articles, as well as for predicting the political bias and the factuality of reporting of news media (Horne et al., 2018b; Baly et al., 2018a).",
"We extracted such features using the News Landscape (NELA) toolkit (Horne et al., 2018b), and we will refer to them as the NELA features in the rest of this paper.",
"We averaged the NELA features for the individual articles in order to obtain a NELA representation for a news medium.",
"Using arithmetic averaging is a good idea as it captures the general trend of articles in a medium, while limiting the impact of outliers.",
"For instance, if a medium is known to align with left-wing ideology, this should not change if it published a few articles that align with right-wing ideology.",
"We use this method to aggregate all features that we collected at a level of granularity that is finer than the medium-level.",
"Embedding Features: We encoded each article using BERT (Devlin et al., 2019) by feeding the first 510 WordPieces from the article and then averaging the word representations extracted from the second-to-last layer.",
"In order to obtain representations that are relevant to our tasks, we fine-tuned BERT by training a softmax layer on top of the [CLS] output vector to predict the label (bias or factuality) of news articles scraped from an external list of media, to avoid overfitting.",
"The articles' labels are assumed to be the same as those of the media in which they are published (a form of distant supervision).",
"This is common practice in tasks such as fake news detection, where it is difficult to manually annotate large-scale datasets (Nørregaard et al., 2019).",
"We averaged the BERT representations across the articles in order to aggregate them at the medium level.",
"Aggregated Probabilities: We represent each article by a C -dimensional vector that corresponds to its posterior probabilities of belonging to each class c i , i { 1 , . . . , C} of the given task, whether it is predicting the political bias or the factuality of the target news medium.",
"These probabilities are produced by training a softmax layer on top of the [CLS] token in the above-mentioned fine-tuned BERT model.",
"We averaged the probability representations across the articles in order to aggregate them at the medium level.",
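The medium-level aggregation used throughout (for the NELA features, the BERT embeddings, and the posterior probabilities) amounts to an arithmetic mean over articles. A minimal sketch with made-up numbers, not the authors' code:

```python
import numpy as np

# Minimal sketch of medium-level aggregation: article-level feature
# vectors (NELA, BERT embeddings, or posterior probabilities) are
# averaged to produce one vector per news medium, which dampens the
# effect of outlier articles.

def aggregate_medium(article_vectors):
    return np.mean(np.stack(article_vectors), axis=0)

# e.g., 3 articles with C = 3 posterior probabilities each (made up)
probs = [np.array([0.7, 0.2, 0.1]),
         np.array([0.5, 0.3, 0.2]),
         np.array([0.6, 0.3, 0.1])]
medium_repr = aggregate_medium(probs)
print(medium_repr)
```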
"Some news media post their video content on YouTube.",
"Thus, we use YouTube channels by modeling their textual and acoustic contents to predict the political bias and the factuality of reporting of the target news medium.",
"This source of information is relatively underexplored, but it has demonstrated potential for modeling bias (Dinkov et al., 2019) and factuality (Kopev et al., 2019).",
"Due to the lack of viable methods for automatic channel retrieval, we manually looked up the YouTube channel for each medium.",
"For each channel marked as English, we crawled 25 videos (on average) with at least 15 seconds of speech content.",
"Then, we processed the speech segments from each video into 15-second episodes by mapping the duration timeline to the subtitle timestamps.",
"(BERT has a limit of 512 input tokens, and we had to leave space for the special tokens [CLS] and [SEP].)",
"(Feeding the first tokens of the document is recommended in (Adhikari et al., 2019) when encoding full documents using Transformer-based models.)",
"(Using the second-to-last layer is common practice, since the last layer may be biased towards the pre-training objectives of BERT.)",
"We used the openSMILE toolkit (Eyben et al., 2010) to extract low-level descriptors (LLDs) from these speech episodes, including frame-based features (e.g., energy), fundamental frequency, and Mel-frequency cepstral coefficients (MFCC).",
"This set of features proved to be useful in the Interspeech Computational Paralinguistics challenge of emotion detection (Schuller et al., 2009).",
"To complement the acoustic information, we retrieved additional textual data such as descriptions, titles, tags, and captions.",
"This information is encoded using a pre-trained BERT model.",
"Furthermore, we extracted the NELA features from the titles and from the descriptions.",
"Finally, we averaged the textual and the acoustic features across the videos to aggregate them at the medium level.",
"We model how news media portray themselves to their audience by extracting features from their Twitter profiles.",
"In our previous work, this has proven useful for political bias prediction (Baly et al., 2018a).",
"Such features include information about whether Twitter verified the account, the year it was created, its geographical location, as well as some other statistics, e.g., the number of followers and of tweets posted.",
"We encoded the profile's description using SBERT for the following reasons: ( i ) unlike the articles, the number of media profiles is too small to fine-tune BERT, and ( ii ) most Twitter descriptions have sentence-like structure and length.",
"If a medium has no Twitter account, we used a vector of zeros.",
"We argue that the audience of a news medium can be indicative of the political orientation of that medium.",
"We thus propose a number of features to model this, which we describe below.",
"Previous research has used the followers' networks and the retweeting behavior in order to infer the political bias of news media (Wong et al., 2013; Atanasov et al., 2019; Darwish et al., 2020).",
"Here, we analyze the self-description (bio) of Twitter users that follow the target news medium.",
"The assumption is that ( i ) followers would likely agree with the news medium's bias, and ( ii ) they might express their own bias in their self-description.",
"We retrieved the public profiles of 5,000 followers for each target news medium with a Twitter account, and we excluded those with non-English bios since our dataset is mostly about US media.",
"Then, we encoded each follower's bio using SBERT (Reimers and Gurevych, 2019).",
"As we had plenty of followers' bios, this time fine-tuning BERT would have been feasible.",
"However, we were afraid to use distant supervision for labeling as we did with the articles since people sometimes follow media with different political ideologies.",
"Thus, we opted for SBERT, and we averaged the SBERT representations across the bios in order to obtain a medium-level representation.",
"Like many other social media giants, Facebook makes its revenues from advertisements.",
"The extensive user interaction enables Facebook to create detailed profiles of its users, including demographic attributes such as age, gender, income, and political leaning.",
"Advertisers can explore these attributes to figure out the targeting criteria for their ads, and Facebook returns an audience estimate based on these criteria.",
"For example, the estimated number of users who are female, 20-years-old, very liberal, and interested in the NY Times is 160K.",
"These estimates have been used as a proxy to measure the online population in various domains (Fatehkia et al., 2018; Araujo et al., 2017; Ribeiro et al., 2018).",
"In this study, we explore the use of political leaning estimates of users who are interested in particular news media.",
"To obtain the audience estimates for a medium, we identify its Interest ID using the Facebook Marketing API 5 .",
"Given an ID, we retrieve the estimates of the audience (in the United States) who showed interest in the corresponding medium.",
"Then, we extract the audience distribution over the political spectrum, which is categorized into five classes ranging from very conservative to very liberal .",
"Finally, we incorporate audience information from YouTube videos.",
"We retrieved the following metadata to model audience interaction: number of views, likes, dislikes, and comments for each video.",
"As before, we averaged these statistics across the videos to obtain a medium-level representation.",
"Wikipedia contents describing news media were useful for predicting the political bias and the factuality of these media (Baly et al., 2018a).",
"We automatically retrieved the Wikipedia page for each medium, and we encoded its contents using the pre-trained BERT model.",
"6 Similarly to encoding the articles, we fed the encoder with the first 510 tokens of the page's content, and used as an output representation the average of the word representations extracted from the second-to-last layer.",
"If a medium had no page in Wikipedia, we used a vector of zeros.",
"We used the Media Bias/Fact Check (MBFC) dataset, which consists of a list of news media along with their labels of both political bias and factuality of reporting.",
"Factuality is modeled on a 3-point scale: low , mixed , and high .",
"Political bias is modeled on a 7-point scale: extreme-left , left , center-left , center , center-right , right , and extreme-right .",
"Further details and examples of the dataset can be found in (Baly et al., 2018a).",
"After manual inspection, we noticed that the left-center and right-center labels are ill-defined, ambiguous transitionary categories.",
"Therefore, we decided to exclude news media with these labels.",
"Also, to reduce the impact of potentially subjective decisions made by the annotators, we merged the extreme-left and extreme-right media with the left and right categories, respectively.",
"As a result, we model political bias on a 3-point scale ( left , center , and right ), and the dataset got reduced to 864 news media.",
"Table 1 provides statistics about the dataset.",
"We were able to retrieve Wikipedia pages for 61.2% of the media, Twitter profiles for 72.5% of the media, Facebook pages for 60.8% of the media, and YouTube channel for 49% of the media.",
"We evaluated the following aspects about news media separately and in combinations: ( i ) what the target medium wrote, ( ii ) who read it, and ( iii ) what was written about that medium.",
"We used the features described in Section 3 to train SVM classifiers for predicting the political bias and the factuality of reporting of news media.",
"We performed an incremental ablation study by combining the best feature(s) from each aspect to obtain a combination that achieves even better results.",
"We used 5-fold cross-validation to train and to evaluate an SVM model using different features and feature combinations.",
"At each iteration of the cross-validation, we performed a grid search to tune the hyper-parameters of our SVM model, namely the values of the cost C and of the value for the RBF kernel.",
"In the process of search, we optimized for macro-average F 1 score, i.e., averaging over the classes, since our dataset is not balanced, which is true for both tasks.",
"Finally, we evaluated the model on the remaining unseen fold.",
"Ultimately, we report both macroF 1 score, and accuracy.",
"We compared our results to the majority class baseline and to our previous work (Baly et al., 2018a).",
"The latter used ( i ) NELA features from articles, ( ii ) embedding representations of Wikipedia pages using averaged GloVe word embeddings, ( iii ) metadata from the media's Twitter profiles, and ( iv ) URL structural features.",
"Since we slightly modified the MBFC dataset, we retrained the old model on the new version of the dataset.",
"7 To fine-tune BERT's weights, we trained a softmax layer on top of the [CLS] token of the pre-trained BERT model to classify articles for the task at hand: either predicting the articles' political bias as left , center , or right , or predicting their level of factuality as low or high .",
"8 To avoid overfitting, we scrapped articles from news media listed in the Media Bias/Fact Check database, but not included in our dataset: 30K articles from 298 such media.",
"Finally, we used two strategies to evaluate feature combinations.",
"The first one trains a single classifier using all features.",
"The second one trains a separate classifier for each feature type and then uses an ensemble by taking a weighted average of the posterior probabilities of the individual models.",
"Note that we learn different weights for the different models, which ensures that we pay more attention to the probabilities produced by better models.",
"We used the sklearn library to obtain probabilities from an SVM classifier as a function of the distance between the data point and the learned hyperplane using Platt scaling (for the binary case) or an extension thereof (for the 3-way case).",
"Table 2 shows the evaluation results for political bias prediction, grouped according to different aspects.",
"For each aspect, the upper rows correspond to individual features, while the lower ones show combinations thereof.",
"The results in rows 35 show that averaging embeddings from a fine-tuned BERT to encode articles (row",
"4) works better than using NELA features (row 3).",
"They also show that using the posterior probabilities obtained from applying a softmax on top of BERT's [CLS] token (row",
"5) performs worse than using average embeddings (row 4).",
"This suggest that it is better to incorporate information from the articles' word representations rather than using [CLS] as a compact representation of the articles.",
"Also, since our BERT was fine-tuned on articles with noisy labels obtained using distant supervision, its predictions for individual articles are also noisy, and so are the vectors of posterior.",
"Yet, this fine-tuning seems to yield improved article-level representations for our task.",
"The results in rows 710 show that captions are the most useful type of feature among those extracted from YouTube.",
"This makes sense since captions contain the most essential information about the contents of a video.",
"We can further see that the BERT-based features outperform the NELA ones.",
"Overall, the YouTube features are under-performing since for half of the media we could not find a corresponding YouTube channel, and we used representations containing only zeroes.",
"Rows 11-16 show the results for systems that combine article, Twitter, and YouTube features, either directly or in an ensemble.",
"We can see on rows 1316 that the YouTube and the Twitter profile features yield loss in performance when added to the article features (rows 1112).",
"Note that the article features already outperform the individual feature types from rows 310 by a wide margin, and thus we will use them to represent the What Was Written aspect of the model in our later experiments below.",
"We can further notice that the ensembles consistently outperform feature concatenation models, which is actually true for all feature combinations in Table 2.",
"Next, we compare rows 6 and 17, which show results when using Twitter information of different nature: from the target medium profile (row",
"6) vs. from the profiles of the followers of the target medium (row 17).",
"We can see that the latter is much more useful, which confirms the importance of the Who Read It aspect, which we have introduced in this paper.",
"Note that here we encode the descriptions and the self-description bio information using Sentence BERT instead of the pre-trained BERT; this is because, in our preliminary experiments (not shown in the table), we found the former to perform much better than the latter.",
"Next, the results in rows 2023 show that the YouTube metadata features improve the performance when combined with the Twitter followers' features.",
"On the other hand, the Facebook audience features' performance is deficient and hurts the overall performance, i.e., these estimates seem not to correlate well with the political leanings of news media.",
"Also, as pointed by (Flaxman et al., 2016), social networks can help expose people to different views, and thus the polarization in news readership might not be preserved.",
"Row 24 shows that the Wikipedia features perform worse than most individual features above, which can be related to coverage as only 61.2% of the media in our dataset have a Wikipedia page.",
"Nevertheless, these features are helpful when combined with features about other aspects; see below.",
"Finally, rows 2532 in Table 3 show the evaluation results when combining all aspects.",
"We can see that the best results are achieved when using the best features from each of the three aspects, where the combination is performed as an ensemble (row 32).",
"This combination improves over using information from the article only (row 12) by +3.5 macroF 1 points absolute.",
"It further yields sizeable absolute improvements over the baseline system from (Baly et al., 2018a), by +11.87 macro-F 1 points absolute.",
"While this improvement is due to a large extent to improved techniques for text representation such as using fine-tuned BERT instead of averaged GloVe word embeddings, modeling the newly-introduced media aspects further yielded a lot of additional improvements.",
"Table 3 demonstrates the evaluation results when using the proposed sources/features for the task of predicting the factuality of reporting of news media.",
"Similarly to the results for political bias prediction, rows 310 suggest that the features extracted from articles are more important than those coming from YouTube or from Twitter profiles, and that using BERT to encode the articles yields the best results.",
"Note that overall, the results in this table are not as high as those for bias prediction.",
"This reflects the level of difficulty of this task, and the fact that, in order to predict factuality, one needs external information or a knowledge base to be able to verify the published content.",
"The results in rows 1116 show that combining the Twitter profile features with the BERT-encoded articles improves the performance over using the article text only.",
"Comparing rows 6 and 17 in Table 3, we can see that the Twitter follower features perform worse than using Twitter profiles features; this is the opposite of what we observed in Table 2.",
"This makes sense since our main motivation to look at the followers' profiles was to detect political bias, rather than factuality.",
"Moreover, the metadata collected from media profiles about whether the corresponding account is verified, or its level of activity or connectivity (counts of friends and statuses) are stronger signals for this task.",
"Finally, rows 2532 show the results for modeling combinations of the three aspects we are exploring in this paper.",
"The best results are achieved using the best features selected from the What was written and the What was written about the target medium aspects, concatenated together.",
"This combination achieves sizeable improvements compared to the baseline system from (Baly et al., 2018a): by +6.17 macroF 1 points absolute.",
"This result indicates that looking at the audience of the medium is not as helpful for predicting factuality as it was for predicting political bias, and that looking at what was written about the medium on Wikipedia is more important for this task.",
"We have presented experiments in predicting the political ideology, i.e., left/center/right bias, and the factuality of reporting, i.e., high/mixed/low, of news media.",
"We compared the textual content of what media publish vs. who read it on social media, i.e., on Twitter, Facebook, and YouTube.",
"We further modeled what was written about the target medium in Wikipedia.",
"We have combined a variety of information sources, many of which were not explored for at least one of the target tasks, e.g., YouTube channels, political bias of the Facebook audience, and information from the profiles of the media followers on Twitter.",
"We further modeled different modalities: text, metadata, and speech signal.",
"The evaluation results have shown that while what was written matters most, the social media context is also important as it is complementary, and putting them all together yields sizable improvements over the state of the art.",
"In future work, we plan to perform user profiling with respect to polarizing topics such as gun control (Darwish et al., 2020), which can then be propagated from users to media (Atanasov et al., 2019; Stefanov et al., 2020).",
"We further want to model the network structure, e.g., using graph embeddings (Darwish et al., 2020).",
"Another research direction is to profile media based on their stance with respect to previously fact-checked claims (Mo-htarami et al., 2018; Shaar et al., 2020), or by the proportion and type of propaganda techniques they use (Da San Martino et al., 2019, 2020).",
"Finally, we plan to experiment with other languages.",
"This research is part of the Tanbih project 9 , which aims to limit the effect of fake news, propaganda and media bias by making users aware of what they are reading.",
"The project is developed in collaboration between the Qatar Computing Research Institute, HBKU and the MIT Computer Science and Artificial Intelligence Laboratory."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"objective",
"method",
"objective",
"result",
"objective",
"result",
"method",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"other"
] |
[
"In contrast to recent advances focusing on high-level representation learning across modalities, in this work we present a self-supervised learning framework that is able to learn a representation that captures finer levels of granularity across different modalities such as concepts or events represented by visual objects or spoken words.",
"Our framework relies on a discretized embedding space created via vector quantization that is shared across different modalities.",
"Beyond the shared embedding space, we propose a Cross-Modal Code Matching objective that forces the representations from different views (modalities) to have a similar distribution over the discrete embedding space such that cross-modal objects/actions localization can be performed without direct supervision.",
"We show that the proposed discretized multi-modal fine-grained representation (e.g., pixel/word/frame) can complement high-level summary representations (e.g., video/sentence/waveform) for improved performance on cross-modal retrieval tasks.",
"We also observe that the discretized representation uses individual clusters to represent the same semantic concept across modalities.",
"Toddlers acquire much of their knowledge through grounded learning visual concepts can be acquired through language, and language acquisition emerges through visual interaction.",
"Inspired by this type of grounded learning, a rich body of representation learning research (Harwath et al., 2018; Miech et al., 2020; Alayrac et al., 2020; Monfort et al., 2021; Luo et al., 2021) has been exploring the potential to learn from multi-modal data such as video-text, video-audio, and image-audio pairs.",
"These works typically focus on learning a joint embedding space between different modalities, in which high-level summary representations are extracted as embedding vectors.",
"These embedding vectors often represent entire video clips, spoken utterances, or sentences as single vectors, and can be useful on tasks such as cross-modal data retrieval, e.g., finding the most similar visual scene according to a spoken language description.",
"The predominant approach to learning these embedding vectors is to use modality-independent encoders, and while this has been successful for downstream retrieval tasks, it makes it difficult to compare the activations of the encoders from different modalities.",
"Further, the space of continuous embedding vectors is unbounded, which makes interpreting the learned representations challenging.",
"To this end, we propose to jointly learn high-level embedding vector representations with a fine-grained discrete embedding space that is shared across different modalities.",
"The discrete embedding space enables model interpretability since there are a finite number of embedding vectors which are shared across modalities.",
"Besides the shared embedding space, we propose a CrossModal Code Matching (CMCM) objective that guides the embedding space to capture cross-modal correspondences of concepts, actions, and words.",
"This not only improves downstream performance on retrieval, but also allows us to better interpret what the model recognized through cross-modal grounded learning.",
"To verify the effectiveness of our proposed learning framework, we conducted experiments in several cross-modal domains, including video-text, video-audio, and image-audio.",
"We found consistent improvements over baseline models, verifying that the gain was not restricted to the particular choice of network architecture, input modalities, or dataset.",
"We also demonstrate the interpretability of the fine-grained discrete representations by showing the cross-modal relations between the embedding vectors and semantic concepts appearing in the input modalities.",
"Our approach also enables cross-modal concept localization without requiring any labels during training.",
"We begin by describing the two-branch cross-modal representation learning paradigm in Section 2.1 (the blue and yellow regions).",
"Next, we introduce our shared discrete embedding space in Section 2.2 (the green region).",
"Finally, in Section 2.3 and Figure 2, we introduce the CrossModal Code Matching objective which guides the model to learn semantically meaningful representations through the shared discrete embedding space.",
"Given a set of data X = { ( x Ai , x Bi ) } Ni =1 of size N where each instance x i is instantiated in different modalities A and B (e.g. video and its corresponding caption), the goal is to derive high-level representative vectors ( z Ai , z Bi ) for each instance ( x Ai , x Bi ) that capture the cross-modal relation measured by a choice of similarity function S ( , ) .",
"For a specific modality M { A, B } , a common first step is to encode raw data x Mi into a sequence of fine-grained latent features HM i with a modality-specific neural network f M fine , i.e. H Mi = f M fine ( x Mi ) .",
"The fine-grained representations H Mi can express different kinds of raw data, such as video, audio, or sentences, as a sequence of vectors { h Mi, 1 , ..., h Mi,L } of length L .",
"In the second step, a high-level representation z Mi can be derived by summarizing the fine-grained latent features H Mi with another encoding function f M high that reduces the sequence into a single vector, i.e. z Mi = f M high ( H Mi ) .",
"For example, with modality A being video, raw data x Ai can be treated as a sequence along time and space and encoded into fine-grained representations H Ai = { h Ai,l } Ll =1 by choosing f A fine to be a Residual Network (He et al., 2016).",
"For the second step, a natural choice for f A high to derive the high-level representation z Ai would be a mean pooling function over the time and spatial axes (arranged along l ).",
"With the sets of high-level representations { z Ai } Ni =1 and { z Bj } Nj =1 from different modalities, we can measure the cross-modal relation between any pair of representations ( z Ai , z Bj ) with some similarity function 1 S ( , ) .",
"The final step in this paradigm is to adopt an objective function that maximizes the similarity score between positive pairs (where i = j , and thus the true pairs) and minimizes the similarity score between negative pairs (where i = j , and thus imposter pairs).",
"While different objective functions, such as Semi-Hard Negative Mining (Schroff et al., 2015) (SHN) and Noise Constrastive Estimation (Gut-mann and Hyvrinen, 2010) (NCE), have been studied in prior work, we focused on the Masked Margin Softmax (Ilharco et al., 2019) (MMS) loss LMMS = 1 NN (cid:88) i =1 log e S ( z Ai ,z Bi ) M e S ( z Ai ,z Bi ) M + (cid:80) Nj =1 I i = j e S ( z Ai ,z Bj ) , (1) where the margin M is a hyperparameter to encourage a higher similarity for positive pairs.",
"The MMS loss LMMS can be seen as an application of the InfoNCE (Oord et al., 2018) loss with a margin.",
"The effectiveness of the described cross-modal learning paradigm has been shown by recent works that achieved state-of-the-art results on benchmark 1 While we used dot product throughtout this work, we also found euclidean distance works well in practice.",
"datasets in different cross-modal scenarios such as video-text (Luo et al., 2021), video-audio (Monfort et al., 2021; Rouditchenko et al., 2020), and image-text (Radford et al., 2021).",
"While the high-level representations ( z Ai , z Bi ) given by the cross-modal learning paradigm benefit end tasks such as data retrieval, the representations cannot be easily interpreted by humans.",
"To obtain fine-grained representations that are more interpretable, we introduce a Vector Quantization (Oord et al., 2017) (VQ) mechanism after obtaining the H Mi representations.",
"Formally, with an auxiliary embedding table E = { e 1 , e 2 , ..., e V } of size V , which we refer to as the codebook , vector quantization is performed on each fine-grained representation h Mi,l H Mi of modality M { A, B } with h Mi,l = f M ( h Mi,l ) + sg ( e v f M ( h Mi,l )) , where f M is a modality specific projection network to project the input to the shared embedding space, v = arg min k V h Mi,l e k 2 , and sg ( ) is the stop-gradient operator proposed in straight-through gradient estimation (Bengio et al., 2013) that treats the input as constant during backpropagation.",
"In other words, each vector h Mi,l will be replaced by its nearest neighbor e v , which we refer to as the codeword , in the codebook E .",
"The codebook is randomly initialized and updated with the exponential moving average (Oord et al., 2017) given the fine-grained representations (more details in Section A of the Appendix).",
"We trained the shared embedding space jointly with the rest of the framework by modifying the high-level representations z Mi to include the discretized fine-grained representations as z Mi = f M high ( H Mi ) + f M code ( H Mi ) , where f M code is, similar to f M high , the encoding function for summarizing the sequence of quantized fine-grained representations (e.g., an average pooling function over l ).",
"Having such a discrete embedding space allows humans to better interpret the learned embeddings since they are shared across modalities and there are a finite number of them.",
"Ideally, the codebook should be shared across different modalities since the quantization method is independent to the input modality.",
"However, as we demonstrate in Section F of the Appendix, the model will learn to partition the codebook into modality-specific subspaces due to the significant difference between fine-grained representations from different modalities.",
"To learn a shared embedding space that is invariant to input modality, we propose the Cross-Modal Code Matching objective which encourages the model to focus more on the semantic aspect of the input, as illustrated in Figure 2. For each vector h Mi,l in the fine-grained representation sequence H Mi encoded from an instance x Mi of modality M , we first define the probability of h M i,l belonging to the codeword e v as the Softmin function of their Euclidean distance, P ( e v | h Mi,l ) = exp( f M ( h Mi,l ) e v 2 ) (cid:80) k V exp( f M ( h Mi,l ) e k 2 ) .",
"Note that this definition assigns higher a probability to codewords that are closer to the fine-grained representation, where the closest codeword is used to perform vector quantization.",
"We can then define the sequence-level probability distribution over the codebook as the average of the fine-grained distribution, P ( e v | H Mi ) = 1 L (cid:80) l P ( e v | h M i,l ) , which is the normalized frequency of codeword usage for a given sequence of fine-grained representations.",
"Next, for a pair of cross-modal data ( x Ai , x Bj ) , we define their code similar-3015 ity as the negative symmetric cross entropy of probability distribution over the codebook S code ( x Ai , x Bj ) = (cid:80) v P ( e v | H Ai ) log P ( e v | H Bj ) + (cid:80) v P ( e v | H Bj ) log P ( e v | H Ai ) .",
"Intuitively, the proposed objective encourages the model to represent the input ( x Ai , x Bj ) with similar codewords for positive pairs ( i = j ) and nonmatching codewords for negative pairs ( i = j ).",
"As a consequence, each codeword is expected to be a modality invariant representation of a more fine-grained concept, action, or word that can be discovered from cross-modal data.",
"For example, a codeword could correspond to both the visual scene of a man juggling, and also the spoken word juggling, as we demonstrate in our experimental results in Table 2 and Figure 4. The full objective of our proposed cross-modal representation learning framework is the combination of objectives at different levels L = LMMS + LCMCM , where controls the weight between the two terms.",
"Empirically, we found = 0 .",
"1 worked well across different settings.",
"Please refer to Section C and D in Appendix for ablation study and comparison to possible alternatives to our method.",
"Examples of the cross-modal learning paradigm.",
"As described in Section 2.1, many of the existing methods for cross-modal learning fit into the paradigm where encoders are modality-independent.",
"This paradigm has been shown to be effective by achieving state-of-the-art retrieval performance on benchmark datasets with the modality pairs that we considered in this work: videotext (Bain et al., 2021; Luo et al., 2021), video-audio (Monfort et al., 2021; Rouditchenko et al., 2020), and image-audio (Harwath et al., 2018, 2020).",
"While these prior works relied on different pre-training datasets, model architectures, and objective functions, they all leverage modality-independent encoders.",
"One of the most important features of this paradigm is the fixed inference time for retrieval.",
"Since the encoders are modality-independent, embedding vectors for samples in a given modality can be computed without using any samples from the other modality.",
"Thus retrieval only involves computing the dot product between embedding vectors from two different modalities.",
"As a consequence, these models are more flexible for large-scale retrieval, and the embedding vectors from each modality can be used independently for other downstream tasks.",
"Other cross-modal learning frameworks.",
"In contrast to the aforementioned works, some methods leverage cross-modal relations within the encoders instead of using modality-independent encoders.",
"This has been done with both cross-modal encoders (Lei et al., 2021; Luo et al., 2021) and cross-modal attention mechanisms (Miech et al., 2018; Liu et al., 2019b,a; Gabeur et al., 2020).",
"However, the cross-modal interactions increase the complexity for retrieval since every instance of a specific modality must be used as input with every instance of another modality to obtain the embedding vectors.",
"With m and n samples in the modalities respectively, this increases the complexity from the modality-independent approach from O ( m + n ) to O ( mn ) .",
"Further, it also makes analysis of the embedding vectors from any individual modality challenging and inhibits single-modality downstream tasks.",
"Our proposed framework builds on the modality-independent approach to enable light-weight retrieval, but it also enables crossmodal interaction through our proposed codebook and Cross-Modal Code Matching objective.",
"Uncovering semantic-level correspondences.",
"Image-audio models have been shown to discover spoken words and visual objects without supervision through retrieval tasks (Synnaeve et al., 2014; Harwath and Glass, 2015; Harwath et al., 2017; Kamper et al., 2018), and the audio embedding vectors have been shown to cluster into word-like speech units (Harwath and Glass, 2017; Wang and Hasegawa-Johnson, 2019; Harwath et al., 2020).",
"Some work has studied the ability of video-audio models to relate spoken words to visual objects and actions in videos (Boggust et al., 2019; Rouditchenko et al., 2020).",
"However, none of these models incorporated a shared embedding space that enabled modality-invariant representations.",
"VQ units have been used in the audio encoder of an image-audio model (Harwath et al., 2020), which allowed it to capture the hierarchical structure of spoken language.",
"While our proposed framework is similar in that it also discretizes the audio sequence 3016 Table 1: Cross-Modal retrieval results on S-MiT, Places, and MSR-VTT.",
"Existing model reproduced with LMMS for fair comparison, see Table 3 in the Appendix for more detail.",
"* Results obtained by running the official code and pre-trained models, see Appendix for more details.",
"with VQ units, our work differs significantly by capturing the cross-modal interactions between visual and audio inputs in the shared embedding space rather than solely capturing the tree structure of speech.",
"Further, besides image-audio data, our proposed framework can handle video-audio and video-text data.",
"To demonstrate the generalizability of the proposed method, we tested our framework on different cross-modal datasets and baseline models that fit into the cross-modal learning paradigm.",
"All setups are listed below and summarized in Table 3 of the Appendix.",
"For training the proposed model, we randomly initialized all the modules related to the discrete shared embedding space and trained them jointly with the rest of the framework (see Figure 1).",
"Unless otherwise specified, (1) we warm-started our proposed framework by initializing it with the modality-specific encoders (namely, f M fine and f M high ) from the baseline models; (2) both the projection network f M and the encoder network f M code are single linear layers; (3) the codebook size is set to 1024.",
"Please refer to Section B in the Appendix for more implementation details.",
"Video-Audio: S-MiT (Monfort et al., 2021) contains videos paired with corresponding spoken audio captions averaging 8 seconds.",
"We followed the official protocol to train on the training set of 500k pairs, use the validation set of 10k pairs for development and analysis, and report the retrieval result on a 1k search space over 5 runs randomly sampled from a held-out test set.",
"We selected the same baseline model used on the dataset (Monfort et al., 2021), which contains a visual encoder composed of a ResNet-152 pre-trained on ImageNet (Deng et al., 2009) and TSM ResNet-50 (Lin et al., 2019) pre-trained on M-MiT (Monfort et al., 2019).",
"The audio encoder is a randomly initialized 1D-ResNet (Harwath et al., 2018) designed specifically for spectrograms.",
"The shared embedding space has a dimension of 4096, matching the encoders in the baseline model.",
"Image-Audio: Places (Harwath et al., 2017) contains over 400k pairs of images from the Places 205 dataset (Zhou et al., 2014) and corresponding spoken audio captions averaging 10 seconds.",
"We followed the previous works (Harwath et al., 2018, 2020) to use the training set of 400k pairs and report results on the validation set of 1k pairs.",
"We select ResDAVEnet (Harwath et al., 2018) as the baseline model where the visual encoder is a ResNet-50 pre-trained on ImageNet (Deng et al., 2009) and the audio encoder is a randomly initialized 1D-ResNet (Harwath et al., 2018) designed specifically for spectrograms.",
"The shared embedding space has a dimension of 1024.",
"Video-Text: MSR-VTT (Xu et al., 2016) contains 10k video clips with length varying from 10 to 32 seconds.",
"While each video is provided with 20 related captions for training, we followed the evaluation protocol from previous works (Luo et al., 2021; Gabeur et al., 2020; Yu et al., 2018) to use the training-9k / test 1k-A splits for training and testing respectively.",
"CLIP4Clip (Luo et al., 2021), the current state-of-the-art on MSR-VTT, is selected as the baseline model.",
"Following the crossmodal learning paradigm described in Section 2.1, CLIP4Clip is composed of a pair of encoders: a Visual Transformer (Dosovitskiy et al., 2020) and a Text Transformer (Vaswani et al., 2017).",
"Both encoders are initialized from the CLIP model (Radford et al., 2021), which is pre-trained on the text-image dataset WIT (Radford et al., 2021) and optimized in an end-to-end manner from pixel/text input.",
"For training the proposed framework on top of CLIP4Clip, we freeze the transformers from CLIP4Clip and update only the modules related to the discrete shared embedding space.",
"Both the projection network f M and the encoder network f M code are 4D-Convolutions for video with a depth of 3 and BiLSTMs for text, also with a depth of 3. While CLIP4Clip provided different options for the high-level visual encoder f M high , we adopted the vanilla mean-pooling model.",
"Following CLIP4Clip, the shared embedding space has a dimension of 512.",
"Data retrieval is one of the most common evaluations for cross-modal representation learning.",
"For example, in video retrieval with input query text, videos in the search space will be ranked by the similarity between the representation of each video and the query.",
"We report the standard retrieval metrics recall at rank K (R@K) and median rank (MdR) in Table 1. We show the performance on both visual retrieval, where input language queries are used to retrieve videos or images, and language retrieval, where input visual queries are used to retrieve spoken or text captions.",
"Video-Audio Retrieval.",
"Video-Audio retrieval on S-MiT (Monfort et al., 2021) is a challenging task since videos are paired with raw speech audio, which is untranscribed, unsegmented, and can contain background noise and speaker variation.",
"However, our proposed framework, which leverages cross-modal connections between visual actions and spoken words, is able to improve over the baseline model by a clear margin.",
"We further analyze our framework's ability to relate visual actions and spoken words in Section 4.3.",
"Image-Audio Retrieval.",
"Comparing the baseline model, ResDAVEnet (Harwath et al., 2018), and the current state-of-the-art ResDAVEnet-VQ (Harwath et al., 2020), the latter model introduces VQ units into the audio encoder, allowing it to model the hierarchical structure of speech and achieve better retrieval results.",
"With our framework, we introduce our shared VQ embedding space into the ResDAVEnet model to capture cross-modal interactions.",
"This improves the performance over both ResDAVEnet and ResDAVEnet-VQ.",
"Video-Text Retrieval. We compare our proposed method against recent works achieving state-of-the-art results (Bain et al., 2021; Liu et al., 2021; Luo et al., 2021) and provide a full comparison against more prior work (Liu et al., 2019b; Rouditchenko et al., 2020; Gabeur et al., 2020; Patrick et al., 2020; Dzabraev et al., 2021; Croitoru et al., 2021) in Section E of the Appendix.",
"Frozen-in-Time (Bain et al., 2021) and CLIP4Clip (Luo et al., 2021) are similar methods that employ a Visual Transformer (Dosovitskiy et al., 2020) to encode video as a sequence of images.",
"The key differences between them are the choice of summarizing function (i.e., f M high ) for video and the pre-training procedure.",
"We also note that CLIP4Clip with a tight transformer encoder (Luo et al., 2021) (CLIP4Clip-tightT) relied on cross-modal reference via self-attention encoders to derive representations, which has a higher time complexity as mentioned in Section 3. With the shared codebook and Cross-Modal Code Matching objective, our proposed framework also enables cross-modal reference and gives an improvement over the baseline model without increasing the time complexity.",
"Overall, our proposed method enables consistent improvements regardless of the data modalities and baseline architectures, demonstrating its effectiveness and generalizability.",
"One of the important motivations of introducing the discrete cross-modal embedding space is better model interpretability.",
"In this section, we take a closer look into the codewords learned through our proposed framework.",
"For the evaluation, we chose the video-audio setup on S-MiT (Monfort et al., 2021).",
"We used video-audio pairs from the development set, where each pair is labeled with an action out of 332 categories.",
"Note that we only used labels for analysis; labels are never used for training.",
"Conditional Probability of Action Labels Given Codeword.",
"First, we compute the conditional probability distributions of action labels given the codewords over the video inputs.",
"Each video input is fixed-length and represented by 27 codewords (3 frames, each represented by 3 × 3 codewords), and we labeled all these codewords with the video's action label.",
"By accumulating codeword labels through the whole development set, we can compute the conditional probability of each action given any codeword, i.e. P ( action | codeword ) .",
"Results are visualized in the upper part of Figure 3. Similarly, we computed the conditional probabilities based on the audio input where each utterance is represented by up to 32 codewords depending on the utterance length.",
"We selected the most frequent codewords used by the video inputs and plotted the conditional probabilities based on the audio input in the lower part of Figure 3. We can observe that both matrices have similar patterns, i.e., when a codeword is activated, there is a high chance of a specific action appearing in the input, regardless of whether it is video or audio.",
"This suggests that our model is able to learn cross-modal representations for actions grounded by either visual or spoken language input.",
"The codewords are not only modality invariant, but more importantly, they also capture the semantic relations of the labels.",
"E.g., codewords with the highest chance of representing autographing typically have the second-highest chance of representing signing; codewords for surfing are less likely to represent other actions, as all of them are very different from surfing.",
"We also note that without the Cross-Modal Code Matching objective, semantically related video and audio inputs no longer use the same codewords, which we illustrate in Section F of the Appendix.",
"Cross-Modal Correspondences.",
"Next, we analyze the connections captured by the codewords between action labels and spoken words.",
"With the same label accumulation method described previously, we compute the precision of action prediction with codewords (i.e., code-action co-occurrence divided by code occurrence).",
"For the audio, we used word-level transcriptions (from Google's speech-to-text API) to assign a spoken word to each codeword when it is activated by the input utterance.",
"This results in a hypothesis set including around 7k words for each codeword, and we listed the top 2 hypotheses with the highest F1 score for each codeword (using F1 instead of precision to avoid domination by high-frequency words).",
"Results are listed in Table 2. For the codewords that have the highest precision on predicting the action label, we found the top hypotheses for spoken words are often the action label itself.",
"E.g., the codeword (rank 1st) for the visual action juggling maps to the spoken word juggling perfectly.",
"As precision on visual action prediction decreases, we observed fewer perfect mappings, but the spoken word hypotheses remained semantically related to the visual action hypotheses.",
"E.g., the codeword (rank 35th) for the visual action dunking with lower precision now maps to the spoken word basketball.",
"Surprisingly, even the codewords with the lowest precision capture relationships between visual actions and spoken words to some extent.",
"E.g., the codeword (rank 743rd) that is most related to the action baking has the top and second word hypotheses cupcake and peanut.",
"Codeword Localization.",
"Finally, to visualize the relation between codewords and the input data, we localize the segments of both the video and audio input that are assigned to certain codewords.",
"This is possible because quantization in our shared embedding space is done at the fine-grained level, so that the time and spatial axes are preserved.",
"Examples are shown in Figure 4, where the regions assigned to the given code are highlighted.",
"Interestingly, we see the codewords being aligned to both the visual actions and the corresponding spoken words.",
"This supports our claim of having a more interpretable representation at the fine-grained level.",
"In this paper, we proposed a framework for crossmodal representation learning with a discrete embedding space that is shared amongst different modalities and enables model interpretability.",
"We also proposed a Cross-Modal Code Matching objective that encourages models to represent cross-modal semantic concepts in the embedding space.",
"Combining our discrete embedding space and objective with existing cross-modal representation learning models improves retrieval performance on video-text, video-audio, and image-audio datasets.",
"We also analyze the shared embedding space and find that semantically related video and audio inputs tend to use the same codewords.",
"This research was supported in part by the MIT-IBM Watson AI Lab and its member companies, Nexplore and Woodside, and by MIT Lincoln Laboratory."
] | [
"objective",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"objective",
"result",
"objective",
"abstain",
"abstain",
"method",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"objective",
"objective",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"result",
"other"
] |
[
"Most information extraction methods focus on binary relations expressed within single sentences.",
"In high-value domains, however, n -ary relations are in great demand (e.g., drug-gene-mutation interactions in precision oncology).",
"Such relations often involve entity mentions that are far apart in the document, yet existing work on cross-sentence relation extraction is generally confined to small text spans (e.g., three consecutive sentences), which severely limits recall.",
"In this paper, we propose a novel multiscale neural architecture for document-level n -ary relation extraction.",
"Our system combines representations learned over various text spans throughout the document and across the subrelation hierarchy.",
"Widening the sys-tem's purview to the entire document maximizes potential recall.",
"Moreover, by integrating weak signals across the document, multiscale modeling increases precision, even in the presence of noisy labels from distant supervision.",
"Experiments on biomedical machine reading show that our approach substantially outperforms previous n -ary relation extraction methods.",
"Knowledge acquisition is a perennial challenge in AI.",
"In high-value domains, it has acquired new urgency in recent years due to the advent of big data.",
"For example, the dramatic drop in genome sequencing cost has created unprecedented opportunities for tailoring cancer treatment to a tumor's genetic composition (Bahcall, 2015).",
"Despite this potential, operationalizing personalized medicine is difficult, in part because it requires painstaking curation of precision oncology knowledge from biomedical literature.",
"With tens of millions of papers on PubMed (ncbi.nlm.nih.gov/pubmed), and thousands more added every day, we are sorely in need of automated methods to accelerate manual curation.",
"(Work done as an intern at Microsoft Research.)",
"Figure 1 (example): The 2 mutations that were only found in the neuroblastoma resistance screen (G1123S/D) are located in the glycine-rich loop, which is known to be crucial for ATP and ligand binding, and are the first mutations described that induce resistance to TAE684, but not to PF02341066.",
"Prior work in machine reading has made great strides in sentence-level binary relation extraction.",
"However, generalizing extraction to n -ary relations poses new challenges.",
"Higher-order relations often involve entity mentions that are far away in the document.",
"Recent work on n -ary relation extraction has begun to explore cross-sentence extraction (Peng et al., 2017; Wang and Poon, 2018), but the scope is still confined to short text spans (e.g., three consecutive sentences), even though a document may contain hundreds of sentences and tens of thousands of words.",
"While this already increases the yield compared to sentence-level extraction, it still misses many relations.",
"For example, in Figure 1, the drug-gene-mutation relation between PF02341066 , ALK , and G1123S(D) ( PF02341066 can treat cancers with mutation G1123S(D) in gene ALK ) can only be extracted by substantially expanding the scope.",
"High-value information, such as latest medical findings, might only be mentioned once in the corpus.",
"Maximizing recall is thus of paramount importance.",
"In this paper, we propose a novel multiscale neural architecture for document-level n -ary relation extraction.",
"By expanding extraction scope to the entire document, rather than restricting relation candidates to co-occurring entities in a short text span, we ensure maximum potential recall.",
"To combat the ensuing difficulties in document-level extraction, such as low precision, we introduce multiscale learning, which combines representations learned over text spans of varying scales and for various subrelations (Figure 2).",
"This approach deviates from past methods in several key regards.",
"First, we adopt an entity-centric formulation by making a single prediction for each entity tuple occurring in a document.",
"Previous n -ary relation extraction methods typically classify individual mention tuples, but this approach scales poorly to whole documents.",
"Since each entity can be mentioned many times in the same document, applying mention-level methods leads to a combinatorial explosion of mention tuples.",
"This creates not only computational challenges but also learning challenges, as the vast majority of these tuples do not express the relation.",
"Our entity-centric formulation alleviates both of these problems.",
"Second, for each candidate tuple, prior methods typically take as input the contiguous text span encompassing the mentions.",
"For document-level extraction, the resulting text span could become untenably large, even though most of it is unrelated to the relation of interest.",
"Instead, we allow discontiguous input formed by multiple discourse units (e.g., sentence or paragraph) containing the given entity mentions.",
"Finally, while an n -ary relation might not reside within a discourse unit, its subrelations might.",
"In Figure 1, the paper first mentions a gene-mutation subrelation, then discusses a drug-mutation subrelation in a later paragraph.",
"By including subrelations in our modeling, we can predict n -ary relations even when all n entities never co-occur in the same discourse unit.",
"With multiscale learning, we turn the document view from a challenge into an advantage by combining weak signals across text spans and subrelations.",
"Following recent work in cross-sentence relation extraction, we conduct thorough evaluation in biomedical machine reading.",
"Our approach substantially outperforms prior n -ary relation extraction methods, attaining state-of-the-art results on a large benchmark dataset recently released by a major cancer center.",
"Prior work on relation extraction typically formulates it as a mention-level classification problem.",
"Let e 1 , . . . , e n be entity mentions that co-occur in a text span T .",
"Relation extraction amounts to classifying whether a relation R holds for e 1 , . . . , e n in T .",
"For the well-studied case of binary relations within single sentences, n = 2 and T is a sentence.",
"In high-value domains, however, there is increasing demand for document-level n -ary relation extraction, where n > 2 and T is a full document that may contain hundreds of sentences.",
"For example, a molecular tumor board needs to know if a drug is relevant for treating cancer patients with a certain mutation in a given gene.",
"We can help the tumor board by extracting such ternary interactions from biomedical articles.",
"The mention-centric view of relation extraction does not scale well to this general setting.",
"Each of the n entities may be mentioned many times in a document, resulting in a large number of candidate mention tuples, even though the vast majority of them are irrelevant to the extraction task.",
"In this paper, we adopt an entity-centric formulation for document-level n -ary relation extraction.",
"We use upper case for entities ( E 1 , . . . , E n ) and lower case for mentions ( e 1 , . . . , e n ).",
"We define an n -ary relation candidate to be an ( n + 1) -tuple ( E 1 , . . . , E n , T ) , where each entity E i is mentioned at least once in the text span T .",
"The relation extraction model is given a candidate ( E 1 , . . . , E n , T ) and outputs whether or not the tuple expresses the relation R .",
"3 Deciding what information to use from the various entity mentions within T is now a modeling question, which we address in the next section.",
"We present a general framework for document-level n -ary relation extraction using multiscale",
"representation learning.",
"Given a document with text T and entities E 1 , . . . , E n , we first build mention-level representations for groups of these entities whenever they co-occur within the same discourse unit.",
"We then aggregate these representations across the whole document, yielding entity-level representations for each subset of entities.",
"Finally, we predict whether E 1 , . . . , E n participate in the relation based on the concatenation of these entity-level representations.",
"These steps are depicted in Figure 2.",
"3.1 Mention-level Representation: Let the full document T be composed of discourse units T 1 , . . . , T m (e.g., different paragraphs).",
"Let T j be one such discourse unit, and suppose e 1 , . . . , e n are entity mentions of E 1 , . . . , E n that co-occur in T j .",
"We construct a contextualized representation for mention tuple ( e 1 , . . . , e n ) in T j .",
"In this paper, we use a standard approach by applying a bi-directional LSTM (BiLSTM) to T j , concatenating the hidden states for each mention, and feeding this through a single-layer neural network.",
"We denote the resulting vector as r ( R, e 1 , . . . , e n , T j ) for the relation R .",
"Let M ( R, E 1 , . . . , E n , T ) denote the set of all mention tuples ( e 1 , . . . , e n ) and discourse units T j within T such that each e i appears in T j .",
"We can create an entity-level representation r ( R, E 1 , . . . , E n , T ) of the n entities by combining mention-level representations with an aggregation operator C : r ( R, E 1 , . . . , E n , T ) = C over all ( e 1 , . . . , e n , T j ) in M ( R, E 1 , . . . , E n , T ) of r ( R, e 1 , . . . , e n , T j ) . A standard choice for C is max pooling, which works well if it is clear-cut whether a mention tuple expresses a relation.",
"In practice, however, the mention tuples could be ambiguous and less than certain individually, yet collectively express a relation in the document.",
"This motivates us to experiment with logsumexp , the smooth version of max, where logsumexp( x 1 , . . . , x k ) = log ( exp( x 1 ) + · · · + exp( x k ) ) .",
"This facilitates accumulating weak signals from individual mention tuples, and our experiments show that it substantially improves extraction accuracy compared to max pooling.",
"For higher-order relations (i.e., larger n ), it is less likely that they will be completely contained within a discourse unit.",
"Often, the relation can be decomposed into subrelations over subsets of entities, each of which is more likely to be expressed in a single discourse unit.",
"This motivates us to construct entity-level representations for subrelations as well.",
"The process is straightforward.",
"Let R S be the | S | -ary subrelation over entities E S 1 , . . . , E S | S | , where S ⊆ { 1 , . . . , n } and | S | denotes its size.",
"We first construct mention-level representations r ( R S , e S 1 , . . . , e S | S | , T ) for R S and its relevant entity mentions, then combine them into an entity-level representation r ( R S , E S 1 , . . . , E S | S | , D ) using the chosen aggregation operator C .",
"We do this for every S ⊆ { 1 , . . . , n } with | S | ≥ 2 (including the whole set, which corresponds to the full relation R ).",
"This gives us an entity-level representation for each subrelation of arity at least 2 , or equivalently, each subset of entities of size at least 2 .",
"To make a final prediction, we first concatenate all of the entity-level representations r ( R S , E S 1 , . . . , E S | S | , D ) for all S ⊆ { 1 , . . . , n } with | S | ≥ 2 .",
"The concatenated representation is fed through a two-layer feedforward neural network followed by a softmax function to predict the relation type.",
"It is possible that for some subrelations R S , the | S | entities do not all co-occur in any discourse unit.",
"When this happens, we set r ( R S , E S 1 , . . . , E S | S | ) to a bias vector which is learned separately for each R S .",
"This ensures that the concatenation is done over a fixed number of vectors, e.g., 4 for a ternary relation (three binary subrelations and the main relation).",
"Importantly, this strategy enables us to make meaningful predictions for relation candidates even if all n entities never co-occur in the same discourse unit; such candidates would never be generated by a system that only looks at single discourse units in isolation.",
"Our document model is actually a family of representation learning methods, conditioned on the choice of discourse units, subrelations, and aggregation operators.",
"In this paper, we consider sentences and paragraphs as possible discourse units.",
"(Table: text units at the sentence, paragraph, and document level: 2,326, 3,687, and 3,362, respectively.)",
"We explore max and logsumexp as aggregation operators.",
"Moreover, we explore ensemble prediction as an additional aggregation method.",
"Specifically, we learn a restricted multiscale model by limiting the text span to a single discourse unit (e.g., a paragraph); the model still combines representations across mentions and subrelations.",
"At test time, given a full document with m discourse units, we obtain independent predictions p 1 , . . . , p m for each discourse unit.",
"We then combine these probabilities using an ensemble operator P .",
"A natural choice for P is max, though we also experiment with noisy-or: P ( p 1 , . . . , p k ) = 1 - (1 - p 1 )(1 - p 2 ) · · · (1 - p k ) .",
"It is also possible to ensemble multiple models that operate on different discourse units, using this same operator.",
"Our model can be trained using standard supervised or indirectly supervised methods.",
"In this paper, we focus on distant supervision, as it is a particularly potent learning paradigm for high-value domains.",
"Our entity-centric formulation is particularly well aligned with distant supervision, as distant supervision at the entity level is significantly less noisy than at the mention level, so we do not need to deploy sophisticated denoising strategies such as multi-instance learning (Hoffmann et al., 2011).",
"We validate our approach on a standard biomedical machine reading task: extracting drug-gene-mutation interactions from biomedical articles (Peng et al., 2017; Wang and Poon, 2018).",
"We cast this task as binary classification: given a drug, gene, mutation, and document in which they are mentioned, determine whether the document asserts that the mutation in the gene affects response to the drug.",
"For training, we use documents from the PubMed Central Open Access Subset (PMC-OA).",
"For distant supervision, we use three existing knowledgebases (KBs) with hand-curated drug-gene-mutation facts: CIVIC, GDKD (Dienstmann et al., 2015), and OncoKB (Chakravarty et al., 2017).",
"Table 1 shows basic statistics of this training data.",
"Past methods using distant supervision often need to up-weight positive examples, due to the large proportion of negative candidates.",
"Interestingly, we found that our document model was robust to this imbalance: re-weighting had little effect, and we did not use it in our final results.",
"Evaluating distant supervision methods is challenging, as there is often no gold-standard test set, especially at the mention level.",
"Prior work thus resorts to reporting sample precision (estimated proportion of correct system extractions) and absolute recall (estimated number of correct system extractions).",
"This requires subsampling extraction results and manually annotating them.",
"Subsampling variance also introduces noise in the estimate.",
"Instead, we used CKB CORE, a public subset of the Clinical Knowledgebase (CKB) (Patterson et al., 2016), as our gold-standard test set.",
"CKB CORE contains document-level annotation of drug-gene-mutation interactions manually curated by The Jackson Laboratory (JAX), an NCI-designated cancer center.",
"It is a high-quality KB containing facts from a few hundred PubMed articles for 86 genes, with minimal overlap with the three KBs we used for distant supervision.",
"To avoid contamination, we removed CKB entries whose documents were used in our training data, and split the rest into a development and test set.",
"See Table 2 for statistics.",
"We tuned hyperparam-eters and thresholds on the development set, and report results on the test set.",
"We conducted standard preprocessing and entity linking, similar to Wang and Poon (2018) (see Section A.1).",
"Following standard practice, we masked all entities of the same type with a dummy token, to prevent the classifier from simply memorizing the facts in distant supervision.",
"Wang and Poon (2018) observed that many errors stemmed from incorrect gene-mutation association.",
"We therefore developed a simple rule-based system that predicts which gene-mutation pairs are valid (see Section A.2).",
"We removed candidates that contained a gene-mutation pair that was not predicted by the rule-based system.",
"We evaluate primarily on area under the precision recall curve (AUC).",
"We also report maximum recall, which is the fraction of true facts for which a candidate was generated.",
"Finally, we report precision, recall, and F1, using a threshold tuned to maximize F1 on the CKB development set.",
"We compared our multiscale system (MULTISCALE ) with three restricted variants (SENTLEVEL , PARALEVEL , DOCLEVEL ).",
"SENTLEVEL and PARALEVEL restricted training and prediction to single discourse units (i.e., sentences and paragraphs), and produced a document-level prediction by applying the ensemble operator over individual discourse units.",
"DOCLEVEL takes the whole document as input, with each paragraph as a discourse unit.",
"MULTISCALE further combined SENTLEVEL , PARALEVEL , and DOCLEVEL using the ensemble operator.",
"For additional details about the models, see Section A.3.",
"We also compared MULTISCALE with DPL (Wang and Poon, 2018), the prior state of the art in cross-sentence n -ary relation extraction.",
"DPL classifies drug-gene-mutation interactions within three consecutive sentences using the same model architecture as Peng et al. (2017), but incorporates additional indirect supervision such as data programming and joint inference.",
"We used the DPL code from the authors and produced a document-level prediction similarly using the ensemble operator.",
"In the base version, we used max as the ensemble operator.",
"We also evaluated the effect when we used noisy-or as the ensemble operator, as well as when we applied the gene-mutation filter during postprocessing.",
"(Footnote 7: We compute area using average precision, which is similar to a right Riemann sum.)",
"Table 3 shows the results on the CKB test set.",
"In all scenarios, our full model (MULTISCALE ) substantially outperforms the prior state-of-the-art system (DPL).",
"For example, in the best setting, using both noisy-or and the gene-mutation filter, the full model improves over DPL by 8.4 AUC points.",
"Multiscale learning is the key to this performance gain, with MULTISCALE substantially outperforming more restricted variants.",
"Not surprisingly, expanding extraction scope from sentences to paragraphs resulted in the biggest gain, already surpassing DPL.",
"Conducting end-to-end learning over a document-level representation, as in DOCLEVEL , is beneficial compared to ensembling over predictions for individual discourse units (SENTLEVEL , PARALEVEL ), especially in the base version.",
"Interestingly, MULTISCALE still attained significant gain over DOCLEVEL with an ensemble over SENTLEVEL and PARALEVEL , suggesting that the document-level representation can still be improved.",
"In addition to prediction accuracy, the document-level models also have much more room to grow, as maximum recall is about 20 absolute points higher in MULTISCALE and DOCLEVEL , compared to PARALEVEL or DPL.",
"The ensemble operator had a surprisingly large effect, as shown by the gain when it was changed from max (base version) to noisy-or.",
"This suggests that combining weak signals across multiple scales can be quite beneficial.",
"8 The difference in actual recall is less pronounced, as we chose thresholds to maximize F1 score.",
"We expect actual recall to increase significantly as document-level models improve, whereas the other models are closer to their ceiling.",
"Our handcrafted gene-mutation filter also improved all sys-tems substantially, corroborating the analysis of Wang and Poon (2018).",
"In particular, without the filter, it is hard for the document-level models to achieve high precision, so they sacrifice a lot of recall to get good F1 scores.",
"Using the filter helps them attain significantly higher recall while maintaining respectable precision.",
"Figure 3 shows the precision-recall curves for the four models (with noisy-or and gene-mutation filter).",
"DOCLEVEL has higher maximum recall than PARALEVEL , but generally lower precision at the same recall level.",
"By ensembling all three variants, MULTISCALE achieves the best combination.",
"This can also be seen in Table 4, where we ablate each of the three variants used by MULTISCALE .",
"All three variants in the ensemble contributed to overall performance.",
"We use logsumexp as the aggregation operator to combine mention-level representations into an entity-level one.",
"If we replace it with max pooling, the performance drops substantially across the board, as shown in Table 5.",
"For example, MULTISCALE lost 3.8 absolute points in AUC.",
"Such a difference is also observed by Verga et al. (2018).",
"As in comparing ensemble operators, this demonstrates the benefit of combining weak signals using a multiscale representation.",
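The contrast between logsumexp and max pooling can be illustrated on scalar scores (a simplification; the model aggregates vector-valued mention representations):

```python
import math

# Sketch of aggregating mention-level scores into an entity-level one with
# logsumexp, a smooth alternative to max pooling that lets several weak
# mentions contribute rather than only the single strongest one.
def logsumexp(xs):
    m = max(xs)  # subtract the max for numerical stability
    return m + math.log(sum(math.exp(x - m) for x in xs))

mention_scores = [1.0, 0.9, 0.8]
print(max(mention_scores))        # max pooling: only the top mention survives
print(logsumexp(mention_scores))  # lies above the max, raised by other mentions
```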
"Compared to standard sentence-level extraction, our method can extract relations among entities that never co-occur in the same sentence or even paragraph.",
"Figure 4 shows the proportion of correctly predicted facts by MULTISCALE that are expressed across paragraph or sentence boundaries.",
"MULTISCALE can substantially improve the recall by making additional cross-sentence and cross-paragraph extractions.",
"We manually inspected twenty correct cross-paragraph extractions (with the chosen threshold for the preci-sion/recall numbers in Table 3) and found that our model was able to handle some interesting linguistic phenomena.",
"Often, a paper would first describe the mutations present in a patient cohort, and later describe the effects of drug treatment.",
"Figure 4: Breakdown of MULTISCALE recall based on whether entities in a correctly extracted fact occurred within a single sentence, cross-sentence but within a single paragraph, or only cross-paragraph.",
"There are also instances of bridging anaphora, for example via cell lines.",
"One paper first stated the gene and mutation for a cell line ('The FLT3-inhibitor resistant cells Ba/F3-ITD+691, Ba/F3-ITD+842, . . . , which harbored FLT3-ITD plus F691L, Y842C, . . . mutations. . .'), and later stated the drug effect on the cell line ('E6201 also demonstrated strong anti-proliferative effects in FLT3-inhibitor resistant cells. . . such as Ba/F3-ITD+691, Ba/F3-ITD+842 . . .').",
"As a baseline, we also consider a different document-level strategy where we decompose the n -ary relation into subrelations of lower arity, train independent classifiers for them, then join the subrelation predictions into one for the n -ary relation.",
"We found that with distant supervision, the gene-mutation subrelation classifier was too noisy.",
"Therefore, we focused on training drug-gene and drug-mutation classifiers, and joined each with the rule-based gene-mutation predictions to make ternary predictions.",
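The join step of this baseline can be sketched as follows, with hypothetical entity names:

```python
# Sketch of the decompose-and-join baseline: independent binary classifiers
# predict drug-gene and drug-mutation pairs, which are joined with the
# rule-based gene-mutation predictions to form ternary candidates.
def join_ternary(drug_gene, drug_mutation, gene_mutation):
    """Each argument is a set of predicted pairs; returns joined triples."""
    return {
        (drug, gene, mut)
        for (drug, gene) in drug_gene
        for (d2, mut) in drug_mutation
        if d2 == drug and (gene, mut) in gene_mutation
    }

dg = {("gefitinib", "EGFR")}
dm = {("gefitinib", "L858R"), ("gefitinib", "T790M")}
gm = {("EGFR", "L858R")}  # rule-based gene-mutation predictions
print(join_ternary(dg, dm, gm))
# → {('gefitinib', 'EGFR', 'L858R')}
```

The join also exposes the weakness noted below: a confident drug-gene pair licenses a triple for every joined mutation, whether or not that particular mutation affects the drug response.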
"Table 6 shows the results on CKB.",
"The paragraph-level drug-mutation model is quite competitive, benefiting from the fact that the gene-mutation associations in a document are unique.",
"This is not true for general n-ary relations.",
"Still, it trails MULTISCALE by a large margin in predictive accuracy, and with an even larger gap in the potential upside (i.e., maximum recall).",
"The drug-gene model has higher maximum recall, but much worse precision.",
"This low precision is expected, as it is usually not valid to assume that if a drug and gene interact, then all possible mutations in the gene will have an effect on the drug response.",
"While much higher compared to other systems, the maximum recall for MULTISCALE is still far from 100%.",
"For over 20% of the relations, we can't find all three entities in the document.",
"In many cases, the missing entities are in figures or supplements, beyond the scope of our extraction.",
"Some mutations are indirectly referenced by well-known cell lines.",
"There are also remaining entity linking errors (e.g., due to missing drug synonyms).",
"We next manually analyzed some sample prediction errors.",
"Among 50 false positive errors, we found a significant portion of them were actually true mentions in the paper but were excluded by curators due to additional curation criteria.",
"For example, CKB does not curate a fact if it is only referenced in related work, or if the curators deem the empirical evidence insufficient.",
"This suggests the need for even higher-order relation extraction to cover these aspects.",
"We also inspected 50 sample false negative errors.",
"In 40% of the cases, the textual evidence is vague and requires corroboration from a table or figure.",
"In most of the remaining cases, there is direct textual evidence, though they require cross-paragraph reasoning (e.g., bridging anaphora).",
"While MULTISCALE was able to process such phenomena sometimes, there is clearly much room to improve.",
"semantics by reducing the n-ary relation to n binary relations between the reified relation and its arguments, a.k.a. slot filling.",
"For example, early work on the Message Understanding Conference (MUC) dataset aims to identify event participants in news articles (Chinchor, 1998).",
"More recently, there has been much work in extracting semantic roles for verbs, as in semantic role labeling (Palmer et al., 2010), as well as properties for popular entities, as in Wikipedia Infobox (Wu and Weld, 2007) and TAC KBP.9",
"In biomedicine, the BioNLP Event Extraction Shared Task aims to extract genetic events such as expression and regulation (Kim et al., 2009).",
"These approaches typically assume that the whole document refers to a single coherent event, or require an event anchor (e.g., verb in semantic role labeling and trigger word in event extraction).",
"We instead follow recent work in cross-sentence n -ary relation extraction (Peng et al., 2017; Wang and Poon, 2018; Song et al., 2018), which does not have these restrictions.",
"Document-level relation extraction Most information extraction work focuses on modeling and prediction within sentences (Surdeanu and Ji, 2014).",
"Duan et al. (2017) introduces a pre-trained document embedding to aid event detection, but their extraction is still at the sentence level.",
"Past work on cross-sentence extraction often relies on explicit coreference annotations or the as-sumption of a single event in the document (Wick et al., 2006; Gerber and Chai, 2010; Swampillai and Stevenson, 2011; Yoshikawa et al., 2011; Koch et al., 2014; Yang and Mitchell, 2016).",
"Recently, there has been increasing interest in general cross-sentence relation extraction (Quirk and Poon, 2017; Peng et al., 2017; Wang and Poon, 2018), but their scope is still limited to short text spans of a few consecutive sentences.",
"These methods all extract relations at the mention level, which does not scale to whole documents due to the combinatorial explosion of relation candidates.",
"Wu et al. (2018b) applies manually crafted rules to heavily filter the candidates.",
"We instead adopt an entity-centric approach and combine mention-level representations to create an entity-level representation for extraction.",
"Mintz et al. (2009) aggregates mention-level features into entity-level ones within a document, but they only consider binary relations within single sentences.",
"9 http://www.nist.gov/tac/2016/KBP/ColdStart/index.html",
"Kilicoglu (2016) used hand-crafted features to improve cross-sentence extraction, but they focus on binary relations, and their documents are limited to abstracts, which are substantially shorter than the full-text articles we consider.",
"Verga et al. (2018) applies self-attention to combine the representations of all mention pairs into an entity pair representation, which can be viewed as a special case of our framework.",
"Their work is also limited to binary relations and abstracts, rather than full documents.",
"Multiscale modeling Deep learning on long sequences can benefit from multiscale modeling that accounts for varying scales in the discourse structure.",
"Prior work focuses on generative learning such as language modeling (Chung et al., 2017).",
"We instead apply multiscale modeling to discriminative learning for relation extraction.",
"In addition to modeling various scales of discourse units (sentence, paragraph, document), we also combine mention-level representations into an entity-level one, as well as sub-relations of the n -ary relation.",
"McDonald et al. (2005) learn n-choose-2 pairwise relation classifiers, then construct maximal cliques of related entities, which also bears resemblance to our subrelation modeling.",
"However, our approach incorporates the entire subrelation hierarchy, provides a principled end-to-end learning framework, and extracts relations from the whole document rather than within single sentences.",
"Distant supervision Distant supervision has emerged as a powerful paradigm to generate large but potentially noisy labeled datasets (Craven et al., 1999; Mintz et al., 2009).",
"A common denoising strategy applies multi-instance learning by treating mention-level labels as latent variables (Hoffmann et al., 2011).",
"Noise from distant supervision increases as extraction scope expands beyond single sentences, motivating a variety of indirect supervision approaches (Quirk and Poon, 2017; Peng et al., 2017; Wang and Poon, 2018).",
"Our entity-centric representation and multiscale modeling provide an orthogonal approach to combat noise by combining weak signals spanning various text spans and subrelations.",
"We vastly increase maximum recall by scoring document-level candidates.",
"Meanwhile, we preserve precision with a multiscale approach that combines representations learned across the subrelation hierarchy and text spans of various scales.",
"Our method substantially outperforms prior cross-sentence n -ary relation extraction approaches in the high-value domain of precision oncology.",
"Our document-level view opens opportunities for multimodal learning by integrating information from tables and figures (Wu et al., 2018a).",
"We used the ternary drug-gene-mutation relation as a running example in this paper, but knowledge bases often store additional fields such as effect (sensitive or resistance), cancer type (solid tumor or leukemia), and evidence (human trial or cell line experiment).",
"It is straightforward to apply our method to such higher-order relations.",
"Finally, it will be interesting to validate our approach in a real-world assisted-curation setting, where a machine reading system proposes candidate facts to be verified by human curators.",
"We thank Sara Patterson and Susan Mockus for guidance on precision oncology knowledge curation and CKB data, Hai Wang for help in running experiments with deep probabilistic logic, and Tristan Naumann, Rajesh Rao, Peng Qi, John Hewitt, and the anonymous reviewers for their helpful comments.",
"R.J. is supported in part by an NSF Graduate Research Fellowship under Grant No. DGE-114747."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"method",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"other",
"other",
"other",
"method",
"abstain",
"method",
"result",
"abstain",
"method",
"method",
"objective",
"other",
"other",
"other"
] |
[
"To improve the coherence and knowledge retrieval capabilities of non-task-oriented dialogue systems, recent Transformer-based models aim to integrate fixed background context.",
"This often comes in the form of knowledge graphs, and the integration is done by creating pseudo-utterances through paraphrasing knowledge triples, which are added to the accumulated dialogue context.",
"However, the context length is fixed in these architectures, which restricts how much background or dialogue context can be kept.",
"In this work, we propose a more concise encoding for background context structured in the form of knowledge graphs, by expressing the graph connections through restrictions on the attention weights.",
"The results of our human evaluation show that this encoding reduces space requirements without negative effects on the precision of reproduction of knowledge and perceived consistency.",
"Further, models trained with our proposed context encoding generate dialogues that are judged to be more comprehensive and interesting.",
"Building on the idea of attention-based seq2seq models (Vaswani et al., 2017), recent language models such as BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019) enable neural conversational models to generate responses that appear human-like and engaging (Yu et al., 2019).",
"A closer look, however, reveals that the lack of long-term memory to represent consistent (world) knowledge and personality over multiple speaker turns can lead to incoherent content being generated (Li et al., 2016; Serban et al., 2017).",
"Initiated by the Conversational Intelligence Challenge (Burtsev et al., 2018; Dinan et al., 2020), the research focus therefore shifted towards knowledge-grounded dialogue generation, resulting in first promising approaches using Transformer-based architectures (Dinan et al., 2019; Ghazvininejad et al., 2018; Galetzka et al., 2020).",
"The first two authors contributed equally to this paper.",
"The basic idea of these approaches is to provide the required background knowledge together with the current dialogue context when decoding the next system utterance.",
"As the underlying language model's input sequence length is limited (for instance, to 1024 tokens in the case of GPT-2), the presentation of the background knowledge to the model highly impacts the amount of context information that can be fed into a Transformer network.",
"In these earlier attempts, the knowledge was paraphrased into pseudo-utterances, on a par with the utterances from the dialogue history.",
"In this paper, we show that a structured knowledge representation offers advantages over unstructured text: facts and complex relationships between different entities can be encoded concisely without performance drop in key indicators, such as knowledge correctness, consistency, and interestingness.",
"Chaudhuri et al. (2019) showed the general feasibility of integrating knowledge graphs into domain-specific dialogues.",
"With this work, we integrate arbitrary knowledge graphs into open-domain knowledge-grounded dialogues, preserving the information encoded in their structure.",
"Space Efficient Context Encoding For our proposed encoding, we generate dialogue-specific local knowledge graphs (subgraphs of a background knowledge graph) that capture the information relevant to the dialogue (similar to Chaudhuri et al., 2021).",
"We transform these subgraphs into a concise representation that fits the input sequence encoding for the underlying language model (GPT-2): Labels of the distinct nodes and edges (entities and corresponding relations) are concatenated with the dialogue history.",
"To preserve the graph structure, we adapt the attention mask so that the self-attention layers for each node attend only to nodes connected in the original graph (the attention weight is set to 1 if a connection exists and to 0 otherwise).",
"This resembles the message-passing approach of graph neural networks (Gilmer et al., 2017).",
"Naive concatenation of graph triples has a space complexity of O ( n k ) , with n being the number of triples and k the number of word tokens per verbalized triple.",
"Paraphrasing these triples into pseudo-utterances results in even larger space complexity.",
"Our proposed encoding has a space complexity of O ( l ) , with l being the number of distinct node and edge labels (entities and relations).",
"This reduces the required context space compared to triple concatenation or paraphrasing if entities are repeated in the triples (and hence l < n k ), which can be assumed to be the case in knowledge graphs (see discussion below).",
"The space savings grow with the size and average degree (connectedness) of the graph.",
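A toy calculation illustrates the space argument; the triples are hypothetical, and tokens are approximated by whitespace splitting rather than the GPT-2 subword vocabulary:

```python
# Toy illustration of the space argument: concatenating verbalized triples
# costs O(n*k) tokens, while listing each distinct node and edge label once
# costs O(l). Repeated entities ('Pulp Fiction' below) are what save space.
triples = [
    ("Pulp Fiction", "release year", "1994"),
    ("Pulp Fiction", "has actor", "Bruce Willis"),
    ("Pulp Fiction", "has actor", "Samuel L. Jackson"),
]

# Baseline: every triple is verbalized and concatenated.
concat_tokens = sum(len(f"{s} {r} {o}".split()) for s, r, o in triples)

# Proposed encoding: each distinct node/edge label appears exactly once.
labels = set()
for s, r, o in triples:
    labels.update([s, r, o])
label_tokens = sum(len(lbl.split()) for lbl in labels)

print(concat_tokens, label_tokens)  # → 18 12
```

The gap widens as nodes acquire more edges, matching the observation that savings grow with the graph's average degree.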
"Empirical results with two different knowledge-grounded dialogue datasets confirm our theoretical considerations and show that we can reduce the required space by a factor of up to 3.6.",
"These results imply that we can feed more context information into the model, which should result in higher accuracy.",
"We discuss these results in detail in Section 4.3.",
"Contributions We propose an approach to integrate a concise encoding of knowledge graphs into a Transformer-based decoder architecture for knowledge-grounded dialogue generation.",
"Transformers for natural language generation can be viewed as graph neural networks which use self-attention (Velickovic et al., 2018) for neighborhood aggregation on fully-connected word graphs (Xu et al., 2019).",
"We utilize this relationship and restrict the self-attention weights to match the underlying graph structure.",
"Our comprehensive human evaluation with models trained with the publicly available datasets KOMODIS (Galetzka et al., 2020) and OPENDIALKG (Moon et al., 2019), both providing dialogues enriched with structured knowledge, shows that we can reduce the space requirement for context without negative effects on the precision of reproduction of knowledge and perceived consistency.",
"Moreover, our models generate dialogues that are judged to be more detailed and interesting.",
"For reproducibility, we publish all necessary source code and data ( https://github.com/fabiangal/ space-efficient-context-encoding-acl21 ).",
"Neural conversational models can be categorized into retrieval-based approaches (Lowe et al., 2015; Wu et al., 2017) that choose a next utterance from a set of suitable candidates, and generative approaches (Serban et al., 2016; Wolf et al., 2019; Chaudhuri et al., 2019; Roller et al., 2021) which decode the next utterance token by token out of a fixed vocabulary.",
"The architectures are based on recurrent neural networks such as LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014) cells or self-attention layers (Vaswani et al., 2017) in sequence-to-sequence structures.",
"To integrate knowledge in addition to the dialogue history these models can be augmented by additional recurrent cells to encode the knowledge into a fixed-sized vector representation (Young et al., 2018; Parthasarathi and Pineau, 2020; Ghazvininejad et al., 2018).",
"This can be traced back to first end-to-end approaches reading documents for question-answering (Miller et al., 2016) or more general sequential data (Sukhbaatar et al., 2015).",
"He et al. (2017) embedded knowledge graphs (stored as triples) with LSTM cells and message-passing, and then used a decoder LSTM to generate a suitable answer.",
"Long et al. (2017) used a CNN architecture to encode external knowledge instead.",
"The recent success of unsupervised pre-trained language generation models such as GPT-2 yielded a variety of conversational models using self-attention based on the idea of fine-tuning the models with specific knowledge-grounded dialogue datasets (which we will discuss in Section 3).",
"These models concatenate the additional context information as plain text to the input sequence (Zhang et al., 2018; Dinan et al., 2019; Galetzka et al., 2020).",
"To differentiate context from dialogue, additional tokens are learned during fine-tuning and added to the word tokens.",
"For bigger knowledge graphs, the limitation of the input sequence length of these models makes an information retrieval system necessary to estimate a small subset of relevant information that can be fed into the model.",
"The increasing availability of conversational content on social media platforms such as Twitter or Reddit led to the construction of many dialogue datasets, with Open-Subtitles (Vinyals and Le, 2015) and Twitter-Corpus (Sordoni et al., 2015) being some popular examples (see also Ritter et al., 2010; Duplessis et al., 2016).",
"Some recently published datasets emphasize knowledgeable dialogues by integrating external information sources.",
"The objective is to create models that generate consistent dialogues with a high knowledge retrieval accuracy (utilizing information from user profiles or knowledge graphs).",
"Dinan et al. (2019) released the Wizard of Wikipedia dataset with over 22k open-domain dialogues.",
"In each dialogue, one participant is playing the wizard, i.e., an expert who is presented with potentially interesting and relevant Wikipedia article excerpts, while the chat partner is the curious apprentice.",
"The textual knowledge passages that were shown to the wizard are part of the dataset.",
"The PERSONA-CHAT dataset (Zhang et al., 2018) contains over 10k dialogues that are conditioned on profile information ( personas ), which ranges from hobbies or favorite food to family background.",
"The information is shown to the participants as a set of sentences and they are tasked to integrate them into the dialogues.",
"In addition, the dataset contains revised personas, which are rephrased, generalized, or specialized versions of the original personas.",
"We use two publicly available human/human multiturn dialogue datasets that use structured background knowledge.",
"KOMODIS (Galetzka et al., 2020) is a closed-domain dataset with dialogues between human participants that were tasked to chit-chat about one given movie and use provided information about it.",
"This information includes facts about the film, such as release year or shot location ('Movie was shot in Canada.' or 'The release year is 1995.'), free text containing plot or trivia related to the film crew and cast, and opinions towards the facts and entities ('I agree with the age restriction.' or 'I don't like Bruce Willis.').",
"The dataset contains over 7,500 conversations with an average of 13.8 utterances per dialogue.",
"OpenDialKG (Moon et al., 2019) is an open-domain dataset containing 15K dialogues, which were collected in a Wizard-of-Oz setup, by connecting two human participants that were tasked to have an engaging dialogue about a given topic.",
"Each dialogue is paired with its corresponding KG paths from Freebase (Bollacker et al., 2007) (connecting entities and relations mentioned in the dialogue).",
"For our experiments with different encoding strategies, we restructure the context information provided by both datasets into dialogue-specific subgraphs.",
"Figure 1 illustrates an example of an (incomplete) subgraph that belongs to a dialogue from KOMODIS .",
"The inner subgraph containing the two green entity nodes 'Pulp Fiction' and 'Bruce Willis', and corresponding attribute nodes (blue), marked as depth 0 , represents the information on which one particular dialogue was based.",
"To test the limits of the capacity for representing knowledge, we also experiment with expanded subgraphs (depths 1 and 2 in the figure) by including information from external knowledge sources (IMDb for KOMODIS , and Freebase for OPENDIALKG).",
"For instance, Pulp Fiction also has Samuel L. Jackson as an actor (depth 1) who also stars in Goodfellas (depth 2).",
"This way, the subgraph depth directly reflects the hop distance from the entities in the core subgraph.",
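The depth-based expansion can be sketched as a breadth-first traversal; the edges and seed entities here are illustrative:

```python
from collections import deque

# Sketch of expanding a core subgraph by hop distance: depth d keeps every
# node within d hops of the seed entities, mirroring the depth-0/1/2 setup.
def expand_subgraph(edges, seeds, depth):
    """edges: iterable of (u, v) pairs; returns nodes within `depth` hops."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue  # do not expand beyond the requested hop distance
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen

edges = [("Pulp Fiction", "Samuel L. Jackson"),
         ("Samuel L. Jackson", "Goodfellas")]
print(expand_subgraph(edges, {"Pulp Fiction"}, 1))
print(expand_subgraph(edges, {"Pulp Fiction"}, 2))
```

Depth 1 reaches Samuel L. Jackson; depth 2 additionally pulls in Goodfellas, as in the example above.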
"For subgraphs of depth 2, we restrict some attributes and entities to prevent the subgraphs from exploding in size, which would make them unlikely to fit into GPT-2.",
"For example, we do not add trivia information that is not already in the dialogues, and we limit additional actors per movie to three.",
"In contrast to OPENDIALKG, the dialogues in KOMODIS are about one main entity (here, the movie) each.",
"To better compare the experiments across datasets, we create two versions of depth 1 for KOMODIS , where depth 1b includes a second movie that is related to the first movie (e.g. by an actor).",
"This version is then used to create the subgraph of depth 2.",
"For all experiments, we use the GPT-2 model proposed by Radford et al. (2019), which is commonly used in Transformer-based dialogue generation for English.",
"The authors published four different sized variations.",
"We use the model with 117 million parameters, 12 self-attention layers, and 768-dimensional word embeddings.",
"The model has 12 heads per attention layer and 3072 nodes in all feed-forward layers.",
"Our architecture is visualized in Figure 3. A knowledge estimator creates a subgraph from the available knowledge graphs for both datasets based on the dialogue history and converts it using our encoding.",
"Then, the dialogue history and encoded context sequences are concatenated and fed into the GPT-2 model.",
"For training, we optimize model weights from GPT-2 by minimizing the negative log-likelihood for next-token prediction.",
"Training details are listed in Appendix B.",
"Figure 2 shows the general encoding strategy that we propose.",
"Similar to our previous approach (Galetzka et al., 2020) and Wolf et al. (2019), we use three layers of input embeddings for words, segments and positions.",
"But instead of concatenating paraphrased triples (e.g. ⟨'Pulp Fiction', 'is a', 'movie'⟩, ⟨'Pulp Fiction', 'release year', '1994'⟩), we convert the graph into unique entity-relation pairs (e.g. ⟨'Pulp Fiction', 'movie'⟩, ⟨'1994', 'release year'⟩ in the leftmost part of the figure) and concatenate them with the dialogue history (middle part of the figure).",
"In previous work, the segments layer distinguished context and different speakers.",
"We experiment with two different encoding strategies, utilizing the segments layer in other ways.",
"Figure 4 illustrates both encoding strategies.",
"In the series encoding (upper half of the figure), relation and entity tokens are sequenced in a series and added to the words layer.",
"Two new tokens (⟨entity⟩ and ⟨relation⟩) differentiate between relations and entities in the segments layer.",
"In the parallel encoding, entity tokens are added to the words layer and the corresponding relations to the segments layer, thus in parallel.",
"Padding tokens are used to align the length between the two layers.",
"This encoding via a segments layer reduces the space requirements compared to paraphrasing, as repeating tokens occur only once, but on its own loses information encoded in the graph structure (node-edge connections).",
"To preserve this structure information, we create and add a per-graph attention mask to all hidden layers.",
"Given an input sequence S, the hidden state h_i^l of the i'th token at layer l in the GPT-2 model can be computed by: h_i^l = \\sum_{j \\in S} w_{ij} (V^{l-1} h_j^{l-1}) (1), where w_{ij} = softmax_j(m_j + Q^{l-1} h_i^{l-1} \\cdot K^{l-1} h_j^{l-1}) (2), with learnable weights K, Q, and V.",
"Equation 1 is similar to message-passing algorithms (Duvenaud et al., 2015; Li et al., 2016; Gilmer et al., 2017), where a new hidden state for a graph node is computed by an arbitrary function of all previous hidden states of connected nodes.",
"Our attention masks m j are added as shown in Equation 2 so that entity and relation tokens can only attend to tokens from their neighboring nodes.",
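A minimal scalar sketch of the masked attention in Equation 2 follows; hidden states are reduced to scalars and the learned projections are omitted, so this only illustrates how the additive mask zeroes out unconnected pairs:

```python
import math

# Minimal single-head sketch of Equation 2's masked attention: unconnected
# node pairs receive an additive mask of -inf, so their softmax weight is
# exactly zero and each token only aggregates states from its graph neighbors.
def masked_attention_weights(queries, keys, adjacency):
    weights = []
    for i, q in enumerate(queries):
        logits = [
            q * k if adjacency[i][j] else float("-inf")
            for j, k in enumerate(keys)
        ]
        m = max(logits)  # stabilize the softmax
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        weights.append([e / z for e in exps])
    return weights

# 'Bruce Willis' (0) attends to the trivia node (1) but not to '1994' (2)
adjacency = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]
w = masked_attention_weights([1.0, 1.0, 1.0], [0.5, 0.5, 0.5], adjacency)
print(w[0])  # → [0.5, 0.5, 0.0]: weight on the unconnected node is zero
```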
"This attention masking was originally used to mask out future tokens (setting m_{i,j} for all j > i to the masking value).",
"Figure 5 illustrates the concept with an attention mask of the graph example from Figure 1.",
"Here, the node 'Bruce Willis' (blue) is not connected with the release year '1994'.",
"Thus, the attention weights are masked out with zeros.",
"It is, however, connected with the trivia information 'Worked on the movie for only 18 days', and these attention weights are not masked (ones).",
"Although entities and relations from the knowledge graph are position invariant within S , the word order still matters.",
"Therefore, we keep the positional encoding of the model but shuffle the knowledge graph nodes and relations for each training sample to facilitate order invariance of the graph encoding.",
"Figure 6 shows the growth of the number of required context tokens when the graph size is increased (and hence, more knowledge is provided to the model), for different encoding types.",
"The baselines are paraphrase-based encodings, where base-triples is the concatenation of the triples ('Pulp Fiction release year 1994') and base-paraphrased the verbalized paraphrase ('The movie Pulp Fiction was released in 1994').",
"For OPENDIALKG, no paraphrased version is available.",
"For both datasets, the average number of tokens increases with the graph depth and the average number of nodes and relations for all encodings, as expected.",
"However, it grows much slower in the case of our proposed encodings.",
"The increase of required tokens for OPENDIALKG is steeper than for KOMODIS , due to the different structure of the dialogue context and the underlying knowledge graphs.",
"(Figure 6: Average number of context tokens in the input sequence for different encodings and knowledge graph depths; KOMODIS from left: d0, d1a, d1b, d2; OPENDIALKG from left: d0, d1, d2.)",
"The context graph for OPENDIALKG is initially rather small and increases very fast with more hops.",
"Further, the KOMODIS context graph contains information about plot and trivia, which are normally longer strings that belong to one entity, thus the benefit of series-encoding ( series-enc ) and parallel-encoding ( parallel-enc ) regarding this information is rather small compared to the baselines.",
"In conclusion, the sequence-length reduction correlates with the average number of edges per node.",
"The series-enc is between 14% and 30% longer than the parallel-enc , due to representing relation labels within the segments instead of word embeddings (as shown in Figure 4).",
"We trained 25 models with both datasets with series-encoding , parallel-encoding , base-triples and base-paraphrased (only KOMODIS ) and with graph depths d0 , d1 and d2 .",
"As we were also interested in investigating the effect of different decoding strategies, we used beam-search and top-k-sampling when generating the dialogues.",
"The evaluation dialogues were created by four colleagues (who were not involved in the creation of the models and did not know what the innovation was) interacting with the models.",
"In sum, we created 500 dialogues.",
"At training time, we use perplexity on the validation subset as the stopping criterion.",
"Table 3 lists the results for all models estimated on the test set.",
"Base-triples (baseline) models reach the lowest perplexity, and an increasing graph depth increases perplexity; this is reasonable, since the format of the baseline encodings resembles the pre-training data of the GPT-2 model the most.",
"This correlation is stronger for OPENDIALKG models.",
"In our experiments, perplexity does not correlate with naturalness (estimated by the human evaluators), indicating that this metric is insufficient to evaluate the overall dialogue quality, in particular across models trained on different datasets.",
"In information retrieval, precision/recall and F1 score are typically chosen as automated metrics to evaluate the retrieval capabilities of a system.",
"In our case, we are interested in the ability of reproducing entities and relations from the knowledge graph.",
"Commonly used word-overlap metrics, such as ROUGE-N/ROUGE-L (Lin, 2004) or BLEU (Papineni et al., 2001), are not capable of measuring these.",
"Evaluating precision and recall requires precise co-reference resolution, parse tree annotation and question labelling (e.g. entities, relations, intent).",
"Assume the following background facts and generated utterance: ⟨'Pulp Fiction', 'has genre', 'crime'⟩, ⟨'comedy', 'bot opinion', 'I like'⟩; 'It is a crime movie, but I am more interested in comedy films.' Without knowing the intent of the previous utterance (asking for the genre of 'Pulp Fiction' vs. asking for the preferred genre), we cannot determine whether the occurrences of crime and comedy are true or false positives with respect to the underlying knowledge graph.",
"Moreover, in a conversation, knowledge is not only used when answering a question but also proactively ('Did you know that Bruce Willis worked on the movie for only 18 days?').",
"(Table 1: Example of a dialogue from the series-enc-d1 model trained on KOMODIS; each model utterance is paired with its dialogue context and inference label. Model: 'have you seen ghost, the musical with brad pitt?' [genre: drama -> neutral]. Human: 'no, but is the genre really a musical?' Model: 'yes, it's a crime and drama movie.' [genre: drama -> entailment]. Human: 'nice. what is the age certificate?' Model: 'the age restriction is 12. i think that's a good age to watch it.' [age certificate: 12 -> entailment; age certificate: agree -> entailment]. Human: 'and do you know the budget?' Model: 'yes, it was $24,000,000. i think that's a lot of money for a movie that came out in 1990.' [budget: 22$ million -> contradiction; year: 1990 -> entailment].)",
"However, the resources that we use offer no such annotations and to the best of our knowledge, no published dataset does.",
"Without it, automated metrics don't work well.",
"To tackle these challenges, we put our effort into a comprehensive human evaluation and annotation, which is described in the next section.",
"Participants: The evaluation study was managed by researchers not involved in setting up the models and experiments.",
"They recruited 20 participants not familiar with our research and the goals of the study.",
"Demographic data is given in Appendix A. Participants were paid for their effort.",
"Materials: To keep the number of assessed dialogues manageable, we limited the number of experiments and did not test all possible variations of the factors described in Section 5.",
"We prepared three series of experiments, aimed at evaluating the influence of decoding algorithms , encoding strategies and graph depths .",
"Early samples indicated that beam-search generates more precise dialogues regarding context.",
"We, therefore, decided to evaluate the decoding algorithm series beforehand.",
"As shown in Section 6.2, our hypothesis proved to be correct, so the other two series of experiments were done with beam-search only.",
"Procedure: All participants were instructed before and supervised during the study to ensure their understanding of the metrics.",
"They were given a participant-specific questionnaire with the human/chatbot dialogues and had to perform three tasks.",
"First, mark utterances that either entail (correct use) or contradict (wrong use) the dialogue context.",
"Based on these annotations we measure the model's knowledge retrieval ability as the ratio between entailing utterances and the sum of entailing and contradicting utterances ( precision ).",
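The ratio described above can be computed directly; the following trivial helper (hypothetical naming) makes the definition explicit:

```python
def knowledge_precision(n_entail, n_contradict):
    """Knowledge precision: ratio of entailing utterances to all annotated
    utterances (entailing + contradicting), as defined in the text."""
    total = n_entail + n_contradict
    return n_entail / total if total else 0.0
```

For example, 69 entailing and 31 contradicting utterances yield 0.69, the scale on which the reported scores lie.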
"Second, rate the dialogues with the following statements for agreement on a 7-point Likert scale: (1) Person B sounds natural.",
"(2) Person B sounds consistent.",
"(3) Person B sounds interesting.",
"Person B is always a model, Person A a human.",
"Last, choose between two dialogues by answering: 'To which Person B would you prefer to talk?'",
"Additionally, the participants could briefly justify their decision.",
"An example questionnaire can be found in Appendix A.",
"6.2 Results and Discussion",
"Decoding: Table 2 shows the results for beam-search and top-k-sampling decoding.",
"Knowledge precision is better with beam-search for all models, while dialogues generated with top-k-sampling are considered more natural, less self-contradicting, and less repetitive.",
"N-gram filtering reduces repetition in beam-search decoding, but repetition could not be avoided completely.",
"Decoding with top-k-sampling more often includes wrong entity nouns when estimating the best next tokens, which are then selected by the algorithm.",
"(Table 2: Human evaluation results for beam-search and top-k-sampling, with respect to the correct reproduction of dialogue context. Knowledge precision (base-triples / series-enc-d1) and naturalness (base-triples / series-enc-d1): KOMODIS beam-search: 0.69 / 0.74 and 5.0 (1.5) / 4.8 (1.6); KOMODIS top-k-sampling: 0.52 / 0.56 and 5.9 (1.2) / 5.9 (1.3); OPENDIALKG beam-search: 0.73 / 0.70 and 4.0 (1.6) / 3.4 (1.5); OPENDIALKG top-k-sampling: 0.54 / 0.45 and 5.3 (1.4) / 5.4 (1.3).)",
"In this work, we emphasize the model's ability to integrate additional dialogue context correctly.",
"Here, models with beam-search perform significantly better.",
"Thus, our further evaluation focuses on beam-search.",
"Graph Encoding: The results with series and parallel graph encodings are shown in Table 3 and compared against the baselines.",
"Within each dataset, all models perform similarly regarding knowledge precision.",
"Due to the high standard deviation on the agreements, the difference between the models is statistically insignificant.",
"Our graph encoding approach reduces the required input sequence length by a factor of up to 3.6 and still achieves the same quality of knowledge reproduction, consistency, and naturalness as the baselines.",
"Further, the direct dialogue comparison (win ratio) indicates more comprehensive and interesting utterances for KOMODIS .",
"Dialogue preference correlates highest with interestingness and non-existence of contradicting statements.",
"The most common reasons given by participants, in no specific order, are 'longer and more comprehensive utterances', 'more interesting', 'asks counter questions', and 'more pleasant'.",
"The OPENDIALKG models perform worse in general but show similar results between the different encodings.",
"Both datasets have similar sizes, but OPENDIALKG is not limited to the movie domain, which makes it harder to train on compared to KOMODIS.",
"Series vs. Parallel Encoding: As a quick summary, the segments layer encodes the typing of the word tokens (from the words layer).",
"The intuition behind it is that the model learns the meaning of the words instead of the word distribution alone.",
"For the series encoding, we encode the types generically as either entity or relation.",
"For the parallel encoding, we use the actual typing from the underlying knowledge graph, such as movie, actor, or release year (Section 4.2).",
"We had two objectives.",
"First, reducing the required context space even further (which we achieved, see Figure 6).",
"Second, analyzing if this improves the accuracy.",
"The results show that parallel encoding performs slightly worse than series encoding.",
"We assume that this is due to the lack of training data, which is particularly evident for OPENDIALKG, which has many more entity and relation types than KOMODIS, i.e., fewer samples per type.",
"Graph Depth: Results for training with different context lengths with KOMODIS are shown in Table 4.",
"All metrics correlate with increasing graph depth (with one outlier: opinion precision at d = 1).",
"Results for d = 2 , however, are statistically not significantly higher than for d = 1 .",
"A bigger subgraph leads to more difficult training data, as the model has more options to choose from.",
"The same results could not be reproduced for OPENDIALKG.",
"This dataset was created for graph generation based on dialogues.",
"However, the dialogue structure is different due to the recommendation task of the data collection.",
"Most entities in these dialogues (e.g., persons, books, movies) are exchangeable ('Can you recommend me a crime book similar to X?', 'Can you recommend me a crime movie similar to Y?') and are therefore not mandatory for a correct and consistent dialogue.",
"Adding more of these entities did not help to determine a correct next entity, as all entities of the same type could be used correctly by the model.",
"Effectiveness of Graph Attention Masking: Graph masking encodes the relationships between the entities.",
"We hypothesize that dropping these relationships will lead to an information gap, particularly for bigger subgraphs due to more entities that are not represented (well) in the training data.",
"Table 5 shows the results from an early evaluation phase for KOMODIS and OPENDIALKG with graph depth 1 and 2 without graph masking.",
"The dialogues are significantly worse, in particular in terms of reproducing entities correctly for graph depth 2, which supports our hypothesis.",
"As our resources were limited, we had to reduce the number of models for a thorough human evaluation and thus decided to not pursue this approach any longer.",
"We proposed a new and concise encoding for knowledge triples from a knowledge graph, which can be integrated into a Transformer architecture for consistent non-goal-driven dialogue generation.",
"Our encoding reduces the context length by avoiding the repetition that arises when whole triples are concatenated with the dialogue history.",
"By manipulating self-attention layers to reflect connections between nodes in the graphs, we preserve the graph structure.",
"The evaluation results show that our encoding reduces space requirements without negative effects on the precision of knowledge reproduction and perceived consistency.",
"For reproducibility, we publish the source code and data.",
"We thank our colleagues from the Digital Assistant for Mobility team at the Volkswagen Group Innovation Europe for their support in preparing the human evaluation."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"result",
"result",
"objective",
"abstain",
"method",
"result",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"objective",
"abstain",
"method",
"abstain",
"result",
"abstain",
"other"
] |
[
"Ashutosh Modi 1; 1 Indian Institute of Technology Kanpur (IIT-K), 2 Indian Institute of Science Education and Research Kolkata (IISER-K), 3 West Bengal National University of Juridical Sciences (WBNUJS)",
"{vijitvm,rsan,sknigam}@iitk.ac.in, [email protected], [email protected], {arnabb,ashutoshm}@cse.iitk.ac.in",
"Abstract: An automated system that could assist a judge in predicting the outcome of a case would help expedite the judicial process.",
"For such a system to be practically useful, predictions by the system should be explainable.",
"To promote research in developing such a system, we introduce ILDC (Indian Legal Documents Corpus) .",
"ILDC is a large corpus of 35k Indian Supreme Court cases annotated with original court decisions.",
"A portion of the corpus (a separate test set) is annotated with gold standard explanations by legal experts.",
"Based on ILDC, we propose the task of Court Judgment Prediction and Explanation (CJPE).",
"The task requires an automated system to predict an explainable outcome of a case.",
"We experiment with a battery of baseline models for case predictions and propose a hierarchical occlusion based model for explainability.",
"Our best prediction model has an accuracy of 78% versus 94% for human legal experts, pointing towards the complexity of the prediction task.",
"The analysis of explanations by the proposed algorithm reveals a significant difference in the point of view of the algorithm and legal experts for explaining the judgments, pointing towards scope for future research.",
"In many highly populated countries like India, there is a vast backlog of pending legal cases that impedes the judicial process (Katju, 2019).",
"The backlog is due to multiple factors, including the unavailability of competent judges.",
"Therefore, a system capable of assisting a judge by suggesting the outcome of an ongoing court case is likely to be useful for expediting the judicial process.",
"However, an automated decision system is not tenable in law unless it is well explained in terms of how humans understand the legal process.",
"Hence, it is necessary to explain the suggestion.",
"In other words, we would like such a system to predict not only what should be the final decision of a court case but also how one arrives at that decision.",
"In this paper, we introduce the INDIAN LEGAL DOCUMENTS CORPUS (ILDC), intending to promote research in developing a system that could assist in legal case judgment prediction in an explainable way.",
"ILDC is a corpus of case proceedings from the Supreme Court of India (SCI) that are annotated with original court decisions.",
"A portion of ILDC (i.e., a separate test set) is additionally annotated with gold standard judgment decision explanations by legal experts to evaluate how well the judgment prediction algorithms explain themselves.",
"Based on ILDC, we propose a new task: COURT JUDGMENT PREDICTION AND EXPLANATION (CJPE).",
"This task aims to predict the final decision given all the facts and arguments of the case and provide an explanation for the predicted decision.",
"The decision can be either allowed , which indicates ruling in favor of the appellant/petitioner, or dismissed , which indicates a ruling in favor of the respondent.",
"The explanations in the CJPE task refer to sentences/phrases in the case description that best justify the final decision.",
"Since we mainly address SCI cases, one might argue that the usefulness of the task is limited, because legislative provisions can change over time.",
"However, the legal principles of how to apply a given law to a given set of facts remain constant for prolonged periods.",
"Judgment prediction and explanation in the CJPE task are far more challenging than a standard text-classification task for multiple reasons.",
"Firstly, the legal court case documents (especially in Indian context) are unstructured and are usually quite long, verbose, and noisy.",
"There is no easy way of extracting and directly using the facts and arguments.",
"Secondly, the domain-specific lexicon used in court cases makes models pre-trained on generally available texts ineffective on such documents.",
"Consequently, the standard models need to be adapted to the legal domain for the proposed judgment prediction on court cases.",
"Thirdly, explaining prediction in legal documents is considerably more challenging as it requires understanding the facts, following the arguments and applying legal rules, and principles to arrive at the final decision.",
"Our main contributions can be summarized as: 1. We create a new corpus, the INDIAN LEGAL DOCUMENTS CORPUS (ILDC), annotated with court decisions.",
"A portion of the corpus (i.e. a separate test set) is additionally annotated with explanations corresponding to the court decisions.",
"We perform detailed case studies on the corpus to understand differences in prediction and explanation annotations by legal experts, indicative of the computational challenges of modeling the data.",
"2. We introduce a new task, COURT JUDGMENT PREDICTION AND EXPLANATION (CJPE), with two sub-tasks:",
"(a) Court Judgment Prediction (CJP) and",
"(b) Explanation of the Prediction.",
"While CJP is not a novel task per se, in combination with the explanation part, the CJPE task is new.",
"Moreover, the requirement for explanations also puts restrictions on the type of techniques that could be tried for CJP.",
"In the CJPE task, gold explanations are not provided in the train set; the task expects that the trained algorithms should explain the predictions without requiring additional information in the form of annotations during training.",
"3. We develop a battery of baseline models for the CJPE task.",
"We perform extensive experimentation with state-of-the-art machine learning algorithms for the judgment prediction task.",
"We develop a new method for explaining machine predictions since none of the existing methods could be readily applied in our setting.",
"We compare model explainability results with annotations by legal experts, showing significant differences between the point of view of algorithms and experts.",
"ILDC is introduced to promote the development of a system/models that will augment humans and not replace them.",
"We have covered the ethical considerations in the paper.",
"Nevertheless, the community needs to pursue more research in this regard to fully understand the unforeseen social implications of such models.",
"This paper takes initial steps by introducing the corpus and baseline models to the community.",
"Moreover, we plan to continue to grow, revise and upgrade ILDC.",
"We release the ILDC and code for the prediction and explanation models via GitHub 1 .",
"There has been extensive research on legal domain text, and various corpora and tasks have been proposed e.g., prior case retrieval (Jackson et al., 2003), summarization (Tran et al., 2019; Bhattacharya et al., 2019a), catchphrase extraction (Galgani et al., 2012), crime classification (Wang et al., 2019), and judgment prediction (Zhong et al., 2020).",
"Why ILDC?",
"The task of Legal Judgment Prediction (LJP) and its corresponding corpora (Chalkidis et al., 2019; Zhong et al., 2020; Yang et al., 2019a; Xiao et al., 2018) are related to our setting.",
"In the LJP task, given the facts of a case, violations , charges (e.g., theft) and terms of penalty are predicted.",
"However, the ILDC and the CJPE task introduced in this paper differ from the existing LJP corpora and task in multiple ways.",
"Firstly, we require prediction algorithms to explain the decisions in the CJPE task; to evaluate the explanations, we provide a separate test set annotated with gold explanations.",
"Secondly, in the LJP task, typically, the facts of a case are explicitly provided.",
"However, in our case, only unannotated unstructured documents are provided.",
"ILDC addresses a more realistic/practical setting, and consequently, CJPE is a much more challenging task.",
"Moreover, the bare facts do not form the judgment premise of a case since facts are subject to interpretations.",
"A court case description, in practice, has other vital aspects like Ruling by Lower Court , Arguments , Statutes , Precedents , and Ratio of the decision (Bhattacharya et al., 2019b) that are instrumental in decision making by the judge(s).",
"Unlike LJP, we consider (along with the facts) the entire case (except the judgment), and we predict the judgment only.",
"Work by Strickson and de la Iglesia (2020) comes close to our setting, where the authors prepared the test set on UK court cases by removing the final decision from rulings and employed classical machine learning models.",
"(Footnote 1: https://github.com/Exploration-Lab/CJPE)",
"Thirdly, to the best of our knowledge, we are the first to create the largest legal corpus (34,816 documents) for the Indian setting.",
"It is important because India has roots in the common law system and case decisions are not strictly as per the statute law, with the judiciary having the discretion to interpret their version of the legal provisions as applicable to the case at hand; this can sometimes make the decision process subjective.",
"Fourth, we do not focus on any particular class of cases (e.g., criminal, civil) but address publicly available generic SCI case documents.",
"Xiao et al. (2018) released the Chinese AI and Law challenge dataset (CAIL2018) in Chinese for judgment prediction, which contains more than 2.68 million criminal cases published by the Supreme People's Court of China.",
"Chalkidis et al. (2019) released an English legal judgment prediction dataset containing 11,478 cases from the European Court of Human Rights (ECHR).",
"It contains facts, articles violated (if any), and an importance score for each case.",
"ILDC contrasts with the existing LJP corpora, where mainly the civil law system and cases are considered.",
"Though the proposed corpus focuses on Indian cases, our analysis reveals (Section 4.2) that the language used in the cases is quite challenging to process computationally and provides a good playground for developing realistic legal text understanding systems.",
"Several different approaches and corpora have been proposed for the LJP task.",
"Chalkidis et al. (2019) proposed a hierarchical version of BERT (Devlin et al., 2019) to alleviate BERT's input token count limitation for the LJP task.",
"Yang et al. (2019a) applied Multi-Perspective Bi-Feedback Network for predicting the relevant law articles, charges, and terms of penalty on Chinese AI and Law challenge (CAIL2018) datasets.",
"Xu et al. (2020) proposed a system for distinguishing confusing law articles in the LJP task.",
"Zhong et al. (2018) applied topological multi-task learning on a directed acyclic graph to predict charges like theft, traffic violation, intentional homicide on three Chinese datasets (CJO, PKU, and CAIL).",
"Luo et al. (2017) proposed an attention-based model to predict the charges given the facts of the case along with the relevant articles on a dataset of Criminal Law of the People's Republic of China.",
"Hu et al. (2018) used an attribute-attentive model in a few-shot setup for charge prediction from facts of the case.",
"Long et al. (2019) predict the decision of the case using a Legal Reading Comprehension technique on a Chinese dataset.",
"(Table 1: ILDC Statistics. Corpus (avg. tokens): number of docs (accepted class %) for Train / Validation / Test. ILDC multi (3231): 32,305 (41.43%) / 994 (50%) / 1,517 (50.23%). ILDC single (3884): 5,082 (38.08%). ILDC expert (2894): 56 (51.78%).)",
"Chen et al. (2019) used a deep gating network for prison term prediction, given the facts and charges on a dataset constructed from documents of the Supreme People's Court of China.",
"Aletras et al. (2016) used linear SVM to predict violations from facts on European Court of Human Rights cases.",
"Sulea et al. (2017) used SVM in the LJP task on French Supreme Court cases.",
"Katz et al. (2017) presented a random forest model to predict the Reverse, Affirm, and Other decisions of US Supreme Court judges.",
"We also experiment with some of these models as baselines for the CJPE task (Section 5).",
"Explainability in a system is of paramount importance in the legal domain.",
"Zhong et al. (2020) presented a QA based model using reinforcement learning for explainable LJP task on three Chinese datasets (CJO, PKU, and CAIL).",
"The model aims to predict the appropriate crime by asking relevant questions related to the facts of the case.",
"Jiang et al. (2018) used a rationale augmented classification model for the charge prediction task.",
"The model selects as rationale the relevant textual portions in the fact description.",
"Ye et al. (2018) used label-conditioned Seq2Seq model for charge prediction on Chinese legal documents, and the interpretation comprise the selection of the relevant rationales in the text for the charge.",
"We develop an explainability model based on the occlusion method (Section 5.2).",
"In this paper, we introduce the INDIAN LEGAL DOCUMENTS CORPUS (ILDC), a collection of case proceedings (in the English language) from the Supreme Court of India (SCI).",
"For a case filed at the SCI, a judge decides (accepted vs. rejected) between the appellant/petitioner and the respondent while taking into account the facts of the case, rulings by lower court(s), if any, arguments, statutes, and precedents.",
"For every case filed in the Supreme Court of India (SCI), the judge (or a bench) decides on whether the claim(s) filed by the appellant/petitioner against the respondent should be accepted or rejected.",
"The decision is relative to the appellant.",
"In ILDC, each case proceeding document is labeled with the original decision made by the judge(s) of the SCI, which serves as the gold label.",
"In addition to the ground-truth decision, documents in a separate test set are annotated (by legal experts) with explanations that led to the decision.",
"The explanation annotations are ranked in order of importance.",
"ILDC Creation.",
"We extracted all the publicly available SCI case proceedings from the year 1947 to April 2020 from the website https://indiankanoon.org .",
"Case proceedings are unstructured documents with different formats and sizes, and they contain spelling mistakes (since they are typed during the court hearing), making them challenging to (pre-)process.",
"We used regular expressions to remove the noisy text and meta-information (e.g., initial portions of the document containing case number, judge name, dates, and other meta information) from the proceedings.",
"In practice, as pointed out by the legal experts, the judge deciding the case and other meta information influence the final decision.",
"In SCI case proceedings, the decisions are written towards the end of the document.",
"These end section(s) directly stating the decision have been deleted from the documents in ILDC since that is what we aim to predict.",
"Each case's actual decision label has been extracted from the deleted end sections of the proceeding using regular expressions.",
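The exact regular expressions are not given in the text; as a purely hypothetical sketch, label extraction from a deleted end section could look like the following (the patterns and function name are my assumptions, not the authors' code):

```python
import re

# Hypothetical patterns; the paper's actual regular expressions are not given.
ACCEPT = re.compile(r"\b(appeal|petition)s?\s+(is|are)\s+allowed\b", re.I)
REJECT = re.compile(r"\b(appeal|petition)s?\s+(is|are)\s+dismissed\b", re.I)

def extract_label(decision_text):
    """Return 1 (accepted), 0 (rejected), or None if no pattern matches."""
    if ACCEPT.search(decision_text):
        return 1
    if REJECT.search(decision_text):
        return 0
    return None
```

In practice, many more surface variants would be needed; documents matching no pattern would require manual inspection.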
"Another challenge with SCI case proceedings is the presence of cases with multiple petitions where, in a single case, multiple petitions have been filed by the appellant leading to multiple decisions.",
"Consequently, we divided ILDC documents into two sets.",
"The first set, called ILDC single, contains documents that either have a single petition (and thus a single decision) or multiple petitions whose decisions are all the same.",
"The second set, called ILDC multi, is a superset of ILDC single and additionally contains documents with multiple appeals leading to different decisions.",
"Predicting multiple different decisions for cases with multiple appeals is significantly challenging.",
"In this paper, we do not develop any baseline computational models for this setting; we plan to address this in future work.",
"(Footnote 2: Although IndianKanoon includes lower court cases as well, they do not have a common structural format, and many of the case documents in lower courts may be in a regional Indian language. Hence, for now, we only use SCI documents.)",
"For the computational models for the CJPE task, in the case of ILDC multi, even if a single appeal was accepted in a case having multiple appeals/petitions, we assigned it the label accepted.",
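This labelling rule for multi-appeal cases amounts to a disjunction over per-appeal decisions; a one-line sketch (hypothetical helper name):

```python
def case_label(appeal_decisions):
    """ILDC multi rule: a case counts as 'accepted' if at least one of its
    appeals/petitions was accepted, otherwise 'rejected'."""
    if any(d == "accepted" for d in appeal_decisions):
        return "accepted"
    return "rejected"
```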
"Table 1 shows the corpus statistics for ILDC.",
"Note that the validation and test sets are the same for both ILDC multi and ILDC single .",
"Temporal Aspect.",
"The corpus is randomly divided into train, validation, and test sets, with the restriction that validation and test sets should be balanced w.r.t. the decisions.",
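The balanced random split described above can be sketched as follows. The function name, seed, and exact selection order are illustrative assumptions, not the authors' procedure.

```python
import random

def split_balanced(docs, labels, n_eval, seed=13):
    """Randomly split indices into train / validation / test so that the
    validation and test sets are balanced w.r.t. the binary decision label.
    Sketch only: the exact ILDC splitting procedure is an assumption."""
    rng = random.Random(seed)
    idx = list(range(len(docs)))
    rng.shuffle(idx)
    pos = [i for i in idx if labels[i] == 1]
    neg = [i for i in idx if labels[i] == 0]
    half = n_eval // 2
    val = pos[:half] + neg[:half]            # half accepted, half rejected
    test = pos[half:2 * half] + neg[half:2 * half]
    held = set(val) | set(test)
    train = [i for i in idx if i not in held]  # remainder, unbalanced
    return train, val, test
```

Note that only the evaluation sets are forced to be balanced; the training set keeps the corpus's natural label distribution.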
"The division into train, development, and test sets was not based on any temporal consideration or stratification, because the system that may eventually emerge from the project is not meant to be limited to any particular law(s), nor focused on any particular period of time.",
"On the contrary, the aim is to identify standard features of judgments pronounced in relation to various legislation by different judges and across different temporal phases, to be able to use the said features to decipher the judicial decision-making process and successfully predict the nature of the order finally pronounced by the court given a set of facts and legal arguments.",
"While there would be a degree of subjectivity involved, given the difference in the thoughts and interpretations adopted by different judges, such differences are also found between two judges who are contemporaries of each other, as much as between two judges who have pronounced judgments on similar matters across a gap of decades.",
"The focus is, therefore, to develop a system that would be equally successful in predicting the outcome of a judgment given the law that had been in vogue twenty years back, as it would in relation to the law that is currently in practice.",
"The validity and efficacy of the system can therefore be equally tested by applying it to cases from years back, as to cases from a more recent period.",
"In fact, if the system cannot be temporally independent, and remains limited to only successful prediction of contemporary judgments, then it is likely to fail any test of application because by the time the final version of the system can be ready for practical applications on a large scale, the laws might get amended or replaced, and therefore, the judgments that would subsequently be rendered by the court might be as different from one pronounced today, as the latter might differ from one pronounced in the twentieth century.",
"Not acknowledging time as a factor during data sample choice, therefore, appears to be the prudent step in this case, especially given the exponential rate at which legislation is getting amended today, as well as the fast-paced growth of technological development.",
"Annotation of explanations is a very specialized, time-consuming, and laborious effort.",
"Legal Expert Annotations.",
"In our case, the legal expert team consisted of a law professor and his students at a reputed national law school.",
"We took a set of 56 documents (ILDC expert ) from the test set, and these were given to 5 legal experts.",
"Experts were requested to",
"(i) predict the judgment, and",
"(ii) mark the sentences that they think are explanations for their judgment.",
"Each document was annotated by all the 5 experts (in isolation) using the WebAnno framework (de Castilho et al., 2016).",
"The annotators could assign ranks to the sentences selected as explanations; a higher rank indicates more importance for the final judgment.",
"The rationale for rank assignment to the sentences is as follows.",
"Rank 1 was given to sentences immediately leading to the decision.",
"Rank 2 was assigned to sentences that contributed to the decision.",
"Rank 3 was given to sentences indicative of the disagreement of the current court with a lower court/tribunal decision.",
"Sentences containing the facts of the case, not immediately leading to the decision but essential for the case, were assigned Rank 4 (or lower) .",
"Note that, in practice, only a small set of sentences of a document were assigned a rank.",
"Although documents were annotated with explanations in order of ranks, we did not have a similar mechanism in our automated explainability models.",
"From the machine learning perspective, this is a very challenging task, and to the best of our knowledge, none of the state-of-the-art explainability models are capable of doing this.",
"In the current version of ILDC, we provide explanation annotations for only a small portion of the test set; this is for evaluating prediction algorithms on the explainability aspect.",
"Even this small set of documents is enough to highlight the difference between ML-based explainability methods and how a legal expert would explain a decision (§5.3).",
"Nevertheless, we plan to continue to grow the corpus by adding more explainability annotations and other types of annotations.",
"Moreover, we plan to include lower courts like Indian High Court cases and tribunal cases.",
"The corpus provides new research avenues to be explored by the community.",
"Fairness and Bias.",
"While creating the corpus, we took all possible steps to mitigate any biases that might creep in.",
"We have not made any specific choice with regard to any specific law or any category of cases, i.e., the sampling of cases was completely random.",
"As explained earlier, we took care of the temporal aspect.",
"Importantly, the names of the judge(s), appellants, petitioners, etc., were anonymized in the documents so that no inherent bias regarding these creeps in.",
"The anonymization with respect to judge names is necessary as legal experts pointed out that a judge's identity can sometimes be a strong indicator of the case outcome.",
"It is noteworthy that, according to the legal experts, had we not done so, we could have obtained higher prediction accuracy.",
"The subjectivity associated with judicial decision-making may also be controlled in this way since the system focuses on how consideration of the facts and applicable law are supposed to determine the outcome of the cases, instead of any individual bias on the judge's part.",
"We also address the ethical concerns in the end.",
"We performed a detailed analysis of the case predictions and the explanation annotations.",
"With assistance from a legal expert, we also performed detailed studies for some court cases to understand the task's complexity and possible reasons for deviations between the annotators.",
"We computed the case judgment accuracy of the annotators with respect to original decisions by judges of SCI.",
"The results are shown in Table 2. Though the values are high, none of these are 100%.",
"The accuracy indicates that no annotator agrees with the original judgment in all the cases.",
"This possibly depicts the subjectivity in the legal domain with regard to decision making.",
"The subjectivity aspect has also been observed in other tasks that involve human decision-making, e.g., sentiment and emotion analysis.",
"We performed detailed case studies with the help of experts to further probe into this difference in judgment.",
"Due to space limitations, we are not able to present the studies here; please refer to appendix A and GitHub repository for details.",
"To summarize, the study indicated that the sources of confusion are mainly due to differences in linguistic interpretation (by the annotators) of the legal language given in the case document.",
"Agreement in the judgment prediction: For the quantitative evaluation, we calculate the pair-wise agreement between the annotators, as shown in Table 3.",
"The highest agreement (94.6%) is between Experts 1-3 and 3-5.",
"We also calculate Fleiss' kappa (Fleiss, 1971) among all five annotators",
"as 0.820, which indicates high agreement.",
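Fleiss' kappa, as used above, can be computed from a matrix of per-item category counts. This is a standard textbook implementation, not the authors' code.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for inter-annotator agreement.

    ratings[i][k] is the number of annotators who assigned item i to
    category k; every row must sum to the same number of raters n."""
    N = len(ratings)                      # number of items
    n = sum(ratings[0])                   # raters per item
    K = len(ratings[0])                   # number of categories
    # overall proportion of assignments falling in each category
    p = [sum(row[k] for row in ratings) / (N * n) for k in range(K)]
    # per-item observed agreement
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N                  # mean observed agreement
    P_e = sum(pk * pk for pk in p)        # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

With perfect agreement on every item the statistic is 1; values near 0.8, as reported above, indicate strong agreement.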
"Agreement in the explanation: There are no standard metrics for evaluating annotator agreements for textual annotations.",
"For a quantitative evaluation of the agreement among the annotators on explanations, we took inspiration from the machine translation community and used metrics like ROUGE-L, ROUGE-1, ROUGE-2 (Lin, 2004), BLEU (Papineni et al., 2002) (unigram and bigram averaging), METEOR (Lavie and Agarwal, 2007), Jaccard Similarity, Overlap Maximum, and Overlap Minimum.",
"The result for ROUGE-L (averaged over all documents) is shown in Figure 1. The highest overlap across all the metrics is observed between Expert 3 and Expert 4. The highest value (0.9129) is between Expert 2 and Expert 4 for Overlap-Min.",
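The Overlap metrics used above — intersection size divided by the size of the larger (Overlap-Max) or smaller (Overlap-Min) of the two sets — can be sketched as follows. The function name and set-of-indices interface are illustrative assumptions.

```python
def overlap_scores(set_a, set_b):
    """Overlap-Max and Overlap-Min between two sets of selected sentences
    (e.g., sentence indices chosen as explanations by two annotators).

    Overlap-Max = |A ∩ B| / max(|A|, |B|)
    Overlap-Min = |A ∩ B| / min(|A|, |B|)"""
    a, b = set(set_a), set(set_b)
    inter = len(a & b)
    return inter / max(len(a), len(b)), inter / min(len(a), len(b))
```

Overlap-Min is 1.0 whenever the smaller selection is fully contained in the larger one, which is why it tends to report higher agreement than the other metrics.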
"We also performed a qualitative evaluation of the agreements in the explanations.",
"(Overlap-Max: the size of the intersection divided by the maximum size of the two sample sets being compared; Overlap-Min: the size of the intersection divided by the minimum size of the two sample sets being compared.)",
"(Due to space constraints, we are not able to show heatmaps corresponding to the other metrics, but they showed similar trends.)",
"(For the heatmaps for the other metrics, please refer to our GitHub repository.)",
"We observed that Expert 1, Expert 3, and Expert 4 consider holistic reasoning for the decision.",
"They look at both Substantive (sections applicable) and Procedural (about the jurisdiction of a lower court) aspects of the case.",
"The differences among them are largely due to consideration/non-consideration of the factual sentences.",
"On the other hand, Expert 2 and Expert 5 often used bare-minimum reasoning leading to the final judgment instead of looking at an exhaustive set of reasons, and did not always cover both the Substantive and Procedural aspects of the case.",
"Analysis of annotations gives insights into the inherent complexity and subjectivity of the task.",
"Legal proceedings are long, verbose, often challenging to comprehend, and exhibit interesting (and computationally challenging) linguistic phenomena.",
"For example, in a case numbered 1962 47 (appendix A), sentence 17 of the case appears to refer to the Supreme Court having accepted a previous appeal for which a review has been requested (i.e., the current appeal).",
"This amounted to the fact that the court actually rejected the present appeal while accepting the previous one.",
"Such intricacies can confuse even legal experts.",
"Given a case proceeding from the SCI, the task of COURT JUDGMENT PREDICTION AND EXPLANATION (CJPE) is to automatically predict the decision for the case (with respect to the appellant) and provide an explanation for the decision.",
"We address the CJPE task via two sub-tasks in the following sequence: Prediction and Explanation.",
"Prediction: Given a case proceeding D, the task is to predict the decision y ∈ {0, 1}, where label 1 corresponds to the acceptance of the appeal/petition of the appellant/petitioner.",
"Explanation : Given the case proceeding and the predicted decision for the case, the task is to explain the decision by predicting important sentences that lead to the decision.",
"Annotated explanations are not provided during training; the rationale is that a model learned for prediction should explain the decision without explicit training on explanations, since explanation annotations are difficult to obtain.",
"ILDC documents are long and have specialized vocabulary compared to typical corpora used for training text classification models and language models.",
"We initially experimented with non-neural models based on text features (e.g., n-grams, tf-idf, word based features, and syntactic features) and existing pre-trained models (e.g., pre-trained word embeddings based models, transformers), but none of them were better than a random classifier.",
"Consequently, we retrained/fine-tuned/developed neural models for our setting.",
"In particular, we ran a battery of experiments and came up with four different types of models: classical models, sequential models, transformer models, and hierarchical transformer models.",
"Table 4 summarizes the performance of different models.",
"Due to space constraints, we are not able to describe each of the models here.",
"We give a very detailed description of the model implementations in appendix B. Classical Models: We considered classical ML models like word/sentence-embedding-based Logistic Regression, SVM, and Random Forest.",
"We also tried prediction with summarized legal documents (Bhattacharya et al., 2019a); however, these resulted in a classifier no better than a random one.",
"As shown in Table 4, classical models did not perform well.",
"However, the model based on Doc2Vec embeddings had performance similar to the sequential models.",
"We extensively experimented with dividing documents into chunks and training the model using each of the chunks separately.",
"We empirically determined that sequential and transformer-based models performed best on the validation set using the last 512 tokens of the document.",
"Intuitively, this makes sense since the last parts of case proceedings usually contain the main information about the case and the rationale behind the judgment.",
"We also experimented with different sections of a document and observed that the last 512 tokens gave the best performance.",
"Sequence Models: We experimented with a standard BiGRU (2 layers) with attention.",
"We tried 3 different types of embeddings:",
"(i) Word level trained GloVe embeddings (Pennington et al., 2014), with last 512 tokens as input,",
"(ii) Sentence level embeddings (Sent2Vec), where last 150 sen-5 length of 512 was partly influenced by the maximum input token limit of BERT Model MacroPrecision(%) MacroRecall(%) MacroF1(%) Accuracy(%) Classical Models on ILDC multi train set Doc2Vec + LR 63.03 61.00 62.00 60.91 Sent2vec + LR 57.19 55.55 56.36 55.44 Sequential Models on ILDC multi train set Sent2vec + BiGRU + att.",
"tences were input 6 , and",
"(iii) Chunk level embeddings (trained via Doc2Vec).",
"We also trained a Hierarchical Attention Network (HAN) model (Yang et al., 2016).",
"(The last 150 sentences covered around 90% of the documents.)",
"GloVe embeddings with the BiGRU and attention model gave the best performance (64% F1) among the sequential models.",
"Sequential models trained on ILDC multi and ILDC single have similar performance. Transformer Models: We experimented with BERT (Devlin et al., 2019), DistilBERT (Sanh et al., 2019), RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019b).",
"Due to the limitation on the number of input tokens to BERT and other transformer models, we experimented with different sections of the documents (begin tokens, middle tokens, end tokens, and combinations of these); as shown in Table 4, the last 512 tokens gave the best performance.",
"In general, transformer models outperform classical and sequential models.",
"RoBERTa gave the best performance (72% F1) and DistilBERT was the worst.",
"We did not experiment with domain-specific transformers like LEGAL-BERT (Chalkidis et al., 2020), since these have been trained on US/EU legal texts and hence do not work well in the Indian setting, as the legal systems are entirely different.",
"Hierarchical Transformer Models: Taking inspiration from a hierarchical topic prediction model (Chitkara et al., 2019), we developed a Hierarchical Transformer architecture (Chalkidis et al., 2019).",
"We divided each document into chunks using a moving window approach where each chunk was of length 512 tokens, and there was an overlap of 100 tokens.",
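The moving-window chunking just described (512-token chunks with a 100-token overlap) can be sketched as a few lines of Python; the function name is ours, not the authors'.

```python
def chunk_tokens(tokens, size=512, overlap=100):
    """Split a token list into overlapping chunks with a moving window:
    each chunk holds up to `size` tokens, and consecutive chunks share
    `overlap` tokens."""
    step = size - overlap
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break                 # final chunk reached the document end
        start += step
    return chunks
```

Each chunk is then encoded separately (e.g., its [CLS] vector), so a document of 1,000 tokens yields three overlapping chunks rather than being truncated.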
"We obtained the [CLS] representation of these chunks, which was then used as input to sequential models (BiGRU + attention) or a feed-forward model (CNN; Kim, 2014).",
"We also tried an ensemble of individual transformer models on each of the chunks.",
"In general, all the hierarchical models outperform transformer models.",
"The best performing model (78% F1) for predicting the case decision is XLNet with BiGRU on the top (Figure 2).",
"Comparing best model accuracy with average annotator accuracy (78% vs. 94%) indicates the task's inherent complexity and motivates more research in this direction.",
"We experimented with a variety of explainability algorithms as a post-prediction step.",
"We used the best judgment prediction model (Hierarchical Transformer: XLNet + BiGRU) for all the explainability algorithms.",
"We explored three classes of explainability methods (Xie et al., 2020): attribution-based, model-agnostic, and attention-based.",
"In the class of attribution-based methods, the Layerwise Relevance Propagation (LRP) (Bach et al., 2015) and DeepLIFT (Shrikumar et al., 2017) methods did not work in our case.",
"Due to the long length of documents, model agnostic explainability methods like LIME (Ribeiro et al., 2016) and Anchors (Ribeiro et al., 2018) were not applicable.",
"We also experimented with attention-based methods, and Integrated Gradients (Sundararajan et al., 2017) method using the CAPTUM library (Kokhlikyan et al., 2019).",
"However, these highlighted only a few tokens or short phrases.",
"Moreover, attention-based scores are not necessarily indicative of explanations (Jain and Wallace, 2019).",
"To extract explanations, we propose a method inspired by Li et al. (2016) and Zeiler and Fergus (2014).",
"The idea is to use the occlusion method at both levels of the hierarchy.",
"For each document, for the BiGRU part of the model, we mask each complete chunk embedding one at a time.",
"The masked input is passed through the trained BiGRU, and the output probability (masked probability) of the label obtained by the original unmasked model is calculated.",
"The masked probability is compared with unmasked probability to calculate the chunk explainability score.",
"Formally, for a chunk c, let m and m' be the sigmoid outputs (of the BiGRU) when the chunk is unmasked and masked, respectively, and let y be the predicted label; the probabilities of the predicted label are then p_m = m and p_m' = m' if y = 1, and p_m = 1 - m and p_m' = 1 - m' if y = 0, and the chunk score is s_c = p_m - p_m'. We obtain sentences that explain the decision from the transformer part of the model (XLNet) using the chunks that were assigned positive scores.",
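Under one reading of the chunk-score formula above — taking p_m and p_m' to be the unmasked and masked probabilities of the predicted label — the computation is a one-liner. This is a sketch of that reading, not the authors' code.

```python
def chunk_score(m, m_masked, y):
    """Occlusion-based chunk explainability score.

    m and m_masked are the model's sigmoid outputs with the chunk unmasked
    and masked; y is the predicted label.  A large positive score means
    masking the chunk reduced the probability of the predicted label,
    i.e., the chunk supports the prediction.  (Assumed reading of the
    formula; the exact normalization in the paper may differ.)"""
    if y == 1:
        p, p_masked = m, m_masked
    else:
        p, p_masked = 1.0 - m, 1.0 - m_masked
    return p - p_masked
```

Only chunks with positive scores are carried forward to the sentence-level step described next.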
"Each chunk (length 512 tokens) is segmented into sentences using NLTK sentence splitter (Loper and Bird, 2002).",
"Similar to the BiGRU, each sentence is masked, and the output of the transformer at the classification head (softmax logits) is compared with the logits of the label corresponding to the original hierarchical model. (Table 5, machine explanations vs. expert explanations, for Experts 1-5 respectively — Jaccard Similarity: 0.333, 0.317, 0.328, 0.324, 0.318; Overlap-Min: 0.744, 0.589, 0.81, 0.834, 0.617; Overlap-Max: 0.39, 0.414, 0.36, 0.35, 0.401; ROUGE-1: 0.444, 0.517, 0.401, 0.391, 0.501; ROUGE-2: 0.303, 0.295, 0.296, 0.297, 0.294; ROUGE-L: 0.439, 0.407, 0.423, 0.444, 0.407; BLEU: 0.16, 0.28, 0.099, 0.093, 0.248; METEOR: 0.22, 0.3, 0.18, 0.177, 0.279.)",
"The difference between the logits normalized by the length of the sentence is the explanation score of the sentence.",
"Finally, the top-k sentences (around 40%) in each chunk are selected.",
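The sentence-level step just described can be sketched as follows. `logit_fn` is a stand-in for the transformer classification head (the real model is a fine-tuned XLNet), and the function names are illustrative assumptions.

```python
def sentence_scores(sentences, logit_fn, full_logit):
    """Occlusion at the sentence level: mask each sentence in turn,
    recompute the predicted label's logit via `logit_fn`, and normalize
    the logit drop by sentence length (sketch of the method above)."""
    scores = []
    for i, sent in enumerate(sentences):
        masked = sentences[:i] + sentences[i + 1:]   # document without sentence i
        drop = full_logit - logit_fn(masked)
        scores.append(drop / max(len(sent.split()), 1))
    return scores

def top_k_sentences(sentences, scores, frac=0.4):
    """Select the top ~40% of sentences by explanation score,
    returned in document order."""
    k = max(1, int(len(sentences) * frac))
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return [sentences[i] for i in sorted(ranked[:k])]
```

With a real model, `logit_fn` would re-encode the masked chunk and read off the logit of the originally predicted label.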
"To understand and analyze which parts of the documents were contributing towards prediction, we examined the attention weights (scores) in the case of the XLNet+BiGRU+Attention model and the occlusion scores of the XLNet+BiGRU model.",
"Plots for some of the documents are shown in Figure 3. Plots for different chunk sizes are provided in the Data/images folder in our GitHub repository.",
"We also provide the t-SNE visualization on the test set using the BERT and Doc2Vec embeddings.",
"Token visualization heatmap using Integrated Gradient for document name 1951 33.txt for BERT model is also provided in GitHub.",
"Plots of scores averaged out over the entire test set for each chunk size can be visualized in appendix B.2.",
"Two things can be noted: firstly, the largest attention and occlusion scores are assigned to chunks corresponding to the end of the document; this is in line with our hypothesis that most of the important information and rationale for judgment is mainly towards the end of the document.",
"Secondly, although attention scores are optimized (via loss minimization or accuracy maximization) to concentrate on the last chunks, this is not the case with occlusion scores.",
"There is no optimization of occlusion scores; yet they still focus on the chunks at the end, which affirms our hypothesis.",
"We compare the performance of occlusion method explanations with the expert annotators' gold explanations by measuring the overlap between the two.",
"We used the same measures (§4.2): ROUGE-L, ROUGE-1, ROUGE-2, Jaccard Similarity, BLEU, METEOR, Overlap Maximum, and Overlap Minimum. Table 5 compares the machine explanations with the gold explanations. (Figure 3: averaged chunk scores for attention and occlusion.)",
"The highest overlap value (0.8337) is observed for the measure Overlap-Min with Expert 4. The values for Overlap-Min depict high agreement of the explainability model with all the experts.",
"However, the values for the other evaluation measures, e.g., ROUGE-L, are in the low to medium range,",
"the highest being 0.4445, for ROUGE-L with Expert 4. The results show the wide gap between how a machine would explain a judgment and the way a legal expert would explain it.",
"The results motivate future research in this direction of developing explainable models.",
"Conclusion: This paper introduces the ILDC corpus and the corresponding CJPE task.",
"The corpus is annotated with case decisions; a separate test set is additionally annotated with explanations for the decisions.",
"Analysis of the corpus and of the modeling results shows the complexity of legal documents, which pose challenges from a computational perspective.",
"We hope that the corpus and the task will provide a challenging and interesting resource for Legal NLP researchers.",
"For future work, we would like to train a legal transformer similar to LEGAL-BERT (Chalkidis et al., 2020) on our Indian legal case documents.",
"Moreover, we would also like to focus on using the rhetorical roles (Bhattacharya et al., 2019b) of the sentences to include structural information of the documents for the CJPE task.",
"We would like to thank anonymous reviewers for their insightful comments.",
"We would like to thank student research assistants Abin Thomas Alex, Amrita Ghosh, Parmeet Singh, and Unnati Jhunjhunwala from West Bengal National University of Juridical Sciences (WBNUJS) for annotating the documents.",
"This work would not have been possible without their help.",
"The corpus is created from publicly available data: proceedings of Supreme Court of India (SCI).",
"The data was scraped from the website:",
"www.indiankanoon.org .",
"The website allows scraping of the data, and no copyrights were infringed.",
"Annotators were selected randomly and they participated voluntarily.",
"The proposed corpus aims to promote the development of an explainable case judgment prediction system.",
"The system intends to assist legal professionals in their research and decision-making and not replace them.",
"Therefore, ethical red lines, such as allowing the legal rights and obligations of human beings to be decided and pronounced upon by non-human intelligence, are not being crossed by the system.",
"The system proposes to provide valuable information that might be useful to a legal professional in making strategic decisions, but the actual decision-making process is still going to be carried out by the professional.",
"Therefore, the system is not intended to produce a host of artificial lawyers and judges regulating human behavior.",
"At the same time, the final expert human analysis of the systemic output should ensure that any existing flaw, absurdity, or overt or latent bias gets subjected to an additional layer of ethical scrutiny.",
"In this way, the usual ethical concerns associated with the concept of case-law prediction also get addressed to a considerable extent since the system is not performing any judicial role herein nor deciding the legal rights or liabilities of human beings.",
"Instead, the system is purported to be used primarily by legal professionals to make strategic decisions of their own, said decisions being still subjected to legal and judicial scrutiny performed by human experts.",
"Nevertheless, the community needs to pursue more research in this regard to fully understand the unforeseen social implications of such systems.",
"This paper takes initial steps by introducing the corpus and baseline models to the community.",
"Care has been taken to select cases in a completely random manner, without any particular focus on the type of law or the identities or socio-politico-economic background of the parties or the judges involved.",
"Specifically, the aforementioned identities have been deliberately anonymized so as to minimize or eliminate any possible bias in the course of prediction.",
"The subjectivity that is associated with the judicial decision-making may also be controlled in this way, since the system is focusing on how consideration of the facts and applicable law are supposed to determine the outcome of the cases, instead of any individual bias on the judge's part; another judge might not share such bias, and therefore the only common point of reference that the two judges would have would be the relevant facts of the case and the laws involved.",
"This also gets reflected in the objective methodology used in the selection of annotators and by eliminating any interaction between the annotators themselves while at the same time paying attention to the factors or observations common to the output from the various annotators.",
"The only specification with regard to the forum has been made by taking all the cases from the domain of the Supreme Court of India, owing to the propensity of the apex court of the land towards focusing on the legalities of the issues involved rather than rendering mere fact-specific judgments, as well as the binding nature of such decisions on the subordinate courts of the land.",
"This would also allow the results to be further generalized and applied to a broader set of cases filed before other forums, too, since the subordinate courts are supposed to follow the reasoning of the Supreme Court's judgments to the greatest possible extent.",
"As a result, the impact of the training and testing opportunities provided to the system by a few Supreme Court cases is likely to be much greater than the mere absolute numbers would otherwise suggest."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"objective",
"method",
"other",
"method",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Recent work has found evidence that Multilingual BERT (mBERT), a transformer-based multilingual masked language model, is capable of zero-shot cross-lingual transfer, suggesting that some aspects of its representations are shared cross-lingually.",
"To better understand this overlap, we extend recent work on finding syntactic trees in neural networks' internal representations to the multilingual setting.",
"We show that subspaces of mBERT representations recover syntactic tree distances in languages other than English, and that these subspaces are approximately shared across languages.",
"Motivated by these results, we present an unsupervised analysis method that provides evidence mBERT learns representations of syntactic dependency labels, in the form of clusters which largely agree with the Universal Dependencies taxonomy.",
"This evidence suggests that even without explicit supervision, multilingual masked language models learn certain linguistic universals.",
"Past work (Liu et al., 2019; Tenney et al., 2019a,b) has found that masked language models such as BERT (Devlin et al., 2019) learn a surprising amount of linguistic structure, despite a lack of direct linguistic supervision.",
"Recently, large multilingual masked language models such as Multilingual BERT (mBERT) and XLM (Conneau and Lample, 2019; Conneau et al., 2019) have shown strong cross-lingual performance on tasks like XNLI (Lample and Conneau, 2019; Williams et al., 2018) and dependency parsing (Wu and Dredze, 2019).",
"Much previous analysis has been motivated by a desire to explain why BERT-like models perform so well on downstream applications in the monolingual setting, which raises the question: what properties of these models make them so cross-lingually effective?",
"In this paper, we examine the extent to which Multilingual BERT learns a cross-lingual representation of syntactic structure.",
"We extend probing methodology, in which a simple supervised model is used to predict linguistic properties from a model's representations.",
"In a key departure from past work, we not only evaluate a probe's performance (on recreating dependency tree structure), but also use the probe as a window into understanding aspects of the representation that the probe was not trained on (i.e. dependency labels; Figure 1).",
"In particular, we use the structural probing method of Hewitt and Manning (2019), which probes for syntactic trees by finding a linear transformation under which two words' distance in their dependency parse is approximated by the squared distance between their model representation vectors under a linear transformation.",
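The structural probe's core quantity — the squared distance between two words' representations after a linear projection — can be sketched in plain Python. The matrix `B` and the vectors are stand-ins; this is an illustration of the objective, not the authors' implementation.

```python
def probe_distance(B, h_i, h_j):
    """Squared L2 distance between words i and j under the probe's linear
    map B (B: k x d list of rows; h_i, h_j: d-dimensional word vectors).
    Training fits B so that this value approximates the two words'
    distance in the dependency parse tree."""
    diff = [a - b for a, b in zip(h_i, h_j)]
    proj = [sum(b_k * d for b_k, d in zip(row, diff)) for row in B]
    return sum(p * p for p in proj)
```

With `B` fixed to the identity, this is just ordinary squared Euclidean distance; the probe learns a (typically low-rank) `B` that reshapes the space toward tree distances.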
"After evaluating whether such transformations recover syntactic tree distances across languages in mBERT, we turn to analyzing the transformed vector representations themselves.",
"We interpret the linear transformation of the structural probe as defining a syntactic subspace (Figure 2), which intuitively may focus on syntactic aspects of the mBERT representations.",
"Since the subspace is optimized to recreate syntactic tree distances, it has no supervision about edge labels (such as adjectival modifier or noun subject ).",
"This allows us to unsupervisedly analyze how representations of head-dependent pairs in syntactic trees cluster and qualitatively discuss how these clusters relate to linguistic notions of grammatical relations.",
"We make the following contributions: We find that structural probes extract considerably more syntax from mBERT than baselines in 10 languages, extending the structural probe result to a multilingual setting.",
"We demonstrate that mBERT represents some syntactic features in syntactic subspaces that overlap between languages.",
"We find that structural probes trained on one language can recover syntax in other languages (zero-shot), demonstrating that the syntactic subspace found for each language picks up on features that BERT uses across languages.",
"Representing a dependency by the difference of the head and dependent vectors in the syntactic space, we show that mBERT represents dependency clusters that largely overlap with the dependency taxonomy of Universal Dependencies (UD) (Nivre et al., 2020); see Figure 1.",
"Our method allows for fine-grained analysis of the distinctions made by mBERT that disagree with UD, one way of moving past probing's limitation of detecting only linguistic properties we have training data for rather than properties inherent to the model.",
"Our analysis sheds light on the cross-lingual properties of Multilingual BERT, through both zero-shot cross-lingual structural probe experiments and novel unsupervised dependency label discovery experiments which treat the probe's syntactic subspace as an object of study.",
"We find evidence that mBERT induces universal grammatical relations without any explicit supervision, which largely agree with the dependency labels of Universal Dependencies.",
"(Figure 2: The structural probe recovers syntax by finding a syntactic subspace in which all syntactic trees' distances are approximately encoded as squared L2 distance (Hewitt and Manning, 2019).)",
"We present a brief overview of Hewitt and Manning (2019)'s structural probe, closely following their derivation.",
"The method represents each dependency tree T as a distance metric where the distance between two words d T ( w i , w j ) is the number of edges in the path between them in T .",
"It attempts to find a single linear transformation of the model's word representation vector space under which squared distance recreates tree distance in any sentence.",
"Formally, let h^ℓ_{1:n} be a sequence of n representations produced by a model from a sequence of n words w^ℓ_{1:n} composing sentence ℓ.",
"Given a matrix B ∈ R^(k×m) which specifies the probe parameters, we define a squared distance metric d_B as the squared L2 distance after transformation by B: d_B(h^ℓ_i, h^ℓ_j) = ||B h^ℓ_i − B h^ℓ_j||²₂. We optimize to find a B that recreates the tree distance d_{T^ℓ} between all pairs of words (w^ℓ_i, w^ℓ_j) in all sentences s^ℓ in the training set of a parsed corpus.",
"Specifically, we optimize by gradient descent: argmin_B Σ_ℓ (1/|s^ℓ|²) Σ_{i,j} |d_{T^ℓ}(w^ℓ_i, w^ℓ_j) − d_B(h^ℓ_i, h^ℓ_j)|. For more details, see Hewitt and Manning (2019).",
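The distance metric and training objective above can be sketched in a few lines of NumPy. This is an illustrative reimplementation under our own naming (`tree_distances`, `probe_distances`, `probe_loss` are assumptions, not the authors' code), and the gradient-descent step itself is omitted:

```python
import numpy as np

def tree_distances(edges, n):
    # All-pairs path lengths d_T(w_i, w_j) in an undirected tree, via BFS.
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    d = np.zeros((n, n))
    for s in range(n):
        seen, frontier, depth = {s}, [s], 0
        while frontier:
            for u in frontier:
                d[s, u] = depth
            frontier = [v for u in frontier for v in adj[u] if v not in seen]
            seen.update(frontier)
            depth += 1
    return d

def probe_distances(B, H):
    # d_B(h_i, h_j) = ||B h_i - B h_j||_2^2 for all word pairs in one sentence.
    T = H @ B.T                               # (n, k) transformed vectors
    diff = T[:, None, :] - T[None, :, :]
    return (diff ** 2).sum(-1)

def probe_loss(B, H, d_tree):
    # Per-sentence L1 objective, normalized by |s|^2 as in the objective above.
    n = H.shape[0]
    return np.abs(d_tree - probe_distances(B, H)).sum() / n ** 2
```

In practice the loss would be summed over sentences and minimized with respect to B by gradient descent.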
"Departing from prior work, we view the probe-transformed word vectors B h themselvesnot just the distances between themas objects of study.",
"Code for reproducing our experiments is available at https://github.com/ethanachi/multilingual-probing-visualization. The rows of B form a basis that defines a subspace of R^m, which we call the syntactic subspace, and which may focus only on parts of the original BERT representations.",
"A vector B h corresponds to a point in that space; the value of each dimension equals the dot product of h with one of the basis vectors.",
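A quick numerical illustration of this view (our own sketch, not from the paper): the coordinates of Bh are dot products with B's rows, and any component of h orthogonal to the row space of B is filtered out by the projection.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 6, 2
B = rng.normal(size=(k, m))        # probe matrix: k basis rows in R^m
h = rng.normal(size=m)             # a stand-in word representation vector

coords = B @ h                     # coordinates in the syntactic subspace
# Components orthogonal to B's row space do not affect the projection:
_, _, Vt = np.linalg.svd(B)
null_vec = Vt[-1]                  # orthogonal to every row of B (since k < m)
shifted = B @ (h + 3.0 * null_vec)
```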
"These settings apply to all experiments using the structural probe throughout this paper.",
"Multilingual BERT is pretrained on corpora in 104 languages; however, we probe the performance of the model in 11 languages (Arabic, Chinese, Czech, English, Farsi, Finnish, French, German, Indonesian, Latvian, and Spanish).",
"Specifically, we probe the model on trees encoded in the Universal Dependencies v2 formalism (Nivre et al., 2020).",
"In all our experiments, we investigate the 110M-parameter pre-trained weights of the BERT-Base, Multilingual Cased model.",
"We use the following baselines. MBERTRAND: a model with the same parametrization as mBERT but no training.",
"Specifically, all of the contextual attention layers are reinitialized from a normal distribution with the same mean and variance as the original parameters.",
"However, the subword embeddings and positional encoding layers remain unchanged.",
"As randomly initialized ELMo layers are a surprisingly competitive baseline for syntactic parsing (Conneau et al., 2018), we also expect this to be the case for BERT.",
"In our experiments, we find that this baseline performs approximately equally across layers, so we always draw from Layer 7.",
"LINEAR : All sentences are given an exclusively left-to-right chain dependency analysis.",
"For ease of notation, we will discuss vectors Bh as being in the syntactic subspace, despite their being in R^k.",
"When we refer to all languages, we refer to all languages in this set, not all languages that mBERT trains on.",
"This list is not typologically representative of all human languages.",
"However, we are constrained by the languages for which both large UD datasets and mBERT's pretraining are available.",
"Nevertheless, we try to achieve a reasonable spread over language families, while also having some pairs of close languages for comparison.",
"The pretrained model is available at https://github.com/google-research/bert. We omit a baseline that uses uncontextualized word embeddings because Hewitt and Manning (2019) found it to be a weak baseline compared to the two we use.",
"To evaluate transfer accuracy, we use both of the evaluation metrics of Hewitt and Manning (2019).",
"That is, we report the Spearman correlation between predicted and true word pair distances (DSpr.).",
"7 We also construct an undirected minimum spanning tree from said distances, and evaluate this tree on undirected, unlabeled attachment score (UUAS), the percentage of undirected edges placed correctly when compared to the gold tree.",
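The UUAS computation can be sketched as follows. This is our own minimal illustration; the Prim-style `mst_edges` construction is an assumption, since the source does not prescribe a particular MST algorithm:

```python
import numpy as np

def mst_edges(dist):
    # Undirected minimum spanning tree over predicted pairwise distances
    # (Prim's algorithm; adequate for sentence-sized graphs).
    n = dist.shape[0]
    in_tree, edges = {0}, set()
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist[e])
        edges.add(frozenset((u, v)))
        in_tree.add(v)
    return edges

def uuas(pred_dist, gold_edges):
    # Fraction of gold (undirected, unlabeled) edges recovered by the MST.
    gold = {frozenset(e) for e in gold_edges}
    return len(mst_edges(pred_dist) & gold) / len(gold)
```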
"We first investigate whether mBERT builds syntactic subspaces, potentially private to each language, for a subset of the languages it was trained on; this is a prerequisite for the existence of a shared , cross-lingual syntactic subspace.",
"Specifically, we train the structural probe to recover tree distances in each of our eleven languages.",
"We experiment with training syntactic probes of various ranks, as well as on embeddings from all 12 layers of mBERT.",
"We find that the syntactic probe recovers syntactic trees across all the languages we investigate, achieving on average an improvement of 22 points UUAS and 0.175 DSpr.",
"over both baselines (Table 1, section IN-LANGUAGE).",
"Additionally, the probe achieves significantly higher UUAS (on average, 9.3 points better in absolute performance and 6.7 points better in improvement over baseline) on Western European languages.",
"Such languages have also shown better performance in recent shared tasks on multilingual parsing (e.g., Zeman et al., 2018).",
"However, we do not find a large improvement when evaluated on DSpr.",
"(0.041 DSpr. absolute, -0.013 relative).",
"We find that across all languages we examine, the structural probe most effectively recovers tree structure from the 7th or 8th mBERT layer (Figure 4).",
"Furthermore, increasing the probe maximum rank beyond approximately 64 or 128 gives no further gains, implying that the syntactic subspace is a small part of the overall mBERT representation, which has dimension 768 (Figure 3).",
"(Following Hewitt and Manning (2019), we evaluate only sentences of lengths 5 to 50; we first average correlations for word pairs in sentences of a specific length, and then average across sentence lengths.)",
"(Throughout this paper, we report improvement over the stronger of our two baselines per language.)",
"(Here, we define Western European as Czech, English, French, German, and Spanish.)",
"These results closely correspond to the results found by Hewitt and Manning (2019) for an equivalently sized monolingual English model trained and evaluated on the Penn Treebank (Marcus et al., 1993), suggesting that mBERT behaves similarly to monolingual BERT in representing syntax.",
"We now evaluate the extent to which Multilingual BERT's syntactic subspaces are similar across languages.",
"To do this, we evaluate the performance of a structural probe when evaluated on a language unseen at training time.",
"If a probe trained to predict syntax from representations in language i also predicts syntax in language j , this is evidence that mBERT's syntactic subspace for language i also encodes syntax in language j , and thus that syntax is encoded similarly between the two languages.",
"Specifically, we evaluate the performance of the structural probe in the following contexts: direct transfer, where we train on language i and evaluate on language j;",
"and hold-one-out transfer, where we train on all languages other than j and evaluate on language j.",
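The two transfer settings can be expressed as a small evaluation harness. This is our own schematic sketch: `train_probe` and `evaluate` stand in for structural probe training and UUAS/DSpr. scoring, and are not the authors' API.

```python
def transfer_eval(train_probe, evaluate, data, langs):
    # data[lang] = (train_split, eval_split)
    # Direct transfer: train on language i, evaluate on language j.
    direct = {(i, j): evaluate(train_probe(data[i][0]), data[j][1])
              for i in langs for j in langs}
    # Hold-one-out transfer: train on all languages except j, evaluate on j.
    holdout = {j: evaluate(
                   train_probe([s for i in langs if i != j for s in data[i][0]]),
                   data[j][1])
               for j in langs}
    return direct, holdout
```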
"Building off these cross-lingual transfer experiments, we investigate whether there exists a single joint syntactic subspace that encodes syntax in all languages, and if so, the degree to which it does so.",
"To do so, we train a probe on the concatenation of data from all languages, evaluating it on the concatenation of validation data from all languages.",
"We find that mBERT's syntactic subspaces are transferable across all of the languages we examine.",
"Specifically, transfer from the best source language (chosen post hoc per-language) achieves on average an improvement of 14 points UUAS and 0.128 DSpr.",
"over the best baseline (Table 1, section SINGLETRAN ).",
"Additionally, our results demonstrate the existence of a cross-lingual syntactic subspace; on average, a holdout subspace trained on all languages but the evaluation language achieves an improvement of 16 points UUAS and 0.137 DSpr.",
"over baseline, while a joint ALLLANGS subspace trained on a concatenation of data from all source languages achieves an improvement of 19 points UUAS and 0.156 DSpr.",
"(Table 1, section HOLDOUT , ALLLANGS ).",
"Furthermore, for most languages, syntactic information embedded in the post hoc best cross-lingual subspace accounts for 62.3% of the total possible improvement in UUAS (73.1% DSpr.) in recovering syntactic trees over the baseline (as represented by in-language supervision).",
"Holdout transfer represents on average 70.5% of improvement in UUAS (79% DSpr.) over the best baseline, while evaluating on a joint syntactic subspace accounts for 88% of improvement in UUAS (89% DSpr.).",
"These results demonstrate the degree to which the cross-lingual syntactic space represents syntax cross-lingually.",
"(For full results, consult Appendix Table 1.)",
"Our experiments attempt to evaluate syntactic overlap through zero-shot evaluation of structural probes.",
"In an effort to measure more directly the degree to which the syntactic subspaces of mBERT overlap, we calculate the average principal angle between the subspaces parametrized by each language we evaluate, to test the hypothesis that syntactic subspaces which are closer in angle have closer syntactic properties (Table 4).",
"We evaluate this hypothesis by asking whether closer subspaces (as measured by lower average principal angle) correlate with better cross-lingual transfer performance.",
"For each language i , we first compute an ordering of all other languages j by increasing probing transfer performance trained on j and evaluated on i .",
"We then compute the Spearman correlation between this ordering and the ordering given by decreasing subspace angle.",
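Both quantities can be computed directly from the probe matrices. The sketch below is our own illustration (it assumes no ties in the rank correlation) and uses the standard SVD characterization of principal angles rather than any particular library routine:

```python
import numpy as np

def principal_angles(B1, B2):
    # Angles between the row spaces of two probe matrices: orthonormalize
    # each basis, then the singular values of Q1^T Q2 are the angle cosines.
    Q1, _ = np.linalg.qr(B1.T)
    Q2, _ = np.linalg.qr(B2.T)
    cos = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def spearman(x, y):
    # Spearman correlation for tie-free sequences: Pearson on ranks.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))
```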
"Averaged across all languages, the Spearman correlation is 0.78 with UUAS and 0.82 with DSpr.",
"This shows that transfer probe performance is substantially correlated with subspace similarity.",
"To get a finer-grained understanding of how syntax is shared cross-lingually, we aim to understand whether less common syntactic features are embedded in the same cross-lingual space as syntactic features common to all languages.",
"To this end, we examine two syntactic relationsprenominal and postnominal adjectiveswhich appear in some of our languages but not others.",
"We train syntactic probes to learn a subspace on languages that primarily use only one ordering (i.e., the majority class accounts for more than 95% of all adjectives), then evaluate their UUAS score solely on adjectives of the other ordering.",
"Specifically, we evaluate on French, which has a mix (69.8% prenominal) of both orderings, in the hope that evaluating both orderings in the same language may help correct for biases in pairwise language similarity.",
"Since the evaluation ordering is out-of-domain for the probe, predicting evaluation-order dependencies successfully suggests that the learned subspace is capable of generalizing between both kinds of adjectives.",
"Specifically, for both primarily-prenominal and primarily-postnominal training languages, postnominal adjectives score on average approximately 2 points better than prenominal adjectives (Table 2).",
"Given the previous evidence that mBERT shares syntactic representations cross-lingually, we aim to more qualitatively examine the nature of syntactic dependencies in syntactic subspaces.",
"Let D be a dataset of parsed sentences, and let the linear transformation B ∈ R^(k×m) define a k-dimensional syntactic subspace.",
"For every non-root word, and hence for every syntactic dependency in D (since every word is a dependent of some other word or of an added ROOT symbol), we calculate the k-dimensional head-dependent vector between the head and the dependent after projection by B.",
"Specifically, for all head-dependent pairs (w_head, w_dep), we compute v_diff = B(h_head − h_dep).",
"We then visualize all differences over all sentences in two dimensions using t-SNE (van der Maaten and Hinton, 2008).",
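The head-dependent vector computation amounts to one projected difference per dependency. A minimal sketch of our own (the `heads` encoding, with -1 marking the root, is an assumption); the returned matrix is what would be fed to t-SNE:

```python
import numpy as np

def head_dependent_vectors(B, H, heads):
    # v_diff = B (h_head - h_dep) for every non-root word in a sentence.
    # heads[i] is the index of word i's head, or -1 for the root.
    vecs = [B @ (H[head] - H[dep])
            for dep, head in enumerate(heads) if head >= 0]
    return np.stack(vecs)          # (#dependencies, k): input for t-SNE
```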
"As with multilingual probing, one can visualize head-dependent vectors in several ways; we present the following experiments:",
"dependencies from one language projected into a different language's space (Figure 1), and dependencies from one language projected into a holdout syntactic space trained on all other languages (Figure 5).",
"For all these experiments, we project into 32-dimensional syntactic spaces.",
"Additionally, we expose a web interface for visualization in our GitHub repository.",
"When projected into a syntactic subspace determined by a structural probe, we find that difference vectors separate into clusters reflecting linguistic characteristics of the dependencies.",
"The cluster identities largely overlap with (but do not exactly agree with) dependency labels as defined by Universal Dependencies (Figure 6).",
"Additionally, the clusters found by mBERT are highly multilingual.",
"When dependencies from several languages are projected into the same syntactic subspace, whether trained monolingually or cross-lingually, we find that dependencies of the same label share the same cluster (e.g., Figure 1).",
"(We reduce the dimensionality of the subspaces here compared to our previous experiments to match t-SNE suggestions and to more aggressively filter non-syntactic information; the visualization interface is available at https://github.com/ethanachi/multilingual-probing-visualization/blob/master/visualization.md. Example sentences are trimmed for clarity.)",
"Visualizing syntactic differences in the syntactic space provides a surprisingly nuanced view of the native distinctions made by mBERT.",
"In Figure 6, these differences are colored by gold UD dependency labels.",
"A brief summary is as follows.",
"Adjectives: Universal Dependencies categorizes all adjectival noun modifiers under the amod relation.",
"However, we find that mBERT splits adjectives into two groups: prenominal adjectives in cluster (b) (e.g., in Chinese) and postnominal adjectives in cluster (u) (e.g., French applications domestiques).",
"Nominal arguments: mBERT maintains the UD distinction between subject (nsubj) and object (obj).",
"Indirect objects (iobj) cluster with direct objects.",
"Interestingly, mBERT generally groups adjunct arguments (obl) with nsubj if near the beginning of a sentence and with obj otherwise.",
"Relative clauses: In the languages in our dataset, there are two major ways of forming relative clauses.",
"Relative pronouns (e.g., English the man who is hungry) are classed by Universal Dependencies as nsubj dependents, while subordinating markers (e.g., English I know that she saw me) are classed as dependents of a mark relation.",
"However, mBERT groups both of these relations together, clustering them distinctly from most other nsubj and mark relations.",
"Negatives: Negative adverbial modifiers (e.g., English not) are not clustered with other adverbial syntactic relations (advmod), but form their own group (h).",
"Determiners: The linguistic category of determiners (det) is split into definite articles (i), indefinite articles (e), possessives (f), and demonstratives (g).",
"Sentence-initial definite articles (k) cluster separately from other definite articles (j).",
"Expletive subjects: Just as in UD with its separate relation expl, expletive subjects, i.e., third-person pronouns with no syntactic meaning (e.g., English It is cold, French Il faudrait, Indonesian Yang menjadi masalah kemudian), cluster separately (k) from other nsubj relations (small cluster in the bottom left).",
"Overall, mBERT draws slightly different distinctions from Universal Dependencies.",
"Although some are more fine-grained than UD, others appear to be more influenced by word order, separating relations that most linguists would group together.",
"Still others are valid linguistic distinctions not distinguished by the UD standard.",
"Previous work has found that it is possible to recover dependency labels from mBERT embeddings, in the form of very high accuracy on dependency label probes (Liu et al., 2019; Tenney et al., 2019b).",
"However, although we know that dependency label probes are able to use supervision to map from mBERT's representations to UD dependency labels, this does not provide full insight into the nature of (or existence of) latent dependency label structure in mBERT.",
"By contrast, in the structural probe, B is optimized such that ||v_diff||₂ ≈ 1, but no supervision as to dependency label is given.",
"The contribution of our method is thus to provide a view into mBERT's own dependency label representation.",
"In Appendix A, Figure 8, we provide a similar visualization as applied to MBERTRAND , finding much less cluster coherence.",
"Our head-dependent vector visualization uses a supervised probe, but its objects of study are properties of the representation other than those relating to the probe supervision signal.",
"Because the probe never sees supervision on the task we visualize for, the visualized behavior cannot be the result of the probe memorizing the task, a problem in probing methodology (Hewitt and Liang, 2019).",
"(Stanford Dependencies and Universal Dependencies v1 had a separate neg dependency, but it was eliminated in UDv2.)",
"Instead, it is an example of using probe supervision to focus in on aspects that may be drowned out in the original representation.",
"However, the probe's linear transformation may not pick up on aspects that are of causal influence to the model.",
"Cross-lingual embedding alignment Lample et al. (2018) find that independently trained monolingual word embedding spaces in ELMo are isometric under rotation.",
"Similarly, Schuster et al. (2019) and Wang et al. (2019) geometrically align contextualized word embeddings trained independently.",
"Wu et al. (2019) find that cross-lingual transfer in mBERT is possible even without shared vocabulary tokens, which they attribute to this isometricity.",
"In concurrent work, Cao et al. (2020) demonstrate that mBERT embeddings of similar words in similar sentences across languages are approximately aligned already, suggesting that mBERT also aligns semantics across languages.",
"K et al. (2020) demonstrate that strong cross-lingual transfer is possible without any word piece overlap at all.",
"Analysis with the structural probe In a monolingual study, Reif et al. (2019) also use the structural probe of Hewitt and Manning (2019) as a tool for understanding the syntax of BERT.",
"They plot the words of individual sentences in a 2-dimensional PCA projection of the structural probe distances, for a geometric visualization of individual syntax trees.",
"Further, they find that distances in the BERT space separate clusters of word senses for the same word type.",
"Understanding representations Pires et al. (2019) find that cross-lingual BERT representations share a common subspace representing useful linguistic information.",
"Libovický et al. (2019) find that mBERT representations are composed of a language-specific component and a language-neutral component.",
"Both Libovický et al. (2019) and Kudugunta et al. (2019) perform SVCCA on LM representations extracted from mBERT and a massively multilingual transformer-based NMT model, finding language family-like clusters.",
"Li and Eisner (2019) present a study in syntactically motivated dimensionality reduction; they find that after being passed through an information bottleneck and dimensionality reduction via t-SNE, ELMo representations cluster naturally by UD part of speech tags.",
"Unlike our syntactic dimensionality reduction process, the information bottleneck is directly supervised on POS tags, whereas our process receives no linguistic supervision other than unlabeled tree structure.",
"In addition, the reduction process, a feed-forward neural network, is more complex than our linear transformation.",
"Singh et al. (2019) evaluate the similarity of mBERT representations using Canonical Correlation Analysis (CCA), finding that overlap among subword tokens accounts for much of the representational similarity of mBERT.",
"However, they analyze cross-lingual overlap across all components of the mBERT representation, whereas we evaluate solely the overlap of syntactic subspaces.",
"Since syntactic subspaces are at most a small part of the total BERT space, these are not necessarily mutually contradictory with our results.",
"In concurrent work, Michael et al. (2020) also extend probing methodology, extracting latent ontologies from contextual representations without direct supervision.",
"Language models trained on large amounts of text have been shown to develop surprising emergent properties; of particular interest is the emergence of non-trivial, easily accessible linguistic properties seemingly far removed from the training objective.",
"For example, it would be a reasonable strategy for mBERT to share little representation space between languages, effectively learning a private model for each language and avoiding destructive interference.",
"Instead, our transfer experiments provide evidence that at a syntactic level, mBERT shares portions of its representation space between languages.",
"Perhaps more surprisingly, we find evidence for fine-grained, cross-lingual syntactic distinctions in these representations.",
"Even though our method for identifying these distinctions lacks dependency label supervision, we still identify that mBERT has a cross-linguistic clustering of grammatical relations that qualitatively overlaps considerably with the Universal Dependencies formalism.",
"The UUAS metric We note that the UUAS metric alone is insufficient for evaluating the accuracy of the structural probe.",
"While the probe is optimized to directly recreate parse distances, that is, d_B(h^ℓ_i, h^ℓ_j) ≈ d_{T^ℓ}(w^ℓ_i, w^ℓ_j), a perfect UUAS score under the minimum spanning tree construction can be achieved by ensuring that d_B(h^ℓ_i, h^ℓ_j) is small if there is an edge between w^ℓ_i and w^ℓ_j, and large otherwise, instead of accurately recreating distances between words connected by longer paths.",
"By evaluating Spearman correlation between all pairs of words, one directly evaluates the extent to which the ordering of words j by distance to each word i is correctly predicted, a key notion of the geometric interpretation of the structural probe.",
"See Maudslay et al. (2020) for further discussion.",
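The point about UUAS can be made concrete with a toy example (our own construction, not from the source): a predictor that assigns distance 1 to tree edges and one large constant everywhere else recovers the tree perfectly under the MST, even though it collapses all longer path distances into a tie and so loses the ordering that DSpr. measures.

```python
import numpy as np

# Gold tree: the chain 0-1-2-3, with true path distances d_T.
gold_edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
true_d = np.array([[0, 1, 2, 3],
                   [1, 0, 1, 2],
                   [2, 1, 0, 1],
                   [3, 2, 1, 0]], dtype=float)

# Degenerate predictor: 1 on tree edges, a constant 10 elsewhere.
deg_d = np.where(true_d == 1, 1.0, 10.0)
np.fill_diagonal(deg_d, 0.0)

def mst(dist):
    # Prim's algorithm over pairwise distances.
    n = len(dist)
    in_tree, edges = {0}, set()
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist[e])
        edges.add(frozenset((u, v)))
        in_tree.add(v)
    return edges

uuas = len(mst(deg_d) & gold_edges) / len(gold_edges)  # perfect: 1.0
# ...but longer paths are tied (deg_d[0, 2] == deg_d[0, 3]), so the distance
# ordering that Spearman correlation evaluates is not recovered.
```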
"Limitations Our methods are unable to tease apart, for all pairs of languages, whether transfer performance is caused by subword overlap (Singh et al., 2019) or by a more fundamental sharing of parameters, though we do note that language pairs with minimal subword overlap do exhibit nonzero transfer, both in our experiments and in others (K et al., 2020).",
"Moreover, while we quantitatively evaluate cross-lingual transfer in recovering dependency distances, we only conduct a qualitative study in the unsupervised emergence of dependency labels via t-SNE.",
"Future work could extend this analysis to include quantitative results on the extent of agreement with UD.",
"We acknowledge as well issues in interpreting t-SNE plots (Wattenberg et al., 2016), and include multiple plots with various hyperparameter settings to hedge against this confounder in Figure 11.",
"Future work should explore other multilingual models like XLM and XLM-RoBERTa (Lample and Conneau, 2019) and attempt to come to an understanding of the extent to which the properties we've discovered have causal implications for the decisions made by the model, a claim our methods cannot support.",
"We would like to thank Erik Jones, Sebastian Schuster, and Chris Donahue for helpful feedback and suggestions.",
"We would also like to thank the anonymous reviewers and area chair Adina Williams for their helpful comments on our draft."
] | [
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"method",
"method",
"abstain",
"method",
"objective",
"objective",
"objective",
"result",
"abstain",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"objective",
"method",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"other",
"method",
"method",
"method",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"result",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"The cross-database context-dependent Text-to-SQL (XDTS) problem has attracted considerable attention in recent years due to its wide range of potential applications.",
"However, we identify two biases in existing datasets for XDTS: (1) a high proportion of context-independent questions and (2) a high proportion of easy SQL queries.",
"These biases conceal the major challenges in XDTS to some extent.",
"In this work, we present CHASE , a large-scale and pragmatic Chinese dataset for XDTS.",
"It consists of 5,459 coherent question sequences (17,940 questions with their SQL queries annotated) over 280 databases, in which only 35% of questions are context-independent, and 28% of SQL queries are easy.",
"We experiment on CHASE with three state-of-the-art XDTS approaches.",
"The best approach only achieves an exact match accuracy of 40% over all questions and 16% over all question sequences, indicating that CHASE highlights the challenging problems of XDTS.",
"We believe that CHASE can provide fertile soil for addressing the problems.",
"The problem of mapping a natural language utterance into an executable SQL query in the cross-database and context-dependent setting has attracted considerable attention due to its wide range of applications (Wang et al., 2020b; Zhong et al., 2020).",
"This problem is notoriously challenging, due to the complex contextual dependencies among questions in a sequence.",
"Consider the question sequence in Figure 1.",
"In order to understand the last question, one needs to figure out the elliptical object of the verb (have) from the first two questions in the sequence, which is (first pick player).",
"Questions like this are context-dependent, since they require resolution of contextual dependencies such as the ellipsis in this question.",
"(Work done during an internship at Microsoft Research.)",
"There are also context-independent questions that can be understood individually, such as the first question in Figure 1.",
"For ease of reference, we refer to this cross-database context-dependent Text-to-SQL problem as XDTS .",
"To study the challenges in XDTS, a continuous effort has been dedicated to constructing datasets, including SParC (Yu et al., 2019a) and CoSQL (Yu et al., 2019b).",
"However, through a careful analysis on existing datasets, we identify two biases in them and these biases conceal the major challenges in XDTS to some extent.",
"First, there are only a limited number of context-dependent questions in existing datasets.",
"Specifically, only 32% of questions in CoSQL are context-dependent, and only 66% of question sequences have context-dependent questions.",
"SParC has more context-dependent questions than CoSQL, but it still has 48% of context-independent questions.",
"Such a limited number of context-dependent questions is unexpected, because prior work (Bertomeu et al., 2006) has shown that questions within a database dialogue are highly likely to be context-dependent, and how to effectively model the context to understand a context-dependent question is one of the major challenges in XDTS.",
"Second, 40% of SQL queries in both SParC and CoSQL are particularly easy, involving at most one condition expression.",
"This biased distribution of SQL queries is potentially caused by their construction methods.",
"In fact, we find that SQL queries for question sequences created from scratch are much more challenging.",
"Upon identifying the limitations of existing datasets, we present CHASE , a large-scale and pragmatic Chinese dataset for XDTS.",
"CHASE consists of 5,459 question sequences (17,940 questions with their SQL queries annotated) over 280 multi-table relational databases.",
"Compared with SParC and CoSQL, the proportion of context-independent questions in CHASE is reduced to 35%, from 48% and 68% respectively, and the proportion of easy SQL queries is reduced to 28%, from 40% and 41%.",
"Moreover, CHASE has richer semantic annotations, including the contextual dependency and schema linking (Lei et al., 2020) of each question.",
"CHASE is also the first Chinese dataset for XDTS.",
"CHASE is made up of two parts: CHASE-C and CHASE-T.",
"In CHASE-C, we recruit 12 Chinese college students who are proficient in SQL to create question sequences from scratch and annotate corresponding SQL queries.",
"To ensure the diversity and cohesion of question sequences, we propose an intent recommendation method.",
"When a student is about to raise a question, the method randomly samples an intent category, and the student is recommended to write the question and SQL query according to it.",
"In CHASE-T, inspired by the construction of CSpider (Min et al., 2019), we translate all the questions, SQL queries, and databases in SParC from English to Chinese.",
"We also try our best to mitigate the biases in SParC.",
"To understand the characteristics of CHASE , we conduct a detailed data analysis and experiment with three state-of-the-art (SOTA) XDTS approaches, namely, EditSQL (Zhang et al., 2019), IGSQL (Cai and Wan, 2020), and our extension of RAT-SQL (Wang et al., 2020a).",
"The best approach only achieves an exact match accuracy of 40% over all questions and 16% over all question sequences, indicating that CHASE presents significant challenges for future research.",
"The dataset, benchmark approaches, and our annotation tools are available at https://xjtu-intsoft.github.io/chase .",
"In summary, this paper makes the following main contributions: We identify two biases in existing datasets for XDTS: (1) a high proportion of context-independent questions and (2) a high proportion of easy SQL queries.",
"We propose an intent recommendation method to guide the question sequence creation.",
"The analysis on CHASE shows that our method is useful to enrich the diversity and cohesion of question sequences.",
"CHASE , to the best of our knowledge, is the first large-scale and pragmatic Chinese dataset for XDTS.",
"Experimental results on CHASE with three state-of-the-art approaches show that there is still a long way to go toward solving the challenging problems of XDTS.",
"In this section, we first formally define the problem of XDTS and its evaluation metrics.",
"Then, we present our study to understand the limitations and biases of existing datasets in Contextual Dependency and SQL Hardness Distribution .",
"Let Q_i = ⟨q_{i1}, ..., q_{in}⟩ and Y_i = ⟨y_{i1}, ..., y_{in}⟩ denote a question sequence and its SQL queries, where q_{ij} is the j-th question in Q_i and y_{ij} is the corresponding SQL query for q_{ij}.",
"Given a database DB_i, a question q_{ij}, and the question's context ⟨q_{i1}, ..., q_{i,j-1}⟩, the goal of XDTS is to generate the SQL query y_{ij} for q_{ij}.",
"An XDTS dataset is a set of question sequences {(Q_i, Y_i, DB_i)}_{i=1}^{N}.",
"Two metrics are widely used to evaluate the prediction accuracy for XDTS: Question Match and Interaction Match .",
"Question Match is 1 when the predicted SQL query of q_{ij} matches y_{ij}.",
"Interaction Match is 1 when all predicted SQL queries of Q_i match Y_i.",
"(Footnote 1: Following Yu et al. (2018), we decompose a predicted query into different clauses, such as SELECT and WHERE, and compute scores for each clause separately using set matching.)",
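The clause-level set matching behind the two metrics can be sketched in Python. This is a minimal illustration, not the official evaluator (Yu et al., 2018), and it assumes queries have already been parsed into clause-to-component dictionaries:

```python
# Minimal sketch of Question Match / Interaction Match via clause-level
# set matching. Queries are assumed pre-parsed into {clause: [components]}.

def question_match(pred, gold):
    """1 if every clause of the predicted query equals the gold clause
    under set comparison (order-insensitive), else 0."""
    clauses = set(pred) | set(gold)
    return int(all(set(pred.get(c, [])) == set(gold.get(c, [])) for c in clauses))

def interaction_match(pred_seq, gold_seq):
    """1 only if every predicted query in the sequence matches its gold query."""
    return int(len(pred_seq) == len(gold_seq) and
               all(question_match(p, g) for p, g in zip(pred_seq, gold_seq)))

# Usage: component order within a clause does not affect the score.
pred = {"SELECT": ["name"], "WHERE": ["age > 30"]}
gold = {"WHERE": ["age > 30"], "SELECT": ["name"]}
```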
"Dataset There are two datasets for studying XDTS, both of which are English corpora.",
"(1) SParC (Yu et al., 2019b) is the first dataset for XDTS.",
"It is constructed upon the Spider dataset (Yu et al., 2018).",
"Given a pair of question and SQL query chosen from Spider, an annotator was asked to write a sequence of questions to achieve the goal specified in the chosen pair.",
"(2) CoSQL (Yu et al., 2019a) is a corpus for task-oriented dialogue.",
"It uses SQL queries for dialogue state tracking.",
"Hence, it is also used to study XDTS.",
"Question sequences in CoSQL were collected under the Wizard-of-Oz setup (Kelley, 1984).",
"An annotator was assigned a pair of question and SQL query chosen from Spider, and she was asked to raise interrelated questions towards the goal specified in the pair.",
"Another annotator wrote the SQL query for the question if it was answerable.",
"Benchmark Approach We consider three SOTA approaches as our benchmark approaches to understand the characteristics of existing datasets: EditSQL (Zhang et al., 2019), IGSQL (Cai and Wan, 2020), and RAT-CON .",
"RAT-CON is our extension of RAT-SQL (Wang et al., 2020a), which is the SOTA approach for the context-independent Text-to-SQL problem.",
"Appendix A.1 provides the details of our extension.",
"All of the three approaches utilize BERT (Devlin et al., 2019) for encodings.",
"Prior work (Bertomeu et al., 2006) on database question answering dialogues reveals that questions within a dialogue tend to be context-dependent, i.e., the meaning of a question cannot be understood without its context.",
"The last two questions in Figure 1 are typical context-dependent questions, requiring resolutions of ellipsis.",
"In fact, how to effectively model the context to understand a context-dependent question is one of the major challenges in XDTS (Liu et al., 2020).",
"Hence, we study this characteristic of existing datasets to understand how pragmatic and challenging they are.",
"To measure the contextual dependency of an XDTS dataset, we manually classify all the questions in its development set into context-dependent and context-independent.",
"If a question is context-dependent, we further label whether it has coreference or ellipsis , which are two frequently observed linguistic phenomena in dialogues (Androutsopoulos et al., 1995).",
"Table 1 (measurement of contextual dependency): SParC — 47.5% context-independent, 52.5% context-dependent (36.6% coreference, 20.9% ellipsis); CoSQL — 68.2%, 31.8% (18.1%, 4.9%); CHASE — 35.3%, 64.7% (36.2%, 29.0%); CHASE-C — 28.8%, 71.2% (40.3%, 31.4%); CHASE-T — 42.2%, 57.8% (33.1%, 26.4%). Note that a question",
"can have both coreference and ellipsis.",
"Each question is first classified by one author of this paper, and then cross-checked and corrected by another.",
"As shown in Table 1, there are only a limited number of context-dependent questions in existing datasets.",
"Specifically, only 32% of questions in CoSQL are context-dependent, and the remaining 68% questions can be understood without the context.",
"Among the 293 question sequences in the development set of CoSQL, 34% of them do not have any context-dependent question.",
"Table 15 in Appendix provides a set of CoSQL question sequences and our classification results.",
"Compared with CoSQL, SParC has more context-dependent questions and more questions that require resolutions of coreference and ellipsis.",
"Nevertheless, 48% of its questions are still context-independent.",
"Table 2 shows the Question Match (QM) and Interaction Match (IM) of our benchmark approaches on SParC and CoSQL.",
"The QM on context-dependent questions is substantially lower than that on context-independent ones, showing that it is challenging for SOTA approaches to generate SQL queries for context-dependent questions.",
"In view of this challenge and the limited number of context-dependent questions in existing datasets, it is necessary to construct a more pragmatic dataset, involving more context-dependent questions, for studying XDTS.",
"SQL hardness is defined as a four-level complexity for SQL queries: easy , medium , hard , and extra hard , according to the number of components, selections, and conditions in a SQL query (Yu et al., 2018).",
"The more components a SQL query has, the more complex it is.",
"Intuitively, the more hard and extra hard SQL queries a dataset has, the more challenging the dataset is.",
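A rough sketch of such a hardness bucketing is below. The actual Spider criteria (Yu et al., 2018) are more detailed; this toy heuristic only counts keyword components and condition expressions on the raw SQL string, and the thresholds are assumptions for illustration:

```python
# Illustrative hardness bucketing: count SQL "components" (GROUP BY, ORDER BY,
# JOIN, set operators) and condition expressions, then map the total to one of
# the four levels described above. Not the official Spider implementation.

def hardness(sql: str) -> str:
    s = sql.upper()
    components = sum(s.count(k) for k in
                     ("GROUP BY", "ORDER BY", "JOIN", "EXCEPT", "UNION", "INTERSECT"))
    conditions = s.count(" AND ") + s.count(" OR ") + (1 if " WHERE " in f" {s} " else 0)
    score = components + conditions
    if score <= 1:
        return "easy"
    if score <= 2:
        return "medium"
    if score <= 4:
        return "hard"
    return "extra hard"
```

Under this heuristic, a bare single-condition query stays "easy", matching the intuition that more components mean more complexity.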
"Table 3 presents the SQL hardness distribution in the development set of SParC and CoSQL.",
"We can observe a biased distribution in both datasets, i.e., more than 40% of SQL queries are easy.",
"This biased distribution is potentially caused by their construction methods.",
"Take SParC as an example.",
"A question sequence is constructed by decomposing a complex SQL query into multiple thematically related ones.",
"Although this method is cost-effective, there is little chance that a SQL query is more complicated than the one that it is decomposed from.",
"As we will show in Section 4.3, the SQL hardness distribution of question sequences created from scratch differs a lot from those created via decomposition.",
"Given the limitations of existing datasets, we present CHASE , a large-scale and pragmatic Chinese dataset for XDTS.",
"Unlike the construction of SParC and CoSQL, we do not specify a final goal for each question sequence.",
"Instead, we motivate our annotators to raise diverse and coherent questions via an intent recommendation method.",
"Based on this method, we collect a set of relational databases, and we recruit annotators to create question sequences from scratch and annotate corresponding SQL queries.",
"Data collected in this way are referred to as CHASE-C .",
"Besides, inspired by the construction of CSpider (Min et al., 2019) and Vietnamese Spider (Tuan Nguyen et al., 2020), we translate all the questions, SQL queries, and databases in SParC from English to Chinese.",
"During translation, we also try our best to mitigate the biases in SParC.",
"Data collected with this method are referred to as CHASE-T .",
"CHASE is made up of both CHASE-C and CHASE-T.",
"Since all existing datasets for XDTS are constructed for English, prior work on this problem primarily focuses on English, leaving other languages underexplored.",
"To enrich the language diversity, in this paper, we construct CHASE for Chinese, and we leave support for more languages as important future work.",
"In XDTS, the intent of a question q_{ij} is fully reflected by its SQL query y_{ij}.",
"Hence, by defining a rich set of relations between y_{i,j-1} and y_{ij}, we can derive diverse y_{ij} based on y_{i,j-1}.",
"Consequently, we can motivate annotators to raise questions with diverse intents.",
"We define four basic intent categories of relations between y_{i,j-1} and y_{ij}: (1) Same Instances .",
"y_{ij} focuses on the other properties of the instances queried in y_{i,j-1}, e.g., by replacing columns in the SELECT clause of y_{i,j-1}.",
"(2) Different Instances of the Same Entity .",
"y_{ij} queries the same type of entity and properties as in y_{i,j-1}, but it focuses on different instances, e.g., by adding an extra condition in the WHERE clause.",
"(3) Different Entity .",
"y_{ij} queries a different type of entity than y_{i,j-1}, e.g., by altering the tables in the FROM clause of y_{i,j-1}.",
"(4) Display .",
"y_{ij} alters the way to display the information queried in y_{i,j-1}, e.g., by adding an ORDER BY clause or DISTINCT in the SELECT clause.",
"We define 16 relations in these four categories, and we also allow combinations of them.",
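The four categories above can be sketched as query transforms over a previous query. This is a hypothetical illustration: the paper's 16 concrete relations are richer than these four toy transforms, and the dictionary query representation is an assumption:

```python
import copy
import random

# Toy transforms mirroring the four intent categories described above.
# Queries are represented as {clause: [components]} dicts for illustration.

def recommend_category(rng=random):
    # Randomly sample a basic category, as in the recommendation method.
    return rng.choice(["Same Instances", "Different Instances of the Same Entity",
                       "Different Entity", "Display"])

def same_instances(q, new_cols):
    nq = copy.deepcopy(q); nq["SELECT"] = new_cols; return nq          # (1)

def different_instances(q, extra_cond):
    nq = copy.deepcopy(q); nq.setdefault("WHERE", []).append(extra_cond); return nq  # (2)

def different_entity(q, new_tables):
    nq = copy.deepcopy(q); nq["FROM"] = new_tables; return nq          # (3)

def display(q, order_col):
    nq = copy.deepcopy(q); nq["ORDER BY"] = [order_col]; return nq     # (4)

# Usage: derive a follow-up query from a previous one (hypothetical schema).
prev = {"SELECT": ["player"], "FROM": ["picks"], "WHERE": ["pick_no = 1"]}
follow = different_instances(prev, "year > 2010")
```

Note that `deepcopy` keeps the previous query intact, so each follow-up is derived non-destructively.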
"Due to the limit of space, we only present 8 relations with their examples in Table 4.",
"Complete relations are available in Table 12 of the Appendix.",
"When an annotator is going to raise a follow-up question, one of the five intent categories in Table 4 will be randomly selected.",
"The annotator is then recommended to choose a relation belonging to the selected category and raise the question according to the relation.",
"Also, the annotator is allowed to change the intent category when it is not applicable or she has a better choice.",
"With this intent recommendation method, follow-up questions will be closely related to their previous questions and present rich intent diversity.",
"Data in CHASE-C are collected in three stages: (1) database collection; (2) question sequence creation; and (3) data review.",
"We collect 120 Chinese multi-table relational databases from the DuSQL dataset (Wang et al., 2020c).",
"There are 200 databases and 813 tables in DuSQL, but most of the tables are crawled from encyclopedias and forums.",
"Hence, there are many missing entries and much noise (e.g., duplicated or conflicting columns, tables in a database describing unrelated topics, and missing foreign key constraints).",
"To obtain high-quality databases, we manually revise all the databases, dropping those without related tables, resolving duplicated or conflicted columns, and complementing missing entries.",
"As a result, we collect 120 high-quality databases, covering 60 different domains such as Sport, Education, and Entertainment.",
"We recruit 12 Chinese college students that are skilled at SQL to create question sequences for databases from scratch.",
"They are also asked to write the SQL query for each question.",
"When a student starts a question sequence creation session, she is shown all the contents from a database, and she can get familiar with the database by executing arbitrary SQL queries.",
"Once she gets ready, she will receive a specification of the minimum number of questions in the sequence.",
"She can raise the first question based on her interests.",
"Take the creation of question sequence in Figure 1 as an example.",
"The student asks the first question MVP and writes its corresponding SQL query.",
"The execution results of the SQL query will be shown to the student, helping her raise the follow-up question.",
"After that, she receives the intent category Different Instances of the Same Entity , which is randomly sampled by our annotation tool.",
"She chooses the Overlap relation in this category and raises the second question.",
"This creation session continues until the minimum number of questions is reached.",
"To help study the characteristics of questions and address the schema linking challenge (Guo et al., 2019b; Lei et al., 2020) in Text-to-SQL, we also ask the students to label each question's contextual dependency as in Section 2.3 and the linking between database schema items (tables and columns in databases) and their mentions in questions.",
"To ensure the data quality, we conduct two rounds of data review.",
"First, when a student creates her first 20 question sequences, we carefully review all the annotations to check whether the questions in each sequence are thematically related and whether the semantics of SQL queries match their questions.",
"If not, we run a new round of training for the student.",
"Through this round of review, we can resolve misunderstandings of annotations as early as possible.",
"After the question sequence creation stage finishes, we review all the question sequences as in the first round, and we ask the students to modify their annotations if there are any problems.",
"The original SParC dataset consists of 4,298 question sequences and 200 databases, but only 3,456 question sequences and 160 databases are publicly available for training and development.",
"Hence, we could only translate those to construct CHASE-T.",
"The translation work is performed by 11 college students, 10 of whom also participate in the question sequence creation stage of CHASE-C.",
"Each database and all its question sequences are translated by one student.",
"The student also needs to label each question's contextual dependency and the linking between schema items and their mentions in the translated questions.",
"We encourage the student to translate a question based on its semantics to obtain the most natural question in Chinese.",
"To mitigate the biases in SParC, we ask our students to modify those context-independent or thematically unrelated questions and SQL queries to make the question sequences more coherent and natural.",
"Our intent recommendation method is also applied to guide the modification.",
"To ensure the data quality, we also run a two-round data review as in Section 3.2.3.",
"During the construction of CHASE-T, we identified and fixed 150 incorrect SQL queries in SParC.",
"Also, we modified 1,470 SQL queries to make the question sequences in CHASE-T more coherent.",
"We compute the statistics of CHASE and conduct a thorough analysis to understand its three characteristics: contextual dependency, SQL hardness distribution, and mention of database schema items.",
"Table 5 summarizes the statistics of CHASE .",
"CHASE has 5,459 question sequences (17,940 questions with their corresponding SQL queries annotated) over 280 databases. (Footnote 4: We have emailed the authors of SParC to apply our patch to fix the incorrect SQL queries.) Table 6 (dataset split statistics, # DB / # Seq. / # Pair): CHASE — Train 200 / 3,949 / 12,914, Dev 40 / 755 / 2,494, Test 40 / 755 / 2,532; CHASE-C — Train 80 / 1,377 / 5,141, Dev 20 / 333 / 1,291, Test 20 / 333 / 1,262; CHASE-T — Train 140 / 3,034 / 9,043, Dev 20 / 422 / 1,203, Test —.",
"CHASE-C contributes 37% of the question sequences and 43% of the question-SQL pairs; CHASE-T contributes the rest.",
"CHASE is the largest dataset for XDTS to date, consisting of the most question sequences, SQL queries, and databases.",
"CHASE also has rich semantic annotations, including contextual dependency and schema linking, which can inspire innovations to address challenges in XDTS.",
"Table 16 in Appendix provides a list of question sequences in CHASE .",
"Data Split According to the cross-database setting of XDTS, we split CHASE such that a database appears in only one of the train, development, and test set.",
"To understand the characteristics of the data collected in CHASE-C and CHASE-T, we also split them accordingly.",
"Since CHASE-T is constructed from SParC, we follow the train and development split of the original SParC dataset.",
"Table 6 shows the data split statistics.",
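The database-disjoint split described above can be sketched as follows. The 70/15/15 ratio, seed, and `db_id` field are assumptions for illustration; the point is that all sequences over one database land in exactly one split:

```python
import random

# Cross-database split: group question sequences by database, shuffle the
# databases, and assign each database (with all its sequences) to exactly
# one of train/dev/test, so no database crosses split boundaries.

def split_by_database(sequences, seed=0, ratios=(0.7, 0.15, 0.15)):
    dbs = sorted({s["db_id"] for s in sequences})
    rng = random.Random(seed)
    rng.shuffle(dbs)
    n = len(dbs)
    cut1, cut2 = int(n * ratios[0]), int(n * (ratios[0] + ratios[1]))
    assign = {db: ("train" if i < cut1 else "dev" if i < cut2 else "test")
              for i, db in enumerate(dbs)}
    out = {"train": [], "dev": [], "test": []}
    for s in sequences:
        out[assign[s["db_id"]]].append(s)
    return out
```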
"Table 1 presents the contextual dependency characteristic of CHASE .",
"The numbers are computed on the development set, consistent with our study setup in Section 2.3.",
"The proportion of context-dependent questions in CHASE (65%) is substantially larger than in existing datasets.",
"Also, CHASE has more questions that require resolutions of coreference and ellipsis. (Table 7, mention of database schema items: CHASE — 48.2% exact string match, 40.2% fuzzy string match, 11.6% semantic match; CHASE-C — 41.2%, 44.8%, 14.0%; CHASE-T — 53.7%, 37.0%, 9.9%.)",
"From this point of view, CHASE is a better testbed for XDTS.",
"When it comes to CHASE-C and CHASE-T, 71% of questions in CHASE-C are context-dependent, showing that question sequences collected with our method have richer contextual dependencies than those collected via decomposition.",
"Compared with SParC, the number of context-dependent questions in CHASE-T increases from 53% to 58% through our effort.",
"Table 3 shows the SQL hardness distribution of CHASE .",
"SQL queries in different hardness levels are more evenly distributed in CHASE , and only 28% of them are easy.",
"By comparing CHASE-C with existing datasets, we can observe a remarkable difference between their hardness distributions.",
"Specifically, the number of easy queries (19%) in CHASE-C is less than that of hard (24%) and extra hard (20%) queries, indicating that question sequences created from scratch with our method are much more challenging.",
"In terms of CHASE-T, the number of easy queries decreases from 40% to 37% through our effort, compared with SParC.",
"To understand how database schema items (tables and columns) are mentioned in questions, for each item annotated in the schema linking, we examine whether or not it can exactly match its mention in the question (Suhr et al., 2020).",
"As shown in Table 7, among the 26,464 items annotated in the schema linking of CHASE , 48% of them are exactly mentioned in questions ( Exact String Match ), and 40% of them have at least one token that appears in their mentions ( Fuzzy String Match ).",
"The remaining 12% items cannot be matched with their mentions via any string-match based methods ( Semantic Match ).",
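The three-way mention classification can be sketched in a few lines. This assumes the schema item and its mention are pre-tokenized; the paper's actual procedure follows Suhr et al. (2020) and may differ in detail:

```python
# Three-way classification of how a schema item relates to its mention:
#   exact string match  - identical token sequences
#   fuzzy string match  - at least one shared token
#   semantic match      - no token overlap (needs semantic reasoning)

def match_type(item_tokens, mention_tokens):
    if item_tokens == mention_tokens:
        return "exact string match"
    if set(item_tokens) & set(mention_tokens):
        return "fuzzy string match"
    return "semantic match"
```

String-match heuristics like the first two branches are what a model such as RAT-SQL relies on, which is why the 12% semantic-match items are the hard cases.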
"Table 8 presents four typical examples for fuzzy string match and semantic match.",
"Compared with CHASE-T, whose data are constructed from SParC, CHASE-C has more items in the fuzzy string match and semantic match groups, implying that CHASE-C is more challenging and its mentions of schema items are more diverse.",
"To understand the performance of the SOTA approaches on CHASE , CHASE-C, and CHASE-T, we experiment with the three approaches introduced in Section 2.2.",
"Appendix A.3 provides the details of our adaptations for Chinese inputs and the experimental setup.",
"Table 9 presents the experimental results, from which we make four main observations.",
"First, the performance of the SOTA approaches on CHASE is far from satisfactory.",
"The best approach on CHASE , IGSQL, only achieves 40.4% Question Match (QM), which is significantly lower than the SOTA QM on SParC (60.1%) and CoSQL (50.8%).",
"In terms of Interaction Match (IM), the best approach on CHASE only achieves 15.6%, lagging behind the SOTA IM on SParC (38.1%) and CoSQL (20.1%) by a large margin.",
"5 These results show that CHASE presents significant challenges for future research on XDTS.",
"Second, the performance of the SOTA approaches on CHASE-C is lower than that on CHASE-T.",
"Specifically, IGSQL can achieve 43.3% QM and 26.3% IM on CHASE-T, but only 32.6% QM and 9.3% IM on CHASE-C.",
"It shows that question sequences created from scratch with our method are much more challenging, which is consistent with our analysis in Section 4.",
"Third, the performance of the SOTA approaches on CHASE-T is lower than that on SParC.",
"There are two reasons for the degradation.",
"First, during the construction of CHASE-T, we try our best to mitigate the two biases found in Section 2, which makes CHASE-T more pragmatic and challenging than SParC. (Footnote 5: CoSQL has more questions in a question sequence, 5.2 on average, than SParC, 3.0, and CHASE, 3.3.)",
"Second, existing approaches for XDTS are tuned for English only, and some components of these approaches cannot process Chinese inputs as well as English inputs.",
"Finally, although RAT-CON achieves the SOTA performance on SParC and CoSQL, it lags behind EditSQL and IGSQL by a large margin on CHASE and CHASE-C.",
"Through a careful examination, we find that RAT-SQL (Wang et al., 2020a), the model that RAT-CON builds upon, adopts a string-match based method to find the linking between database schema items and their mentions in questions.",
"However, this string-match based method struggles when many schema items are not exactly mentioned in questions.",
"Also, this method struggles in Chinese probably because it is only tuned for English.",
"The annotations of schema linking in CHASE can provide a great opportunity for future research to tackle this problem.",
"Table 10 shows the QM of IGSQL on the development set of CHASE , stratified by contextual dependency, SQL hardness, and question position.",
"We can observe a remarkable discrepancy between QM on context-independent and context-dependent questions.",
"To tackle this problem, more advanced context modeling methods are needed.",
"Our annotations of contextual dependency in CHASE can enable a fine-grained analysis on XDTS approaches, and they potentially can be used to address this problem.",
"Besides, we observe that the QM of IGSQL on medium, hard, and extra hard queries of CHASE is higher than that of CHASE-C and CHASE-T, implying that more training samples for these complex queries can improve an approach's performance on them.",
"A similar observation can be obtained in the question position.",
"The QM of IGSQL on questions in turn 4 and ≥ 5 is higher than that of CHASE-C and CHASE-T.",
"Table 11 shows the predictions of IGSQL for the question sequence shown in Figure 1.",
"q_1 queries the players that have won MVP, but IGSQL misses the MVP Record table, probably because the FROM clause of SQL is synthesized based on the other predicted clauses.",
"(Footnote 6: Tables 13 and 14 in the Appendix present the detailed experimental results of EditSQL and RAT-CON .)",
"q_2 requires a resolution of ellipsis.",
"It queries the college with the most first pick players, but IGSQL fails to resolve the ellipsis and predicts the wrong column in the SELECT clause.",
"The last question omits the object first pick players of the verb have, but the approach cannot fully resolve it and misses the first pick constraint in the WHERE clause.",
"Dataset XDTS is a sub-task of context-dependent semantic parsing (CDSP) (Suhr et al., 2018; Guo et al., 2019a; Li et al., 2020).",
"Many datasets have been constructed for CDSP.",
"They can be categorized into two groups according to their annotations.",
"(1) Denotation Utterances in this group of datasets are only labelled with their denotations, i.e., the execution results of logical forms.",
"SEQUENTIALQA (Iyyer et al., 2017), SCONE (Long et al., 2016), and CSQA (Saha et al., 2018) are representative datasets in this group.",
"SEQUENTIALQA was constructed by decomposing some complicated questions from WikiTableQuestions (Pasupat and Liang, 2015) into sequences of simple questions.",
"A question sequence in SCONE was collected by randomly generating a sequence of world states and asking annotators to write an utterance between each pair of successive states.",
"CSQA was constructed by collecting a large number of individual questions and converting them into question sequences via a set of manually crafted templates.",
"(2) Logical Form Utterances in this group are labelled with their logical forms.",
"Besides SParC and CoSQL, ATIS (Hemphill et al., 1990; Dahl et al., 1994) and TEMPSTRUCTURE (Chen and Bunescu, 2019) also fall into this group.",
"ATIS was constructed under the Wizard-of-Oz (WOZ) setup.",
"An annotator raised a question, and another annotator wrote the corresponding SQL query.",
"Unlike datasets for XDTS, ATIS only focuses on the flight planning domain, which limits the possible SQL logic it contains.",
"TEMPSTRUCTURE was also constructed under the WOZ setup, but it synthesized many artificial question sequences with templates to enlarge the dataset.",
"CHASE belongs to the logical-form group.",
"To the best of our knowledge, it is the largest dataset with logical forms annotated for CDSP.",
"Also, CHASE is the first Chinese dataset for CDSP.",
"Approach Many approaches have been proposed to address XDTS (Zhang et al., 2019; Cai and Wan, 2020; Zhong et al., 2020; Hui et al., 2021; Yu et al., 2021).",
"Zhang et al. (2019) proposed EditSQL, which generates a SQL query by editing the query generated for previous turns.",
"EditSQL also uses an interaction-level encoder (Suhr et al., 2018) to model the interactions between the current question and previous questions.",
"IGSQL (Cai and Wan, 2020) improves over EditSQL by introducing a graph encoder to model database schema items together with historically mentioned items.",
"Hui et al. (2021) jointly modeled a question sequence, schema items, and their interactions via a dynamic graph and a graph encoder.",
"They also proposed a re-ranking module to improve the generation accuracy.",
"Liu et al. (2020) systematically compared different context modeling methods on SParC and CoSQL.",
"They found that concatenating all questions as inputs rivals or even outperforms more complicated context modeling methods.",
"This finding also motivates us to implement the strong benchmark approach, RAT-CON .",
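The "concatenate all questions" idea behind RAT-CON can be sketched as building the encoder input by joining prior questions with a separator before the current one. The separator token and function name are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch of concatenation-based context modeling: the encoder
# input is simply all previous questions joined with a separator token,
# followed by the current question.

def build_input(history, current, sep="[SEP]"):
    return f" {sep} ".join(history + [current])

# Usage with the Figure 1 style sequence (questions paraphrased).
encoded = build_input(["Who won MVP?", "Which were first picks?"],
                      "Which college has the most?")
```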
"This work presents CHASE , to date the largest dataset for XDTS, consisting of 5,459 question sequences over 280 databases.",
"Each question in CHASE has rich semantic annotations, including its SQL query, contextual dependency, and schema linking.",
"Experimental results show that CHASE highlights the challenging problems of XDTS and that there is still a long way to go to meet users' real Text-to-SQL demands.",
"Currently, CHASE is constructed for Chinese.",
"We plan to support more languages in the future.",
"Besides, we plan to explore the ways to utilize the rich semantic annotations in CHASE to address the challenges in XDTS.",
"We thank Wuxia Jin and the anonymous reviewers for their helpful discussion and detailed comments.",
"We thank Weixu Zhang, Jiawei Lin, Xi-aotong Zheng, Nan Hu, Tingting Zhang, Zekun Qi, Chengzu Li, Junjie Tao, Jinghan He, and Yu Ma for participating in the construction of CHASE .",
"Ming Fan was partially supported by NSFC (61902306), the China Postdoctoral Science Foundation (2019TQ0251, 2020M673439), and the Youth Talent Support Plan of Xi'an Association for Science and Technology (095920201303).",
"This work presents CHASE , a free and open dataset for the research community to study the cross-database context-dependent Text-to-SQL problem (XDTS).",
"Data in CHASE are collected from two sources.",
"First, we collect 120 databases from the DuSQL (Wang et al., 2020c) dataset, a free and open dataset for the Chinese Text-to-SQL problem.",
"To collect question sequences on these 120 databases, we recruit 12 Chinese college students (5 females and 7 males).",
"Each student is paid 10 yuan ($1.6 USD) for creating each question sequence.",
"This compensation is determined according to prior work on similar dataset construction (Yu et al., 2019a).",
"Since all question sequences are collected against open-access databases, there is no privacy issue.",
"Second, to enlarge our dataset, we translate all the data, including questions, SQL queries, and databases, from English to Chinese in SParC (Yu et al., 2019b).",
"SParC is a free and open English dataset for XDTS.",
"Eleven college students (5 females and 6 males) are recruited to perform the translation, each of whom is paid 2 yuan ($0.3 USD) per translated question.",
"The details of our data collection and characteristics are introduced in Sections 3 and 4.",
"References I. Androutsopoulos, G.D. Ritchie, and P. Thanisch."
] | [
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"objective",
"objective",
"result",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Statutory reasoning is the task of determining whether a legal statute, stated in natural language, applies to the text description of a case.",
"Prior work introduced a resource that approached statutory reasoning as a monolithic textual entailment problem, with neural baselines performing nearly at-chance.",
"To address this challenge, we decompose statutory reasoning into four types of language-understanding challenge problems, through the introduction of concepts and structure found in Prolog programs.",
"Augmenting an existing benchmark, we provide annotations for the four tasks, and baselines for three of them.",
"Models for statutory reasoning are shown to benefit from the additional structure, improving on prior baselines.",
"Further, the decomposition into subtasks facilitates finer-grained model diagnostics and clearer incremental progress.",
"As more data becomes available, Natural Language Processing (NLP) techniques are increasingly being applied to the legal domain, including for the prediction of case outcomes (Xiao et al., 2018; Vacek et al., 2019; Chalkidis et al., 2019a).",
"In the US, cases are decided based on previous case outcomes, but also on the legal statutes compiled in the US code.",
"For our purposes, a case is a set of facts described in natural language, as in Figure 1, in blue.",
"The US code is a set of documents called statutes , themselves decomposed into subsections .",
"Taken together, subsections can be viewed as a body of interdependent rules specified in natural language, prescribing how case outcomes are to be determined.",
"Statutory reasoning is the task of determining whether a given subsection of a statute applies to a given case, where both are expressed in natural language.",
"Subsections are implicitly framed as predicates, which may be true or false of a given case.",
"Holzenberger et al. (2020) introduced SARA, a benchmark for the task of statutory reasoning, as well as two different approaches to solving this problem.",
"First, a manually-crafted symbolic reasoner based on Prolog is shown to perfectly solve the task, at the expense of experts writing the Prolog code and translating the natural language case descriptions into Prolog-understandable facts.",
"The second approach is based on statistical machine learning models.",
"While these models can be induced computationally, they perform poorly because the complexity of the task far surpasses the amount of training data available.",
"We posit that statutory reasoning as presented to statistical models is underspecified, in that it was cast as Recognizing Textual Entailment (Da-gan et al., 2005) and linear regression.",
"Taking inspiration from the structure of Prolog programs, we re-frame statutory reasoning as a sequence of four tasks, prompting us to introduce a novel extension of the SARA dataset (Section 2), referred to as SARA v2 .",
"Beyond improving the model's performance, as shown in Section 3, the additional structure makes it more interpretable, and so more suitable for practical applications.",
"We put our results in perspective in Section 4 and review related work in Section 5.",
"The symbolic solver requires experts translating the statutes and each new case's description into Prolog.",
"In contrast, a machine learning-based model has the potential to generalize to unseen cases and to changing legislation, a significant advantage for a practical application.",
"In the following, we argue that legal statutes share features with the symbolic solver's first-order logic.",
"We formalize this connection in a series of four challenge tasks, described in this section, and depicted in Figure 1.",
"We hope they provide structure to the problem, and a more efficient inductive bias for machine learning algorithms.",
"The annotations mentioned throughout the remainder of this section were developed by the authors, entirely by hand, with regular guidance from a legal scholar 1 .",
"Examples for each task are given in Appendix A. Statistics are shown in Figure 2 and further detailed in Appendix B. Argument identification This first task, in conjunction with the second, aims to identify the arguments of the predicate that a given subsection represents.",
"Some terms in a subsection refer to something concrete, such as the United States or April 24th, 2017.",
"Other terms can take a range of values depending on the case at hand, and act as placeholders.",
"For example, in the top left box of Figure 1, the terms a taxpayer and the taxable year can take different values based on the context, while the terms section 152 and this paragraph have concrete, immutable values.",
"Formally, given a sequence of tokens t 1 , ..., t n , the task is to return a set of start and end indices ( s, e ) { 1 , 2 , ..., n } 2 where each pair represents a span.",
"We borrow from the terminology of predicate argument alignment (Roth and Frank, 2012; Wolfe et al., 2013) and call these placeholders arguments .",
"The first task, which we call argument identification , is tagging which parts of a subsection denote such placeholders.",
"We provide annotations for argument identification as character-level spans representing arguments.",
"Since each span is a pointer to the corresponding argument, we made each span the shortest meaningful phrase.",
"Figure",
"2(b) shows corpus statistics about placeholders.",
"Argument coreference Some arguments detected in the previous task may appear multiple times within the same subsection.",
"For instance, in the top left of Figure 1, the variable representing the taxpayer in",
"2(a)(1)(B) is referred to twice.",
"We refer to the task of resolving this coreference problem at the level of the subsection as argument coreference .",
"While this coreference can span across subsections, as is the case in Figure 1, we intentionally leave it to the next task.",
"Keeping the notation of the above paragraph, given a set of spans { ( s i , e i ) } Si =1 , the task is to return a matrix C { 0 , 1 } S S where C i,j = 1 if spans ( s i , e i ) and ( s j , e j ) denote the same variable, 0 otherwise.",
"Corpus statistics about argument coreference can be found in Figure",
"2(a).",
"After these first two tasks, we can extract a set of arguments for every subsection.",
"In Figure 1, for",
"2(a)(1)(A), that would be { Taxp , Taxy , Spouse , Years } , as shown in the bottom left of Figure 1.",
"Structure extraction A prominent feature of legal statutes is the presence of references, implicit and explicit, to other parts of the statutes.",
"Resolving references and their logical connections, and passing arguments appropriately from one subsection to the other, are major steps in statutory reasoning.",
"We refer to this as structure extraction .",
"This mapping can be trivial, with the taxpayer and taxable year generally staying the same across subsections.",
"Some mappings are more involved, such as the taxpayer from",
"152(b)(1) becoming the dependent in",
"152(a).",
"Providing annotations for this task in general requires expert knowledge, as many references are implicit, and some must be resolved using guidance from Treasury Regulations.",
"Our approach contrasts with recent efforts in breaking down complex questions into atomic questions, with the possibility of referring to previous answers (Wolfson et al., 2020).",
"Statutes contain their own breakdown into atomic questions.",
"In addition, our structure is interpretable by a Prolog engine.",
"We provide structure extraction annotations for SARA in the style of Horn clauses (Horn, 1951), using common logical operators, as shown in the bottom left of Figure 1.",
"We also provide character offsets for the start and end of each subsection.",
"Argument identification and coreference, and structure extraction can be done with the statutes only.",
"They correspond to extracting a shallow version of the symbolic solver of Holzenberger et al. (2020).",
"Argument instantiation We frame legal statutes as a set of predicates specified in natural language.",
"Each subsection has a number of arguments, provided by the preceding tasks.",
"Given the description of a case, each argument may or may not be associated with a value.",
"Each subsection has an @truth argument, with possible values True or False , reflecting whether the subsection applies or not.",
"Concretely, the input is (1) the string representation of the subsection, (2) the annotations from the first three tasks, and (3) values for some or all of its arguments.",
"Arguments and values are represented as an array of key-value pairs, where the names of arguments specified in the structure",
"an-2(a)(1) (Taxp1, Taxy2, Spouse3, Years4, Household5, Dependent6, Deduction7, Cost8) :",
"notations are used as keys.",
"In Figure 1, compare the names of arguments in the green box with the key names in the blue boxes.",
"The output is values for its arguments, in particular for the @truth argument.",
"In the example of the top right in Figure 1, the input values are taxpayer = Alice and taxable year = 2017 , and one expected output is @truth = True.",
"We refer to this task as argument instantiation .",
"Values for arguments can be found as spans in the case description, or must be predicted based on the case description.",
"The latter happens often for dollar amounts, where incomes must be added, or tax must be computed.",
"Figure 1 shows two examples of this task, in blue.",
"Before determining whether a subsection applies, it may be necessary to infer the values of unspec-ified arguments.",
"For example, in the top of Figure 1, it is necessary to determine who Alice's deceased spouse and who the dependent mentioned in",
"2(a)(1)(B) are.",
"If applicable, we provide values for these arguments, not as inputs, but as additional supervision for the model.",
"We provide manual annotations for all (subsection, case) pairs in SARA.",
"In addition, we run the Prolog solver of Holzenberger et al. (2020) to generate annotations for all possible (subsection, case) pairs, to be used as a silver standard, in contrast to the gold manual annotations.",
"We exclude from the silver data any (subsection, case) pair where the case is part of the test set.",
"This increases the amount of available training data by a factor of 210.",
"We provide baselines for three tasks, omitting structure extraction because it is the one task with the highest return on human annotation effort 2 .",
"In other words, if humans could annotate for any of these four tasks, structure extraction is where we posit their involvement would be the most worthwhile.",
"Further, Pertierra et al. (2017) have shown that the related task of semantic parsing of legal statutes is a difficult task, calling for a complex model.",
"We run the Stanford parser (Socher et al., 2013) on the statutes, and extract all noun phrases as spans specifically, all NNP, NNPS, PRP$, NP and NML constituents.",
"While de-formatting legal text can boost parser performance (Morgenstern, 2014), we found it made little difference in our case.",
"As an orthogonal approach, we train a BERT-based CRF model for the task of BIO tagging.",
"With the 9 sections in the SARA v2 statutes, we create 7 equally-sized splits by grouping 68, 3301 and 7703 into a single split.",
"We run a 7-fold cross-validation, using 1 split as a dev set, 1 split as a test set, and the remaining as training data.",
"We embed each paragraph using BERT, classify each contextual subword embedding into a 3-dimensional logit with a linear layer, and run a CRF (Lafferty et al., 2001).",
"The model is trained with gradient descent to maximize the log-likelihood of the sequence of gold tags.",
"We experiment with using Legal BERT (Holzenberger et al., 2020) and BERT-base-cased (Devlin et al., 2019) as our BERT model.",
"We freeze its parameters and optionally unfreeze the last layer.",
"We use a batch size of 32 paragraphs, a learning rate of 10 3 and the Adam optimizer (Kingma and Ba, 2015).",
"Based on F1 score measured on the dev set, the best model uses Legal BERT and unfreezes its last layer.",
"Test results are shown in Table 1.",
"Argument coreference differs from the usual coreference task (Pradhan et al., 2014), even though we are using similar terminology, and frame it in a similar way.",
"In argument coreference, it is equally 2 Code for the experiments can be found under https: //github.com/SgfdDttt/sara_v2 Parser-based avg stddev macro precision 17.6 4.4 16.6 recall 77.9 5.0 77.3 F1 28.6 6.2 27.3 BERT-based avg stddev macro precision 64.7 15.0 65.1 recall 69.0 24.2 59.8 F1 66.2 20.5 62.4 Table 1: Argument identification results.",
"as important to link two coreferent argument mentions as it is not to link two different arguments.",
"In contrast, regular coreference emphasizes the prediction of links between mentions.",
"We thus report a different metric in Tables 2 and 4, exact match coreference , which gives credit for returning a cluster of mentions that corresponds exactly to an argument.",
"In Figure 1, a system would be rewarded for linking together both mentions of the taxpayer in",
"2(a)(1)(B), but not if any of the two mentions were linked to any other mention within",
"2(a)(1)(B).",
"This custom metric gives as much credit for correctly linking a single-mention argument (no links), as for a 5-mention argument (10 links).",
"coreference links.",
"Under usual coreference metrics, this system can have low performance.",
"predicts a coreference link if the placeholder strings of two arguments are identical, up to the presence of the words such , a , an , the , any , his and every .",
"We also provide usual coreference metrics in Table 3, using the code associated with Pradhan et al. (2014).",
"This baseline perfectly resolves coreference for 80.8% of subsections, versus 68.9% for the single mention baseline.",
"In addition, we provide a cascade of the best methods for argument identification and coreference, and report results in Table 4.",
"The cascade perfectly resolves a subsection's arguments in only 16.4% of cases.",
"This setting, which groups the first two tasks together, offers a significant challenge.",
"Argument instantiation takes into account the information provided by previous tasks.",
"We start by instantiating the arguments of a single subsection, without regard to the structure of the statutes.",
"We then describe how the structure information is incorporated into the model.",
"Algorithm 1 Argument instantiation for a single subsection Require: argument spans with coreference information A , input argument-value pairs D , subsection text s , case description c Ensure: output argument-value pairs P 1: function ARGINSTANTIATION ( A, D, s, c ) 2: P 3: for a in A \\ { @truth } do 4: r INSERTVALUES ( s, A, D, P ) 5: y BERT ( c, r ) 6: x COMPUTEATTENTIVEREPS ( y, a ) 7: v PREDICTVALUE ( x ) 8: P P ( a, v ) 9: end for 10: r INSERTVALUES ( s, A, D, P ) 11: y BERT CLS ( c, r ) 12: t TRUTHPREDICTOR ( y ) 13: P P ( @truth , t ) 14: return P 15: end function Single subsection We follow the paradigm of Chen et al. (2020), where we iteratively modify the text of the subsection by inserting argument values, and predict values for uninstantiated arguments.",
"Throughout the following, we refer to Algorithm 1 and to its notation.",
"For each argument whose value is provided, we replace the argument's placeholders in subsection s by the argument's value, using INSERTVALUES (line 4).",
"This yields mostly grammatical sentences, with occasional hiccups.",
"With",
"2(a)(1)(A) and the top right case from Figure 1, we obtain (A) Alice spouse died during either of the two years immediately preceding 2017.",
"We concatenate the text of the case c with the modified text of the subsection r , and embed it using BERT (line 5), yielding a sequence of contextual subword embeddings y = { y i R 768 | i = 1 ...n } .",
"Keeping with the notation of Chen et al. (2020), assume that the embedded case is represented by the sequence of vectors t 1 , ..., t m and the embedded subsection by s 1 , ..., s n .",
"For a given argument a , compute its attentive representation s 1 , ..., s m and its augmented feature vectors x 1 , ..., x m .",
"This operation, described by Chen et al. (2020), is performed by COMPUTEATTENTIVEREPS (line 6).",
"The augmented feature vectors x 1 , ..., x m represent the argument's placeholder, conditioned on the text of the statute and case.",
"Based on the name of the argument span, we predict its value v either as an integer or a span from the case description, using PREDICTVALUE (line 7).",
"For integers, as part of the model training, we run k-means clustering on the set of all integer values in the training set, with enough centroids such that returning the closest centroid instead of the true value yields a numerical accuracy of 1 (see below).",
"For any argument requiring an integer (e.g. tax ), the model returns a weighted average of the centroids.",
"The weights are predicted by a linear layer followed by a softmax, taking as input an average-pooling and a maxpooling of x 1 , ..., x m .",
"For a span from the case description, we follow the standard procedure for fine-tuning BERT on SQuAD (Devlin et al., 2019).",
"The unnormalized probability of the span from tokens i to j is given by e l x i + r x j where l , r are learnable parameters.",
"The predicted value v is added to the set of predictions P (line 8), and will be used in subsequent iterations to replace the argument's placeholder in the subsection.",
"We repeat this process until a value has been predicted for every argument, except @truth (lines 3-9).",
"Arguments are processed in order of appearance in the subsection.",
"Finally, we concatenate the case and fully grounded subsection and embed them with BERT (lines 10-11), then use a linear predictor on top of the representation for the [CLS] token to predict the value for the @truth argument (line 12).",
"Subsection with dependencies To describe our procedure at a high-level, we use the structure of the statutes to build out a computational graph, where nodes are either subsections with argument-value pairs, or logical operations.",
"We resolve nodes one by one, depth first.",
"We treat the single-subsection model described above as a function, taking as input a set of argument-value pairs, a string representation of a subsection, and a string representation of a case, and returning a set of argument-value pairs.",
"Algorithm 2 and Figure 3 summarize the following.",
"We start by building out the subsection's dependency tree, as specified by the structure annotations (lines 2-4).",
"First, we build the tree structure using BUILDDEPENDENCYTREE .",
"Then, values for arguments are propagated from parent to child, from the root down, with POPULATEARGVALUES .",
"The tree is optionally capped to a predefined depth.",
"Each node is either an input for the single-subsection function or its output, or a logical operation.",
"We then traverse the tree depth first, performing the following operations, and replacing the node with the result of the operation: If the node q is a leaf, resolve it using the single-subsection function ARGINSTANTIATION (lines 6-9 in Algorithm 2; step 1 in Figure 3).",
"If the node q is a subsection that is not a leaf, find its child node x (GETCHILD , line 12), and corresponding argument-value pairs other than @truth , D x (GETARGVALUEPAIRS , line 13).",
"Merge D x with D q , the argument-value pairs of the main node q (line 14).",
"Finally, resolve the parent node q using the single-subsection function (lines 15-16; step 3 in Figure 3. If node q is a logical operation (line 17), get its children C (GETCHILDREN , line 18), to which the operation will be applied with DOOPERATION (line 19) as follows: If q == NOT, assign the negation of the child's @truth value to q .",
"If q == OR, pick its child with the highest @truth value, and assign its arguments' values to q .",
"If q == AND, transfer the argument-value pairs from all its children to q .",
"In case of conflicting values, use the value associated with the lower @truth value.",
"This operation can be seen in step 4 of Figure 3. This procedure follows the formalism of neural module networks (Andreas et al., 2016) and is illustrated in Figure 3. Reentrancy into the dependency tree is not possible, so that a decision made earlier cannot be backtracked on at a later stage.",
"One could imagine doing joint inference, or using heuristics for revisiting decisions, for example with a limited number of reentrancies.",
"Humans are generally able to resolve this task in the order of the text, and we assume it should be possible for a computational model too.",
"Our solution is meant to be computationally efficient, with the hope of not sacrificing too much performance.",
"Revisiting this assumption is left for future work.",
"Metrics and evaluation Arguments whose value needs to be predicted fall into three categories.",
"The @truth argument calls for a binary truth value, and we score a model's output using binary accuracy.",
"The values of some arguments, such as gross income , are dollar amounts.",
"We score such values using numerical accuracy, as 1 if ( y, y ) = | y y | max (0 . 1 y, 5000) < 1 else 0 , where y is the prediction and y the target.",
"All other argument values are treated as strings.",
"In those cases, we compute accuracy as exact match between predicted and gold value.",
"Each of these three metrics defines a form of accuracy.",
"We average the three metrics, weighted by the number of samples, to obtain a unified accuracy metric, used to compare the performance of models.",
"Training Based on the type of value expected, we use different loss functions.",
"For @truth , we use binary cross-entropy.",
"For numerical values, we use the hinge loss max (( y, y ) 1 , 0) .",
"For strings, let S be all the spans in the case description equal to the expected value.",
"The loss function is log( (cid:80) i j e l x i + r x j ) log( (cid:80) i,j S e l x i + r x j ) (Clark and Gardner, 2018).",
"The model is trained end-to-end with gradient descent.",
"We start by training models on the silver data, as a pre-training step.",
"We sweep the values of the learning rate in { 10 2 , 10 3 , 10 4 , 10 5 } and the batch size in { 64 , 128 , 256 } .",
"We try both BERT-base-cased and Legal BERT, allowing updates to the parameters of its top layer.",
"We set aside 10% of the silver data as a dev set, and select the best model based on the unified accuracy on the dev set.",
"Training is split up into three stages.",
"The single-subsection model iteratively inserts values for arguments into the text of the subsection.",
"In the first stage, regardless of the predicted value, we insert the gold value for the argument, as in teacher forcing (Kolen and Kremer, 2001).",
"In the second and third stages, we insert the value predicted by the model.",
"When initializing the model from one stage to the next, we pick the model with the highest unified accuracy on the dev set.",
"In the first two stages, we ignore the structure of the statutes, which effectively caps the depth of each dependency tree at 1.",
"Picking the best model from this pre-training step, we perform fine-tuning on the gold data.",
"We take a k-fold cross-validation approach (Stone, 1974).",
"We randomly split the SARA v2 training set into 10 splits, taking care to put pairs of cases testing the same subsection into the same split.",
"Each split contains nearly exactly the same proportion of binary and numerical cases.",
"We sweep the values of the learning rate and batch size in the same ranges as above, and optionally allow updates to the parameters of BERT's top layer.",
"For a given set of hyperparameters, we run training on each split, using the dev set and the unified metric for early stopping.",
"We use the performance on the dev set averaged across the 10 splits to evaluate the performance of a given set of hyperparameters.",
"Using that criterion, we pick the best set of hyper-parameters.",
"We then pick the final model as that which achieves median performance on the dev set, across the 10 splits.",
"We report the performance of that model on the test set.",
"In Table 5, we report the relevant argument instantiation metrics, under @truth , dollar amount and string .",
"For comparison, we also report binary and numerical accuracy metrics defined in Holzenberger et al. (2020).",
"The reported @truth dollar amount string unified binary numerical baseline 58.3 7.5 18.2 11.5 4.4 7.4 43.3 6.2 50 8.3 30 18.1 + silver 58.3 7.5 39.4 14.6 4.4 7.4 47.2 6.2 50 8.3 45 19.7 BERT 59.2 7.5 23.5 12.5 37.5 17.3 49.4 6.2 51 8.3 30 18.1 pre-training 57.5 7.5 20.6 11.9 37.5 17.3 47.8 6.2 49 8.3 30 18.1 structure 65.8 7.2 20.6 11.9 33.3 16.8 52.8 6.2 59 8.2 30 18.1 pre-training, 60.8 7.4 20.6 11.9 33.3 16.8 49.4 6.2 53 8.3 30 18.1 structure (best results in bold ) Table 5: Argument instantiation.",
"baseline has three parameters.",
"For @truth , it returns the most common value for that argument on the train set.",
"For arguments that call for a dollar amount, it returns the one number that minimizes the dollar amount hinge loss on the training set.",
"For all other arguments, it returns the most common string answer in the training set.",
"Those parameters vary depending on whether the training set is augmented with the silver data.",
"Our goal in providing the baselines of Section 3 is to identify performance bottlenecks in the proposed sequence of tasks.",
"Argument identification poses a moderate challenge, with a language model-based approach achieving non-trivial F1 score.",
"The simple parser-based method is not a sufficient solution, but with its high recall could serve as the backbone to a statistical method.",
"Argument coreference is a simpler task, with string matching perfectly resolving nearly 80% of the subsections.",
"This is in line with the intuition that legal language is very explicit about disambiguating coreference.",
"As reported in Table 3, usual coreference metrics seem lower, but only reflect a subset of the full task: coreference metrics are only concerned with links, so that arguments appearing exactly once bear no weight under that metric, unless they are wrongly linked to another argument.",
"Argument instantiation is by far the most challenging task, as the model needs strong natural language understanding capabilities.",
"Simple baselines can achieve accuracies above 50% for @truth , since for all numerical cases, @truth = True.",
"We receive a slight boost in binary accuracy from using the proposed paradigm, departing from previous results on this benchmark.",
"As compared to the baseline, the models mostly lag behind for the dollar amount and numerical accuracies, which can be explained by the lack of a dedicated numerical solver, and sparse data.",
"Further, we have made a number of simplifying assumptions, which may be keeping the model from taking advantage of the structure information: arguments are instantiated in order of appearance, forbidding joint prediction; revisiting past predictions is disallowed, forcing the model to commit to wrong decisions made earlier; the depth of the dependency tree is capped at 3; and finally, information is being passed along the dependency tree in the form of argument values, as opposed to dense, high-dimensional vector representations.",
"The latter limits both the flow of information and the learning signal.",
"This could also explain why the use of dependencies is detrimental in some cases.",
"Future work would involve joint prediction (Chan et al., 2019), and more careful use of structure information.",
"Looking at the errors made by the best model in Table 5 for binary accuracy, we note that for 39 positive and negative case pairs, it answers each pair identically, thus yielding 39 correct answers.",
"In the remaining 11 pairs, there are 10 pairs where it gets both cases right.",
"This suggests it may be guessing randomly on 39 pairs, and understanding 10.",
"The best BERT-based model for dollar amounts predicts the same number for each case, as does the baseline.",
"The best models for string arguments generally make predictions that match the category of the expected answer (date, person, etc) while failing to predict the correct string.",
"Performance gains from silver data are noticeable and generally consistent, as can be seen by comparing brown and blue cells in Table 5.",
"The silver data came from running a human-written Prolog program, which is costly to produce.",
"A possible substitute is to find mentions of applicable statutes in large corpora of legal cases (Caselaw, 2019), for example using high-precision rules (Ratner et al., 2017), which has been successful for extracting information from cases (Boniol et al., 2020).",
"In this work, each task uses the gold annotations from upstream tasks.",
"Ultimately, the goal is to pass the outputs of models from one task to the next.",
"Law-related NLP tasks have flourished in the past years, with applications including answering bar exam questions (Yoshioka et al., 2018; Zhong et al., 2020), information extraction (Chalkidis et al., 2019b; Boniol et al., 2020; Lam et al., 2020), managing contracts (Elwany et al., 2019; Liepina et al., 2020; Nyarko, 2021) and analyzing court decisions (Sim et al., 2015; Lee and Mouritsen, 2017).",
"Case-based reasoning has been approached with expert systems (Popp and Schlink, 1974; Hellawell, 1980; v.",
"d. L. Gardner, 1983), high-level hand-annotated features (Ashley and Bruninghaus, 2009) and transformer-based models (Rabelo et al., 2019).",
"Closest to our work is Saeidi et al. (2018), where a dialog agent's task is to answer a user's question about a set of regulations.",
"The task relies on a set of questions provided within the dataset.",
"Clark et al. (2019) as well as preceding work (Friedland et al., 2004; Gunning et al., 2010) tackle a similar problem in the science domain, with the goal of using the prescriptive knowledge from science textbooks to answer exam questions.",
"The core of their model relies on several NLP and specialized reasoning techniques, with contextualized language models playing a major role.",
"Clark et al. (2019) take the route of sorting questions into different types, and working on specialized solvers.",
"In contrast, our approach is to treat each question identically, but to decompose the process of answering into a sequence of subtasks.",
"The language of statutes is related to procedural language, which describes steps in a process.",
"Zhang et al. (2012) collect how-to instructions in a variety of domains, while Wambsganss and Fromm (2019) focus on automotive repair instructions.",
"Branavan et al. (2012) exploit instructions in a game manual to improve an agent's performance.",
"Dalvi et al. (2019) and Amini et al. (2020) turn to modeling textual descriptions of physical and biological mechanisms.",
"Weller et al. (2020) propose models that generalize to new task descriptions.",
"recognition (Ratinov and Roth, 2009), part-of-speech tagging (Petrov et al., 2012; Akbik et al., 2018), and coreference resolution (Pradhan et al., 2014).",
"Structure extraction is conceptually similar to syntactic (Socher et al., 2013) and semantic parsing (Berant et al., 2013), which Pertierra et al. (2017) attempt for a subsection of tax law.",
"Argument instantiation is closest to the task of aligning predicate argument structures (Roth and Frank, 2012; Wolfe et al., 2013).",
"We frame argument instantiation as iteratively completing a statement in natural language.",
"Chen et al. (2020) refine generic statements by copying strings from input text, with the goal of detecting events.",
"Chan et al. (2019) extend transformer-based language models to permit inserting tokens anywhere in a sequence, thus allowing modification of an existing sequence.",
"For argument instantiation, we make use of neural module networks (Andreas et al., 2016), which are used in the visual (Yi et al., 2018) and textual domains (Gupta et al., 2020).",
"In that context, arguments and their values can be thought of as the hints from Khot et al. (2020).",
"The Prolog-based data augmentation is related to data augmentation for semantic parsing (Campagna et al., 2019; Weir et al., 2019).",
"Solutions to tackle statutory reasoning may range from high-structure, high-human involvement expert systems, to less structured, largely self-supervised language models.",
"Here, taking inspiration from Prolog programs, we introduce a novel paradigm by breaking statutory reasoning down into a sequence of tasks.",
"Each task can be annotated with far less expertise than would be required to translate legal language into code, and comes with its own performance metrics.",
"Our contribution enables finer-grained scoring and debugging of models for statutory reasoning, which facilitates incremental progress and identification of performance bottlenecks.",
"In addition, argument instantiation and explicit resolution of dependencies introduce further interpretability.",
"This novel approach could possibly inform the design of models that reason with rules specified in natural language, for the domain of legal NLP and beyond."
] | [
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain"
] |
[
"The quadratic computational and memory complexities of large Transformers have limited their scalability for long document summarization.",
"In this paper, we propose HEPOS , a novel efficient encoder-decoder attention with head-wise positional strides to effectively pinpoint salient information from the source.",
"We further conduct a systematic study of existing efficient self-attentions.",
"Combined with HEPOS , we are able to process ten times more tokens than existing models that use full attentions.",
"For evaluation, we present a new dataset, GOVREPORT , with significantly longer documents and summaries.",
"Results show that our models produce significantly higher ROUGE scores than competitive comparisons, including new state-of-the-art results on PubMed.",
"Human evaluation also shows that our models generate more informative summaries with fewer unfaithful errors.",
"Long documents, such as scientific papers and government reports, often discuss substantial issues at length, and thus are time-consuming to read, let alone to comprehend.",
"Generating abstractive summaries can help readers quickly grasp the main topics, yet prior work has mostly focused on short texts (containing hundreds of words), e.g., news articles (Gehrmann et al., 2018; Liu and Lapata, 2019; Zhang et al., 2019).",
"Model training efficiency and summary quality present a pair of challenges for long document summarization.",
"State-of-the-art systems (Lewis et al., 2020; Zhang et al., 2019) are built upon Transformer (Vaswani et al., 2017), which uses attentions to compute pairwise relations between tokens.",
"Such a framework has quadratic time and memory complexities, and is too costly for long documents (for instance, fine-tuning BART on documents of 10K tokens with a batch size of 1 requires 70 GB of memory for encoder attentions and 8 GB for encoder-decoder attentions).",
"Solutions have been proposed to reduce the calculation of encoder self-attentions (Wang et al., 2020c; Zaheer et al., 2020) by selectively attending to neighboring tokens (Beltagy et al., 2020; Child et al., 2019) or relevant words (Kitaev et al., 2020; Tay et al., 2020a).",
"Yet, these methods do not apply to encoder-decoder attentions in summarization models, since these attentions dynamically pinpoint salient content in the source as the summary is decoded.",
"Truncation is commonly used to circumvent the issue.",
"However, training on curtailed content further aggravates hallucination in existing abstractive models (Maynez et al., 2020).",
"We argue that summarizing long documents (e.g., with thousands of words or more) requires efficient handling of both types of attentions.",
"To this end, we propose an efficient encoder-decoder attention with head-wise positional strides (HEPOS ) , where the attention heads follow a strided pattern and have varying starting positions.",
"HEPOS reduces computational and memory costs while (1) maintaining the power of emphasizing important tokens, and (2) preserving the global context per head.",
"HEPOS successfully doubles the processed input sequence size, when combined with any encoder.",
"To the best of our knowledge, we are the first to study efficient encoder-decoder attentions and provide a systematic comparison of diverse encoder attentions for the task of summarization.",
"For evaluation, we collect a new large-scale dataset, GOVREPORT , consisting of about 19.5k U.S. government reports with expert-written abstractive summaries.",
"GOVREPORT has two important features: (1) It contains significantly longer documents (9.4k words) and summaries (553 words) than existing datasets, such as PubMed and arXiv (Cohan et al., 2018) (see Table 2); (2) Salient",
"content is spread throughout the documents, as opposed to cases where summary-worthy words are more heavily concentrated in specific parts of the document.",
"These properties make GOVREPORT an important benchmark for producing long document summaries with multiple paragraphs.",
"We conduct experiments on GOVREPORT and scientific papers in PubMed and arXiv.",
"First, when summarizing documents of the same length, HEPOS attention yields significantly better ROUGE scores than a non-trivial comparison that projects attentions into low-rank space (Wang et al., 2020c).",
"Second, when trained on the same GPU, HEPOS attention, combined with sparse encoder attentions, is able to read more than 10 K words and obtains significantly higher ROUGE scores on GOVREPORT and new state-of-the-art results on PubMed, compared with full encoder-decoder attention models which can process at most 5 K input words.",
"Human judges further rate the summaries generated by our models to be more informative and faithful .",
"We further propose a new evaluation metric for faithfulness , inspired by APES (Eyal et al., 2019), a fill-in-the-blank QA metric for summary evaluation.",
"With questions generated from references, our metric, APES src , compares QA answers by reading the source and the system summary.",
"It is shown to be better correlated with human judgment than the original metric and an entailment-based scorer (Kryscinski et al., 2020).",
"The rest of the paper is organized as follows.",
"We describe efficient encoder attentions in prior work in §2, and formulate our proposed encoder-decoder attention in §3. The GOVREPORT data is presented in §4. We then share details on evaluation metrics (§5) and experimental results (§6).",
"Additional related work is listed in §7, with conclusion in §8.",
"Transformer models are built upon multi-head attentions in multiple layers.",
"The attention is calculated as Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, where Q, K, and V are query, key, and value matrices, each consisting of n vectors for a document with n tokens, hence the quadratic memory footprint.",
"Here, we present an overview of representative methods for efficient encoder self-attentions (henceforth encoder attentions) that can be built upon large pre-trained seq2seq models, e.g., BART (Lewis et al., 2020).",
"We follow the naming",
"convention of Tay et al. (2020b), and summarize their memory complexities and numbers of newly learned parameters in Table 1.",
"Fixed patterns are used to limit the scope of attentions.",
"In our experiments, in addition to window-based attentions, we also combine them with global tokens, stride patterns, or random attentions.",
"Sliding window attentions (Beltagy et al., 2020) aim to capture the local context, which is critical for language understanding (Liu et al., 2018; Child et al., 2019).",
"Concretely, each query token attends to w/2 neighboring tokens on both the left and the right, yielding a memory complexity of O(nw).",
"Adaptive span is proposed by Sukhbaatar et al. (2019) to learn attention windows at different layers.",
"This is implemented by learning a masking function for each head independently.",
"In practice, the adaptive span attention has a complexity of O(n·ŵ), where ŵ is the maximum value of the predicted spans over all heads.",
"Besides, it introduces O (1) new parameters for learning spans.",
"Global tokens (Beltagy et al., 2020) are often added to sliding windows to let pre-selected tokens attend to the full sequence, to build global representations.",
"Importantly, global attention operations are symmetric, i.e., a global token is also attendable to all tokens in the sequence.",
"We select the first g tokens as global tokens, as leading sentences are often important for summarization.",
"Memory complexity is O(2ng) due to the symmetric attentions.",
"Stride patterns are proposed by Child et al. (2019) to capture long-term interactions, where each query attends to every s-th token, with s as the stride size.",
"It thus has a complexity of O(n²/s).",
"Random attention is motivated by the fact that randomly constructed graphs with Θ(n) edges can approximate complete graphs spectrally (Zaheer et al., 2020).",
"Zaheer et al. (2020) propose to allow each query to attend to r random keys, resulting in a complexity of O ( nr ) .",
"For efficient implementations, input tokens are first segmented into blocks.",
"Tokens in the same block attend to tokens in another randomly selected block.",
"Wang et al. (2020c) show that self-attention matrices are low-rank.",
"They propose Linformer, which linearly projects key and value matrices into a low-dimensional space, e.g., from n to k, to achieve an O(nk) complexity.",
"It also introduces O ( n ) new parameters for projection matrix learning.",
"Recently, learnable sparse attentions are proposed to better capture both local and global contexts than attentions based on fixed patterns.",
"Locality-sensitive hashing (LSH) attentions use a random-projection hashing function to hash similar queries and keys into the same buckets in l rounds (Kitaev et al., 2020).",
"Attentions are then computed among tokens within each bucket.",
"For bucket size b_l, the complexity of LSH attention is O(l·n·b_l).",
"Sinkhorn attentions first segment a sequence into blocks, which are then arranged by a learned Sinkhorn sorting network (Tay et al., 2020a).",
"Given the new permutation, each query attends to b_s tokens within the same block to maintain the local context and another b_s tokens in a neighboring block to capture global interactions.",
"Its complexity is O(2n·b_s).",
"We also describe several notable methods that are not suitable for our experiments and excluded from this study: Recurrence over input segments are tailored for an autoregressive decoder only (Dai et al., 2019); memory methods use a separate memory module to attend to full sequences (Lee et al.,",
"All but three GAO summaries include What GAO Found.",
"The percentages of GAO summaries that contain Why GAO did this study and What GAO recommends are 94.8% and 29.0%, respectively.",
"For comparison, structured summaries are also observed on PUBMED (Cohan et al., 2018) samples.",
"Though they do not contain explicit aspect labels, the summaries can often be broken down into Introduction, Methods, Results, and Conclusion via keyword matching.",
"Details about keyword choices for each aspect are provided in Table 11 in Appendix D. Comparison with Existing Long Document Summarization Datasets.",
"In Table 2, we compare GOVREPORT with several existing long document summarization datasets, including PUBMED and ARXIV (Cohan et al., 2018), which consist of scientific publications; BILLSUM (Kornilova and Eidelman, 2019), a collection of congressional bills; and BIGPATENT (Sharma et al., 2019), a corpus of patents. (GOVREPORT sources: www.gao.gov and crsreports.congress.gov.)",
"First, documents and summaries in GovReport are significantly longer than in prior datasets.",
"Next, we inspect the distribution of summary-worthy bigrams in the source by dividing each document into ten equisized partitions.",
"For each partition, we count the occurrence of unique bigrams that also appear in the reference, accumulated from the start of the document to the end of the partition.",
"Fig. 2 shows that key information is spread throughout documents in GOVREPORT , with new salient bigrams being steadily added as more content is consumed.",
"For ARXIV and BIGPATENT , only about 10% of new salient bigrams are accumulated in the second half of the documents, reflecting the heavy positional bias in these two datasets.",
"In contrast, in GovReport and BILLSUM , more than 18% of new summary-worthy bigrams appear in the latter half of the articles, showing a more even distribution.",
"A similar trend is observed on unigrams.",
"However, BILLSUM has the shortest documents among the five datasets.",
"This work aims to evaluate whether processing more text improves both informativeness and faithfulness of abstractive summaries.",
"In addition to ROUGE (Lin, 2004) and human evaluation, we extend existing QA-based metric (Eyal et al., 2019) and consider an entailment-based scorer.",
"QA-based Evaluation.",
"We present a new faithfulness evaluation metric by extending the APES score (Eyal et al., 2019).",
"We follow APES to construct a set of cloze questions , { q } , from each reference summary by masking entities.",
"Events, dates, and numbers are also masked, as they are prevalent in our data.",
"Each masked phrase becomes the gold-standard answer a ref for a question q .",
"We do not generate natural language questions (Durmus et al., 2020; Wang et al., 2020a), due to the lack of accurate question generation models for the domains of government reports and scientific papers.",
"QA models are trained by reading a question and a context to label the answer span in the context.",
"We construct context by greedily selecting sentences that maximize the improvement of ROUGE-2 recall when compared with the reference summary.",
"If the answer a ref cannot be found in the context, the sample is excluded from training.",
"We train all QA models by fine-tuning BERT (Devlin et al., 2019) to predict the answer span.",
"To evaluate the faithfulness of a system summary, APES uses the QA model to read the summary and a question q to label an answer a sys .",
"It calculates a unigram F1 score by comparing a sys and a ref .",
"Different from APES, we further use the QA model to read the context (sentences selected from the source) and give an answer a cxt to the question q .",
"We compute a unigram F1 by comparing a sys and a cxt , denoted as APES src .",
"Given that existing summarization models rarely rewrite names or numbers correctly, our metric can better capture faithfulness by using a gold-standard answer constructed from the source article than from the human-written abstract.",
"To extract entities and events, we deploy a state-of-the-art IE framework, OneIE (Lin et al., 2020), on GOVREPORT.",
"On PubMed, we retrain OneIE on Genia 2011 (BioNLP, 2011) and 2013 (BioNLP, 2013), and PubMed (Wei et al., 2019) datasets to extract domain-specific entities and events, such as entities of Gene and Disease .",
"We additionally include numbers and dates extracted by spaCy (Honnibal and Montani, 2017).",
"Entailment-based Evaluation.",
"We further consider FactCC (Kryscinski et al., 2020), which evaluates factual consistency of a system summary by predicting an entailment score between the source and the summary.",
"We reproduce their method on our datasets.",
"Additional details for implementing the evaluation models and the entity extraction models are given in Appendix B. §6 Experimental Results: In this section, we start by describing training details in §6.1.",
"We then compare attention variants on documents of the same length (§6.2) and study whether reading more text can generate more informative summaries (§6.3).",
"We further report human evaluation on summary informativeness and faithfulness as well as automatic faithfulness scores (§6.4).",
"Finally, we investigate whether automatic metrics correlate with human judgment (§6.5).",
"We fine-tune BART (Lewis et al., 2020) for all experiments.",
"We implement our models with Py-Torch (Paszke et al., 2019) and Fairseq (Ott et al., 2019).",
"Additional position embeddings are initialized randomly for models that handle longer inputs.",
"The learning rate is set to 1 × 10^-4 and learning rate warm-up is applied for the first 10,000 steps.",
"Adafactor (Shazeer and Stern, 2018) optimizer with a gradient clipping of 0.1 is used.",
"All models are trained on two Quadro RTX 6000 GPUs with 24GB memory or one Quadro RTX 8000 with 48GB memory.",
"We set a batch size of 2 per step and accumulate gradients every 32 steps.",
"During test, we adopt a beam size of 4 and a length penalty of 2 (Wu et al., 2016) on all datasets.",
"Comparisons.",
"We first experiment with articles that are all truncated at 1024 tokens.",
"For encoder attentions, we consider the following variants: (1) sliding WINDOW ; (2) adaptive span (ADASPAN ); (3) GLOBAL tokens; (4) STRIDE ; (5) RANDOM tokens; (6) Linformer (LIN .); (7) locality sensitive hashing (LSH); and (8) SINKHORN .",
"We ensure models are comparable by setting hyperparameters to satisfy w = ŵ = k = l·b_l = 2·b_s = 256, so that models have similar memory complexity.",
"For LSH attentions, we select l = 4 rounds of hashing.",
"Following prior work (Zaheer et al.,",
"2020), we combine GLOBAL , STRIDE , and RANDOM with WINDOW and ADASPAN , where we set g = n 2 /s = r = 128 for a fair comparison.",
"We adapt Linformer to encoder-decoder attentions to compare with HEPOS , where we use s_h = n/k = 4 for all experiments.",
"Finally, we report results using FULL, i.e., the original, encoder and encoder-decoder attentions.",
"Results.",
"Among all encoder variants, learnable patterns perform the best, approaching the performance of full attentions on both GovReport and PubMed, as shown in Table 3. Within learnable patterns, Sinkhorn attention consistently obtains better ROUGE scores.",
"Moreover, combining techniques in fixed patterns is more effective than simply using window-based sparse attentions, though with an increased memory cost.",
"For encoder-decoder attentions, HEPOS consistently yields higher ROUGE scores than Linformer on both datasets , using either full or Sinkhorn encoder.",
"Notably, coupled with a Sinkhorn attention, our model's performance matches the variant using",
"full encoder attention, implying the effectiveness of HEPOS on both identifying the salient content and capturing the global context.",
"Comparisons include recent top-performing abstractive models: PEGASUS (Zhang et al., 2019), a large pre-trained summarization model with truncated inputs; TLM (Pilault et al., 2020), DANCER (Gidiotis and Tsoumakas, 2020), and SEAL (Zhao et al., 2020), all of which use hybrid extract-then-abstract methods; and BIGBIRD (Zaheer et al., 2020), which combines sliding window, global, and random token attentions in the encoder.",
"For encoder variants, we pick the best performing model from fixed patterns to be combined with full encoder-decoder attention, i.e., sliding window with stride (STRIDE ), the low-rank method (LIN .), and learnable patterns (LSH and SINKHORN ).",
"We then combine learnable patterns with HEPOS to support processing more text.",
"All models consume as long an input as the memory allows.",
"Results.",
"Overall, models that read more text obtain higher ROUGE scores, according to results on GovReport and PubMed in Table 4. First, different encoder variants with full encoder-decoder attentions attain better results than the full attention baseline, except for Linformer.",
"Second, adding HEPOS encoder-decoder attention almost doubles the words that can be processed and further improves the performance.",
"This highlights the importance of handling both encoder attentions and encoder-decoder attentions efficiently.",
"Notably, HEPOS with an LSH encoder achieves new state-of-the-art results on PubMed , outperforming BigBird which only uses sparse attentions on the encoder.",
"We also report the performance of our two best models with HEPOS on arXiv in Table 5; they outperform all competitive abstractive models.",
"As can be seen from the sample summaries in Fig. 3, our model that reads 10k tokens generates a more informative summary than the full attention model, which only processes 1k tokens.",
"Fig. 4 further shows that ROUGE-2 scores can be consistently lifted when reading more input, with similar trends observed on ROUGE-1 and ROUGE-L.",
"More sample outputs are presented in Appendix C. §6.4 Reading More Input Improves Faithfulness: Here we first show human evaluation results on informativeness and unfaithful errors in the generated summaries.",
"We sample 100 documents from GovReport and PubMed (50 each) with structured references that are labeled with aspects as described in §4 and Appendix D. Each sample is evaluated by two fluent English speakers, who have cumulatively annotated tens of thousands of sentences for the same tasks before this work.",
"Annotators are asked to label each summary sentence with an aspect and then decide whether it contains any type of error.",
"Three types of unfaithful errors are considered:",
"(i) hallucination: fabricating content not present in the input,",
"(ii) deletion: incorrectly",
"Sample summaries (Fig. 3). Human-written Summary: In fiscal year 2018, Medicaid covered approximately 75 million individuals at an estimated cost of $629 billion, $393 billion of which were federal funds.",
"(...)",
"While CMS is generally required to disallow, or recoup, federal funds from states for eligibility-related improper payments if the state's eligibility error rate exceeds 3 percent, it has not done so for decades, because the method it used for calculating eligibility error rates was found to be insufficient for that purpose.",
"To address this, in July 2017, CMS issued revised procedures through which it can recoup funds for eligibility errors , beginning in fiscal year 2022.",
"(...)",
"Model w/ full attn.: Medicaid is a federal-state program that provides health care coverage to low-income individuals and families.",
"(...)",
"CMS officials stated that they have provided states with guidance on how to use data from SSA's automated system for eligibility determinations, (...) CMS officials said that they did not have guidance on when states should use SSA data to evaluate eligibility based on nonfinancial or financial criteria.",
"Model w/ HEPOS enc-dec attn.",
"(ours): The Patient Protection and Affordable Care Act (PPACA) expanded Medicaid coverage to millions of low-income adults and children with disabilities and their eligible dependents.",
"(...)",
"The selected states also reported that they did not have adequate processes to address these issues.",
"CMS has taken steps to improve its oversight of the Medicaid program, including issuing guidance to states on the use of MAGI-exempt bases for determining eligibility, but these efforts have not been fully implemented.",
"(...)",
"deleting crucial entities, events, or clauses, and",
"(iii) false concatenation: inappropriately concatenating components from different sentences.",
"A score of 1 is given if any judge determines that a certain type of error exists in the sentence, and 0 otherwise.",
"Informativeness measures whether the summary covers important information of an aspect when compared with the reference.",
"All system summaries and references are presented in a random order.",
"Human evaluation guidelines and sample summaries for different aspects are included in Appendix D. Results.",
"Overall, reading more text significantly improves informativeness as well as reduces fabricated content.",
"From Table 6, we observe that HEPOS attention, combined with a SINKHORN encoder, obtains better informativeness scores than comparisons that read in less text on both datasets.",
"This echoes results from automatic evaluation in the previous section.",
"Moreover, both models that use efficient attentions reduce unfaithfulness, especially hallucination errors, when compared with the full attention model, which only reads 1024 tokens.",
"As the models read more content, they learn to surface more factual and richer content in the summaries, as seen in Fig. 3. Next, we explore if reading more helps correctly reflect the content in documents' later sections.",
"We plot aspect-level human ratings of informativeness and unfaithful errors on PubMed and GovReport in Fig. 5 and Fig. 6. We report percentages of sentences with unfaithful errors by majority voting (i.e., at least one error is found by both annotators in the sentence).",
"As can be seen, our models consistently improve informativeness and reduce errors across sections, especially for Results and Conclusions on PubMed and What GAO recommends on GovReport; these sections often appear in the later part of the source documents.",
"APES src captures this, but APES does not.",
"Summarizing long inputs has been investigated in many domains, including books (Mihalcea and Ceylan, 2007), patents (Trappey et al., 2009), movie scripts (Gorinski and Lapata, 2015), and scientific publications (Qazvinian and Radev, 2008).",
"However, the datasets are often too small to train neural models.",
"Cohan et al. (2018) publish two large-scale datasets by collecting articles from ARXIV and PUBMED .",
"Popular methods rely on extractive summarizers that identify salient sentences based on positional information (Dong et al., 2020) or combined global and local contexts (Xiao and Carenini, 2019), where each sentence is represented as aggregated word embeddings.",
"However, extractive summaries are often redundant and incoherent, highlighting the need for handling long documents via abstractive summarization.",
"To that end, extract-then-abstract methods are proposed.",
"For example, Pilault et al. (2020) first extract relevant sentences and then rewrite them into paper abstracts.",
"Our work is in line with building end-to-end abstractive summarization models for long input.",
"Cohan et al. (2018) design a hierarchical encoder to read different sections separately, and then use combined attentions over words and sections to generate the summary.",
"Multiple agents are created to read segments separately, and then collaboratively write an abstract (Celikyilmaz et al., 2018).",
"However, both works truncate articles to 2K words.",
"Although efficient encoder attentions have been studied in Zaheer et al. (2020) for abstractive summarization, at most 3 K tokens can be consumed by their models.",
"Our HEPOS encoder-decoder attention is able to process more than 10K tokens, significantly improving summary informativeness and faithfulness.",
"We investigate efficient attentions for long document summarization.",
"We propose a novel encoder-decoder attention, HEPOS , based on head-wise positional strides that can effectively identify salient content.",
"Models based on HEPOS attention can process at least twice as many words and produce more informative summaries with fewer unfaithful errors, according to both automatic and human evaluation.",
"We further show that our new cloze QA metric better correlates with human judgment than prior faithfulness evaluation metrics.",
"This research is supported in part by Oracle for Research Cloud Credits, National Science Foundation through Grant IIS-1813341, and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA8650-17-C-9116.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.",
"We thank three anonymous reviewers for their valuable suggestions and comments."
] | [
"abstain",
"objective",
"method",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"other"
] |
[
"The composition of richly-inflected words in morphologically complex languages can be a challenge for language learners developing literacy.",
"Accordingly, Lane and Bird (2020) proposed a finite state approach which maps prefixes in a language to a set of possible completions up to the next morpheme boundary, for the incremental building of complex words.",
"In this work, we develop an approach to morph-based auto-completion based on a finite state morphological analyzer of Plains Cree (nêhiyawêwin), showing the portability of the concept to a much larger, more complete morphological transducer.",
"Additionally, we propose and compare various novel ranking strategies on the morph auto-complete output.",
"The best weighting scheme ranks the target completion in the top 10 results in 64.9% of queries, and in the top 50 in 73.9% of queries.",
"The ACL 2022 theme track asks how we can scale up current NLP technologies for the rich diversity of human languages.",
"We contend that impactful work, intended to support local language community goals, does not necessarily focus on scale or current NLP technologies.",
"Community values and motivations regarding language technology are as rich and varied as human language itself, and solutions which are well received in one community may not be adequate or appropriate in another.",
"Context must be considered, from which novel tasks take shape.",
"Lane and Bird (2020)'s re-imagining of word completion for morphologically-rich languages is an example of work born from a local context.",
"Based in Kunwinjku-speaking communities in northern Australia, we wanted to support people's desire to learn literacy and practice building the language's long polysynthetic words.",
"This led to the idea for a tool that helps users incrementally build complex words by suggesting completions up to the next morph boundary (Figure 1).",
"Figure 1: Given some string prefix of a word in Plains Cree, morph completion suggests continuations up to the next morpheme boundary (in bold), for interactive and incremental building of morphologically complex words.",
"While Plains Cree and Kunwinjku speaking communities are thousands of miles apart, the authors of this work noticed some similarities in their respective contexts: both Plains Cree and Kunwinjku are polysynthetic languages, and both are working with communities to support language learning initiatives.",
"Other aspects of our situations differ.",
"For example, the FST morphological analyzer for Kunwinjku can be described as a field tool, developed by a single researcher over the course of a couple years, while the Plains Cree morphological analyzer has been in continuous development by a team for over 8 years.",
"The Plains Cree model is much more extensive, robust and complete.",
"How will the morph completion work transfer to such a large and technically refined project?",
"How can we leverage our unique constellation of resources to adapt morph completion to suit Plains Cree?",
"These are the questions we set out to answer in this work.",
"This work examines the assumptions of morph-based auto-complete, and extends existing work to suit Plains Cree.",
"Our contributions are: an implementation of a morph-based completion algorithm for Plains Cree 1 , a discussion of contextual similarities and differences between Kunwinjku and Plains Cree and how these affect the utility of the morph-based completion concept, and a novel ranking algorithm which enables the morph-completion concept to scale to a much larger, more extensive grammar.",
"Speakers of morphologically complex languages are engaged in activities to maintain orality and support literacy.",
"Two examples are the Plains Cree and Kunwinjku speaking communities.",
"Plains Cree (endonymically known as nhiyawwin , ISO 639-3: crk) is a member of the Algonquian family.",
"It is the western-most Cree dialect, spoken by about 20,000 speakers in Alberta, Saskatchewan, and northern Montana (Wolfart, 1973; Harrigan et al., 2017).",
"Years of documentary linguistic work have produced extensive language resources in the form of grammars (Wolfart, 1973; Wolvengrey, 2011; Dahlstrom, 2014) and textbooks (Okimasis, 2018; Ratt, 2016).",
"Kunwinjku (ISO 639-3:gup) is a member of the Gunwinyguan language family.",
"It is spoken by an estimated 1,700 speakers in the west Arnhem region of northern Australia.",
"Kunwinjku has its own documentary resources: grammars (Evans, 2003; Carroll, 1976), and a language primer (Etherington and Etherington, 1998).",
"Despite these volumes, literacy in Kunwinjku is quite rare.",
"While by some standards these languages might be classified as \"low-resource\", the depth and abundance of descriptive linguistic work has paved the way for the development of computational models of Kunwinjku and Plains Cree morphology (Lane and Bird, 2019; Harrigan et al., 2017; Arppe et al., 2017; Schmirler et al., 2017; Snoek et al., 2014). Based on these computational models of morphology, language technologies are being developed to support the language goals of these communities: smart dictionaries and spellcheckers (Arppe et al., 2016), a word-builder application (Lane and Bird, 2020), and intelligent language learning applications (Bontogon et al., 2018). 1 The source code for the original FST and the morph-completion model is available online at https://github.com/giellalt/lang-crk Finite State Morphology: Morph completion models build on the established foundation of finite state models for morphological generation and analysis (Beesley and Karttunen, 2003). Under this formalism, it is customary to split the modelling task into two parts: the first is to define the morphological inventory and the valid transitions between morph classes, i.e. the morphosyntax; the second handles any alternation that occurs at the morpho-phonological interface. Several open-source toolkits implement these basic modeling capabilities: Foma (Hulden, 2009), HFST (Lindén et al., 2013), OpenFST (Allauzen et al., 2007), and Pynini (Gorman, 2016). The Plains Cree morphological models are implemented with both HFST and Foma within the GiellaLT framework (Moshagen et al., 2014), have been under active development for 8 years, and give a comprehensive treatment of noun (Snoek et al., 2014) and verb (Harrigan et al., 2017) morphology. As such, the Plains Cree model has had the opportunity to develop treatments for difficult-to-model features, such as reduplication. 
The Plains Cree model currently contains 21,232 stems (5,553 noun stems, 47 pronoun stems, 1,669 particles, 104 numerals, and 13,860 verb stems), derived from the lexical database underlying the bilingual Cree-English dictionary by Wolvengrey (2001). The Kunwinjku model, on the other hand, is implemented using Foma, and has only been under periodic development for the last 2 years. In terms of size, the Kunwinjku FST contains significantly fewer stem entries: 573 verb stems 2 , and 748 noun stems (Lane and Bird, 2019). Despite these differences in implementation and scale, we show in this work that FST morph completion can be successfully adapted to work with Plains Cree. 2.1 FST-based Morph Completion: Lane and Bird (2020) present an approach to automatic word completion intended to assist language learners and speakers of morphologically complex languages who are building confidence in writing. For example, in Kunwinjku the verb stem bawo means to leave.",
"This stem can then be inflected to convey subject, object, tense, comitative, and adverbial information: 2 Though these forms can combine with derivational affixes to create a number of additional stems.",
"(1) bene-bad-yi-bawo-ng 3UA.3SG.P-now-COM-leave-PP 'The two of them left him with' [E.10.162]",
"Building valid surface forms poses a challenge for learners of the language who may not yet have mastery of the morphology and orthography.",
"Moreover, the vocabulary of morphologically complex languages is a combinatorial function of morpheme possibilities, making word-level prediction intractable.",
"This use case drives the reconception of word completion as prediction up to the next morpheme boundary, to incrementally and interactively assist in the building of complex words.",
"The model is implemented as an extension to a standard finite state morph analyzer, and assumes that the FST model contains some intermediate representation in which morph boundaries are explicitly marked.",
"In brief their finite state algorithm:",
"1. Alters the existing morphological analyzer so that it does not remove morph boundary symbols",
"2. Recognizes all possible prefixes composed of user input followed by any character up to the next morph boundary symbol.",
"3. Generates a list of completions possible from the given prefix, constrained by the space of morphotactically valid words defined by the morph analyzer.",
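The three steps can be mimicked with a toy Python sketch over a morph-segmented word list. This is only an illustration of the idea (the real system is a finite state transducer); the function `morph_completions` and the sample lexicon are our own, with `>` standing in for a morph boundary symbol.

```python
def morph_completions(user_input, segmented_words, boundaries="<>/"):
    """Toy version of FST morph completion: keep track of morph boundary
    positions, match the user's input against boundary-free surface
    prefixes, and extend each match up to the next morph boundary."""
    completions = set()
    for word in segmented_words:
        surface, bounds = "", []
        for ch in word:
            if ch in boundaries:
                bounds.append(len(surface))  # boundary after this many surface chars
            else:
                surface += ch
        if not surface.startswith(user_input):
            continue
        # extend past the input up to the next morph boundary, or to word end
        nxt = next((b for b in bounds if b > len(user_input)), len(surface))
        if nxt > len(user_input):
            completions.add(surface[:nxt])
    return completions

lexicon = ["bene>bad>yi>bawo>ng", "bene>ng"]
```

For the input "bene", the sketch returns each lexicon word continued up to its next boundary; an FST does the same over the full space of morphotactically valid words rather than a finite list.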
"A detailed explanation of their algorithm, and implementation examples can be found in (Lane and Bird, 2020).",
"They deploy their model in a Kunwinjku dictionary interface, serving a list of partial completions which are refreshed per keystroke.",
"The user builds complex words incrementally, guided by the FST model.",
"When a word is fully formed, the interface queries the dictionary database, using the regular morph analyzer to retrieve relevant lexical entries.",
"Adapting the autocompletion algorithm to Plains Cree is relatively straightforward, owing in part to the similarities between Plains Cree and Kunwinjku.",
"Like Kunwinjku, Plains Cree is a polysynthetic agglutinating language (Wolfart, 1973).",
"Also like Kunwinjku, Plains Cree verbs have been described using templatic morphology: according to Wolvengrey (2012), there are 8 prefixal slots plus some amount of reduplication.",
"Suffixally, Wolvengrey (2012) provides 10 separate slots, but in practice these are regularly chunked together into a single portmanteau morph (Harrigan et al., 2017; Okimasis, 2018).",
"For this reason, Plains Cree is often treated as a mostly-prefixing language, similar to Kunwinjku.",
"Both Kunwinjku and Plains Cree also exhibit word-internal dependencies as well as noun incorporation.",
"In terms of agreement, Kunwinjku verbs exhibit circumfixal markers for tense, where morphemes directly before and after the verb stem must agree for the feature.",
"Plains Cree, on the other hand, exhibits dependency in its person marking (where the left-most and right-most morphemes of a verb form a circumfix) as well as its comitative derivation (where morphemes immediately to the left and right of the verb stem constitute a circumfix).",
"Noun incorporation is present in both languages, though it is more common in Kunwinjku, where it occupies a slot in the prefixal morphology.",
"Where Noun Incorporation is present in Plains Cree, it interrupts the verb stem itself and is rare enough to often be lexicalized as a separate verb all together.",
"These similarities and differences have consequences for each language's underlying FST and thus the autocompletion algorithm.",
"Issues of long-distance dependencies are essentially handled in an identical way, through the use of flag diacritics to restrict progression through the model (Harrigan et al., 2017; Lane and Bird, 2019).",
"In terms of derivational morphology, the Kunwinjku FST marks derivational morpheme boundaries identically to inflectional ones.",
"With respect to the Plains Cree FST, we have explored including derivational boundaries within stems, but have left them out of the morpheme completion solution, in part because they increase the complexity and size of the model while reducing its speed, and in part because making use of derivational boundaries would split stems in a manner that would require users to have an understanding of the derivational morphology of Plains Cree that most, in particular learners, do not possess.",
"We opt instead to pre-compile derivational stems, thus ignoring derivational boundaries in favor of providing full-stem-length suggestions to users.",
"In this section we give a detailed overview of our implementation, with examples written in the",
"XFST formalism (Beesley and Karttunen, 2003).",
"Our first step is to capture the full lexical side of our morphological analyzer ( Words ) with morph boundaries present (a derivational boundary \"/\" is added in conjunction with the occurrence of a lexeme-internal hyphen that is not associated with an inflectional boundary, i.e. \"<\" or \">\"): (2) define AddBoundary [[..] -> \"/\" || \"-\" _ \\[ \"<\" | \">\"]]; define CorrectWords [Words .o. AddBoundary]; As is done in the previous work, we define FSTs which recognize morph boundaries (Bx), and everything except morph boundaries (Ax): (3) define Bx [ \"<\" | \">\" | \"/\" ]; % Note: \"/\" denotes a derivational % morpheme boundary define Ax [ ? - Bx ]; We then define an FST which encodes spelling relaxation rules.",
"Fortunately, this can be imported directly from our existing Plains Cree spelling relaxation module, with some minor additions.",
"That FST contains rules which allow the arbitrary substitution of long and short vowels, or the deletion/in-sertion of sounds in particular contexts.",
"As a simplified example: (4) define SpellRelax [ a (->) â ,, e (->) ê ,, i (->) î ,, o (->) ô ,, â (->) a ,, î (->) i ,, ô (->) o ,, [..] (->) h || Vowel _ Stop ,, h (->) 0 || Vowel _ Stop ]; Next, we define a series of helper FSTs.",
"InsertBoundary optionally inserts morph boundaries in any context.",
"NextMorph outputs everything from a given string up until the next morph boundary.",
"PrefixStrings outputs all possible prefixes of a given input.",
"rmBoundary removes morph boundary symbols from the given input.",
"(5) define InsertBoundary [0 (->) Bx]; (6) define NextMorph [?+ [ 0:Ax ]* 0:Bx]; (7) define PrefixStrings [?* [ 0:? ]*]; (8) define rmBoundary [Bx -> 0]; We compose these FSTs to form the FST which takes a string as input, and returns a list of possible completions up to the next morph boundary: (9) define MorphComplete [InsertBoundary .o. NextMorph .o. [PrefixStrings .o. [CorrectWords",
"\">\"].l].u",
".o.",
"rmBoundary ] ; The FST defined up to this point can be used to produce morph completions for a given input.",
"The FST can be made tolerant of orthographic variation by composing SpellRelax with MorphComplete : (10) regex [SpellRelax .o. MorphComplete]; Up until this point, our implementation does not differ significantly from the algorithm proposed by Lane and Bird (2020), except in the definition of morph boundaries, and in the particulars of the spelling relaxation rules.",
"However, because the Plains Cree morphological analyzer has a much larger lexical inventory than the Kunwinjku analyzer, we found the space of possible completions, particularly when allowing for orthographic variation, to be unmanageably large.",
"In order to make use of the output of morph completion in Plains Cree, we need to extend the original algorithm to address the issue of result ranking.",
"The Plains Cree morph completion FST can sometimes return thousands of results for a given query.",
"The possibility of having thousands of results increases significantly when spelling relaxation rules are introduced (e.g., compare Figure 2 with Figure 3).",
"In order to render the model usable, it is essential to enforce a ranking of the results.",
"We tried four different ranking schemes and evaluated their effect on the morph completion space.",
"Data: All four approaches leverage a corpus of written Plains Cree to collect frequency statistics of various subword units.",
"We use the morphosyntactically-tagged corpus of Arppe et al. (2020), which has recently been extended with the so-called Bloomfield texts, and which includes a frequency-sorted list of tokens and their corresponding morphological analysis.",
"This resource counts the occurrences of 33,655 unique words across a corpus of texts, including conversations, dialogues, narratives and lectures, amounting to 242,937 words total (Wolfart, 2000; Bear et al., 1992; Kâ-Nîpitêhtêw, 1998; Masuskapoe, 2010; Ahenakew, 1987; Whitecalf, 1983; Minde, 1997; Bloomfield, 1930, 1934).",
"We refer to this resource as the AWB corpus (for Freda Ahenakew, H. Christoph Wolfart, and Leonard Bloomfield, who collected and compiled the original texts) from now on in this work.",
"The first ranking strategy we developed uses the AWB corpus to count the frequency of all possible prefixes for each word in the word list, up to and including all complete words.",
"These counts are converted to a probability distribution by dividing by the total number of counted prefixes.",
"We take the negative log of this probability to obtain the weight of the prefix.",
"Lower weights correspond to more likely prefixes.",
"Unobserved prefixes are handled by assigning a weight of 15 plus an additional weight of 1 per character after the first.",
"This effectively places unobserved suggestions lower than any possible observed prefix in priority, and favors shorter unobserved prefixes over longer ones.",
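The prefix-weighting scheme described above can be sketched in a few lines. This is a minimal illustration under our reading of the scheme; the function names and toy corpus are ours, not the paper's code.

```python
import math
from collections import Counter

def prefix_weights(corpus_words):
    """Count every character prefix of every corpus word, normalize the
    counts into a probability distribution, and take the negative log
    to get a weight: lower weight = more likely prefix."""
    counts = Counter()
    for word in corpus_words:
        for i in range(1, len(word) + 1):
            counts[word[:i]] += 1
    total = sum(counts.values())
    return {p: -math.log(c / total) for p, c in counts.items()}

def weight(prefix, observed):
    """Observed prefixes get their -log probability; unobserved ones get
    15 plus 1 per character after the first, so any unobserved suggestion
    ranks below every observed prefix, and shorter unobserved prefixes
    are favored over longer ones."""
    if prefix in observed:
        return observed[prefix]
    return 15 + (len(prefix) - 1)

obs = prefix_weights(["niska", "niska", "nipiy"])  # toy corpus
```

In the actual system these weights are compiled into a WFST and composed with the morph completion FST, rather than looked up in a dictionary.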
"The prefix weights are composed with the output of the morph completion FST.",
"Note that only the HFST compiler and its lookup utilities, hfst-lookup or hfst-optimized-lookup , support the incorporation and presentation of weights in an FST (as presented here).",
"(12) define PrefixWeighting [ObservedPrefixWeights | UnobservedPrefixWeights]; regex [SpellRelax .o. MorphComplete]",
".o.",
"PrefixWeighting; In the evaluation we refer to this weighting scheme as pWFST.",
"A drawback of the prefix-weighting scheme is that it assigns weights to observed prefixes without considering shared transition information between morphs.",
"This means that the resulting WFST model, which stores weights for all observed prefixes, can grow beyond 100 megabytes in size, which may be prohibitive for mobile deployment scenarios.",
"Considering this, our second weighting scheme weights transitions rather than prefixes, and results in a smaller WFST model since transitions are shared across prefixes.",
"Our transition-based weighting scheme comes from Sahala et al. (2020)'s work on a finite state morphological analyzer for Babylonian 3 .",
"The approach uses a manually disambiguated list of surface form and analysis pairs to estimate the likelihood of final analyses, represented as a sequence of transitions from internal FST states.",
"It does this by counting transitions between states for a given form/analysis pair, and normalizing these counts into a probability distribution for each state.",
"If C_s(x:y) denotes the count at state s for the symbol pair x:y, then the transition weight w is defined as: w = C_s(x:y) / (f_s + Σ_{z:u} C_s(z:u)). In the evaluation we refer to this weighting scheme as tWFST.",
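The per-state normalization can be sketched as follows. This is an illustrative reading, not Sahala et al.'s code: we represent each disambiguated form/analysis pair as a path of (state, symbol-pair) steps, and we treat f_s as a per-state smoothing constant added to the denominator, which is our assumption.

```python
from collections import defaultdict

def transition_weights(paths, f=1.0):
    """Estimate transition weights from disambiguated FST paths.

    Counts C_s(x:y) of symbol pairs at each state are normalized per state:
    w = C_s(x:y) / (f + sum over all pairs z:u of C_s(z:u)).
    """
    counts = defaultdict(lambda: defaultdict(int))
    for path in paths:
        for state, pair in path:
            counts[state][pair] += 1
    weights = {}
    for state, pairs in counts.items():
        denom = f + sum(pairs.values())
        for pair, c in pairs.items():
            weights[(state, pair)] = c / denom
    return weights

# two toy paths through states 0 and 1
paths = [[(0, ("a", "a")), (1, ("b", "b"))], [(0, ("a", "a"))]]
w = transition_weights(paths, f=1.0)
```

Because transitions are shared among all prefixes that pass through the same state, this representation stays compact even when the number of observed prefixes is large.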
"As with pWFST, the HFST compiler and lookup functionalities are necessary for the inclusion and presentation of weights.",
"Language models are probability distributions over a sequence of tokens.",
"Perplexity is a measure used to relate how well a given sequence of tokens fits a trained language model.",
"Given a language model q, the perplexity of a sequence of tokens t_1, ..., t_n is calculated as follows: PP = exp(-(1/n) Σ_{i=1}^{n} ln q(t_i)). Lower perplexity scores denote greater coherence according to the language of the training data.",
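The standard perplexity computation is a one-liner; the sketch below assumes a toy model that maps each token directly to a probability (a real character LM conditions on context).

```python
import math

def perplexity(q, tokens):
    """PP = exp(-(1/n) * sum_{i=1..n} ln q(t_i)); lower means the
    sequence fits the model better."""
    n = len(tokens)
    return math.exp(-sum(math.log(q[t]) for t in tokens) / n)

# A uniform model over 4 tokens assigns perplexity 4 to any sequence.
q = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}
```

Perplexity of a sequence under a uniform distribution equals the vocabulary size, which is a convenient sanity check for an implementation.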
"For the purpose of ranking possible morph completions generated by a finite state transducer, we train a language model to represent the language of valid prefixes in Plains Cree, according to statistics gleaned from a corpus of text.",
"We process the corpus for the language modelling task by splitting the text into word level tokens.",
"The list of tokens is then divided into an 80/10/10 train/validation/test split.",
"We then split each token in the data into its set of all possible prefixes with the beginning and ending word boundaries marked.",
"For example, the word mîtos becomes the set of instances: (13) <BOS> m <EOS>, <BOS> m î <EOS>, <BOS> m î t <EOS>, <BOS> m î t o <EOS>, <BOS> m î t o s <EOS>. 3 The code for the weighting scheme, written by Miikka Silfverberg, can be found at https://github.com/mpsilfve/fst-corpus-weights Figure 2: The distribution of completion options for Plains Cree verbs by prefix length.",
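The prefix expansion used to build the LM training data can be sketched as below; `prefix_instances` is our own helper name, and the example word is a placeholder.

```python
def prefix_instances(token):
    """Expand a word into all of its character prefixes, space-separated
    and wrapped in <BOS>/<EOS> markers, for character-LM training."""
    chars = list(token)
    return ["<BOS> " + " ".join(chars[:i]) + " <EOS>"
            for i in range(1, len(chars) + 1)]
```

Training on prefixes rather than whole words lets the LM assign a score to every partial input a user might have typed so far.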
"We train a character-based language model using Fairseq's Transformer LM architecture with default parameters (Ott et al., 2019) for 20 epochs, and select the model which minimizes error on the test set.",
"With this model, we assign perplexity scores and rerank the output of the two WFST models.",
"Because the number of possible results from the morph completion model can sometimes reach the order of 10^5, we limit the LM scoring to the top 500 candidates of the weighted FSTs.",
"The intention is that this model can be deployed to support text entry in a morphologically complex language.",
"We therefore want to measure how often the model is able to deliver a useful set of results.",
"We define a useful result set as one which returns at least one completed prefix which is a proper substring of the target full-word, within the top N ranked results.",
"For example, if N = 10 , our target word is nik-kityimikawinn and our query is ni , then a result of nik- appearing in the top 10 results is counted as useful.",
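The usefulness criterion can be captured in a short check. This reflects our reading of "a proper substring of the target full-word" as a proper prefix that extends the query; `useful_at_n` is our own helper.

```python
def useful_at_n(ranked_results, target_word, query, n=10):
    """A result set is 'useful' if, within the top-n ranked results,
    some completion extends the query and is a proper prefix of the
    target full word."""
    for cand in ranked_results[:n]:
        if (len(cand) > len(query) and cand != target_word
                and target_word.startswith(cand)):
            return True
    return False
```

Reporting the fraction of queries for which this returns True, at several values of n, reproduces the shape of the evaluation described here.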
"In order to evaluate the usefulness of the models systematically, we randomly sampled 100 unique fully-inflected word forms from the Dog Biscuits story by Solomon Ratt which is publicly available online.",
"These words are broken down into their complete sets of prefixes, creating a set of 1,322 prefixes.",
"Each prefix is used as a query, retrieving result sets at N = 10, 20, 50, and we report the percent of queries which return a valid completion in the top N for each of the ranking strategies described in Section 4.1 (see Figure 4 for results).",
"We measure the completion space of two versions of the morph completion model.",
"Given a large sample of model inputs (prefixes), the x-axis represents all prefixes of length x. 4 https://creeliteracy.org/2014/01/20/dog-biscuits-y-dialect-with-audio/",
"Figure 4: Given a random sample of 1,322 prefixes derived from 100 Plains Cree verbs, the proportion of prefixes which produce a valid completion in the top N ranked results (Top 10 / Top 20 / Top 50): FST 38.4 / 42.7 / 48.0; pWFST 64.9 / 69.5 / 73.9; tWFST 40.1 / 47.6 / 61.2; pWFST.LM 40.3 / 48.0 / 60.2; tWFST.LM 48.7 / 55.6 / 66.3.",
"The y -axis shows the number of completions generated from the model, and the data is represented as distributions over all inputs of length x .",
"The first model shows the completion space of the sample when we strictly adhere to orthographic standard (Figure 2).",
"The second model implements spelling relaxation (Figure 3).",
"The effect of spelling relaxation on the number of possible completions is, as one would expect, a significant upward shift in the number of possible completions across all character positions.",
"Given that the purpose of morph-based completion is to help guide the user to build out complex words, we would prefer to deploy a spelling-relaxed version of the model.",
"However, the magnitude of the measured completion space of this model would make this infeasible, as the median number of completions stays above 100 up until 12 characters of the input have been typed.",
"Thus, coming up with an effective weighting strategy is absolutely essential in order to have a model that can handle orthographic variation in the input.",
"The baseline, unweighted strategy can be seen in Figure 5.",
"Here, the distribution of rankings roughly imitates the shape of the full completion space (Figure 3), with the majority of mass occurring above our ideal ranking threshold of 10.",
"To be precise, with the baseline no-ranking strategy, 38.4% of sampled queries result in a valid completion ranked in the top 10 results, 42.7% give a valid completion in the top 20, and 48.0% give a valid completion in the top 50.",
"In contrast, the best weighting scheme is pWFST, which significantly improves the distribution of rankings compared to the baseline, with the majority of queries providing valid completions in the top 10.",
"More precisely, 64.9% of prefix queries result in a valid completion ranked in the top 10, 69.5% in the top 20, and 73.9% in the top 50 (Figure 6).",
"This section gives an overview of the use of the morphological autocomplete system by one of the authors, a native English speaker and second language learner of Plains Cree.",
"5 This use case aligns with a major subset of potential users: literate but nonfluent learners of the language.",
"6 By restricting the autocomplete results to the top 10 most heavily weighted items, we have found the system to perform quite well.",
"The system was evaluated by typing the basic introductory phrase tânisi nitôtêmak!",
"Atticus nitisiyihkâson êkwa kêkâ-nistomitanaw ê-itahtopiponêyân.",
"nitatoskân amiskwaciy-wâskahikanihk , which translates to Hello friends! my name is Atticus and I am 29 years old. I work in Alberta.",
"This phrase was chosen as it contains fairly common lexical items while also being a realistic use case.",
"In typing these words, no diacritics were used, as typing a circumflex on a North American keyboard requires a number of extra strokes, and such diacritics are often not included in non-professional Plains Cree writing.",
"Additionally, Atticus was not typed into the autocomplete system as it is not a Plains Cree word.",
"Writing this excerpt reveals interesting user experience data.",
"While most words had the appropriate autocomplete suggestion, the word nitisiyihkâson could only be suggested by typing all but the last two letters: nitisiyihkâs.",
"Typing any less did not result in target suggestions.",
"This was likely due to the fact that the system seemed to prefer analysing the string as beginning with the morpheme nitisiyi-, rather than the target morphemes ni-t-isiyihkâso.",
"This is particularly notable as introductions are common, especially for language learners.",
"Similarly, in autocompleting kêkâ-nistomitanaw the results were unexpected.",
"The correct breakdown for this word is kêkâ-nisto-mitanaw ; despite this, nisto- and -mitanaw are written together orthographically.",
"The morphological autocomplete suggestions, when given the input string keka-ni, 5 The code for the GUI can be found at https://github.com/abbottLane/cree-wordbuilder 6 Ideally, this evaluation would be done by a number of literate Plains Cree native speakers; however, the number of fully literate speakers who are comfortable using a computer is quite limited.",
"Because experimenting on Plains Cree speakers requires special consideration and taking into account a low tolerance for non-canonical language (Harrigan et al., 2019), we decided to first test the system on one of the authors so as to retain the maximum number of native speaker participants for future releases.",
"produce only kêkâ-nîso-, specifically with the final hyphen.",
"If one types kêkâ-nîstom , the system suggests kêkâ-nistomitanaw .",
"In the previous two cases, the autocomplete and weighting system are working exactly as expected.",
"The issue instead lies with the underlying corpus, which features neither kêkâ-nistomitanaw nor any form of the verb nitisiyihkâson .",
"The corpus used as a basis for weighting consists mostly of lectures or discussions between individuals who are otherwise familiar with each other.",
"Unsurprisingly, this did not result in instances of individuals introducing themselves to one another or discussing anyone named Atticus.",
"Further, typing only e as an input string is not useful in and of itself, as all verbs can take this morpheme (written as ê).",
"In addition to the expected benefits of autocompletion, the system empowers users to type the language even if they are not entirely sure of the correct form of a word.",
"As an example, the term nitatoskân comes from the root atoskê.",
"Although person marking morphology in the form of a nit- -n circumfix is easy enough for learners to remember, in some conjugations the final ê becomes an â.",
"This is not consistent among conjugation classes, and some verb classes show the opposite alternation.",
"As a result, second language learners can struggle with knowing whether to write nitatoskân or *nitatoskên .",
"The autocomplete system solves this problem by suggesting only the correct nitatoskâ- when the user types nitato .",
"The main drawback of this system from a user perspective is that target completions were rarely the topmost suggestions, but this appears to be due to minimal training data for the weighting, and is not critical as long as the user is competent enough to know which suggestions are categorically incorrect.",
"In this paper we presented an approach to morph-based autocompletion for Plains Cree.",
"Informed by our particular context and the availability of corpus data, we expanded on Lane and Bird (2020)'s approach by exploring three different weighting schemes to rein in the magnitude of possible completions per query, which results from our need to accommodate a more complex FST grammar and greater orthographic flexibility.",
"Our results show that all three weighting schemes go a long way toward moving target string rank distributions below desired thresholds, with the prefix-weighting approach (pWFST) ranking the target completion in the top 10 results in 64.9% of queries, and in the top 50 in 73.9% of queries.",
"The qualitative evaluation highlighted the usefulness of using an underlying FST to generate completions: long-distance dependencies and circumfixes are respected by the autocomplete algorithm, and so morphotactic integrity is preserved.",
"Additionally, spelling relaxation rules in the underlying FST mean that the user does not need to worry as much about inputting diacritics, or exact spelling: the algorithm will suggest and rank alternate surface forms which vary along these dimensions.",
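As an illustration of the spelling-relaxed completion behaviour described above, here is a minimal Python sketch; the mini-lexicon, surface forms, and weights are hypothetical stand-ins for the weighted FST, not the actual system.

```python
import unicodedata

def strip_diacritics(s: str) -> str:
    """Map a string to its diacritic-free form (e.g. 'atoskê' -> 'atoske')."""
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if unicodedata.category(c) != "Mn")

def complete(prefix: str, lexicon: dict) -> list:
    """Return full forms whose relaxed spelling starts with the relaxed
    prefix, best-weighted first (lower weight = better, as in a weighted FST)."""
    relaxed = strip_diacritics(prefix.lower())
    hits = [(w, form) for form, w in lexicon.items()
            if strip_diacritics(form.lower()).startswith(relaxed)]
    return [form for _, form in sorted(hits)]

# Hypothetical mini-lexicon of valid surface forms with weights.
lexicon = {"nitatoskân": 1.0, "nitatoskânân": 2.5, "atoskêw": 3.0}
print(complete("nitato", lexicon))   # → ['nitatoskân', 'nitatoskânân']
```

Because matching happens on the diacritic-stripped forms, a user who types a plain "nitato" still reaches the diacritic-bearing completions.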
"In future work we hope to deploy morph completion models in mobile devices, to support text entry.",
"However, before we get there, we need to do proper user testing with members of the community and get their feedback on a polished demo of the project at this stage.",
"Indeed, a limitation of this work is that we chose not to carry out thorough user testing in the Cree Community at this stage.",
"It is natural for researchers to want to rush prototypes into the hands of prospective users, but this can lead to technology burnout among otherwise willing collaborators (Harrigan et al., 2019; Le Ferrand et al., 2022).",
"We performed intrinsic evaluation by measuring the model's completion space to judge the feasibility of moving forward with the concept, and did self-testing to convince ourselves that the user experience is workable.",
"That is the scope of this work.",
"Meaningful advances in language technology for low-resource, Indigenous, and/or endangered languages entail progress in our recognition of, and engagement with, the context and use cases for such technologies at a community level.",
"Morph-based autocompletion is designed to support text entry and word-building for morphologically rich languages.",
"There are myriad factors which affect the usefulness of any approach in the real world.",
"In our experience, connecting with real-world contexts leads to a better understanding of use-cases and problem constraints.",
"This, in turn, fuels creativity and leads to better outcomes for the language communities we work with.",
"We are grateful to the Bininj people of northern Australia and the Plains Cree ( nêhiyawak ) communities, in particular those associated with the Maskwacîs Education Schools Commission (MESC) in Maskwacîs, Alberta, Canada, for the opportunity to work with them on language projects.",
"Our thanks as well to the anonymous ACL reviewers and to Steven Bird for their feedback on earlier versions of this paper.",
"We are also grateful to Miikka Silfverberg for making available his implementation for the transitional weighting of a HFST model.",
"This research was supported in part by the Australian government through a PhD scholarship, and grants from the Australian Research Council and the Indigenous Language and Arts Program, and by a Partnership Grant (#895-2019-1012) from the Social Sciences and Humanities Research Council (SSHRC) of Canada."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"This paper investigates how to effectively incorporate a pre-trained masked language model (MLM), such as BERT, into an encoder-decoder (EncDec) model for grammatical error correction (GEC).",
"The answer to this question is not as straightforward as one might expect because the previous common methods for incorporating a MLM into an EncDec model have potential drawbacks when applied to GEC.",
"For example, the distribution of the inputs to a GEC model can be considerably different (erroneous, clumsy, etc.) from that of the corpora used for pre-training MLMs; however, this issue is not addressed in the previous methods.",
"Our experiments show that our proposed method, where we first fine-tune a MLM with a given GEC corpus and then use the output of the fine-tuned MLM as additional features in the GEC model, maximizes the benefit of the MLM.",
"The best-performing model achieves state-of-the-art performances on the BEA-2019 and CoNLL-2014 benchmarks.",
"Our code is publicly available at: https://github.com/ kanekomasahiro/bert-gec .",
"Grammatical Error Correction (GEC) is a sequence-to-sequence task where a model corrects an ungrammatical sentence to a grammatical sentence.",
"Numerous studies on GEC have successfully used encoder-decoder (EncDec) based models, and in fact, most current state-of-the-art neural GEC models employ this architecture (Zhao et al., 2019; Grundkiewicz et al., 2019; Kiyono et al., 2019).",
"In light of this trend, one natural, intriguing question is whether neural EncDec GEC models can benefit from the recent advances of masked language models (MLMs), since MLMs such as BERT (Devlin et al., 2019) have been shown to yield substantial improvements in a variety of NLP tasks (Qiu et al., 2020).",
"BERT, for example, builds on the Transformer architecture (Vaswani et al., 2017) and is trained on large raw corpora to learn general representations of linguistic components (e.g., words and sentences) in context, which have been shown useful for various tasks.",
"In recent years, MLMs have been used not only for classification and sequence labeling tasks but also for language generation, where combining MLMs with EncDec models of a downstream task makes a noticeable improvement (Lample and Conneau, 2019).",
"Common methods of incorporating a MLM to an EncDec model are initialization (init) and fusion (fuse).",
"In the init method, the downstream task model is initialized with the parameters of a pre-trained MLM and then is trained over a task-specific training set (Lample and Conneau, 2019; Rothe et al., 2019).",
"This approach, however, does not work well for sequence-to-sequence language generation tasks, because such tasks tend to require a huge amount of task-specific training data, and fine-tuning a MLM on such a large dataset tends to destroy its pre-trained representations, leading to catastrophic forgetting (Zhu et al., 2020; McCloskey and Cohen, 1989).",
"In the fuse method, pre-trained representations of a MLM are used as additional features during the training of a task-specific model (Zhu et al., 2020).",
"When applying this method for GEC, what the MLM has learned in pre-training will be preserved; however, the MLM will not be adapted to either the GEC task or the task-specific distribution of inputs (i.e., erroneous sentences in a learner corpus), which may hinder the GEC model from effectively exploiting the potential of the MLM.",
"Given these drawbacks in the two common methods, it is not as straightforward to gain the advantages of MLMs in GEC as one might expect.",
"This background motivates us to investigate how a MLM should be incorporated into an EncDec GEC model to maximize its benefit.",
"To the best of our knowledge, no research has addressed this research question.",
"In our investigation, we employ BERT, which is a widely used MLM (Qiu et al., 2020), and evaluate the following three methods:",
"(a) initialize an EncDec GEC model using pre-trained BERT as in Lample and Conneau (2019) (BERT-init),",
"(b) pass the output of pre-trained BERT into the EncDec GEC model as additional features (BERT-fuse) (Zhu et al., 2020), and",
"(c) combine the best parts of",
"(a) and",
"(b).",
"In this new method",
"(c), we first fine-tune BERT with the GEC corpus and then use the output of the fine-tuned BERT model as additional features in the GEC model.",
"To implement this, we further consider two options: (c1) additionally train pre-trained BERT with GEC corpora (BERT-fuse mask), and (c2) fine-tune pre-trained BERT by way of the grammatical error detection (GED) task (BERT-fuse GED).",
"In (c2), we expect that the GEC model will be trained so that it can leverage both the representations learned from large general corpora (pre-trained BERT) and the task-specific information useful for GEC induced from the GEC training data.",
"Our experiments show that using the output of the fine-tuned BERT model as additional features in the GEC model (method",
"(c)) is the most effective way of using BERT in most of the GEC corpora that we used in the experiments.",
"We also show that the performance of GEC improves further by combining the BERT-fuse mask and BERT-fuse GED methods.",
"The best-performing model achieves state-of-the-art results on the BEA-2019 and CoNLL-2014 benchmarks.",
"Studies have reported that a MLM can improve the performance of GEC when it is employed either as a re-ranker (Chollampatt et al., 2019; Kaneko et al., 2019) or as a filtering tool (Asano et al., 2019; Kiyono et al., 2019).",
"EncDec-based GEC models combined with MLMs can also be used in combination with these pipeline methods.",
"Asano et al. (2019) proposed sequence labeling models based on correction methods.",
"Our method can utilize existing EncDec GEC knowledge, whereas their sequence labeling methods cannot, due to their different model architecture.",
"Besides, to the best of our knowledge, no research has yet been conducted that incorporates information of MLMs for effectively training the EncDec GEC model.",
"MLMs are generally used in downstream tasks by fine-tuning (Liu, 2019; Zhang et al., 2019); however, Zhu et al. (2020) demonstrated that it is more effective to provide the output of the final layer of a MLM to the EncDec model as contextual embeddings.",
"Recently, Weng et al. (2019) addressed the mismatch problem between contextual knowledge from pre-trained models and the target bilingual machine translation.",
"Here, we also claim that addressing the gap between grammatically correct raw corpora and GEC corpora can lead to the improvement of GEC systems.",
"In this section, we describe our approaches for incorporating a pre-trained MLM into our GEC model.",
"Specifically, we chose the following approaches: (1) initializing a GEC model using BERT; (2) using BERT output as additional features for a GEC model, and (3) using the output of BERT fine-tuned with the GEC corpora as additional features for a GEC model.",
"We create a GEC EncDec model initialized with BERT weights.",
"This approach is based on Lample and Conneau (2019).",
"Most recent state-of-the-art methods use pseudo-data, which is generated by injecting pseudo-errors to grammatically correct sentences.",
"However, note that this method cannot initialize a GEC model with pre-trained parameters learned from pseudo-data.",
"We use the model proposed by Zhu et al. (2020) as a feature-based approach (BERT-fuse).",
"This model is based on Transformer EncDec architecture.",
"It takes an input sentence $X = (x_1, \ldots, x_n)$, where $n$ is its length.",
"$x_i$ is the $i$-th token in $X$.",
"First, BERT encodes it and outputs a representation $B = (b_1, \ldots, b_n)$.",
"Next, the GEC model encodes $X$ and $B$ as inputs.",
"$h_i^l \in H^l$ is the $i$-th hidden representation of the $l$-th encoder layer in the GEC model.",
"$h^0$ stands for the word embeddings of the input sentence $X$.",
"Then we calculate $\tilde{h}_i^l$ as follows: $\tilde{h}_i^l = \frac{1}{2}\big(A_h(h_i^{l-1}, H^{l-1}) + A_b(h_i^{l-1}, B)\big)$ (1), where $A_h$ and $A_b$ are attention models over the hidden layers of the GEC encoder $H$ and the BERT output $B$, respectively.",
"Each $\tilde{h}_i^l$ is then further processed by the feed-forward network $F$, which outputs the $l$-th layer $H^l = (F(\tilde{h}_1^l), \ldots, F(\tilde{h}_n^l))$.",
"The decoder's hidden state $s_t^l \in S^l$ is calculated as follows: $\tilde{s}_t^l = A_s(s_t^{l-1}, S_{<t+1}^{l-1})$ (2); $\hat{s}_t^l = \frac{1}{2}\big(A_h(\tilde{s}_t^l, H^L) + A_b(\tilde{s}_t^l, B)\big)$ (3); $s_t^l = F(\hat{s}_t^l)$ (4). Here, $A_s$ represents the self-attention model.",
"Finally, $s_t^L$ is processed via a linear transformation and softmax function to predict the $t$-th word $\hat{y}_t$.",
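To make the fusion concrete, here is a minimal NumPy sketch of the averaged attention combination in Eq. (1), with single-head dot-product attention standing in for the attention models $A_h$ and $A_b$; the feed-forward network, multi-head projections, and drop-net trick are omitted, and all shapes are toy values.

```python
import numpy as np

def attn(q, K, V):
    """Single-query scaled dot-product attention."""
    scores = q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

def fuse_layer(H_prev, B):
    """One encoder step of the BERT-fuse combination in Eq. (1): each
    position attends both to the previous encoder layer H_prev and to the
    BERT output B, and the two attention outputs are averaged."""
    return np.stack([0.5 * (attn(h, H_prev, H_prev) + attn(h, B, B))
                     for h in H_prev])

rng = np.random.default_rng(0)
H0 = rng.normal(size=(5, 8))   # toy encoder states: 5 tokens, dim 8
B  = rng.normal(size=(5, 8))   # toy BERT representations
H1 = fuse_layer(H0, B)
print(H1.shape)                # → (5, 8)
```

The averaging keeps the output dimensionality unchanged, so fused layers can be stacked exactly like ordinary Transformer encoder layers.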
"We also use the drop-net trick proposed by Zhu et al. (2020) to the output of BERT and the encoder of the GEC model.",
"The advantage of BERT-fuse is that it can preserve information pre-trained on raw corpora; however, it may not be adapted to either the GEC task or the task-specific distribution of inputs.",
"The reason is that in the GEC model, unlike the data used for training BERT, the input can be an erroneous sentence.",
"To fill the gap between corpora used to train GEC and BERT, we additionally train BERT on GEC corpora (BERT-fuse mask) or fine-tune BERT as a GED model (BERT-fuse GED) and use it for BERT-fuse.",
"GED is a sequence labeling task that detects grammatically incorrect words in input sentences (Rei and Yannakoudakis, 2016; Kaneko et al., 2017).",
"Since BERT is also effective in GED (Bell et al., 2019; Kaneko and Komachi, 2019), it is considered to be suitable for fine-tuning to take into account grammatical errors.",
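One prerequisite for GED fine-tuning is deriving token-level error labels from a parallel GEC corpus. The difflib-based alignment below is an illustrative assumption of how such labels could be derived, not the labeling scheme of the cited GED work.

```python
from difflib import SequenceMatcher

def ged_labels(source_tokens, corrected_tokens):
    """Derive binary GED labels (1 = grammatically incorrect) for each
    source token by aligning the erroneous sentence with its correction."""
    labels = [1] * len(source_tokens)
    sm = SequenceMatcher(a=source_tokens, b=corrected_tokens)
    for block in sm.get_matching_blocks():
        for i in range(block.a, block.a + block.size):
            labels[i] = 0          # token survives the correction unchanged
    return labels

src = "He go to school yesterday".split()
cor = "He went to school yesterday".split()
print(ged_labels(src, cor))        # → [0, 1, 0, 0, 0]
```

With such token labels, any token-classification model (BERT included) can be fine-tuned to flag erroneous tokens.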
"We use the BEA-2019 workshop 1 (Bryant et al., 2019) official shared task data as training and development sets.",
"Specifically, to train a GEC model, we use W&I-train (Granger, 1998; Yannakoudakis et al., 2018), NUCLE (Dahlmeier et al., 2013), FCE-train (Yannakoudakis et al., 2011) and Lang-8 (Mizumoto et al., 2011) datasets.",
"We use W&I-dev as a development set.",
"Note that we excluded sentence pairs that were not corrected from the training data.",
"To train BERT for BERT-fuse mask and GED, (footnote 1: https://www.cl.cam.ac.uk/research/nl/bea2019st/) (Table 1 fragment, GEC model: Model Architecture — Transformer (big); Number of epochs — 30; Max tokens — 4096; Optimizer — Adam)",
"we use W&I-train, NUCLE, and FCE-train as training data, and W&I-dev as development data.",
"In GEC, it is important to evaluate the model with multiple datasets (Mita et al., 2019).",
"Therefore, we used GEC evaluation data such as W&I-test, CoNLL-2014 (Ng et al., 2014), FCE-test and JFLEG (Napoles et al., 2017).",
"We used ERRANT evaluation metrics (Felice et al., 2016; Bryant et al., 2017) for W&I-test, M 2 score (Dahlmeier and Ng, 2012) for CoNLL-2014 and FCE-test sets, and GLEU (Napoles et al., 2015) for JFLEG.",
"All our results (except ensemble) are the average of four distinct trials using four different random seeds.",
"Hyperparameter values for the GEC model are listed in Table 1. For the BERT-initialized GEC model, we ran experiments based on the open-source code (footnote 2).",
"For the BERT-fuse GEC model, we use the code provided by Zhu et al. (2020) 3 .",
"While training the GEC model, the model was evaluated on the development set and saved every epoch.",
"If loss did not drop at the end of an epoch, the learning rate was multiplied by 0.7.",
"The training was (footnote 2: https://github.com/facebookresearch/XLM; footnote 3: https://github.com/bert-nmt/bert-nmt) (Table 2 header: BEA-test (ERRANT), CoNLL-14 ($M^2$), FCE-test ($M^2$), JFLEG; P, R, $F_{0.5}$)",
"stopped if the learning rate was less than the minimum learning rate or if the number of training epochs reached the maximum of 30.",
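The decay-and-stop rule can be replayed in isolation; the initial and minimum learning rates below are placeholders, not the values from Table 1.

```python
def train_schedule(dev_losses, lr=5e-4, min_lr=1e-5, max_epochs=30,
                   decay=0.7):
    """Replay the schedule described above: after each epoch, multiply the
    learning rate by `decay` if dev loss did not drop; stop once lr falls
    below min_lr or max_epochs is reached. `dev_losses` stands in for the
    per-epoch development losses."""
    best = float("inf")
    epoch = 0
    for epoch, loss in enumerate(dev_losses[:max_epochs], start=1):
        if loss < best:
            best = loss
        else:
            lr *= decay          # loss did not drop: decay the learning rate
        if lr < min_lr:
            return epoch, lr     # stopped early: lr under threshold
    return epoch, lr             # ran until max_epochs (or end of data)

epochs, final_lr = train_schedule([0.9, 0.8, 0.85, 0.85, 0.7])
print(epochs, round(final_lr, 6))   # → 5 0.000245
```

Two non-improving epochs (epochs 3 and 4 in the toy run) each multiply the learning rate by 0.7, which is why the final rate is 5e-4 × 0.49.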
"Training BERT for BERT-fuse mask and GED was based on the code from Wolf et al. (2019) 4 .",
"The additional training for the BERT-fuse mask was done in the Devlin et al. (2019)'s setting.",
"Hyperparameter values for the GED model are listed in Table 1. We used the BERT-Base cased model for consistency across experiments (footnote 5).",
"The model was evaluated on the development set.",
"We also performed experiments utilizing BERT-fuse, BERT-fuse mask, and BERT-fuse GED outputs as additional features to the GEC model pre-trained on pseudo-data.",
"The model pre-trained on pseudo-data was initialized with the PRETLARGE+SSE model used in the experiments of Kiyono et al. (2019) (footnote 6).",
"This pseudo-data is generated by probabilistically injecting character errors into the output (Lichtarge et al., 2019) of a back-translation (Xie et al., 2018) model that generates grammatically incorrect sentences from grammatically correct sentences (Kiyono et al., 2019). (Footnote 4: https://github.com/huggingface/transformers; footnote 5: https://github.com/google-research/bert; footnote 6: https://github.com/butsugiri/gec-pseudodata)",
"We describe the R2L re-ranking technique proposed by Sennrich et al. (2016) and incorporated in our experiments, which has proven effective for the GEC task (Grundkiewicz et al., 2019; Kiyono et al., 2019).",
"Standard left-to-right (L2R) models generate the n-best hypotheses and score them with the normal ensemble, and R2L models re-score them.",
"Then, we re-rank the n -best candidates based on the sum of the L2R and R2L scores.",
"We use the generation probability as a re-ranking score and ensemble four L2R models and four R2L models.",
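A minimal sketch of the re-ranking step, with toy scoring functions standing in for the summed log-probabilities of the L2R and R2L ensembles:

```python
def rerank(hypotheses, l2r_models, r2l_models):
    """Re-rank n-best hypotheses by the summed scores of the left-to-right
    ensemble and the right-to-left ensemble, as described above."""
    def score(hyp):
        return (sum(m(hyp) for m in l2r_models)
                + sum(m(hyp) for m in r2l_models))
    return max(hypotheses, key=score)

# Toy stand-in scorers: real models would return sentence log-probabilities.
l2r = [lambda h: -len(h) * 0.1, lambda h: -h.count("a") * 0.2]
r2l = [lambda h: 0.5 if h.endswith(".") else -0.5]
print(rerank(["a cat sat.", "cat sat"], l2r, r2l))   # prints "a cat sat."
```

In the actual setup, four L2R and four R2L Transformer models supply the scores, but the selection logic is exactly this argmax over summed scores.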
"Table 2 shows the experimental results of the GEC models.",
"A model trained on Transformer without using BERT is denoted as w/o BERT.",
"In the top groups of results, it can be seen that using BERT consistently improves the accuracy of our GEC model.",
"Also, BERT-fuse, BERT-fuse mask, and BERT-fuse GED outperformed the BERT-init model in almost all cases.",
"Furthermore, we can see that using BERT fine-tuned on GEC corpora for BERT-fuse leads to better correction results.",
"Moreover, BERT-fuse GED always gives better results than BERT-fuse mask.",
"This may be because the BERT-fuse GED is able to explicitly consider grammatical errors.",
"In the second row, the correction results are improved by using BERT as well.",
"Also in this setting, BERT-fuse GED outperformed the other models in all cases except for the FCE-test set, thus achieving state-of-the-art results with a single model on the BEA-2019 and CoNLL-2014 datasets.",
"In the last row, the ensemble model yielded high scores on all corpora, improving the state-of-the-art result by 0.2 points on CoNLL-2014.",
"We investigate the characteristics of the hidden representations of vanilla (i.e., without any fine-tuning) BERT and BERT fine-tuned with GED.",
"We visualize the hidden representations of the same words from the last layer $H^L$ of BERT.",
"Occurrences were grouped by whether the word was used correctly or incorrectly in its context, using the above models.",
"The eight target words (footnote 7), each of which has been mistaken more than 50 times, were chosen from W&I-dev.",
"We sampled the same number of correctly used cases for the same word from the corrected side of W&I-dev.",
"Figure 1 visualizes hidden representations of BERT and fine-tuned BERT.",
"It can be seen that the vanilla BERT does not distinguish between correct and incorrect clusters.",
"The plotted eight words are gathered together, and it can be seen that hidden representations of the same word gather in the same place regardless of correctness.",
"On the other hand, fine-tuned BERT produces a vector space that demonstrates correct and incorrect words on different sides, showing that hidden representations take grammatical errors into account when fine-tuned on GEC corpora.",
"Moreover, it can be seen that the correct cases divided into 8 clusters, implying that BERT's information is also retained.",
"We investigate the correction results for each error type.",
"We use ERRANT (Felice et al., 2016; Bryant et al., 2017) to measure the $F_{0.5}$ score",
"of the model for each error type.",
"ERRANT can automatically assign error types from source and target sentences.",
"(Footnote 7: the eight words are 1. the, 2. ',', 3. in, 4. to, 5. of, 6. a, 7. for, 8. is.)",
"Table 3 shows the results of the single BERT-fuse GED and w/o BERT models, without pseudo-data, on the most frequent error types, including all of the top-5 frequent error types in W&I-dev.",
"We see that BERT-fuse GED is better for all error types compared to w/o BERT.",
"We can say that the use of BERT fine-tuned by GED for the EncDec model improves the performance independently of the error type.",
"In this paper, we investigated how to effectively use MLMs for training GEC models.",
"Our results show that BERT-fuse GED was one of the most effective techniques when it was fine-tuned with GEC corpora.",
"In future work, we will investigate whether BERT-init can be used effectively by using methods to deal with catastrophic forgetting.",
"This work was supported by JSPS KAKENHI Grant Number 19J14084 and 19H04162.",
"We thank everyone in Inui and Suzuki Lab at the Tohoku University and Language Information Access Technology Team of RIKEN AIP.",
"We thank the anonymous reviewers for their valuable comments."
] | [
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"method",
"result",
"abstain",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"other",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"other",
"other",
"other"
] |
[
"Zero-shot learning has been a tough problem since no labeled data is available for unseen classes during training, especially for classes with low similarity.",
"In this situation, transferring from seen classes to unseen classes is extremely hard.",
"To tackle this problem, in this paper we propose a self-training based method to efficiently leverage unlabeled data.",
"Traditional self-training methods use fixed heuristics to select instances from unlabeled data, whose performance varies among different datasets.",
"We propose a reinforcement learning framework to learn data selection strategy automatically and provide more reliable selection.",
"Experimental results on both benchmarks and a real-world e-commerce dataset show that our approach significantly outperforms previous methods in zero-shot text classification.",
"Zero-shot learning (ZSL) is a challenging task as no labeled data is available for unseen classes during training.",
"There are extensive works proposed in zero-shot image classification task.",
"The main focus of these works is how to transfer knowledge from seen classes to unseen classes.",
"To associate unseen classes with seen classes, they usually resort to semantic information such as visual attributes (Lampert et al., 2009), word embeddings of class names (Norouzi et al., 2013) and class hierarchy (Socher et al., 2013).",
"For example, if the model has not seen any instances of humpback whale in the training stage, it could still make predictions at the testing stage, since humpback whale is semantically close to killer whale and blue whale in the seen class set, so the model is capable of transferring knowledge from seen",
"classes to unseen classes.",
"These methods assume that semantically similar classes share similar image features, however, they may fail in the cases where classes share low similarities.",
"This problem becomes even more salient in typical NLP tasks such as text classification.",
"For example, let us consider a 10-class emotion classification task (Yin et al., 2019), in which the model is trained on class sadness while makes predictions on instances from class joy.",
"Notice that most emotions are relatively independent, which means the way we express certain emotion is pretty different from other emotions.",
"As a result, for an unseen class we can hardly find a similar class in the seen class set.",
"Transferring from seen classes to unseen classes can be extremely hard as matching patterns that can be shared among classes are rare.",
"Essentially, ZSL methods aim to learn a matching model between feature space and semantic space, which refers to text and label in text classification task respectively.",
"Matching patterns between text and label can be roughly classified as class-invariant patterns and class-specific ones.",
"The former refers to the patterns that are shared among classes, while the latter is dependent on a certain class.",
"Table 1 shows an example to illustrate this definition.",
"The string match of label and text, which is highlighted with red color, indicates a simple matching pattern that can be shared among classes.",
"On the contrary, the words that are highlighted with blue color indicates a matching pattern that is specific to a certain class and cannot be transferred among classes easily.",
"Imagine if the model is trained on sentence 1, it can make a correct prediction on sentence 2 while failing on sentence 3 probably.",
"There are mainly two ways to deal with this troublesome zero-shot learning situation, including (1) integrating more external knowledge to (Table 1 fragment — Label: fear; Sentence: 1. One day, when I realized that I was alone, I felt fear of loneliness.)",
"better describe classes and build more sophisticated connections between classes (Rios and Kavuluru, 2018; Zhang et al., 2019), and (2) integrating unlabeled data to improve generalization performance.",
"Generally, existing works mainly adopt the former solution, while little attention is paid to the latter one.",
"In this paper, we focus on the latter one and propose a self-training based method to leverage unlabeled data.",
"The basic idea of self-training (McClosky et al., 2006; Sagae, 2010) is to select unlabeled instances that are predicted with high confidence and add them into the training set.",
"It is straightforward to consider that if we add sentence 2 to training set, the model is capable of learning class-specific pattern as sentence 2 and sentence 3 share the intra-class similarity.",
"In this way, we can mine class-specific feature through class-invariant feature.",
"However, directly applying traditional self-training method to zero-shot learning may encounter some problems: (1) traditional self-training methods use manually designed heuristics to select data, so manual adjustment of selection strategy is costly (Chen et al., 2018).",
"(2) due to the severe domain shift (Fu et al., 2015), traditional self-training method may not provide reliable selection.",
"To alleviate these problems, we present a reinforcement learning framework to learn data selection policy, which can select unlabeled data automatically and provide more reliable selection.",
"The contributions of our work can be summarized as follows: We propose a self-training based method to leverage unlabeled data in zero-shot text classification.",
"Our method is capable of alleviating the domain shift problem and enabling transferring between classes sharing low similarities and connections.",
"We propose a reinforcement learning framework to learn data selection policy automatically instead of using manually designed heuristics.",
"Experimental results on both benchmarks and a real-world e-commerce dataset show that our method outperforms previous methods by a large margin of 15.4% and 5.4% on average in generalized and non-generalized ZSL, respectively.",
"Zero-shot learning has been widely studied in image classification, in which training classes and testing classes are disjoint (Lampert et al., 2013; Larochelle et al., 2008; Rohrbach et al., 2011).",
"The general idea of zero-shot learning is to transfer knowledge from seen classes to unseen classes (Wang et al., 2019).",
"Most methods focus on learning a matching model between image feature space and class semantic space, such as visual attributes (Lampert et al., 2009), word embeddings of class names (Socher et al., 2013), class hierarchy (Socher et al., 2013).",
"For zero-shot text classification, similar methods have been adopted.",
"(Dauphin et al., 2013) associated text with class label through semantic space, which is learned by deep neural networks trained on large amounts of search engine query log data.",
"(Nam et al., 2016) proposed an approach to embed text and label into joint space while sharing word representations between text and label.",
"(Pushp and Srivastava, 2017) proposed three neural networks to learn the relationship between text and tags, which are trained on a large text corpus.",
"(Rios and Kavuluru, 2018) incorporated word embeddings and hierarchical class structure using GCN (Kipf and Welling, 2016) for multi-label zero-shot medical records classification.",
"(Zhang et al., 2019) proposed a two-phase framework together with data augmentation and feature augmentation, in which four kinds of semantic knowledge (word embeddings, class descriptions, class hierarchy, and knowledge graph) were incorporated.",
"These works benefit from large training corpus and external semantic knowledge, however, none of these works have tried to leverage unlabeled unseen data in zero-shot text classification, namely transductive zero-shot learning (Xian et al., 2018).",
"There exists some work to utilize unlabeled data in image classification to alleviate domain shift problem, including (Fu et al., 2012; Rohrbach et al., 2013; Li et al., 2015; Fu et al., 2015), etc.",
"As far as we know, our work is the first to explore transductive zero-shot learning in text classification.",
"Self-training is a widely used algorithm in semisupervised learning (Triguero et al., 2015).",
"The basic process of self-training is to iteratively select high-confidence data from unlabeled data and add these pseudo-labeled data to training set.",
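The iterative selection process can be sketched generically; `fit` and `predict_proba` are placeholders for an arbitrary classifier, and the toy nearest-neighbour usage is purely illustrative.

```python
def self_train(labeled, unlabeled, fit, predict_proba, threshold=0.9,
               rounds=5):
    """Confidence-based self-training: repeatedly train on the labeled pool,
    pseudo-label unlabeled instances whose predicted confidence exceeds
    `threshold`, and move them into the pool."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = fit(labeled)
        keep = []
        for x in unlabeled:
            label, conf = predict_proba(model, x)
            if conf >= threshold:
                labeled.append((x, label))   # pseudo-labeled instance
            else:
                keep.append(x)
        if len(keep) == len(unlabeled):      # nothing selected: stop early
            break
        unlabeled = keep
    return labeled

# Toy usage: a nearest-labeled-neighbour "model" over numbers.
def fit(data):
    return data

def predict_proba(model, x):
    y, d = min(((lab, abs(x - v)) for v, lab in model), key=lambda t: t[1])
    return y, 1.0 / (1.0 + d)

pool = self_train([(0, "neg"), (10, "pos")], [1, 9, 5], fit, predict_proba,
                  threshold=0.5)
print(pool)   # the two confident instances are absorbed; 5 stays unlabeled
```

The fixed `threshold` is exactly the manually designed heuristic the paper criticizes: the reinforcement learning framework replaces it with a learned selection policy.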
"Self-training has shown its effectiveness for various natural language processing tasks, including text classification (Drury et al., 2011; Van Asch and Daelemans, 2016), name entity recognition (Kozareva et al., 2005), parsing (McClosky et al., 2006, 2008; Huang and Harper, 2009).",
"However, there are two main drawbacks of self-training.",
"Firstly, its data selection strategy is simply confidence-based, which may not provide reliable selection (Chen et al., 2011) and cause error accumulation.",
"Secondly, self-training relies on pre-defined confidence threshold which varies among datasets and manual adjustment is costly.",
"There have been some works applying reinforcement learning to data selection in semi-supervised learning, including active learning (Fang et al., 2017), self-training (Chen et al., 2018), co-training (Wu et al., 2018).",
"These works share a similar framework which uses deep Q-Network (Mnih et al., 2015) to learn a data selection strategy guided by performance change of model.",
"This process is time-consuming as the reward is immediate which means the classifier is retrained and evaluated after each instance is selected.",
"Reinforcement learning has also been applied in relation extraction to alleviate the noisy label problem caused by distant supervision.",
"(Feng et al., 2018; Qin et al., 2018) proposed a policy network to automatically identify wrongly-labeled instances in training set.",
"Earlier, (Fan et al., 2017) proposed an adaptive data selection strategy, enabling to dynamically choose different data at different train-Figure 1: Illustration of the traditional classifier and standard ZSL model.",
"Here we first formalize the zero-shot text classification problem.",
"Let Y s and Y u denote seen and unseen class set respectively, where Y s Y u = , Y s Y u = Y .",
"Suppose there is D s = { ( x si , y si ) } Ni =1 for seen classes and D u = { x ui , y ui } Mi =1 for unseen classes, where x i represents i -th text and y i represents the corresponding label.",
"As shown in Figure 1, ZSL method turns a classification problem into a matching problem between text and class label.",
"During training, we learn a matching model f ( x, y ; ) from seen classes D s and then make predictions on unseen classes: y = arg max y Y f ( x, y ; ) , (1) where refers to the parameter of f .",
"For transductive ZSL, both labeled seen data D s and unlabeled unseen data D u = { x ui } Mi =1 are available during training.",
"To tackle zero-shot text classification, a reinforced self-training framework is developed in this work.",
"Figure 2 shows an overview of our reinforced self-training framework for zero-shot text classification.",
"The goal of our framework is to select high quality data from unseen classes automatically by agent and use these data to augment the performance of the base matching model.",
"Specifically, we first train the base matching model on seen class data and make predictions on unseen class data.",
"To make it more efficient, the agent performs data selection from a subset of unlabeled data instead of all unlabeled data at each iteration.",
"We rank the instances by prediction confidence and take a certain ratio of instances from Figure 2: Overview of our reinforced self-training framework for zero-shot text classification.",
"it at each iteration.",
"The agent is responsible for selecting data from this subset and filter negative instances.",
"The reward is determined by the performance of matching model in validation set.",
"We will introduce the details of our method in the following subsections.",
"Our RL-based data selection framework is model-agnostic, which means any matching model is compatible.",
"Here we adopt the widely recognized pre-trained model BERT (Devlin et al., 2018) as the base matching model.",
"For seen classes, given text x and label y , we generate { ( x, y (cid:48) ) | y (cid:48) Y s } as training instances, in which ( x, y (cid:48) ) is a positive training instance if y (cid:48) = y .",
"We take the text as premise and transform the label into its corresponding hypothesis provided in (Yin et al., 2019).",
"Therefore, the input sequence of BERT is packed as [CLS] x [SEP] hypotheis of y (cid:48) [SEP], where [CLS] and [SEP] are special start and separator tokens, as shown in Figure 3. BERT encoder is composed of multi-layer bidirectional transformers (Vaswani et al., 2017).",
"We use the hidden vector c x,y (cid:48) RH corresponding to [CLS] in the final layer as the aggregate representation.",
"We add a linear layer and compute loss as below: p x,y (cid:48) = ( WT c x,y (cid:48) + b ) , (2) L = (cid:26) log ( p x,y (cid:48) ) y (cid:48) = y log (1 p x,y (cid:48) ) y (cid:48) (cid:54) = y , (3) where W and b are parameters of the linear layer, W RH , b R , H is the hidden dimension size, and p x,y (cid:48) indicates the matching score between x and y (cid:48) , ( ) is sigmoid function.",
"The conventional self-training method simply selects data predicted with high confidence, which",
"is confidence-based.",
"We formalize the data selection as a sequential decision-making process and introduce a RL framework to combine confidence-based strategy and performance-driven strategy.",
"We describe the whole process in Algorithm 1 .",
"The details of the RL modules are described below.",
"For each text x , we get prediction scores { p x,y (cid:48) | y (cid:48) Y u } .",
"The label y with maximum matching score is considered as the pseudo label.",
"For time step t , the current state s t consists of 2 parts: the prediction confidence p x,y , the representation of arriving instance c x,y .",
"We take the hidden vector corresponding to [CLS] as the representation of current instance ( x, y ) .",
"The policy network takes p x,y and c x,y as input and outputs the probability whether to select or not.",
"At each step, the agent is required to take action for the current instance ( x, y ) whether to select it or not.",
"At time step t , a t = 1 means the agent accepts the current instance and adds it to training set; a t = 0 means rejection.",
"The action value is obtained through sampling from the policy net-work's output P ( a | s t ) .",
"If wrongly-labeled instances are added into training set, it will degrade the performance of the matching model.",
"Therefore the function of reward is to guide the agent to select the instances that are consistent with training set.",
"The reward is determined by the performance of the matching model on validation set, which consists of 2 parts: seen validation set D sdev and unseen validation set D udev .",
"D udev comes from the pseudo labeled data, which guides newly-selected data to be consistent with previously-selected data.",
"More specifically, after each batch of selection, we train the matching model using the selected instances, and evaluate on validation set.",
"We use macro-F1 as the evaluation metric.",
"Assume there are N 3 batches in one episode, we get two F sequences F s = { F s 1 , F s 2 , ..., F sN 3 } for seen validation set and F u = { F u 1 , F u 2 , ..., F uN 3 } for unseen validation set.",
"For batch k , the reward is formulated as: r k = ( F sk s ) s + ( F uk u ) u , (4) where controls the weight of seen class and unseen class, and represent the mean and standard deviation of F , respectively.",
"We adopt a multi-layer perceptron (MLP) as the policy network.",
"The policy network receives states: the prediction confidence p x,y and the representation of arriving instance c x,y , then output the probability for each action.",
"To learn an optimal data selection policy, we aim to maximize the expected total reward, which can be formulated as:",
"where R ( s, a ) is the state-action value function and is the parameter of policy network.",
"We update the via policy gradient (Sutton et al., 2000), + J ( ) , (8) where is the discount learning rate.",
"For a batch B k , we sample an action a t for each state s t according to policy P ( a | s ) .",
"After one episode , we compute rewards { r k } N 3 k =1 by Equation 4. The gradient can be approximated by J ( ) = r k | B k | | B k | (cid:88) t =1 logP ( a t | s t ) , (9) where | B k | is the number of instances in one batch, r k is the reward of batch B k , the parameter of policy network is updated after each episode.",
"Algorithm 1 Reinforced self-training for zero-shot text classification Require: labeled seen data D s = { ( x si , y si ) } Ni =1 , unlabeled unseen data D u = { ( x ui ) } Mi =1 , seen validation set D sdev .",
"We use two kinds of datasets for our experiments.",
"The first comes from the recently released benchmarks for zero-shot text classification (Yin et al., 2019), including 3 datasets: topic, emotion and situation classification.",
"Considering that some texts in situation dataset has multiple labels, we remove texts with multiple labels and keep single-label texts.",
"To keep consistent with Equation 1, none type is not included in unseen classes.",
"Datasets are prepared with two versions of partitions with non-Seen class Unseen class #Train #Valid #Test Topic I 650000 5000 50000 II 650000 5000 50000 Emotion I 20465 2405 5101 II 14204 1419 8901 Situation I 2428 240 689 II 1747 173 1102 E-commerce I 9000 1000 5000 II 9000 1000 5000 Table 2: Statistics of text classification Datasets, where I and II refer to two ways of partitions respectively described in (Yin et al., 2019).",
"overlapping labels so as to get rid of the models over-fitting on one of them.",
"To further evaluate our method in real-world scenario, we construct a new dataset from e-commerce platform, where texts consist of user search queries.",
"For seen classes Y s , it consists of the categories of product that users click on after searching.",
"For unseen classes Y u , it consists of the pre-defined user preference classes.",
"User preference refers to the product's attribute that users prefer, such as the efficacy of cosmetic products, the style of furniture.",
"The user preference and product category are disjoint so it can be formalized as a zero-shot learning problem.",
"We annotate 10-class user preference dataset for evaluation and there is 1000 instances for each class.",
"Following (Yin et al., 2019), we created two versions of unseen classes each with 5 classes that do not overlap.",
"The statistics of datasets are shown in Table 2. 4.2 Implementation Details We use the BERT-Base (Devlin et al., 2018) as our base matching model, with 12-layer transformer blocks, 768-dimension hidden state, 12 attention heads and total 110M parameters.",
"We use the pre-trained BERT-Base-Uncased for the English benchmarks and BERT-Base-Chinese for e-commerce dataset.",
"For training stage, we use Adam (Kingma and Ba, 2014) for fine-tuning with 1 as 0.9, 2 as 0.999.",
"The max sequence length of BERT input is set to 64.",
"For other hyper-parameters, we set learning rate as 5e-5, ratio = size () /M as 0.2, iteration number N 1 as 5 and episode number N 2 as 20.",
"We select weight https://storage.googleapis.com/bert models/2018 10 18 /uncased L-12 H-768 A-12.zip https://storage.googleapis.com/bert models/2018 11 03 /chinese L-12 H-768 A-12.zip among { 1 , 2 , 5 , 10 } .",
"For baselines, we adopt 300-dim GloVe vectors (Pennington et al., 2014) for English words and 300-dim word vectors from (Li et al., 2018) for Chinese words.",
"Policy network pre-train is widely used by reinforcement learning based methods to accelerate the training of RL agent (Silver et al., 2016; Xiong et al., 2017; Qin et al., 2018).",
"We use seen class data to pre-train the agent, enabling the agent to distinguish negative instances.",
"We set early stop criteria to avoid overfitting to seen class data.",
"We compare our method with the following baselines: (1) Word2vec measures how well a label matches the text by computing cosine similarity of their representations.",
"Both the representations of text and labels are average of word embeddings.",
"(2) Label similarity (Veeranna et al.) uses word embeddings to compute semantic similarity as well, which computes the cosine similarity between class label and every n-gram (n=1,2,3) of the text, and takes the max similarity as final matching score; (3) FC and RNN+FC refers to the architecture 1 and architecture 2 proposed in (Pushp and Srivastava, 2017).",
"We also compare multiple variants of our models: (1) BERT refers to the base matching model without self-training and RL; (2) BERT+self-training refers to the traditional self-training method, which selects instances with high confidence.",
"However, confidence threshold has great impact on performance.",
"With different thresholds, the number of selected instances differs, resulting in performance change of the model.",
"To provide a fair comparison, we record the number of instances k selected in every iteration in RL selection process.",
"For self-training, we select top k instances for every iteration.",
"(3) BERT+RL refers to full model of our methods.",
"We use macro-F1 as evaluation metric in our experiments since datasets are not well balanced.",
"We report the results in two ZSL setting: generalized and non-generalized.",
"In non-generalized ZSL, at test time we aim to assign an instance to unseen class label ( Y u ).",
"While in generalized ZSL, class label comes from both unseen and seen classes ( Y s Y u ).",
"The harsh policy in testing (Yin et al., 2019) is not adopted in our experiments.",
"Table 3 shows the experimental results on benchmarks and real-world e-commerce dataset in generalized setting.",
"For baseline methods, Word2vec and Label similarity are unsupervised approaches, which cannot get desirable results as the effectiveness of these methods heavily rely on the similarity of text and label.",
"Therefore, it may not perform well on dataset like emotion detection.",
"Label similarity performs slightly better than Word2vec, which proves that max aggregation of n-grams is better than mean aggregation in Word2vec method.",
"As for the supervised FC and RNN+FC method, FC gets slightly better results than RNN+FC in most datasets.",
"As the number of categories and the scale of training dataset are small, RNN+FC may overfit on seen class data and cannot generalize well on unseen class data.",
"For variants of our method, we can observe that the full model BERT+RL outperforms all other baselines.",
"On average, BERT+RL achieves an improvement of 15.4% over BERT.",
"To be specific, the base matching model BERT performs better than previous baselines, which shows good generalization results benefiting from pre-training on large-scale corpus.",
"For BERT+self-training, the integration of unlabeled data augments the base matching model and shows superior performance than BERT.",
"Last but not least, our full model BERT+RL shows substantial improvement over BERT+self-training in most datasets.",
"Under the condition that the number of selected instances remains the same, reinforced selection strategy can still yield better performance than the simply confidence-based strategy, which proves the effectiveness of our RL policy.",
"For non-generalized ZSL setting, we can get similar results as presented in Table 4. On average, BERT+RL achieves an improvement of 5.4% over BERT.",
"However, we notice that the improvement is more significant in generalized ZSL compared to non-generalized ZSL.",
"The reason is that model trained on seen class data tends to bias towards seen classes, resulting in poor performance in generalized setting (Song et al., 2018).",
"Our approach, however, could relieve the bias in favour of seen classes by incorporating pseudo-labeled unseen class data.",
"When selecting the same number of instances per iteration, previous experimental results show our reinforced selection strategy can yield better performance than the greedy strategy.",
"We define (cid:15) as the ratio of selected instances size to all unlabeled instances size.",
"In this section, we vary the selection ratio (cid:15) among { 0 .",
"2 , 0 .",
"4 , 0 .",
"6 , 0 .",
"8 , 1 .",
"0 } for self-training method.",
"For each iteration, we select top (cid:15) N 1 M instances and add them into training set.",
"Figure 4 shows the performances with different selection ratios in generalized ZSL setting.",
"Clearly, the performance of self-training method varies with different ratio of instances selected.",
"The optimal ratio of selection instances also varies with different datasets.",
"However, our reinforced data selection strategy does not rely on manually-set ratio and can yield consistently better performance than the self-training method in most cases.",
"BERT+RL method.",
"In the left part of the table, texts predicted by BERT with highest confidence are listed.",
"We can easily find that these texts share a simple matching pattern that label words appear in the text, which is highlighted with red color.",
"These simple patterns are exactly class-invariant patterns we defined previously, which can be shared among classes.",
"In the right part of the table, we select the texts which are misclassified by BERT but are predicted correctly by BERT+RL.",
"We can observe that those texts are harder to be distinguished since these matching patterns are more class-dependent, which cannot be directly transferred from other classes.",
"There is no doubt that model trained on other classes would fail in such cases.",
"For our method, we first tackle the easy instances, then add these instances into training set iteratively.",
"With the integration of instances with easy pattern, the model can learn harder pattern gradually.",
"In this way, our method can learn to transfer between classes even with low similarity.",
"In this paper, we propose a reinforced self-training framework for zero-shot text classification.",
"To realize the transferring between classes with low similarity, our method essentially turns a zero-shot learning problem into a semi-supervised learning problem.",
"In this way, our approach could leverage unlabeled data and alleviate the domain shift between seen classes and unseen classes.",
"Beyond that, we use reinforcement learning to learn data selection policy automatically, thus obviating the need to manual adjustment.",
"Experimental results on both benchmarks and real-world e-commerce dataset demonstrate the effectiveness of the integration of unlabeled data and the reinforced data selection policy.",
"This work is funded by NSFC U19B2027/91-846204/61473260, national key research program 2018YFB1402800, and supported by Alibaba-ZJU Frontier Technology Research Center."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"method",
"abstain",
"other"
] |
[
"Leveraging persona information of users in Neural Response Generators (NRG) to perform personalized conversations has been considered as an attractive and important topic in the research of conversational agents over the past few years.",
"Despite of the promising progress achieved by recent studies in this field, persona information tends to be incorporated into neural networks in the form of user embeddings, with the expectation that the persona can be involved via End-to-End learning.",
"This paper proposes to adopt the personality-related characteristics of human conversations into variational response generators, by designing a specific conditional variational autoencoder based deep model with two new regularization terms employed to the loss function, so as to guide the optimization towards the direction of generating both persona-aware and relevant responses.",
"Besides, to reasonably evaluate the performances of various persona modeling approaches, this paper further presents three direct persona-oriented metrics from different perspectives.",
"The experimental results have shown that our proposed methodology can notably improve the performance of persona-aware response generation, and the metrics are reasonable to evaluate the results.",
"As an essential research topic in generative conversational agents (a.k.a., chat-bots), Persona Modeling is of great importance for such deep neural network based intelligent interactive sys-tems (Li et al., 2016b; Kottur et al., 2017; Wang et al., 2017).",
"Apparently, user-personality-dependent responses provided by a chat-bot are able to significantly improve the consistency of its conversations, meanwhile, it is possible for users * Contribution during the internship at Tencent.",
"to flexibly customize the persona of a chat-bot based on some existent dialogues.",
"As for the studies on this topic, with no doubt, incorporating persona factors into End-to-End generative models is an attractive topic with great challenges.",
"The current studies mainly focus on adopting the explicit meta-data of user profiles (Qian et al., 2018; Chu et al., 2018) or character descriptions (Zhang et al., 2018; Mazare et al., 2018; Song et al., 2019) to generate persona-aware responses.",
"However, on one hand, user profiles are usually highly privacy-related and thus it is diffi-cult to obtain such information from users practically.",
"On the other hand, little correlation can be explicitly observed between such meta-data profiles and persona characteristics of users.",
"Especially, those character descriptions, tailor-made for the persona-aware response generation with the great cost of manual work, are only a variant of user profile innately in terms of different natural language forms.",
"One of the reasonable and practically executable methodologies for introducing persona factors into conversation models is to adopt the real-valued user representation as a medium (Li et al., 2016b; Kottur et al., 2017; Liu et al., 2018; Al-Rfou et al., 2016).",
"In particular, such user representations can be derived from users' historical dialog utterances with rich linguistic and personality information involved.",
"Taking persona representations as the guidance for generating customized responses becomes a widely accepted methodology due to the recent development of deep latent variable models (Zhao et al., 2017; Shen et al., 2017; Zhou and Wang, 2018).",
"However, for current models, without the explicit learning objectives or constraints, the user representation is adopted in a passive way to reduce the model loss and KL divergence via end-to-end learning.",
"In this case, it is highly possible that the Figure 1: The architecture of the Persona-Aware Variational Response Generator (PAGenerator) described in this paper.",
"Consequently, it is necessary to employ explicit guidance to help variational response generators sense persona.",
"From observations upon persona-contained dialogs, there exist intuitive characteristics for directing the optimization of the persona-aware variational response generation.",
"Obviously, for a given user, the appropriately modeled and leveraged persona information can help to generate hidden variables semantically relevant with corresponding responses.",
"Besides, since users may have their own linguistic style, the adoption of personal information in NRG aims to have direct influence on the degree of linguistic (e.g. lexical and syntactic) convergence for a specific user.",
"This paper aims at exploring the explicit guidance to help the variational response generator exploit persona information hidden in the nonstructured contents produced by the users, by utilizing intuitive characteristics of personalized conversations for model training.",
"The contributions of this paper can be summarized as follows: A persona-aware variational response generator is proposed to exploit persona while modeling the conversations.",
"Based on the model, two regularization terms are presented to guide the model in encoding user information into the latent variables and converging to user-specific responses.",
"Three discriminative metrics are further introduced to evaluate the capabilities of persona-aware response generators.",
"Based on the current progress on the development of latent variable models, we propose a persona-aware variational response generator to automatically exploit persona from conversations, and utilize such personal information to model the future conversation.",
"Besides, given that personal information can be exploited as optimization guidance to better modeling persona, we further introduce two regularization terms to guide the model learning.",
"In the following section, we first describe the general structure of PAGenerator, and then explain the two additional regularization terms.",
"Utilizing latent variables in response generation has become a widely accepted methodology in NRG due to their Bayesian essence.",
"It helps to deal with external knowledge efficiently, e.g. Persona.",
"Therefore, our proposed model is built based on the generation model with latent variables.",
"The overall architecture of the single turn persona-aware variational response generator proposed in this paper is illustrated in Figure 1.",
"Let q, r, u stand for the query, the reply and the corresponding user of r , respectively, and e u stands for the embedding of user u .",
"A bidirectional LSTM is first employed to encode the query and reply into fixed size vectors h q and h r .",
"After that, the prior network (parametrized by ) takes u e , h q as inputs to generate the distribution p ( z | q, u ) of latent variable z .",
"Meanwhile, h q , h r are fed into a posterior network (parameterized by ) to compute q ( z | q, r ) .",
"As we adopt the assumption that z follows isotropic Gaussian distribution, p ( z | q, u ) and q ( z | q, r ) are also normally distributed, such that: p ( z | q, u ) N ( p , 2 p I ) q ( z | q, r ) N ( q , 2 q I ) (1) where the means and variances are computed as follows: (cid:20) p log( 2 p ) (cid:21) = W p (cid:20) qu (cid:21) + b p (2) (cid:20) q log( 2 q ) (cid:21) = W q (cid:20) qr (cid:21) + b q (3) where W p , W q , b p and b q are the trainable parameters.",
"A sample of z using the reparametrization trick (Kingma and Welling, 2013) is then fed into the decoder as a part of input at each time step.",
"In addition, the bag-of-word (BOW) loss (Zhao et al., 2017) is employed to tackle the latent variable vanishing problem, and PAGenerator is trained to maximize the variational lower-bound (Chung et al., 2015; Serban et al., 2017): L ( , ; q, r,u ) = E q ( z | q,r ) [log p ( r | z, q, u )] KL ( q ( z | q, r ) k p ( z | q, u )) +E q ( z | q,r ) [log p ( r bow | z, q, u )] (4) 2.2 User Information Enhancing Regularization Ideally, we expect that the introduction of user embedding is fully utilized during model training.",
"However, due to the KL vanishing problem, the training of PAGenerator suffers from the hazard that the rapid decrease of L in Equation 4 might be attributed to the strong fitting capability of the decoder on the training data, rather than the involvement of user embedding.",
"Thus, we introduce a regularization term to promote the usage of user's hidden information in latent variables.",
"At the beginning, as illustrated in Figure 1, a general unk u is introduced to represent the case for user unspecified.",
"Subsequently, taking the default user embedding e unk u as input, we obtain the KL divergence as KL ( q ( z | q, r ) k p ( z | q, unk u )) from the network.",
"In this case, once the real user u is introduced, a regularization term R 1 ( , ; q, r, u ) can be constructed as follows: R 1 ( , ; q, r, u ) = max( 1 , KL ( q ( z | q, r ) k p ( z | q, u )) KL ( q ( z | q, r ) k p ( z | q, unk u ))) (5) where 1 R , 1 > 0 , and p ( z | q, unk u ) N ( p , 2 p I ) .",
"It should be noted that, according to the equation above, the two prior distributions are generated from the same network with partially different inputs ( u VS. unk u ), and the regularization constrains the prior distribution with specified user to be closer to the posterior distribution.",
"Thus, the optimization encourages the utilization of user information and correspondingly inhibits the generated results from ignoring the user information.",
"Meanwhile, R 1 in our proposed model also alleviates the KL vanishing problem.",
"The BOW loss forces the latent variables to predict the bag-of-words in the response.",
"Therefore, the semantic distribution of z is required to be capable of representing the topics and wording of the target response.",
"Besides, for a given query, the possible replies from a specific user should be more convergent to each other than those from an unknown user, due to each user's unique preference on the topics and wording.",
"Correspondingly, under the assumption that the distribution of z represents the user's language preference, the specifi-cation of user information is expected to reduce the entropy of the isotropic Gaussian distribution of z , reflected by a lower standard deviation p .",
"On this basis, we introduce another regularization term R 2 ( , ; q, r, u ) to control the variance: R 2 ( , ; q, r, u ) = max( 2 , 2 p 2 p ) (6) where 2 R and 2 > 0 .",
"R 2 prefers those z with decrease 2 in standard deviation p after specifying users, and such decrease indicates the latent variables are more semantically convergent.",
"On this basis, we update the new training objective of PAGenerator as follows: L ( , ; q, r, u ) = L ( , ; q, r, u ) R 1 ( , ; q, r, u ) R 2 ( , ; q, r, u ) (7) By employing the two regularization terms to constrain the model training, L ( , ; q, r, u ) now also pays attention to the utilization of user information and language preference.",
"In the previous section, two regularization terms are proposed to guide the model in the persona exploration.",
"However, we still lack effective persona-focused metrics to quantify how well one model is on learning persona.",
"The currently applied metrics for persona-aware NRG evaluation, such as perplexity and BLEU, are used to evaluate the plain NRG models (Li et al., 2016b; Kottur et al., 2017).",
"Apparently, such metrics are inadequate to evaluate the capacity of a response generator on capturing persona.",
"Innately, an effective persona-aware response generator should be able to successfully identify and generate responses for users according to their language styles.",
"Besides, the generated responses from different users should be diversified to each other in wording.",
"Considering these properties, we propose the following metrics to measure the level of persona-aware in response generators.",
"It is important for a persona-aware response generator to identify a user's response from other user-irrelevant ones, by detecting the user's language style in responses.",
"In this subsection, we propose User-Relative-Rank ( uRank ) to measure such capability.",
"Given a query-response-user triple {q, r, u}, a pre-trained seq2seq model S2S and a model M to be evaluated, we first generate n user-irrelevant responses {r_i | i ∈ [1, n]} from S2S using beam search.",
"A desired persona-aware model M is expected to assign the ground-truth response r a higher probability than the user-irrelevant ones {r_i | i ∈ [1, n]}.",
"Thus, taking S2S as reference, we set uRank to 1 if M gives r a better ranking position among the r_i than S2S does. Specifically:
rank_M = |{i | P_M(r_i) > P_M(r)}|
rank_S2S = |{i | P_S2S(r_i) > P_S2S(r)}|
uRank = 1 if rank_M < rank_S2S, 0 otherwise (8)
where P_M(r) and P_S2S(r) are the probabilities of {q, r, u} given by M and S2S respectively, |X| denotes the cardinality of a set X, and a lower rank_M or rank_S2S indicates a better ranking position.",
"Overall, for model M , its average uRank for different queries denotes the rate of rank-promoted ground-truth replies.",
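A minimal sketch of the uRank computation in Eq. (8), assuming the per-response probabilities from the evaluated model and the reference seq2seq model are already available; function and argument names are ours, not the paper's.

```python
def urank(p_model_r, p_model_irrel, p_s2s_r, p_s2s_irrel):
    """Return 1 if the evaluated model ranks the ground-truth response
    higher (fewer irrelevant responses outscore it) than the reference
    seq2seq model does, else 0 (Eq. 8)."""
    rank_m = sum(1 for p in p_model_irrel if p > p_model_r)
    rank_s2s = sum(1 for p in p_s2s_irrel if p > p_s2s_r)
    return 1 if rank_m < rank_s2s else 0
```

Averaging this 0/1 value over all test queries gives the rate of rank-promoted ground-truth replies described above.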
"Apart from perceiving users' language styles, an effective persona-aware model should also be able to imitate language styles by generating responses satisfying users' language behaviors.",
"User-Language-Perplexity ( uPPL ) is proposed to measure this property.",
"Given a user u i , to conduct such metric, a statistical language model LM i is first trained using the user's utterances.",
"After that, for a generated response r , its corresponding uPPL is defined as the perplexity of r given by LM i .",
"uPPL quantifies the power of a persona-aware model on generating responses similar to users' history utterances.",
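As a rough illustration of uPPL, the sketch below trains a bigram language model on a user's utterances and scores a generated response by its perplexity. The add-one smoothing and whitespace-free token lists are simplifying assumptions; the paper does not specify its smoothing scheme.

```python
import math
from collections import Counter

def train_bigram_lm(sentences):
    """Train a bigram LM (add-one smoothing) on a user's utterances,
    given as lists of tokens; <s>/</s> mark sentence boundaries."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in sentences:
        toks = ["<s>"] + sent + ["</s>"]
        vocab.update(toks)
        unigrams.update(toks[:-1])          # contexts
        bigrams.update(zip(toks[:-1], toks[1:]))
    return unigrams, bigrams, len(vocab)

def uppl(lm, response):
    """Perplexity of a generated response under the user's bigram LM."""
    unigrams, bigrams, v = lm
    toks = ["<s>"] + response + ["</s>"]
    log_prob = 0.0
    for prev, cur in zip(toks[:-1], toks[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + v)
        log_prob += math.log(p)
    return math.exp(-log_prob / (len(toks) - 1))
```

A response that reuses the user's habitual wording receives a lower perplexity than one that does not, which is exactly the property uPPL rewards.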
"Last but not least, due to the introduction of user information, we expect that, given a query, the responses generated for different users by a persona-aware model should also be diversified.",
"Therefore, Users-Distinct ( uDistinct ) is proposed in this paper to capture such property.",
"Given a query q_i and m different users {u_j | j ∈ [1, m]}, we generate a different response r_j for each user using M.",
"On this basis, Distinct-1 and Distinct-2 (Li et al., 2016a) of the response set {r_j | j ∈ [1, m]} are utilized to measure the in-group diversity of responses generated by M across users.",
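Distinct-1/Distinct-2 over the per-user response group can be sketched as follows; the ratio of unique to total n-grams follows Li et al. (2016a), while the function names are illustrative.

```python
def distinct_n(responses, n):
    """Distinct-n over the pooled responses generated for m users:
    ratio of unique n-grams to total n-grams (Li et al., 2016a)."""
    ngrams = []
    for toks in responses:
        ngrams.extend(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

def udistinct(responses):
    """uDistinct: Distinct-1 and Distinct-2 of the m responses
    generated for the same query but different users."""
    return distinct_n(responses, 1), distinct_n(responses, 2)
```

If a model emits the same reply for every user, uDistinct collapses toward 1/m, which is how the metric exposes ignored user information.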
"Li et al. (2016b) also compare models through case studies from a similar perspective.",
"To evaluate the performance of our proposed method, we implement experiments on a Chinese Social Networking Service (SNS) corpus and the Cornell Movie Dialogues corpus (Danescu-Niculescu-Mizil and Lee, 2011).",
"The Chinese SNS corpus is crawled from the Chinese social networking service Douban,¹ containing in total 1,022,592 single-turn dialogues from 12,857 users, while the Cornell Movie Dialogues corpus consists of conversations from movie scripts.",
"By cleaning up the Cornell corpus with the open-source script, 2 we obtain 109,952 single-turn dialogues from 9,035 movie characters.",
"The training/test ratios for the two corpora are around 200:1 and 50:1, respectively.",
"Besides, for the Douban corpus, the mean, maximum, minimum, and standard deviation of the number of utterances per user are 80, 1190, 33, and 49, respectively.",
"For the Cornell corpus, the corresponding statistics are 14, 237, 4, and 22.",
"¹ https://www.douban.com/group ² https://github.com/suriyadeepan/datasets/",
"There are two main differences between the two datasets: 1) The scenes of conversations are different.",
"The dialogues in Douban are crawled from an open domain social media.",
"By contrast, since the characters in the Cornell movie corpus are assigned fixed personas, the language styles and habits of users are more templatized.",
"Besides, the language style in Cornell is more oral-like, with many personal pronouns.",
"2) The average number of utterances for each user of the Douban corpus is around 10 times more than that of Cornell.",
"fact bias: S2SA with fact bias for persona modeling (Michel and Neubig, 2018).",
"fact bias was originally proposed for NMT; it models user information as an additional bias vector learned through a factored model in the softmax layer.",
"Speaker Model: the framework proposed by Li et al. (2016b).",
"This model is similar to S2SA + fact bias, except that the user information is added as a part of decoder input rather than bias in the softmax layer.",
"VAE: standard Variational AutoEncoder for response generation (Serban et al., 2017).",
"In our experiment, we replace the utterance with the query only and apply the auxiliary BOW loss (Zhao et al., 2017) in training.",
"CVAE: Conditional Variational AutoEncoder with user information as prior knowledge for modeling persona (Zhao et al., 2017).",
"Similar to VAE, bag-of-words loss is applied in CVAE.",
"For a fair comparison, we use the same configuration for all models.",
"The sizes of the word embedding and user embedding are set to 300 and 128, respectively.",
"All user embeddings, including that of the unknown user, are initialized randomly and trained during optimization.",
"We employ a bi-directional LSTM with hidden size 256 for encoding, and an LSTM with hidden size 512 for decoding.",
"For latent models, the dimension of z is set as 128.",
"All models are optimized using Adam (Kingma and Ba, 2014) with a learning rate of 2e-4 and a batch size of 128.",
"For latent models, we also use KL annealing (Bowman et al., 2016) (400,000 batches for the Douban corpus and 100,000 batches for the Cornell Movie corpus) to achieve better performance.",
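A linear ramp is one common instantiation of KL annealing; the text does not state the exact shape of the schedule, so the linear form below is an assumption.

```python
def kl_weight(step, total_steps):
    """Linear KL annealing sketch: ramp the KL weight from 0 to 1 over
    the annealing window (400k batches for Douban, 100k for Cornell),
    then hold it at 1. Bowman et al. (2016) also describe a sigmoid
    variant; linear is chosen here for simplicity."""
    return min(1.0, step / total_steps)
```

During training, the KL term of the ELBO would be multiplied by this weight so that the decoder learns to use z before the KL penalty takes full effect.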
"To thoroughly evaluate our systems, both standard and persona-focused metrics are employed in our experiments.",
"For standard metrics, we adopt unigram BLEU (BLEU-1) (Papineni et al., 2002) and Word Embedding metrics (Liu et al., 2016) including Embedding Average (Average), Vector Extrema (Extrema) and Greedy Matching (Greedy) to evaluate the semantics of generated responses with regards to ground truths.",
"We use pre-trained word embeddings from Song et al. (2018) for the Douban corpus and from Pennington et al. (2014) for the Cornell movie corpus.",
"The three proposed metrics ( uRank , uPPL and uDistinct ) are adopted to measure the performance of capturing persona.",
"For uPPL , we use a bi-gram language model for perplexity computation.",
"Since the effectiveness of uPPL relies on the quality of the constructed user language models, we pretrain the statistical language model (SLM) on the whole training data and afterwards finetune it on each user's utterances.",
"Besides, we drop users with fewer than 100 utterances in Douban and fewer than 30 in Cornell.",
"The value of uRank , which depends on the rankings of predicted probabilities of responses, is not stable for latent models due to the randomness on sampling z .",
"Therefore, uRank for each latent model is computed by running 10 rounds, so that we obtain 10 ranking results and their corresponding uRank .",
"Then we average the obtained 10 uRank as the final uRank for each latent enhanced model.",
"The later experimental results show that uRank for any latent model varies only slightly, by around 0.005, across rounds.",
"For further comparisons, we also use the crowd-sourcing labeling resources of our organization to manually evaluate the relevance and the persona of generated responses.",
"Since the degree of persona reflected in a response is even more difficult for humans to judge, we simplify the annotation into a yes-or-no task: annotators are only asked to decide whether the response reflects the persona of the given user.",
"Before that, the annotators have to read all the utterances of each user to learn the persona for judging.",
"Moreover, in practice, we limit the number of each user's sample utterances to 100.",
"However, the judgment is inevitably much more subjective.",
"Thus, for each sample, we recruit 11 annotators to label and make the final determination by voting.",
"The evaluation of relevance is relatively easy.",
"For the evaluation of relevance, each query-response pair is cross-evaluated by 3 annotators, following the labeling criterion used in (Xing et al., 2017; Wang et al., 2018).",
"The details of data sampling and labeling are given in the Supplementary Material .",
"We first report the performance on the Douban corpus.",
"The results of the automatic evaluation metrics are shown in Table 1; numbers in bold indicate that the improvement on that metric is statistically significant over the other methods (p-value < 0.01).",
"It is observed that the BLEU-1 scores of various models are relatively low and close to each other.",
"We attribute this to the fact that the semantics of possible responses for one query are highly diversified in terms of speaking styles and topics; apart from high-frequency words, only a small portion of words may be shared among the responses (Mou et al., 2016; Liu et al., 2016).",
"However, user enhanced models achieve higher BLEU-1 scores due to their capability in considering the preference of a user.",
"Furthermore, by comparing the performances on embedding metrics, we find that all models obtain decent scores, but none of the models outperform the others significantly.",
"Such phenomena can also be observed in previous studies (Serban et al., 2017; Wang et al., 2019), since all the models generate responses semantically similar to the ground truths.",
"Despite this, PAGenerator achieves the highest score on average, which suggests the responses generated by PAGenerator are more semantically relevant to the ground truths.",
"While all models perform more or less the same on standard metrics, their experimental results on persona metrics are quite different.",
"All persona-aware NRG models outperform S2SA and VAE, which contain no user information, on uRank, while the two variational models with user information significantly exceed the remaining models.",
"This shows that persona-aware response generators, especially those exploiting user embeddings to generate latent variables, are more sensitive in identifying users' language styles.",
"Among all models with user modeling, our proposed PAGenerator achieves the highest uRank .",
"The replies given by the three models employing user embeddings are also more consistent with users' language styles, which indicates that user embeddings are useful for learning language style automatically in an end-to-end NRG model.",
"By contrast, since S2SA with fact bias focuses on learning a user's bias over unigrams only, it struggles on uPPL, which scores from a bigram perspective.",
"Moreover, comparing the performance of CVAE to the Speaker Model, it appears that utilizing latent variables in the standard way cannot further improve uPPL.",
"By contrast, the two new regularizations proposed for persona modeling help PAGenerator generate replies with more specific persona; its uPPL is reduced by 21.2 points compared to CVAE.",
"As mentioned in previous sections, uDistinct measures the diversity of the generated responses between different users.",
"In general, latent models achieve higher uDistinct than non-latent ones due to the randomness introduced by the latent variables.",
"Within latent models, the adoption of user information in CVAE only slightly improves its uDistinct compared to VAE without user specification.",
"This indicates that user embeddings are ineffectively utilized in CVAE, which motivates us to propose new methods for the variational response generator.",
"The notable improvement in uDistinct verifies the effectiveness of the proposed regularizations in exploiting persona.",
"Cases in the Supplementary Material further demonstrate these improvements.",
"Besides, the comparison among baseline models is consistent with the experiments in previous studies (Li et al., 2016b; Zhou and Wang, 2018), which indicates the proposed metrics are apposite for evaluating the capability of NRG models on capturing persona.",
"To further evaluate the quality of generated responses from each model more subjectively, we also implement human labeling.",
"As shown in Table 2, adjusting unigram distributions for users by fact bias reduces the quality of generated responses.",
"By contrast, all other models produce more high-quality replies compared with S2SA.",
"Moreover, responses from PAGenerator achieve the best human evaluation result, which indicates that PAGenerator's improvement in persona capturing does not reduce relevance.",
"Table 1: Evaluation results on the Douban corpus.
Methods | BLEU | Average | Extreme | Greedy | uRank | uPPL | uDist-1 | uDist-2
S2SA (Sordoni et al., 2015) | 0.29 | 0.834 | 0.615 | 0.666 | 0 | 200.4 | 0.115 | 0.113
fact bias (Michel and Neubig, 2018) | 0.29 | 0.840 | 0.618 | 0.671 | 0.022 | 202.3 | 0.091 | 0.101
Speaker Model (Liu et al., 2016) | 0.31 | 0.837 | 0.621 | 0.674 | 0.023 | 163.6 | 0.183 | 0.199
VAE (Serban et al., 2017) | 0.30 | 0.830 | 0.609 | 0.659 | 0.017 | 225.9 | 0.367 | 0.467
CVAE (Zhao et al., 2017) | 0.31 | 0.836 | 0.616 | 0.668 | 0.039 | 174.5 | 0.377 | 0.486
PAGenerator | 0.31 | 0.845 | 0.622 | 0.670 | 0.044 | 153.3 | 0.406 | 0.524",
"Meanwhile, in the last column, the trend of evaluated results on persona is almost consistent with that given by the proposed automatic evaluation metrics.",
"The PAGenerator outperforms other models, and some particular parts of replies generated by persona-aware models can reflect the personality.",
"Besides, due to the randomness, some responses given by S2SA and VAE are also labeled as persona-aware.",
"However, S2SA generates fewer high-quality responses than VAE, and thus its persona proportion is even lower.",
"As shown in Table 3, the overall trend of the experimental results on Cornell corpus is consistent with that on Douban corpus.",
"The models aware of the specified user slightly outperform the others on BLEU and embedding metrics.",
"With regard to persona metrics, the experimental results on the Cornell corpus show two main differences:",
"a) The Speaker Model does not perform as well on user language style detection and generation, mainly because there is less training data per user than in the Douban corpus.",
"It is hard to automatically model the informative user embedding via target oriented learning without guidance.",
"By contrast, utilizing the KL divergence as the guidance in CVAE effectively improves the experimental results.",
"b) Due to the individual characteristics of movie characters, the user-embedding-enhanced models, especially PAGenerator, generate more diverse responses for different users.",
"As shown in Table 5, on the English dataset, the comparison results are almost consistent with that in Section 5.2.",
"According to the judgment of annotators, our proposed model outperforms the others from both the relevance and persona perspectives.",
"However, influenced by insufficient training conversations, the overall quality of generated responses for the Cornell queries is not as good as the ones given for the Douban corpus.",
"We attribute this to the differences in corpus size and word distribution described in Section 4.1.",
"In particular, the quality on Cornell suffers from insufficient training conversations.",
"By contrast, the persona is reflected more obviously, helped by the more templatized language styles and habits in Cornell.",
"To get a better intuition about how our proposed method works, we implement the ablation tests to analyze the contribution of each component of PAGenerator in persona exploitation.",
"As illustrated in Table 4, adding the user embeddings as a part of decoder inputs brings positive improvements on all the persona-focused metrics.",
"Without UE, the parameter size of PAGenerator is reduced considerably, which harms the model's ability to fit the target data.",
"Besides, without direct constraints from the decoder, user embeddings mainly act on reducing KL divergence rather than providing more informative latent variables.",
"Besides, without UE, PAGenerator also significantly outperforms VAE in all metrics, which demonstrates that R_1 and R_2 are indeed useful for guiding the latent variables to model the semantics underlying the query and users.
Table 3: Comparison of different approaches on the Cornell Movie Dialogues corpus.
Methods | BLEU | Average | Extreme | Greedy | uRank | uPPL | uDist-1 | uDist-2
S2SA (Sordoni et al., 2015) | 0.32 | 0.787 | 0.503 | 0.679 | 0 | 44.8 | 0.115 | 0.079
fact bias (Michel and Neubig, 2018) | 0.30 | 0.785 | 0.501 | 0.676 | 0.044 | 39.3 | 0.127 | 0.095
Speaker Model (Liu et al., 2016) | 0.33 | 0.796 | 0.510 | 0.681 | 0.056 | 41.7 | 0.228 | 0.225
VAE (Serban et al., 2017) | 0.25 | 0.780 | 0.490 | 0.670 | 0.058 | 45.6 | 0.122 | 0.114
CVAE (Zhao et al., 2017) | 0.28 | 0.800 | 0.502 | 0.689 | 0.085 | 37.0 | 0.223 | 0.251
PAGenerator | 0.33 | 0.814 | 0.514 | 0.687 | 0.114 | 32.2 | 0.251 | 0.304",
"Comparing the ablation results of w/o R 1 with w/o R 2 , we can conclude that both regularizations promote uRank values.",
"However, PAGenerator w/o R_2 only achieves a mediocre result on uPPL, while using only R_2 damages the model's ability to generate diverse responses for different users.",
"We attribute this divergence to the trade-off between",
"a) shared movie-style language between users and",
"b) different language preferences among actors in the movie scripts.",
"Since R_1 promotes the divergence of z between the specified and unspecified users, removing R_1 makes it harder for the model to generate diverse responses for different users, reflected by the low uDistinct of w/o R_1.",
"However, promoting diversity will more or less sacrifice the model's learning on the common shared movie-style patterns, which is vital in evaluating the language cohesion.",
"Therefore, the performance of PAGenerator with only R_1 on uPPL is less than ideal.",
"In contrast, since R 2 emphasizes those patterns often used by a given user, it encourages the distribution of user information to be more aggregate.",
"These differences explain the opposite results of w/o R 1 and w/o R 2 .",
"In conclusion, the user embedding is an important constraint for the PAGenerator, and R 1 , R 2 can be considered to deploy for different purposes.",
"Furthermore, utilizing all components of PAGenerator described in Figure 1 guarantees a more balanced performance.
Table 5: Human evaluation results on the Cornell Corpus.
Methods | 0 | 1 | 2 | Persona
S2SA | 70.6% | 27.5% | 1.9% | 1.4%
fact bias | 72.2% | 26.0% | 1.8% | 14.9%
Speaker Model | 62.2% | 35.6% | 2.2% | 16.9%
VAE | 65.0% | 31.6% | 3.4% | 1.1%
CVAE | 61.7% | 34.0% | 4.3% | 21.6%
PAGenerator | 61.5% | 33.8% | 4.7% | 22.8%",
"Persona-based neural conversation models can be categorized into two major research directions.",
"One is to directly train a model from conversational data by considering the persona information (Li et al., 2016b; Kottur et al., 2017; Wang et al., 2017; Madotto et al., 2019), while the other approach makes use of the profiles or side-information of users to generate the aligned responses (Chu et al., 2018; Qian et al., 2018; Zhang et al., 2018; Mazare et al., 2018; Song et al., 2019).",
"The work described in this paper belongs to the first research direction.",
"Li et al. (2016b) and Kottur et al. (2017) enrich the models by training persona vectors directly and incorporating them into the decoder.",
"Wang et al. (2017) propose three strategies to learn the language style instead of introducing new models.",
"Apart from the development of persona-based NRG models, recent research also attempts to incorporate persona into neural machine translation.",
"Michel and Neubig (2018) propose to learn speaker-specific parameters for the bias term in the output to promote user preferring unigrams, and Wuebker et al. (2018) introduce offset tensors to perform fine-tuning for each user.",
"Variational response generators have drawn much attention recently, due to the observation that they can flexibly include the effect of conditions through their Bayesian architecture (Zhao et al., 2017; Shen et al., 2017) and naturally promote diversity by sampling at the generation stage (Serban et al., 2017; Du et al., 2018; Shen et al., 2018).",
"Zhao et al. (2017) and Shen et al. (2017) introduce frameworks taking various conditions to influence the model learning.",
"Afterwards, Zhou and Wang (2018) include the emoji into the variational NRG model to generate responses with particular emotions.",
"Actually, these models (Zhao et al., 2017; Shen et al., 2017; Zhou and Wang, 2018) can also be deployed in the persona-aware response generation scenario.",
"The main difference is that the speaker of the response is unpredictable based on the query.",
"Thus, for a meaningful comparison, we introduce the architecture proposed by Zhao et al. (2017) and modify it to adapt it to persona-aware generation.",
"Notably, Song et al. (2019) also utilize persona information in a CVAE architecture, but they focus on modeling and copying users' explicit profiles.",
"In this paper, we proposed a variational neural network to model the conversation as well as the persona of users.",
"On the basis of the network, two regularization terms are designed to guide the model in emphasizing the importance of the hidden user information.",
"In addition, to better reflect the persona characteristics of the response generation model, three metrics have been introduced to quantify the level of persona of the generated responses.",
"Experimental results show that our approach significantly outperforms other baseline models and the proposed metrics are effective in evaluating the capabilities of models on generating persona-aware responses.",
"This work was supported in part by the National Natural Science Foundation of China (Grant No. 61672555), and the Science and Technology Development Fund, Macau SAR (Grant No. 0101/2019/A2).",
"We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions."
] | [
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"objective",
"other",
"other"
] |
[
"Fact checking is a challenging task because verifying the truthfulness of a claim requires reasoning about multiple retrievable evidence.",
"In this work, we present a method suitable for reasoning about the semantic-level structure of evidence.",
"Unlike most previous works, which typically represent evidence sentences with either string concatenation or fusing the features of isolated evidence sentences, our approach operates on rich semantic structures of evidence obtained by semantic role labeling.",
"We propose two mechanisms to exploit the structure of evidence while leveraging the advances of pre-trained models like BERT, GPT or XLNet.",
"Specifically, using XLNet as the backbone, we first utilize the graph structure to re-define the relative distances of words, with the intuition that semantically related words should have short distances.",
"Then, we adopt graph convolutional network and graph attention network to propagate and aggregate information from neighboring nodes on the graph.",
"We evaluate our system on FEVER, a benchmark dataset for fact checking, and find that rich structural information is helpful and both our graph-based mechanisms improve the accuracy.",
"Our model is the state-of-the-art system in terms of both official evaluation metrics, namely claim verification accuracy and FEVER score.",
"The Internet provides an efficient way for individuals and organizations to quickly spread information to massive audiences.",
"However, malicious people spread false news, which may have significant in-fluence on public opinions, stock prices, even presidential elections (Faris et al., 2017).",
"Vosoughi et al. (2018) show that false news reaches more people than the truth.",
"(Work done while this author was an intern at Microsoft Research.)",
"The situation is more urgent as advanced pre-trained language models (Radford et al., 2019) can produce remarkably coherent and fluent texts, which lowers the barrier for the abuse of creating deceptive content.",
"In this paper, we study fact checking with the goal of automatically assessing the truthfulness of a textual claim by looking for textual evidence.",
"Previous works are dominated by natural language inference models (Dagan et al., 2013; Angeli and Manning, 2014) because the task requires reasoning of the claim and retrieved evidence sentences.",
"They typically either concatenate evidence sentences into a single string, which is used in top systems in the FEVER challenge (Thorne et al., 2018b), or use feature fusion to aggregate the features of isolated evidence sentences (Zhou et al., 2019).",
"However, both methods fail to capture rich semantic-level structures among multiple evidence, which also prevents the use of deeper reasoning model for fact checking.",
"In Figure 1, we give a motivating example.",
"Making the correct prediction requires a model to reason with the understanding, from the first evidence, that the Rodney King riots occurred in Los Angeles County, and, from the second evidence, that Los Angeles County is the most populous county in the USA.",
"It is therefore desirable to mine the semantic structure of evidence and leverage it to verify the truthfulness of the claim.",
"Under the aforementioned consideration, we present a graph-based reasoning approach for fact checking.",
"With a given claim, we represent the retrieved evidence sentences as a graph, and then use the graph structure to guide the reasoning process.",
"Specifically, we apply semantic role labeling (SRL) to parse each evidence sentence, and establish links between arguments to construct the graph.",
"When developing the reasoning approach, we intend to simultaneously leverage rich semantic structures of evidence embodied in the graph and powerful contextual semantics learnt in pre-trained models like BERT (Devlin et al., 2018), GPT (Radford et al., 2019) and XLNet (Yang et al., 2019).",
"To achieve this, we first re-define the distance between words based on the graph structure when producing contextual representations of words.",
"Furthermore, we adopt graph convolutional network and graph attention network to propagate and aggregate information over the graph structure.",
"In this way, the reasoning process employs semantic representations at both word/sub-word level and graph level.",
"Both the statistics of the FEVER dataset and the equation for calculating FEVER score are given in Appendix B.",
"Figure 2: Our pipeline for fact checking on FEVER: claim → Document Selection → documents → Sentence Selection → sentences → Claim Verification → SUPPORTED | REFUTED | NOT ENOUGH INFO.",
"We conduct experiments on FEVER (Thorne et al., 2018a), which is one of the most influential benchmark datasets for fact checking.",
"FEVER consists of 185,445 verified claims, and evidence sentences for each claim are natural language sentences from Wikipedia.",
"We follow the official evaluation protocol of FEVER, and demonstrate that our approach achieves state-of-the-art performance in terms of both claim classification accuracy and FEVER score.",
"Ablation study shows that the integration of graph-driven representation learning mechanisms improves the performance.",
"We briefly summarize our contributions as follows.",
"We propose a graph-based reasoning approach for fact checking.",
"Our system applies Semantic Role Labeling (SRL) to construct graphs and presents two graph-driven representation learning mechanisms.",
"Results verify that both graph-based mechanisms improve the accuracy, and our final system achieves state-of-the-art performance on the FEVER dataset.",
"With a textual claim given as the input, the problem of fact checking is to find supporting evidence sentences to verify the truthfulness of the claim.",
"We conduct our research on FEVER (Thorne et al., 2018a), short for Fact Extraction and VERification, a benchmark dataset for fact checking.",
"Systems are required to retrieve evidence sentences from Wikipedia, and predict the claim as SUPPORTED , REFUTED or NOT ENOUGH INFO (NEI) , standing for that the claim is supported by the evidence, refuted by the evidence, and is not verifiable, respectively.",
"There are two official evaluation metrics in FEVER.",
"The first is the accuracy for three-way classification.",
"The second is FEVER score, which further measures the percentage of correct retrieved evidence for SUPPORTED and REFUTED categories.",
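A per-instance sketch of the FEVER score logic described above, simplified from the official scorer (which additionally caps the number of predicted evidence sentences considered): the label must be correct and, for SUPPORTED/REFUTED, the predicted evidence must cover at least one complete gold evidence set.

```python
def fever_score_instance(pred_label, gold_label,
                         pred_evidence, gold_evidence_sets):
    """Return 1 if this instance counts toward FEVER score, else 0.
    Evidence items are (page, sentence_id) pairs; gold_evidence_sets
    is a list of alternative complete evidence sets."""
    if pred_label != gold_label:
        return 0
    if gold_label == "NOT ENOUGH INFO":
        return 1  # no evidence requirement for NEI
    return int(any(set(gold).issubset(set(pred_evidence))
                   for gold in gold_evidence_sets))
```

Averaging this value over the test set gives the FEVER score, which is why it is always upper-bounded by the plain label accuracy.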
"Here, we present an overview of our pipeline for FEVER, which follows the majority of previous studies.",
"Our pipeline consists of three main components: a document retrieval model, a sentence-level evidence selection model, and a claim verification model.",
"Figure 2 gives an overview of the pipeline.",
"With a given claim, the document retrieval model retrieves the most related documents from a given collection of Wikipedia documents.",
"With retrieved documents, the evidence selection model selects topk related sentences as the evidence.",
"Finally, the claim verification model takes the claim and evidence sentences as the input and outputs the veracity of the claim.",
"The main contribution of this work is the graph-based reasoning approach for claim verification, which is explained in detail in Section 3.",
"Evidence #1 (Figure 1): The 1992 Los Angeles riots, also known as the Rodney King riots, were a series of riots, lootings, arsons, and civil disturbances that occurred in Los Angeles County, California in April and May 1992.",
"In this section, we introduce our graph-based reasoning approach for claim verification, which is the main contribution of this paper.",
"Taking a claim and retrieved evidence sentences 1 as the input, our approach predicts the truthfulness of the claim.",
"For FEVER, it is a three-way classification problem, which predicts the claim as SUPPORTED , REFUTED or NOT ENOUGH INFO (NEI) .",
"The basic idea of our approach is to employ the intrinsic structure of evidence to assess the truthfulness of the claim.",
"As shown in the motivating example in Figure 1, making the correct prediction needs good understanding of the semantic-level structure of evidence and the reasoning process based on that structure.",
"In this section, we first describe our graph construction module (Section 3.1).",
"Then, we present how to apply the graph structure for fact checking, including a contextual representation learning mechanism with graph-based distance calculation (Section 3.2), and graph convolutional networks and graph attention networks to propagate and aggregate information over the graph (Sections 3.3 and 3.4).",
"There are different ways to construct the graph, such as open information extraction (Banko et al., 2007), named entity recognition plus relation classification, sequence-to-sequence generation trained to produce structured tuples (Goodrich et al., 2019), etc.",
"In this work, we adopt a practical and flexible way based on semantic role labeling (Carreras and Màrquez, 2004).",
"Specifically, with the given evidence sentences, our graph construction operates in the following steps.",
"For each sentence, we parse it into tuples with an off-the-shelf SRL toolkit developed by AllenNLP, which is a re-implementation of a BERT-based model (Shi and Lin, 2019).",
"For each tuple, we regard its elements with certain types as the nodes of the graph.",
"We heuristically set those types as verb, argument, location and temporal, which can also be easily extended to include more types.",
"We create edges for every two nodes within a tuple.",
"We create edges for nodes across different tuples to capture the structure information among multiple evidence sentences.",
"Our idea is to create edges for nodes that are literally similar to each other.",
"Assuming entity A and entity B come from different tuples, we add one edge if one of the following conditions is satisfied: (1) A equals B; (2) A contains B; (3) A and B share a sufficient number of overlapping words. (Footnote 2: A sentence could be parsed as multiple tuples.)",
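The construction above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the type names are taken from the text, and the word-overlap condition is omitted because its threshold is not specified here, leaving only the equality and containment checks.

```python
from itertools import combinations

# Node types kept from SRL tuples, as listed in the text.
KEPT_TYPES = {"verb", "argument", "location", "temporal"}

def build_graph(tuples):
    """tuples: list of SRL frames, each a list of (text, type) elements.
    Returns node texts and an undirected edge set over node indices."""
    nodes, edges = [], set()
    for tup in tuples:
        ids = []
        for text, typ in tup:
            if typ in KEPT_TYPES:
                nodes.append(text)
                ids.append(len(nodes) - 1)
        # create edges for every two nodes within a tuple
        edges.update(combinations(ids, 2))
    # create cross-tuple edges for literally similar nodes
    for i, j in combinations(range(len(nodes)), 2):
        a, b = nodes[i], nodes[j]
        if a == b or a in b or b in a:
            edges.add((i, j))
    return nodes, edges
```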
"Figure 3 shows the constructed graph of the evidence in the motivating example.",
"In order to obtain the structure information of the claim, we use the same pipeline to represent a claim as a graph.",
"Our graph construction module offers an approach on modeling structure of multiple evidence, which could be further developed in the future.",
"We describe the use of the graph for learning graph-enhanced contextual representations of words.",
"Our basic idea is to shorten the distance between two semantically related words on the graph, which helps to enhance their relationship when we calculate contextual word representations with a Transformer-based (Vaswani et al., 2017) pre-trained model like BERT and XLNet.",
"Suppose we have five evidence sentences {s_1, s_2, ..., s_5}, and word w^1_i from s_1 and word w^5_j from s_5 are connected on the graph; simply concatenating the evidence sentences as a single string fails to capture this semantic-level structure and assigns w^1_i and w^5_j a large distance, namely the number of words between them across the other three sentences (i.e., s_2, s_3, and s_4).",
"An intuitive way to achieve our goal is to define an N × N matrix of distances of words along the graph, where N is the total number of words in the evidence.",
"However, this is unacceptable in practice because the representation learning procedure would consume huge memory, which is also observed by Shaw et al. (2018).",
"(Footnote 4: In a Transformer-based representation learning pipeline, the basic computational unit can also be a word-piece.)",
"In this work, we adopt the pre-trained model XLNet (Yang et al., 2019) as the backbone of our approach because it naturally involves the concept of relative position.",
"Pre-trained models capture rich contextual representations of words, which is helpful for our task which requires sentence-level reasoning.",
"Considering the aforementioned issues, we implement an approximate solution to trade off between the efficiency of implementation and the informativeness of the graph.",
"Specifically, we reorder evidence sentences with a topological sort algorithm, with the intuition that closely linked nodes should appear in neighboring sentences.",
"This would prefer that neighboring sentences contain either parent nodes or sibling nodes, so as to better capture the semantic relatedness between different evidence sentences.",
"We present our implementation in Appendix A. The algorithm begins from nodes without incident relations.",
"For each node without incident relations, we recursively visit its child nodes in a depth-first searching way.",
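The traversal described above can be sketched as a depth-first walk starting from nodes without incident relations. This is an illustrative reconstruction, not the implementation in Appendix A: the directed-link representation between sentence indices and the handling of leftover nodes are assumptions.

```python
def reorder_sentences(num_sents, links):
    """links: set of directed (parent, child) pairs between sentence indices,
    induced from cross-sentence graph edges. Returns a DFS-based order so
    that closely linked sentences end up adjacent."""
    children = {i: [] for i in range(num_sents)}
    has_parent = set()
    for p, c in links:
        children[p].append(c)
        has_parent.add(c)
    order, seen = [], set()
    def visit(i):
        if i in seen:
            return
        seen.add(i)
        order.append(i)
        for c in children[i]:  # recursively visit child nodes, depth-first
            visit(c)
    # begin from nodes without incident (incoming) relations
    for i in range(num_sents):
        if i not in has_parent:
            visit(i)
    for i in range(num_sents):  # cover any remaining nodes (e.g., cycles)
        visit(i)
    return order
```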
"After obtaining graph-based relative position of words, we feed the sorted sequence into XLNet to obtain the contextual representations.",
"Meanwhile, we obtain the representation h ([ CLS ]) for a special token [ CLS ] , which stands for the joint representation of the claim and the evidence in Transformer-based architecture.",
"We have injected the graph information into the Transformer and obtained h([CLS]), which captures the semantic interaction between the claim and the evidence at the word level (footnote 6).",
"As shown in our motivating example in Figure 1 and the constructed graph in Figure 3, the reasoning process needs to operate at the span/argument level, where the basic computational unit typically consists of multiple words, such as Rodney King riots and the most populous county in the USA.",
"To further exploit graph information beyond word level, we first calculate the representation of a node, which is a word span in the graph, by averaging the contextual representations of words contained in the node.",
"After that, we employ multi-layer graph convolutional networks (GCNs) (Kipf and Welling, 2016) to update the node representations by aggregating representations from their neighbors on the graph.",
"Formally, we denote G as the graph built by the graph construction method above, and let H ∈ R^{N_v × d} be the matrix containing the representations of all nodes, where N_v and d denote the number of nodes and the dimension of node representations, respectively.",
"Each row H_i ∈ R^d is the representation of node i.",
"We introduce the adjacency matrix A of graph G and its degree matrix D, where we add self-loops to A and D_{ii} = Σ_j A_{ij}.",
"One-layer GCNs aggregate information through one-hop edges, calculated as follows: H^{(1)}_i = σ(Ã H_i W_0) (Equation 1), where H^{(1)}_i ∈ R^d is the new d-dimensional representation of node i, Ã = D^{-1/2} A D^{-1/2} is the normalized symmetric adjacency matrix, W_0 is a weight matrix, and σ is an activation function.",
"To exploit information from multi-hop neighboring nodes, we stack multiple GCN layers: H^{(j+1)}_i = σ(Ã H^{(j)}_i W_j) (Equation 2), where j denotes the layer number and H^{(0)}_i is the initial representation of node i, initialized from the contextual representation.",
"We abbreviate H^{(k)} as H for later use, where H denotes the representations of all nodes updated by k GCN layers.",
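The stacked GCN update can be sketched in NumPy as follows; this is a minimal sketch of the propagation rule, with the activation function and weights as placeholders.

```python
import numpy as np

def gcn_layers(H0, A, weights, act=np.tanh):
    """Stack of GCN layers: H^(j+1) = act(A_norm @ H^(j) @ W_j),
    where A_norm = D^{-1/2} (A + I) D^{-1/2} adds self-loops and
    symmetrically normalizes the adjacency matrix."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^{-1/2} as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    H = H0
    for W in weights:                              # one hop per layer
        H = act(A_norm @ H @ W)
    return H
```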
"(Footnote 6: By word in word-level, we mean the basic computational unit in XLNet; thus h([CLS]) captures the sophisticated interaction between words via multi-layer multi-head attention operations.)",
"The graph learning mechanism is performed separately for the claim-based and evidence-based graphs.",
"Therefore, we denote H^c and H^e as the representations of all nodes in the claim-based and evidence-based graphs, respectively.",
"Afterwards, we utilize the graph attention network to align the graph-level node representation learned for two graphs before making the final prediction.",
"We explore the related information between two graphs and make semantic alignment for final prediction.",
"Let H^e ∈ R^{N_ve × d} and H^c ∈ R^{N_vc × d} denote the matrices containing the representations of all nodes in the evidence-based and claim-based graphs, respectively, where N_ve and N_vc denote the number of nodes in the corresponding graphs.",
"We first employ a graph attention mechanism (Velickovic et al., 2017) to generate a claim-specific evidence representation for each node in claim-based graph.",
"Specifically, we first take each h^c_i ∈ H^c as the query, and take all node representations h^e_j ∈ H^e as the keys.",
"We then perform graph attention on the nodes with an attention mechanism a: R^F × R^F → R that computes attention coefficients as follows: e_ij = a(W_c h^c_i, W_e h^e_j) (Equation 3), which indicates the importance of evidence node j to claim node i.",
"W_c ∈ R^{F × d} and W_e ∈ R^{F × d} are weight matrices, and F is the dimension of the attention features.",
"We use the dot-product function as a here.",
"We then normalize e_ij using the softmax function: α_ij = softmax_j(e_ij) = exp(e_ij) / Σ_{k ∈ N_ve} exp(e_ik) (Equation 4). After that, we calculate a claim-centric evidence representation X = [x_1, ..., x_{N_vc}] using the weighted sum over H^e: x_i = Σ_{j ∈ N_ve} α_ij h^e_j (Equation 5). We then perform node-to-node alignment and calculate aligned vectors A = [a_1, ..., a_{N_vc}] from the claim node representations H^c and the claim-centric evidence representation X: a_i = f_align(h^c_i, x_i) (Equation 6), where f_align(·) denotes the alignment function.",
"Inspired by Shen et al. (2018), we design our alignment function as f_align(x, y) = W_a [x; y; x − y; x ⊙ y] (Equation 7), where W_a ∈ R^{d × 4d} is a weight matrix and ⊙ is the element-wise Hadamard product.",
"The final output g is obtained by the mean pooling over A .",
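Equations 3-7 and the final mean pooling can be sketched as follows. This is an illustrative reconstruction using the dot-product attention stated above; all matrix shapes follow the text, but the concrete values are placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def align_graphs(Hc, He, Wc, We, Wa):
    """Graph attention from claim nodes (queries) over evidence nodes (keys),
    followed by node-to-node alignment and mean pooling.
    Hc: (N_vc, d), He: (N_ve, d), Wc/We: (F, d), Wa: (d, 4d)."""
    E = (Hc @ Wc.T) @ (He @ We.T).T         # e_ij via dot product, (N_vc, N_ve)
    alpha = softmax(E, axis=1)              # Eq. 4: normalize over evidence nodes
    X = alpha @ He                          # Eq. 5: claim-centric evidence, (N_vc, d)
    # Eq. 7: f_align(x, y) = W_a [x; y; x - y; x * y]
    feats = np.concatenate([Hc, X, Hc - X, Hc * X], axis=1)
    A_aligned = feats @ Wa.T                # Eq. 6 applied row-wise, (N_vc, d)
    return A_aligned.mean(axis=0)           # mean pooling -> g, (d,)
```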
"We then feed the concatenation of g and the final hidden vector h([CLS]) from XLNet through an MLP layer for the final prediction.",
"In this section, we briefly describe our document retrieval and evidence selection components to make the paper self-contained.",
"The document retrieval model takes a claim and a collection of Wikipedia documents as the input, and returns m most relevant documents.",
"We mainly follow Nie et al. (2019), the top-performing system on the FEVER shared task (Thorne et al., 2018b).",
"The document retrieval model first uses keyword matching to filter candidate documents from the massive Wikipedia documents.",
"Then, NSMN (Nie et al., 2019) is applied to handle documents with disambiguation titles, which constitute 10% of all documents.",
"Documents without disambiguation titles are assigned higher scores in the resulting list.",
"The input to the NSMN model includes the claim and candidate documents with disambiguation title.",
"At a high level, the NSMN model has encoding, alignment, matching, and output layers.",
"Interested readers are referred to the original paper for more details.",
"Finally, we select the top-10 documents from the resulting list.",
"Taking a claim and all sentences from the retrieved documents as the input, the evidence selection model returns the top-k most relevant sentences.",
"We regard evidence selection as a semantic matching problem, and leverage rich contextual representations embodied in pre-trained models like XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019a) to measure the relevance of a claim to every evidence candidate.",
"We take XLNet as an example.",
"The input to the sentence selector is ce_i = [Claim, SEP, Evidence_i, SEP, CLS], where Claim and Evidence_i denote the tokenized word-pieces of the original claim and the i-th evidence candidate, d denotes the dimension of the hidden vector, and SEP and CLS are symbols indicating the end of a sentence and the end of the whole input, respectively.",
"The final representation h_{ce_i} ∈ R^d is obtained by extracting the hidden vector of the CLS token.",
"After that, we employ an MLP layer and a softmax layer to compute a score s^+_{ce_i} for each evidence candidate.",
"Then, we rank all evidence sentences by the score s^+_{ce_i}.",
"The model is trained on the training data with a standard cross-entropy loss.",
"Following the official setting in FEVER, we select the top-5 evidence sentences.",
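The ranking step can be sketched as follows; `score_fn` stands in for the XLNet-plus-MLP scorer described above and is a placeholder.

```python
def select_evidence(claim, candidates, score_fn, k=5):
    """Score each evidence candidate against the claim, then keep the top-k.
    score_fn(claim, evidence) -> float is the relevance scorer (here a stub
    for the pre-trained-model-based selector described in the text)."""
    scored = [(score_fn(claim, ev), ev) for ev in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)  # highest score first
    return [ev for _, ev in scored[:k]]
```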
"The performance of our evidence selection model is shown in Appendix C.",
"We evaluate on FEVER (Thorne et al., 2018a), a benchmark dataset for fact extraction and verification.",
"Each instance in FEVER dataset consists of a claim, groups of ground-truth evidence from Wikipedia and a label (i.e., SUPPORTED , REFUTED or NOT ENOUGH INFO (NEI) ), indicating its veracity.",
"FEVER includes a dump of Wikipedia, which contains 5,416,537 pre-processed documents.",
"The two official evaluation metrics of FEVER are label accuracy and FEVER score, as described in Section 2.",
"Label accuracy is the primary evaluation metric we apply in our experiments because it directly measures the performance of the claim verification model.",
"We also report FEVER score for comparison, which measures whether both the predicted label and the retrieved evidence are correct.",
"No evidence is required if the predicted label is NEI .",
"We compare our system to the following baselines, including three top-performing systems from the FEVER shared task, a recent work GEAR (Zhou et al., 2019), and a concurrent work by Liu et al. (2019b).",
"Nie et al. (2019) employ a semantic matching neural network for both evidence selection and claim verification.",
"Yoneda et al. (2018) infer the veracity of each claim-evidence pair and make final prediction by aggregating multiple predicted labels.",
"Hanselowski et al. (2018) encode each claim-evidence pair separately, and use a pooling function to aggregate features for prediction.",
"GEAR (Zhou et al., 2019) uses BERT to obtain claim-specific representation for each evidence sentence, and applies graph network by regarding each evidence sentence as a node in the graph.",
"KGAT (Liu et al., 2019b) is concurrent with our work, which regards sentences as the nodes of a graph and uses Kernel Graph Attention Network to aggregate information.",
"Table 1 reports the performance of our model and the baselines on the blind test set, with scores shown on the public leaderboard (footnote 7).",
"As shown in Table 1, in terms of label accuracy, our model significantly outperforms previous systems with 76.85% on the test set.",
"It is worth noting that our approach, which exploits the explicit graph-level semantic structure of evidence obtained by SRL, outperforms GEAR and KGAT, both of which regard sentences as nodes and use the model to learn the implicit structure of evidence (footnote 8).",
"At the time of submission, our system achieves state-of-the-art performance in terms of both evaluation metrics on the leaderboard.",
"Table 2 presents the label accuracy on the development set after eliminating different components separately, including the graph-based relative distance (Section 3.2) and the graph convolutional network and graph attention network (Sections 3.3 and 3.4).",
"(Footnote 7: The public leaderboard for perpetual evaluation of FEVER is https://competitions.codalab.org/competitions/18814#results.",
"DREAM is our user name on the leaderboard.)",
"(Footnote 8: We do not claim that the superiority of our system over GEAR and KGAT comes only from the explicit graph structure, because the systems also differ in other components such as sentence selection and the pre-trained model.)",
"The last row in Table 2 corresponds to the baseline where all the evidence sentences are simply concatenated as a single string, where no explicit graph structure is used at all for fact verification.",
"As shown in Table 2, compared to the XLNet baseline, incorporating both graph-based modules brings a 3.76% improvement in label accuracy.",
"Removing the graph-based distance drops label accuracy by 0.81%.",
"The graph-based distance mechanism can shorten the distance of two closely-linked nodes and help the model to learn their dependency.",
"Removing the graph-based reasoning module drops label accuracy by 2.04%, because the graph reasoning module captures structural information and performs deep reasoning over it.",
"Figure 5 gives a case study of our approach.",
"We randomly select 200 incorrectly predicted instances and summarize the primary types of errors.",
"The first type of error is caused by a failure to match the semantic meaning of phrases that describe the same event.",
"For example, the claim states Winter's Tale is a book, while the evidence states Winter's Tale is a 1983 novel by Mark Helprin.",
"The model fails to realize that a novel is a kind of book, and predicts that the claim is refuted.",
"Solving this type of error requires external knowledge (e.g., ConceptNet (Speer et al., 2017)) that can indicate logical relationships between different events.",
"The second type of error is caused by misleading information in the retrieved evidence.",
"For example, the claim states The Gifted is a movie , and the ground-truth evidence states The Gifted is an upcoming American television series .",
"However, the retrieved evidence also contains The Gifted is a 2014 Filipino dark comedy-drama movie , which misleads the model to make the wrong judgment.",
"In general, fact checking involves assessing the truthfulness of a claim.",
"In the literature, a claim can be a text or a subject-predicate-object triple (Nakashole and Mitchell, 2014).",
"(Figure residue, example claim text: Congressional Space Medal of Honor is the highest award given only to astronauts by NASA.)",
"In this work, we only consider textual claims.",
"Existing datasets differ in data source and in the type of supporting evidence for verifying the claim.",
"An early work by Vlachos and Riedel (2014) constructs 221 labeled claims in the political domain from POLITIFACT.COM and CHANNEL4.COM, giving the speaker's metadata as the evidence.",
"POLITIFACT is further investigated by follow-up works, including Ferreira and Vlachos (2016), who build Emergent with 300 labeled rumors and about 2.6K news articles; Wang (2017), who builds LIAR with 12.8K annotated short statements and six fine-grained labels; and Rashkin et al. (2017), who collect claims without metadata while providing 74K news articles.",
"We study FEVER (Thorne et al., 2018a), which requires aggregating information from multiple pieces of evidence from Wikipedia for making the conclusion.",
"FEVER contains 185,445 annotated instances, which to the best of our knowledge is the largest benchmark dataset in this area.",
"The majority of participating teams in the FEVER challenge (Thorne et al., 2018b) use the same pipeline consisting of three components, namely document selection, evidence sentence selection, and claim verification.",
"In the document selection phase, participants typically extract named entities from a claim as the query and use the Wikipedia search API.",
"In the evidence selection phase, participants measure the similarity between the claim and an evidence sentence candidate either by training a classification model such as Enhanced LSTM (Chen et al., 2016) in a supervised setting or by using a string similarity function such as TF-IDF without trainable parameters.",
"Padia et al. (2018) utilize semantic frames for evidence selection.",
"In this work, our focus is the claim classification phase.",
"The three top-ranked systems aggregate pieces of evidence by concatenating evidence sentences into a single string (Nie et al., 2019), by classifying each evidence-claim pair separately and merging the results (Yoneda et al., 2018), or by encoding each evidence-claim pair followed by a pooling operation (Hanselowski et al., 2018).",
"Zhou et al. (2019) are the first to use BERT to calculate claim-specific evidence sentence representations, and then develop a graph network to aggregate the information on top of BERT, regarding each evidence as a node in the graph.",
"Our work differs from Zhou et al. (2019) in that (1) the construction of our graph requires understanding the syntax of each sentence, which could be viewed as a more fine-grained graph, and (2) both the contextual representation learning module and the reasoning module have model innovations of taking the graph information into consideration.",
"Instead of training each component separately, Yin and Roth (2018) show that joint learning could improve both claim verification and evidence selection.",
"In this work, we present a graph-based approach for fact checking.",
"When assessing the veracity of a claim given multiple evidence sentences, our approach is built upon an automatically constructed graph, which is derived based on semantic role labeling.",
"To better exploit the graph information, we propose two graph-based modules, one for calculating contextual word embeddings using graph-based distance in XLNet, and the other for learning representations of graph components and reasoning over the graph.",
"Experiments show that both graph-based modules bring improvements, and our final system is the state of the art on the public leaderboard at the time of submission.",
"Evidence selection is an important component of fact checking as finding irrelevant evidence may lead to different predictions.",
"A potential solution is to jointly learn the evidence selection and claim verification models, which we leave as future work.",
"Xu, Jiahai Wang and",
"Ben Goodrich, Vinay Rao, Peter J. Liu, and Mohammad Saleh. 2019. Assessing the factual accuracy of generated text.",
"Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.",
"Zhenghao Liu, Chenyan Xiong, and Maosong Sun. 2019b. Kernel graph attention network for fact verification. arXiv preprint arXiv:1910.09796.",
"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).",
"Dinghan Shen, Xinyuan Zhang, Ricardo Henao, and Lawrence Carin. 2018. Improved semantic-aware network embedding with fine-grained word alignment. arXiv preprint arXiv:1808.09633.",
"Jian Yin are supported by the National Natural Science Foundation of China (U1711262, U1611264, U1711261, U1811261, U1811264, U1911203), the National Key R&D Program of China (2018YFB1004404), the Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), and the Key R&D Program of Guangdong Province (2018B010107005).",
"The corresponding author is Jian Yin."
] | [
"abstain",
"method",
"result",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"objective",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"other",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"method",
"method",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"This paper presents a novel crowd-sourced resource for multimodal discourse: our resource characterizes inferences in imagetext contexts in the domain of cooking recipes in the form of coherence relations.",
"Like previous corpora annotating discourse structure between text arguments, such as the Penn Discourse Treebank, our new corpus aids in establishing a better understanding of natural communication and common-sense reasoning, while our findings have implications for a wide range of applications, such as understanding and generation of multimodal documents.",
"Sometimes a picture is worth the proverbial thousand words; sometimes a few well-chosen words are far more effective than a picture (Feiner and McKeown, 1991).",
"Modeling how visual and linguistic information can jointly contribute to coherent and effective communication is a longstanding open problem with implications across cognitive science.",
"As Feiner and McKeown (1991) already observe, it is particularly important for automating the understanding and generation of text-image presentations.",
"Theoretical models have suggested that images and text fit together into integrated presentations via coherence relations that are analogous to those that connect text spans in discourse; see Alikhani and Stone (2018a) and Section 2. This paper follows up this theoretical perspective through systematic corpus investigation.",
"We are inspired by research on text discourse, which has led to large-scale corpora with information about discourse structure and discourse semantics.",
"The Penn Discourse Treebank (PDTB) is one of the most well-known examples (Miltsakaki et al., 2004; Prasad et al., 2008).",
"However, although multimodal corpora increasingly include discourse relations between linguistic and nonlinguistic contributions, particularly for utterances and other events in dialogue (Cuayáhuitl et al., 2015; Hunter et al., 2015), to date there has existed no dataset describing the coherence of text-image presentations.",
"In this paper, we describe the construction of an annotated corpus that fills this gap, and report initial analyses of the communicative inferences that connect text and accompanying images in this corpus.",
"As we describe in Section 2, our approach asks annotators to identify the presence of specific inferences linking text and images, rather than to use a taxonomy of coherence relations.",
"This enables us to deal with the distinctive discourse contributions of photographic imagery.",
"We describe our data collection process in Section 3, showing that our annotation scheme allows us to get reliable labels by crowdsourcing.",
"We present analyses in Section 4 that show that our annotation highlights a range of cases where text and images work together in distinctive and theoretically challenging ways, and discuss the implications of our work for the understanding and generation of multimodal documents.",
"We conclude in Section 5 with a number of problems for future research.",
"We begin with an example to motivate our approach and clarify its relationship to previous work.",
"Figure 1 shows two steps in an online recipe for a ravioli casserole from the RecipeQA data set (Yagcioglu et al., 2018).",
"The image of Figure 1a shows a moment towards the end of carrying out the covering action of the accompanying text; that of Figure 1b shows one instance of the result of the spooning actions of the text.",
"Cognitive scientists have argued that such images are much like text contributions in the way their interpretation connects to the broader discourse.",
"In particular, inferences analogous to those used to interpret text seem to be necessary with such images to recognize their spatio-temporal perspective (Cumming et al., 2017), the objects they depict (Abusch, 2013), and their place in the arc of narrative progression (McCloud, 1993; Cohn, 2013).",
"In fact, such inferences seem to be a general feature of multimodal communication, applying also in the coherent relationships of utterance to co-speech gesture (Lascarides and Stone, 2009) or the coherent relationships of elements in diagrams (Alikhani and Stone, 2018b; Hiippala and Orekhova, 2018).",
"In empirical analyses of text corpora, researchers in projects such as the Penn Discourse Treebank (Miltsakaki et al., 2004; Prasad et al., 2008) have been successful at documenting such effects by annotating discourse structure and discourse semantics via coherence relations.",
"We would like to apply a similar strategy to text-image documents like the one shown in Figure 1. However, existing discourse annotation guidelines depend on the distinctive ways that coherence is signaled in text.",
"In text, we find syntactic devices such as structural parallelism, semantic devices such as negation, and pragmatic elements such as discourse connectives, all of which can help annotators to recognize coherence relations in text.",
"Images lack such features.",
"At the same time, characterizing the communicative role of imagery, particularly photographic imagery, involves a special problem: distinguishing the content that the author specifically aimed to depict from merely incidental details that happen to appear in the scene (Stone and Stojnic, 2015).",
"Thus, rather than start from a taxonomy of discourse relations like that used in PDTB, we characterize the different kinds of inferential relationships involved in interpreting imagery separately.",
"To characterize temporal relationships between imagery and text, we ask if the image gives information about the preparation, execution or results of the accompanying step.",
"To characterize the logical relationship of imagery to text, we ask if the image shows one of several actions described in the text, and if it depicts an action that needs to be repeated.",
"To characterize the significance of incidental detail, we ask a range of further questions (some relevant specifically to our domain of instructions), asking about what the image depicts from the text, what it leaves out from the text, and what it adds to the text.",
"Our approach is designed to elicit judgments that crowd workers can provide quickly and reliably.",
"This approach allows us to highlight a number of common patterns that we can think of as prototypical coherence relations between images and text.",
"Figure 1a, for example, instantiates a natural Depiction relation: the image shows the action described in the text in progress; the mechanics of the action are fully visible in the image, but the significant details in the imagery are all reported in the text as well.",
"Our approach also lets us recognize more sophisticated inferential relationships, like the fact that Figure 1b shows an Example:Result of the accompanying instruction.",
"Many of the relationships that emerge from our annotation effort involve newly identified features of text-image presentations that deserve further investigation: particularly, the use of loosely related imagery to provide background and motivation for a multimodal presentation as a whole, and depictions of action that seem simultaneously to give key information about the context, manner, and result of an action.",
"Work on text has found that text genre heavily influences both the kinds of discourse relations one finds in a corpus and the way those relations are signalled (Webber, 2009).",
"Since our focus is on developing methodology for consistent annotation, we therefore choose to work within a single genre.",
"We selected instructional text because of its concrete, practical subject matter and because of its step-by-step organization, which makes it possible to automatically group together short segments of related text and imagery.",
"TextImage Pairs.",
"We base our data collection on an existing instructional dataset, RecipeQA (Yagcioglu et al., 2018).",
"This is the only publicly available large-scale dataset of multimodal instructions.",
"It consists of multimodal recipestextual instructions accompanied by one or more images.",
"We excluded documents that either have multiple steps without images or that have multiple images per set.",
"This was so that we could more easily study the direct relationship between an image and the associated text.",
"There are 1,690 documents with this characteristic in the RecipeQA train set.",
"To avoid overwhelming crowd workers, we further filtered those to retain only recipes with 70 or fewer words per step, for a final count of 516 documents (2,047 imagetext pairs).",
"Protocol.",
"We recruit participants through Amazon Mechanical Turk.",
"All subjects were US citizens, agreed to a consent form approved by Rut-gers's institutional review board, and were compensated at an estimated rate of USD 15 an hour.",
"Experiment Interface.",
"Given an image and the corresponding textual instruction from the dataset, participants were requested to answer the following 10 questions.",
"For Question 1, participants were asked to highlight the relevant part of the text.",
"For the others, we solicited True/False responses.",
"1. Highlight the part of the text that is most related to the image.",
"2. The image gives visual information about the step described in the text.",
"1 The dataset and the code for the machine learning experiments are available at https://github.com/malihealikhani/CITE 3. You need to see the image in order to be able to carry out the step properly.",
"4. The text provides specific quantities (amounts, measurements, etc.) that you would not know just by looking at the picture.",
"5. The image shows a tool used in the step but not mentioned in the text.",
"6. The image shows how to prepare before carrying out the step.",
"7. The image shows the results of the action that is described in the text.",
"8. The image depicts an action in progress that is described in the text.",
"9. The text describes several different actions but the image only depicts one.",
"10. One would have to repeat the action shown in the image many times in order to complete this step.",
"The interface is designed such that if the answer to Question 8 is TRUE , the subject will be prompted with Question 9 and 10. Otherwise, Question 8 is the last question in the list.",
"Agreement.",
"To assess the inter-rater agreement, we determine Cohen's and Fleiss's values.",
"For Cohen's , we randomly selected 150 imagetext pairs and assigned each to two participants, obtaining a Cohen's of 0.844, which indicates almost perfect agreement.",
"For Fleiss's (Fleiss and Cohen, 1973; Cocos et al., 2015; Banerjee et al., 1999), we randomly selected 50 textimage pairs, assigned them to five subjects, and computed the average .",
"We obtain a score of 0.736, which indicates substantial agreement (Viera et al., 2005).",
"Overall Statistics.",
"Table 1 shows the rates of true answers for questions Q2Q10.",
"Subjects reported that in 17% of cases the images did not give any information about the step described in the accompanying text.",
"Such images deserve further investigation to characterize their interpretive relationship to the document as a whole.",
"Our anecdotal experience is that such images sometimes provide context for the recipe, which may suggest that imagery, like real-world events (Hunter et al., 2015), creates more flexible discourse structures than linguistic segments on their own.",
"of cases.",
"This suggests that subjects construe imagery as backgrounded or peripheral to the document, much as speakers regard co-speech iconic gesture as peripheral to speech (Schlenker and Chemla, 2017).",
"Note, by contrast, that subjects characterized 12.7% of images as introducing a new tool: this includes many cases where the same subjects say the image is not required.",
"In other words, subjects' intuitions suggest that coherent imagery typically does not contribute instruction content, but rather serves as a visual signal that facilitates inferences that have to be made to carry out the instruction regardless.",
"Our annotated examples, where imagery is linked to specific kinds of inferences, provide materials to test this idea.",
"TEXT : Top with another layer of ravioli and the remaining sauce not all the ravioli may be needed.",
"Sprinkle with the Parmesan.",
"The Complex Coherence of Imagery.",
"Our annotation reveals cases where a single image does include more information than could be packaged into a single textual discourse unit (the proverbial thousand words).",
"In particular, such imagery participates in more complex coherence relationships than we find between text segments.",
"Multiple temporal relationships show this most clearly: 12% of images that have any temporal relation have more than one.",
"For example, many images depict the action that is described in the text, while also showing preparations that have already been made by displaying the scene in which the action is performed.",
"Figure 2 depicts the action and the result of the action.",
"It also shows how to prepare before carrying out the action.",
"Other images show an action in progress but nearing completion and thereby depict the result.",
"For instance, the image that accompanies mix well until blended can show both late-stage mixing and the blended result.",
"Looking at a few such cases closely, the circumstances and composition of the photos seem staged to invite such overlapping inferences.",
"Such cases testify to the richness of multimodal discourse, and help to justify our research methodology.",
"The True/False questions characterize the relevant features of interpretation without necessarily mapping to single discourse relations.",
"For instance, Q4 and Q5 indicate inferences in line with an Elaboration relation; Q9 and Q10 indicate inferences in line with an Exemplification relation, as information presented in images show just one case of a generalization presented in accompanying text.",
"However, our data shows that these inferences can be combined in productive ways, in keeping with the potentially complex relevant content of images.",
"Information across modalities.",
"We carried out machine learning experiments to assess what information images provide and what textual cues can guide image interpretation.",
"We use SVM classifiers for performance, and Multinomial Naive Bayes classifiers to explain classifier decision making, both with bag-of-words features.",
"Table 2 reports the F1 measure for instance classification with SVMs (with 5-fold cross valida-tion).",
"In many cases, machine learning is able to find cues that reliably help guess inferential pat-Q4.",
"terns.",
"Table 3 looks at two effective Naive Bayes classifiers, for Q4 (text has quantities) and Q8 (im-age depicts action in progress).",
"It shows the features most correlated with the classification decision and their log probability estimates.",
"For Q4, not surprisingly, numbers and units are positive instances.",
"More interestingly, verbs of movement and combination are negative instances, perhaps because such steps normally involve material that has already been measured.",
"For Q8, a range of physical action verbs are associated with actions in progress; negative features correlate with steps involved in actions that don't require ongoing attention (e.g., baking).",
"Table 4 reports top SVM with NB (NBSVM) (Wang and Manning, 2012) features for Q1 that asks subjects to highlight the part of the text that is most related to the image.",
"Action verbs are part of highlighted text, whereas adverbs and quantitative information that cannot be easily depicted in images are part of the remaining segments of the text.",
"Such correlations set a direction for designing or learning strategies to select when to include imagery.",
"In this paper, we have presented the first dataset describing discourse relations across text and imagery.",
"This data affords theoretical insights into the connection between images and instructional text, and can be used to train classifiers to support automated discourse analysis.",
"Another important Q1.",
"contribution of this study is that it presents a discourse annotation scheme for cross-modal data, and establishes that annotations for this scheme can be procured from non-expert contributors via crowd-sourcing.",
"Our paper sets the agenda for a range of future research.",
"One obvious example is to extend the approach to other genres of communication with other coherence relations, such as the distinctive coherence of images and caption text (Alikhani and Stone, 2019).",
"Another is to link coherence relations to the structure of multimodal discourse.",
"For example, our methods have not yet addressed whether imagetext relations have the same kinds of subordinating or coordinating roles that comparable relations have in structuring text discourse (Asher and Lascarides, 2003).",
"Ultimately, of course, we hope to leverage such corpora to build and apply better models of multimodal communication.",
"The research presented here is supported by NSF Award IIS-1526723 and through a fellowship from the Rutgers Discovery Informatics Institute.",
"Thanks to Gabriel Greenberg, Hristiyan Kourtev and the anonymous reviewers for helpful comments.",
"We would also like to thank the Mechanical Turk annotators for their contributions."
] | [
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"result",
"result",
"method",
"method",
"result",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other"
] |
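The inter-rater agreement figures in the record above (a Cohen's κ of 0.844 for pairs of annotators, a Fleiss's κ of 0.736 for groups of five) follow standard closed-form definitions. A minimal pure-Python sketch of both statistics; the True/False judgments below are made up for illustration, not the actual CITE annotations:

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two raters judging the same items."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / (n * n)
    return (p_o - p_e) / (1 - p_e)

def fleiss_kappa(table):
    """Fleiss's kappa; table[i][j] = raters assigning item i to category j."""
    N, n = len(table), sum(table[0])                       # items, raters per item
    p_j = [sum(row[j] for row in table) / (N * n) for j in range(len(table[0]))]
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in table]
    P_bar = sum(P_i) / N
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Two raters' True/False answers (1 = True) on eight hypothetical image-text pairs.
print(cohen_kappa([1, 1, 0, 0, 1, 0, 1, 0], [1, 1, 0, 0, 1, 1, 1, 0]))
# Five raters per item, two categories: per-item counts of (True, False) answers.
print(fleiss_kappa([[5, 0], [0, 5], [4, 1], [5, 0]]))
```

Both functions reduce to 1.0 under perfect agreement and to 0.0 when observed agreement equals chance agreement, matching the interpretation scale cited above.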
[
"A number of researchers have recently questioned the necessity of increasingly complex neural network (NN) architectures.",
"In particular, several recent papers have shown that simpler, properly tuned models are at least competitive across several NLP tasks.",
"In this work, we show that this is also the case for text generation from structured and unstructured data.",
"We consider neural table-to-text generation and neural question generation (NQG) tasks for text generation from structured and unstructured data, respectively.",
"Table-to-text generation aims to generate a description based on a given table, and NQG is the task of generating a question from a given passage where the generated question can be answered by a certain sub-span of the passage using NN models.",
"Experimental results demonstrate that a basic attention-based seq2seq model trained with the exponential moving average technique achieves the state of the art in both tasks.",
"Code is available at https://github.com/h-shahidi/ 2birds-gen .",
"Recent NLP literature can be characterized as increasingly complex neural network architectures that eke out progressively smaller gains over previous models.",
"Following a previous line of research (Melis et al., 2018; Mohammed et al., 2018; Adhikari et al., 2019), we investigate the necessity of such complicated neural architectures.",
"In this work, our focus is on text generation from structured and unstructured data by considering description generation from a table and question generation from a passage and a target answer.",
"More specifically, the goal of the neural table-to-text generation task is to generate biographies based on Wikipedia infoboxes (structured data).",
"An infobox is a factual table with a number of fields Target Output: Sir Bernard Augustus Keen FRS (5 September 1890 5 August 1981) was a British soil scientist and Fellow of University College London.",
"(e.g., name, nationality, and occupation) describing a person.",
"For this task, we use the WIKIBIO dataset (Lebret et al., 2016) as the benchmark dataset.",
"Figure 1 shows an example of a biographic infobox as well as the target output textual description.",
"Automatic question generation aims to generate a syntactically correct, semantically meaningful and relevant question from a natural language text and a target answer within it (unstructured data).",
"This is a crucial yet challenging task in NLP that has received growing attention due to its application in improving question answering systems (Duan et al., 2017; Tang et al., 2017, 2018), providing material for educational purposes (Heilman and Smith, 2010), and helping conversational systems to start and continue a conversation (Mostafazadeh et al., 2016).",
"We adopt the widely used SQuAD dataset (Rajpurkar et al., 2016) for this task.",
"Table 1 presents a sample (passage, answer, question) triple from this dataset.",
"Prior work has made remarkable progress on both of these tasks.",
"However, the proposed models utilize complex neural architectures to capture necessary information from the input(s).",
"In this paper, we question the need for such sophisticated NN models for text generation from inputs comprising structured and unstructured data.",
"Specifically, we adopt a bi-directional, attention-based seq2seq model (Bahdanau et al., 2015) equipped with a copy mechanism (Gu et al., 2016) for both tasks .",
"We demonstrate that this model, together with the exponential moving average (EMA) technique, achieves the state of the art in both neural table-to-text generation and NQG.",
"Interestingly, our model is able to achieve this result even without using any linguistic features.",
"Our contributions are two-fold: First, we propose a unified NN model for text generation from structured and unstructured data and show that training this model with the EMA technique leads to the state of the art in neural table-to-text generation as well as NQG.",
"Second, because our model is, in essence, the primary building block of previous models, our results show that some previous papers propose needless complexity, and that gains from these previous complex neural architectures are quite modest.",
"In other words, the state of the art is achieved by careful tuning of simple and well-engineered models, not necessarily by adding more complexity to the model, echoing the sentiments of Lipton and Steinhardt (2018).",
"In this section, we first discuss previous work for neural table-to-text generation and then NQG.",
"Recently, there have been a number of end-to-end trainable NN models for table-to-text generation.",
"Lebret et al. (2016) propose an n-gram statistical language model that incorporates field and position embeddings to represent the structure of a table.",
"However, their model is not effective enough to capture long-range contextual dependencies while generating a description for the table.",
"To address this issue, Liu et al. (2018) suggest a structure-aware seq2seq model with local and global addressing on the table.",
"While local addressing is realized by content encoding of the model's encoder and word-level attention, global addressing is accomplished by field encoding using a field-gating LSTM and field-level attention.",
"The field-gating mechanism incorporates field information when updating the cell memory of the LSTM units.",
"Liu et al. (2019b) utilize a two-level hierarchical encoder with coarse-to-fine attention to model the field-value structure of a table.",
"They also propose three joint tasks (sequence labeling, text auto-encoding, and multi-label classification) as auxiliary supervision to capture accurate semantic representations of the tables.",
"In this paper, similar to Lebret et al. (2016), we use both content and field information to represent a table by concatenating the field and position embeddings with the word embedding.",
"Unlike Liu et al. (2018), we don't separate local and global addressing by using specific modules for each, but rather adopt the EMA technique and let the bidirectional model accomplish this implicitly, exploiting the natural advantages of the model.",
"Previous NQG models can be classified into rule-based and neural-network-based approaches.",
"Du et al. (2017) propose a seq2seq model that is able to achieve better results than previous rule-based systems without taking the target answer into consideration.",
"Zhou et al. (2017) concatenate answer position indicators with the word embeddings to make the model aware of the target answer.",
"They also use lexical features (e.g., POS and NER tags) to enrich their model's encoder.",
"In addition, Song et al. (2018) suggest using a multi-perspective context matching algorithm to further leverage information from explicit interactions between the passage and the target answer.",
"More recently, Kim et al. (2019) use answer-separated seq2seq, which replaces the target answer in the passage with a unique token to avoid using the answer words in the generated question.",
"They also make use of a module called keyword-net to extract critical information from the target answer.",
"Similarly, Liu et al. (2019a) propose using a clue word predictor by adopting graph convolution networks to highlight the imperative aspects of the input passage.",
"Our model is architecturally more similar to Zhou et al. (2017), but with the following distinctions: (1) we do not use additional lexical features, (2) we utilize the EMA technique during training and use the averaged weights for evaluation, (3) we do not make use of the introduced maxout hidden layer, and (4) we adopt LSTM units instead of GRU units.",
"These distinctions, along with some hyperparameter differences, notably the optimizer and learning rate, have a considerable impact on the experimental results (see Section 5).",
"In this section, we introduce a simple but effective attention-based seq2seq model for both neural table-to-text generation and NQG.",
"Figure 2 provides an overview of our model.",
"Our encoder is a bi-directional LSTM (BiLSTM) whose input x t at time step t is the concatenation of the current word embedding e t with some additional task-specific features.",
"For neural table-to-text generation, additional features are field name f t and position information p t , following Lebret et al. (2016).",
"The position information itself is the concatenation of p + t , which is the position of the current word in its field when counting from the left, and p t , when counting from the right.",
"Considering the word University , in Figure 1, as an example, it is the first word from the left and the third word from the right in the Institutions field.",
"Hence, the structural information of this word would be { Institutions , 1, 3 } .",
"Thus, the input to the encoder at time step t for this task is x t = [ e t ; f t ; p + t ; p t ] , where [ . ; . ] denotes concatenation along the feature dimension.",
"For NQG, similar to Zhou et al. (2017), we use a single bit b t , indicating whether the t th word in the passage belongs to the target answer, as an additional feature.",
"Hence, the input at time step t is x t = [ e t ; b t ] .",
"Remarkably, unlike previous work (Song et al., 2018; Kim et al., 2019), we do not use a separate encoder for the target answer to have a unified model for both tasks.",
"Our decoder is an attention-based LSTM model (Bahdanau et al., 2015).",
"Due to the considerable overlap between input and output words, we use a copy mechanism (Gu et al., 2016) that integrates the attention distribution over the input words with the vocabulary distribution.",
"The exponential moving average (EMA) technique, also referred to as temporal averaging, was initially introduced to be used in optimization algorithms for better generalization performance and reducing noise from stochastic approximation in recent parameter estimates by averaging model parameters (Polyak and Juditsky, 1992; Moulines and Bach, 2011; Kingma and Ba, 2015).",
"In applying the technique, we maintain two sets of parameters: (1) training parameters that are trained as usual, and (2) evaluation parameters that are an exponentially weighted moving average of the training parameters.",
"The moving average is calculated using the following expression: + (1 ) (1) where is the decay rate.",
"Previous work (Szegedy et al., 2016; Merity et al., 2018; Adhikari et al., 2019; Liu et al., 2019a) has used this technique for different tasks to produce more stable and accurate results.",
"In Section 5, we show that using this simple technique considerably improves the performance of our model in both of the tasks.",
"In this section, we introduce the datasets first, then explain additional implementation details, and fi-nally describe the evaluation metrics.",
"We use the WIKIBIO dataset (Lebret et al., 2016) for neural table-to-text generation.",
"This dataset 3867 contains 728,321 articles from English Wikipedia and uses the first sentence of each article as the ground-truth description of the corresponding infobox.",
"The dataset has been divided into training (80%), validation (10%), and test (10%) sets.",
"For NQG, we use the SQuAD dataset v1.1 (Ra-jpurkar et al., 2016) in our experiments, containing 536 Wikipedia articles with over 100K question-answer pairs.",
"The test set of the original dataset is not publicly available.",
"Thus, Du et al. (2017) and Zhou et al. (2017) re-divide available data into training, validation, and test sets, which we call split-1 and split-2, respectively.",
"In this paper, we conduct experiments and evaluate our model on both of the data splits.",
"For the sake of reproducibility, we discuss implementation details for achieving the results shown in Tables 2 and 3.",
"We train the model using cross-entropy loss and retain the model that works best on the validation set during training for both tasks.",
"We replace unknown tokens with a word from the input having the highest attention score.",
"In addition, a decay rate of 0 .",
"9999 is used for the exponential moving average in both of the tasks.",
"For the neural table-to-text generation task, we train the model up to 10 epochs with three different seeds and a batch size of 32.",
"We use a single-layer BiLSTM for the encoder and a single-layer LSTM for the decoder and set the dimension of the LSTM hidden states to 500.",
"Optimization is performed using the Adam optimizer with a learning rate of 0.0005 and gradient clipping when its norm exceeds 5.",
"The word, field, and position embeddings are trainable and have a dimension of 400, 50, and 5, respectively.",
"The maximum position number is set to 30.",
"Any higher position number is therefore counted as 30.",
"The most frequent 20,000 words and 1,480 fields in the training set are selected as word vocabulary and field vocabulary, respectively, for both the encoder and the decoder.",
"Ultimately, we conduct greedy search to decode a description for a given input table.",
"For the NQG task, we use a two-layer BiLSTM for the encoder and a single-layer LSTM for the decoder.",
"We set the dimension of the LSTM hidden states to 350 and 512 for split-1 and split-2, respectively.",
"Optimization is performed using the AdaGrad optimizer with a learning rate of 0.3 and gradient clipping when its norm exceeds 5.",
"The word embeddings are initialized with pre-trained 300-dimensional GloVe embeddings (Pennington et al., 2014), which are frozen during training.",
"We train the model up to 20 epochs with five different seeds and a batch size of 50.",
"We further employ dropout with a probability of 0.1 and 0.3 for data split-1 and split-2, respectively.",
"Moreover, we use the vocabulary set released by Song et al. (2018) for both the encoder and the decoder.",
"During decoding, we perform beam search with a beam size of 20 and a length penalty weight of 1.75.",
"Following previous work, we use BLEU-4 (Pap-ineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE-4, and ROUGE-L (Lin, 2004) to evaluate the performance of our model.",
"BLEU and METEOR were originally designed to evaluate machine translation systems, and ROUGE was designed to evaluate text summarization systems.",
"In this section, we present our experimental results for both neural table-to-text generation and NQG.",
"We report the mean and standard deviation of each metric across multiple seeds to ensure robustness against potentially spurious conclusions (Crane, 2018).",
"In Tables 2 and 3, we compare previous work with our results for NQG and neural table-to-text generation, respectively.",
"All results are copied from the original papers except for Liu et al. (2018) in Table 3, where Repl.",
"refers to scores from experiments that we conducted using the source code released by the authors, and Orig.",
"refers to scores taken from the original paper.",
"It is noteworthy that a similar version of our model has served as a baseline in previous papers (Liu et al., 2018; Kim et al., 2019; Liu et al., 2019a).",
"However, the distinctions discussed in Section 2, especially the EMA technique, enable our model to achieve the state of the art in all cases but BLEU-4 on the SQuAD split-2, where our score is very competitive; furthermore, Liu et al. (2019a) only report results from a single trial.",
"Our results indicate that a basic seq2seq model is able to effectively learn the underlying distribution of both datasets.",
"In this paper, we question the necessity of complex neural architectures for text generation from structured data (neural table-to-text generation) and",
"unstructured data (NQG).",
"We then propose a simple yet effective seq2seq model trained with the EMA technique.",
"Empirically, our model achieves the state of the art in both of the tasks.",
"Our results highlight the importance of thoroughly exploring simple models before introducing complex neural architectures, so that we can properly attribute the source of performance gains.",
"As a potential direction for future work, it would be interesting to investigate the use of the EMA technique on transformer models as well and conduct similar studies to examine needless architectural complexity in other NLP tasks.",
"This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada."
] | [
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"objective",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"other"
] |
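The EMA scheme described in the record above keeps two parameter sets: training parameters updated as usual, and shadow evaluation parameters updated as θ_ema ← β·θ_ema + (1−β)·θ. A minimal sketch of that bookkeeping; the class, the scalar "parameters", and the toy update loop are illustrative, not the authors' implementation, and a small decay of 0.5 is used here (rather than the paper's 0.9999) so the averaging effect is visible:

```python
class EMA:
    """Exponential moving average of model parameters for evaluation."""

    def __init__(self, params, decay=0.9999):
        self.decay = decay
        self.shadow = dict(params)  # evaluation parameters (a copy at init)

    def update(self, params):
        # shadow <- decay * shadow + (1 - decay) * current training value
        d = self.decay
        for name, value in params.items():
            self.shadow[name] = d * self.shadow[name] + (1 - d) * value

# Toy "training": the parameter moves by 1.0 per step; the shadow lags behind.
params = {"w": 0.0}
ema = EMA(params, decay=0.5)
for step in range(3):
    params["w"] += 1.0          # stand-in for an optimizer update
    ema.update(params)
print(params["w"], ema.shadow["w"])
```

At evaluation time the shadow values would be loaded into the model in place of the raw training weights, which is the "averaged weights for evaluation" step mentioned in the record above.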
[
"Probing has become an important tool for analyzing representations in Natural Language Processing (NLP).",
"For graphical NLP tasks such as dependency parsing, linear probes are currently limited to extracting undirected or unlabeled parse trees which do not capture the full task.",
"This work introduces DEPPROBE , a linear probe which can extract labeled and directed dependency parse trees from embeddings while using fewer parameters and compute than prior methods.",
"Leveraging its full task coverage and lightweight parametrization, we investigate its predictive power for selecting the best transfer language for training a full biaffine attention parser.",
"Across 13 languages, our proposed method identifies the best source treebank 94% of the time, outperforming competitive baselines and prior work.",
"Finally, we analyze the informativeness of task-specific subspaces in contextual embeddings as well as which benefits a full parser's non-linear parametrization provides.",
"Pre-trained, contextualized embeddings have been found to encapsulate information relevant to various syntactic and semantic tasks out-of-the-box (Tenney et al., 2019; Hewitt and Manning, 2019).",
"Quantifying this latent information has become the task of probes models which take frozen embeddings as input and are parametrized as lightly as possible (e.g. linear transformations).",
"Recent proposals for edge probing (Tenney et al., 2019) and structural probing (Hewitt and Manning, 2019) have enabled analyses beyond classification tasks, including graphical tasks such as dependency parsing.",
"They are able to extract dependency graphs from embeddings, however these are either undirected (Hewitt and Manning, 2019; Hall Maudslay et al., 2020) or unlabeled (Kulmizev et al., 2020), thereby capturing only a subset of the full task.",
"In this work, we investigate whether this gap can be filled and ask: Can we construct a lightweight probe which can produce fully directed and labeled dependency trees?",
"Using these trees, we further aim to study the less examined problem of transferability estimation for graphical tasks, extending recent work targeting classification and regression tasks (Nguyen et al., 2020; You et al., 2021).",
"Specifically: How well do our probe's predictions correlate with the transfer performance of a full parser across a diverse set of languages?",
"To answer these questions, we contribute DEPPROBE (Figure 1), the first linear probe to extract directed and labeled dependency trees while using fewer parameters than prior work and three orders of magnitude fewer trainable parameters than a full parser (Section 3).",
"As this allows us to measure labeled attachment scores (LAS), we investigate the degree to which our probe is predictive of cross-lingual transfer performance of a full parser across 13 typologically diverse languages, finding that our approach chooses the best transfer language 94% of the time, outperforming competitive baselines and prior work (Section 4).",
"Finally, we perform an in-depth analysis of which latent information is most relevant for dependency parsing as well as which edges and relations benefit most from the expressivity of the full parser (Section 5).",
"1 1 Code available at https://personads.me/x/acl-2022-code.",
"Given the ubiquitous use of contextualized embeddings (Devlin et al., 2019; Conneau et al., 2020; Xue et al., 2021), practitioners have turned to various methods for analyzing their linguistic features (Rogers et al., 2020).",
"Hewitt and Manning (2019) examine these intrinsic properties in greater detail for English dependency parsing using a structural probe , finding that undirected dependency graphs are recoverable from BERT by learning a linear transformation on its embeddings (Section 3.1).",
"Extending the structural probe of Hewitt and Manning (2019) to 12 languages, Chi et al. (2020) extract undirected dependency graphs from mBERT (Devlin et al., 2019), further showing that head-to-child difference vectors in the learned subspace cluster into relations from the Universal Dependencies taxonomy (de Marneffe et al., 2014).",
"Building on both the structural and tree depth probes (Hewitt and Manning, 2019), Kulmizev et al. (2020) extract directed dependency graphs from mBERT for 13 languages (Section 3.2).",
"Further variations to structural probing include regularization of the linear transformation (Limisiewicz and Marecek, 2021) as well as alternative objective functions (Hall Maudslay et al., 2020).",
"None of the linear probing approaches proposed so far are able to produce full dependency parse trees (i.e., directed and labeled); however, the closer a probe approximates the full task, the better it quantifies relevant information (Hall Maudslay et al., 2020).",
"It would for example be desirable to estimate LAS for parsing a target treebank with a model trained on a different source without having to train a resource-intensive parser (e.g. Dozat and Manning, 2017) on each source candidate.",
"Although performance prediction methods for such scenarios exist, they typically do not cover graph prediction (Nguyen et al., 2020; You et al., 2021).",
"In order to bridge the gap between full parsers and unlabeled probes, in addition to the gap between full fine-tuning and lightweight performance prediction, this work proposes a linear probe which can extract labeled and directed dependency parse trees while using less compute than prior methods (Section 3).",
"We use our probe's LAS to evaluate its predictive power for full parser performance and leverage its linear nature to investigate how dependencies are represented in subspaces of contextual embeddings (Section 5).",
"In order to construct a directed and labeled dependency parse tree for a sentence s consisting of the words { w 0 , . . . , w N } , we require information on the presence or absence of edges between words, the directionality of these edges ( w i , w j ) , and the relationships { r 0 , . . . , r N } which they represent.",
"Using the contextualized embeddings { h 0 , . . . , h N } with h i R e , prior probing work has focused on the first step of identifying edges (Section 3.1) and later directionality (Section 3.2).",
"In this work, we propose a probe which completes the final relational step (Section 3.3) and simultaneously provides a more efficient method for identifying directionality (Section 3.4).",
"The structural probe introduced by Hewitt and Manning (2019) recovers the first piece of information (i.e. the undirected graph) remarkably well.",
"Here, the probe is a linear transformation $B \in \mathbb{R}^{e \times b}$ with $b < e$ which maps contextual embeddings into a subspace in which the distance measure $d_B(h_i, h_j) = \sqrt{(Bh_i - Bh_j)^\top (Bh_i - Bh_j)}$ (Equation 1) between $h_i$ and $h_j$ is optimized towards the distance between two words in the dependency graph $d_P(w_i, w_j)$, i.e., the number of edges between the words.",
"For each sentence, the loss is defined as the mean absolute difference across all word pairs: $\mathcal{L}_B(s) = \frac{1}{N^2} \sum_{i=0}^{N} \sum_{j=0}^{N} \left| d_P(w_i, w_j) - d_B(h_i, h_j) \right|$ (Equation 2).",
"In order to extract an undirected dependency graph, one computes the distances for a sentence's word pairs using $d_B$ and extracts the minimum spanning tree (MST; Jarník, 1930; Prim, 1957).",
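To make this step concrete, the following numpy-only sketch computes the probe distances and extracts the MST with Prim's algorithm. The matrices are random stand-ins, and `probe_distances`/`prim_mst` are illustrative names, not the paper's code.

```python
import numpy as np

def probe_distances(H, B):
    """d_B(h_i, h_j) = ||B h_i - B h_j||_2 for all word pairs."""
    Z = H @ B.T                          # (N, b) projections into the subspace
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def prim_mst(D):
    """Undirected minimum spanning tree over distance matrix D (Prim)."""
    N = len(D)
    in_tree = {0}
    edges = []
    while len(in_tree) < N:
        i, j = min(((i, j) for i in in_tree for j in range(N)
                    if j not in in_tree), key=lambda e: D[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))              # 5 words, toy embedding size e = 8
B = rng.normal(size=(3, 8))              # toy subspace size b = 3 < e
edges = prim_mst(probe_distances(H, B))
assert len(edges) == 4                   # a tree over N words has N - 1 edges
```

In the real setting, $H$ would hold mBERT embeddings ($e = 768$) and $B$ the trained probe ($b = 128$).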
"Apart from the structural probe B , Hewitt and Manning (2019) also probe for tree depth.",
"Using another matrix $C \in \mathbb{R}^{e \times c}$, a subspace is learned in which the squared L2 norm of a transformed embedding $\|Ch_i\|_2^2$ corresponds to a word's depth in the tree, i.e., the number of edges from the root.",
"Kulmizev et al. (2020) combine the structural and tree depth probe to extract directed graphs.",
"This directed probe (DIRPROBE) constructs a score matrix $M \in \mathbb{R}^{N \times N}$ for which each entry corresponds to a word pair's negative structural distance $-d_B(h_i, h_j)$.",
"The shallowest node in the depth subspace C is set as root.",
"Entries in $M$ which correspond to an edge between $w_i$ and $w_j$ for which the word depths follow $\|Ch_i\|_2^2 > \|Ch_j\|_2^2$ are set to $-\infty$.",
"A word's depth in subspace C therefore corresponds to edge directionality.",
"The directed graph is built from M using Chu-Liu-Edmonds decoding (Chu and Liu, 1965; Edmonds, 1967).",
"DIRPROBE extracts directed dependency parse trees; however, it would require additional complexity to label each edge with a relation (e.g., using an additional probe).",
"In the following, we propose a probe which can extract both directionality and relations while using fewer parameters and no dynamic programming-based graph-decoding algorithm.",
"The incoming edge of each word w i is governed by a single relation.",
"As such, the task of dependency relation classification with $l$ relations can be simplified to a labeling task using a linear transformation $L \in \mathbb{R}^{e \times l}$ for which the probability of a word's relation $r_i$ being of class $l_k$ is given by $p(r_i = l_k \mid w_i) = \mathrm{softmax}(Lh_i)_k$ (Equation 3), and optimization uses standard cross-entropy loss given the gold label $r_i$ for each word $w_i$: $\mathcal{L}_L(s) = -\frac{1}{N} \sum_{i=0}^{N} \ln p(r_i \mid w_i)$ (Equation 4).",
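The relational probe amounts to a single linear layer with a softmax and cross-entropy loss. A minimal numpy sketch (random toy matrices; function names are illustrative, not from the authors' code):

```python
import numpy as np

def softmax(z):
    z = z - z.max(-1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def relation_probs(H, L):
    """p(r_i = l_k | w_i) = softmax(L h_i)_k for every word."""
    return softmax(H @ L.T)              # (N, l) class probabilities

def relation_loss(H, L, gold):
    """Mean cross-entropy over the gold relation labels."""
    P = relation_probs(H, L)
    return -np.mean(np.log(P[np.arange(len(gold)), gold]))

rng = np.random.default_rng(1)
H = rng.normal(size=(6, 8))              # 6 words, toy e = 8
L = rng.normal(size=(4, 8))              # toy l = 4 relation classes
gold = np.array([0, 1, 1, 2, 3, 0])

P = relation_probs(H, L)
assert np.allclose(P.sum(-1), 1.0)       # rows are valid distributions
assert relation_loss(H, L, gold) > 0
```

In the paper's setup, $l = 37$ (the UD relation inventory) and $e = 768$ (mBERT).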
"Should dependency relations be encoded in contextualized embeddings, each dimension of the subspace L will correspond to the prevalence of information relevant to each relation, quantifiable using relation classification accuracy (RelAcc).",
"Combining structural probing (Section 3.1) and dependency relation probing (Section 3.3), we propose a new probe for extracting fully directed and labeled dependency trees (DEPPROBE ).",
"It combines undirected graphs and relational information in a computationally efficient manner, adding labels while requiring fewer parameters than prior unlabeled or multi-layer-perceptron-based approaches.",
"As outlined in Algorithm 1 and illustrated in Figure 1, DEPPROBE uses the distance matrix $D_B \in \mathbb{R}^{N \times N}$ derived from the structural probe $B$ in conjunction with the relation probabilities $p(l_k \mid w_i)$ of the relational probe $L$ (line 1).",
"The graph is first rooted using the word $w_r$ for which $p(\mathrm{root} \mid w_r)$ is highest (line 2).",
"Iterating over the remaining words until all $w_j$ are covered in $\mathcal{T}_w$, an edge is drawn to each word $w_j$ from its head $w_i$ based on the minimum distance in $D_B$.",
"The relation $r_j$ for an edge $(w_i, w_j, r_j)$ is determined by taking the relation label $l_k$ which maximizes $p(r_j = l_k \mid w_j)$ with $l_k \neq \mathrm{root}$ (line 6).",
"The edge is then added to the set of labeled tree edges $\mathcal{T}_e$.",
"With edge directionality being inferred as simply pointing away from the root, this procedure produces a dependency graph that is both directed and labeled without the need for additional complexity, running in $O(n^2)$, while dynamic programming-based decoding such as that used by DIRPROBE has runtimes of up to $O(n^3)$ (Stanojevic and Cohen, 2021).",
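The decoding procedure above can be sketched in a few lines of Python, assuming a precomputed distance matrix `D` and per-word relation probabilities `P` (random toys below; `depprobe_decode` is an illustrative name, not the authors' implementation):

```python
import numpy as np

def depprobe_decode(D, P, labels):
    """DEPPROBE inference sketch: root = argmax p(root|w), then greedily
    attach the closest unattached word to the partial tree and label it
    with its most probable non-root relation."""
    root = int(np.argmax(P[:, labels.index("root")]))
    tree_words, tree_edges = {root}, []
    N = len(D)
    while len(tree_words) < N:
        head, child = min(((i, j) for i in tree_words for j in range(N)
                           if j not in tree_words), key=lambda e: D[e])
        probs = P[child].copy()
        probs[labels.index("root")] = -np.inf   # exclude the root label
        rel = labels[int(np.argmax(probs))]
        tree_words.add(child)
        tree_edges.append((head, child, rel))
    return root, tree_edges

labels = ["root", "nsubj", "obj", "det"]        # toy relation inventory
rng = np.random.default_rng(2)
D = rng.random((4, 4)); D = (D + D.T) / 2       # symmetric toy distances
P = rng.random((4, len(labels)))
P /= P.sum(-1, keepdims=True)

root, edges = depprobe_decode(D, P, labels)
assert len(edges) == 3
assert all(rel != "root" for _, _, rel in edges)
```

Note how direction falls out for free: edges always point away from the partial tree, so no Chu-Liu-Edmonds step is needed.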
"Constructing dependency trees from untuned embeddings requires the matrices $B$ and $L$, totaling $e \cdot b + e \cdot l$ trainable parameters.",
"Optimization can be performed using gradient descent on the sum of losses $\mathcal{L}_B + \mathcal{L}_L$.",
"With $l = 37$ relations in UD, this constitutes a substantially reduced training effort compared to prior probing approaches (with subspace dimensionalities $b$ and $c$ typically set to 128) and multiple orders of magnitude fewer fine-tuned parameters than for a full biaffine attention parser.",
"Parsers In our experiments, we use the deep biaffine attention parser (BAP) by Dozat and Manning (2017) as implemented in van der Goot et al. (2021) as an upper bound for MLM-based parsing performance.",
"As it is closest to our work, we further reimplement DIRPROBE (Kulmizev et al., 2020) with b = 128 and c = 128.",
"Note that this approach produces directed, but unlabeled dependency graphs.",
"Finally, we compare both methods to our directed and labeled probing approach, DEPPROBE with b = 128 and l = 37.",
"All methods use mBERT (Devlin et al., 2019) as their encoder ( e = 768).",
"For BAP, training the model includes fine-tuning the encoder's parameters, while for both probes they remain fixed and only the linear transformations are adjusted.",
"This results in 183M tuned parameters for BAP, 197k for DIRPROBE and 127k for DEPPROBE .",
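The probe parameter counts can be reproduced with simple arithmetic from the dimensionalities stated above (a quick sanity check, not the paper's code):

```python
# Trainable-parameter arithmetic for the probes on mBERT (e = 768), with
# the subspace sizes used in the paper: b = c = 128 and l = 37 UD relations.
e, b, c, l = 768, 128, 128, 37

dirprobe_params = e * b + e * c   # structural + depth transformations
depprobe_params = e * b + e * l   # structural + relational transformations

assert dirprobe_params == 196_608   # ~197k, as reported for DIRPROBE
assert depprobe_params == 126_720   # ~127k, as reported for DEPPROBE
```

The 36% reduction comes entirely from swapping the 128-dimensional depth subspace for the 37-dimensional relation subspace.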
"Hyper-parameters are set to the values reported by the authors, 2 while for DEPPROBE we perform an initial tuning step in Section 4.2.",
"Target Treebanks As targets, we use the set of 13 treebanks proposed by Kulmizev et al. (2019), using versions from Universal Dependencies v2.8 (Zeman et al., 2021).",
"They are diverse with respect to language family, morphological complexity and script (Appendix A).",
"This set further includes EN-EWT (Silveira et al., 2014) which has been used in prior probing work for hyperparameter tuning, allowing us to tune DEPPROBE on the same data.",
"Metrics We report labeled attachment scores (LAS) wherever possible (BAP, DEPPROBE ) and unlabeled attachment scores (UAS) for all methods.",
"For DEPPROBE 's hyperparameters, we evaluate undirected, unlabeled attachment scores (UUAS) as well as relation classification accuracy (RelAcc).",
"One notable difference is that, since we are interested in the full parsing task, we include punctuation during both training and evaluation, contrary to prior probing work which excludes all punctuation (Hewitt and Manning, 2019; Kulmizev et al., 2020; Hall Maudslay et al., 2020).",
"Training Each method is trained on each target treebank's training split and is evaluated on the test split.",
"For cross-lingual transfer, models trained on one language are evaluated on the test splits of all other languages without any further tuning.",
"For DEPPROBE tuning (Section 4.2) we use the development split of EN-EWT.",
"BAP uses the training schedule implemented in van der Goot et al. (2021), while DIRPROBE and DEPPROBE use AdamW (Loshchilov and Hutter, 2019) with a learning rate of $10^{-3}$ which is reduced by a factor of 10 each time the loss plateaus (see also Hewitt and Manning, 2019).",
"For better comparability, we use the best single layer reported by Kulmizev et al. (2020) instead of the weighted sum over all layers.",
"Both probing methods are implemented using PyTorch (Paszke et al., 2019) and use mBERT as implemented in the Transformers library (Wolf et al., 2020).",
"Each model is trained with three random initializations of which we report the mean.",
"As prior work has repeatedly found that MLM layers encode different linguistic information, the layers which are most relevant for a probe's task are typically first identified (Tenney et al., 2019; Hewitt and Manning, 2019).",
"Following this paradigm, we train DEPPROBE on embeddings from each layer of mBERT.",
"Layer 0 is equivalent to the first, non-contextualized embeddings while layer 12 is the output of the last attention heads.",
"The probe is trained on EN-EWT and evaluated on its development split using UUAS for the structural transformation B (akin to Hewitt and Manning, 2019) as well as RelAcc for the relational transformation L .",
"Figure 2 shows that structure is most prevalent around layer 6 at 78 UUAS, corroborating the layer 6-8 range identified by prior work (Tenney et al., 2019; Hewitt and Manning, 2019; Chi et al., 2020).",
"Dependency relations are easiest to retrieve at around layer 7 with an accuracy of 86%.",
"The standard deviation across initializations is around 0.1 in both cases.",
"Based on these tuning results, we use layer 6 for structural probing and layer 7 for relational probing in the following experiments.",
"Figure 3 lists UAS for all methods and LAS for BAP and DEPPROBE, both on target-language test data (=L) and zero-shot transfer targets (≠L).",
"Figure 3c further shows the mean results for each setting.",
"Unsurprisingly, the full parametrization of BAP performs best, with in-language scores of 88 LAS and 91 UAS.",
"For zero-shot transfer, these scores drop to 35 LAS and 52 UAS, with some language pairs seeing differences of up to 85 points: e.g., JA→JA (93 LAS) versus AR→JA (8 LAS) in Figure 3a.",
"This again confirms the importance of selecting appropriate source data for any given target.",
"Both probes, with their limited parametrization, fall short of the full parser's performance, but still reach up to 73 LAS and 79 UAS.",
"DIRPROBE has a mean in-language UAS which is 3 points higher than for DEPPROBE , attributable to the more complex decoder.",
"Due to DIRPROBE 's output structures being unlabeled, we cannot compare LAS.",
"DEPPROBE reaches a competitive 67 UAS despite its much simpler decoding procedure and appears to be more stable for zero-shot transfer as it outperforms DIRPROBE by around 2 UAS while maintaining a lower standard deviation.",
"Most importantly, it produces directed and labeled parses such that we can fully compare it to BAP.",
"Considering that DEPPROBE has more than three orders of magnitude fewer tunable parameters, a mean in-language LAS of 60 is considerable and highlights the large degree of latent dependency information in untuned, contextual embeddings.",
"For zero-shot transfer, the performance gap to BAP narrows to 13 LAS and 14 UAS.",
"Given that DEPPROBE provides a highly parameter-efficient method for producing directed, labeled parse trees, we next investigate whether its performance patterns are indicative of the full parser's performance and could aid in selecting an appropriate source treebank for a given target without having to train the 183 million parameters of BAP.",
"Setup Comparing UAS and LAS of BAP with the respective scores of DEPPROBE and DIRPROBE, we compute the Pearson correlation coefficient $\rho$ and the weighted Kendall's $\tau_w$ (Vigna, 2015).",
"The latter can be interpreted as corresponding to a correlation in $[-1, 1]$, and given a probe ranking one source treebank over another, the probability of this higher rank corresponding to higher performance in the full parser is $\frac{\tau_w + 1}{2}$.",
"Table 1 (Transfer Correlation with BAP) reports $\rho$ and $\tau_w$ for LAS and UAS: L2V reaches $\rho$ = .86, $\tau_w$ = .72 (LAS) and $\rho$ = .80, $\tau_w$ = .70 (UAS); DIRPROBE reaches $\rho$ = .91, $\tau_w$ = .81 (UAS only); DEPPROBE reaches $\rho$ = .97, $\tau_w$ = .88 (LAS) and $\rho$ = .94, $\tau_w$ = .85 (UAS).",
"All reported correlations are significant at $p < 0.001$.",
"Similarly, differences between correlation coefficients are also significant at $p < 0.001$, as measured using a standard Z-test.",
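The ranking-probability interpretation of $\tau_w$ can be checked numerically against the reported scores (trivial arithmetic, shown only to make the mapping explicit):

```python
# The weighted Kendall's tau_w maps to the probability that a source ranked
# higher by the probe also yields higher full-parser performance.
def rank_agreement_prob(tau_w):
    return (tau_w + 1) / 2

# tau_w = .88 for DEPPROBE's LAS corresponds to picking the better source
# 94% of the time; the L2V baseline (tau_w = .70 on UAS) reaches only 85%.
assert abs(rank_agreement_prob(0.88) - 0.94) < 1e-9
assert abs(rank_agreement_prob(0.70) - 0.85) < 1e-9
```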
"In addition to the probes, we also compare against a method commonly employed by practitioners by using the cosine similarity of typological features from the URIEL database as represented in lang2vec (Littell et al., 2017; L2V) between our 13 targets (details in Appendix A).",
"Results Table 1 shows that the L2V baseline correlates with final parser performance, but that actual dependency parses yield significantly higher correlation and predictive power.",
"For UAS, we find that despite having similar attachment scores, DEPPROBE performance correlates higher with BAP than that of DIRPROBE , both with respect to predicting the ability to parse any particular language as well as ranking the best source to transfer from.",
"Using the labeled parse trees of DEPPROBE results in almost perfect correlation with BAP's LAS at $\rho$ = .97 as well as a $\tau_w$ of .88, highlighting the importance of modeling the full task and including dependency relation information.",
"Using Kendall's $\tau_w$ with respect to LAS, we can estimate that selecting the highest performing source treebank from DEPPROBE to train the full parser will be the best choice 94% of the time for any treebank pair.",
"Why does DEPPROBE predict transfer performance more accurately than DIRPROBE despite its simpler architecture?",
"As each probe consists only of two matrices optimized to extract tree structural, depth, or relational information, we can directly compare the similarity of all task-relevant parameters across languages against the full BAP's cross-lingual performance.",
"Table 2 (SSA Correlation with BAP) reports $\rho$ and $\tau_w$ for LAS and UAS: SSA-STRUCT reaches $\rho$ = .68, $\tau_w$ = .42 (LAS) and $\rho$ = .60, $\tau_w$ = .43 (UAS); SSA-DEPTH reaches $\rho$ = .62, $\tau_w$ = .34 (LAS) and $\rho$ = .53, $\tau_w$ = .35 (UAS); SSA-REL reaches $\rho$ = .73, $\tau_w$ = .55 (LAS) and $\rho$ = .65, $\tau_w$ = .53 (UAS).",
"In order to measure the similarity of probe matrices from different languages, we use mean subspace angles (Knyazev and Argentati, 2002; SSA), similarly to prior probing work (Chi et al., 2020).",
"Intuitively, SSA quantifies the energy required to transform one matrix into another by converting the singular values of the transformation into angles between 0° and 90°.",
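One standard way to compute such principal angles (via orthonormal bases and an SVD; this is a generic construction, not necessarily the exact formulation of Knyazev and Argentati used by the authors) can be sketched with numpy:

```python
import numpy as np

def mean_ssa_degrees(A, B):
    """Mean principal angle (degrees) between the subspaces spanned by the
    rows of two probe matrices, each stored as (b, e)."""
    Qa, _ = np.linalg.qr(A.T)            # orthonormal basis of A's row space
    Qb, _ = np.linalg.qr(B.T)
    s = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return float(np.degrees(np.arccos(s)).mean())

rng = np.random.default_rng(3)
B_src = rng.normal(size=(16, 128))       # toy 16-dim subspace of a 128-dim space
B_tgt = rng.normal(size=(16, 128))

assert mean_ssa_degrees(B_src, B_src) < 1e-3   # identical subspaces: ~0 degrees
angle = mean_ssa_degrees(B_src, B_tgt)
assert 0.0 < angle < 90.0                      # angles lie between 0 and 90
```

Lower mean angles indicate that two languages' probes carve out similar subspaces of the embedding space.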
"SSAs are computed for the structural probe (SSA-STRUCT), which is equivalent in both methods, DIRPROBE's depth probe (SSA-DEPTH), and DEPPROBE's relational probe (SSA-REL).",
"We use Pearson's $\rho$ and the weighted Kendall's $\tau_w$ to measure the correlation between cross-lingual probe SSAs and BAP performance.",
"This allows us to investigate which type of information is most important for final parsing performance.",
"From Table 2, we can observe that SSAs between probes of different languages correlate less with transfer performance than UAS or LAS (Table 1), underlining the importance of extracting full parses.",
"Among the different types of dependency information, we observe that SSAs between the relational probes used by DEPPROBE correlate highest with final performance at .73 for LAS and .65 for UAS.",
"Structural probing correlates significantly both with BAP's LAS and UAS at .68 and .60 respectively, but to a lesser degree.",
"Probes for tree depth have the lowest correlation at .62 for LAS and .53 for UAS.",
"Despite tree depth being a distinctive syntactic feature for language pairs such as the agglutinative Turkish and the more function word-based English, depth is either not as relevant for BAP or may be represented less consistently in embeddings across languages, leading to lower correlation between SSAs and final performance.",
"In the following analysis we investigate performance differences between the full BAP and DEPPROBE across all 13 targets in order to identify finer-grained limitations of the linear approach and also which kinds of dependencies benefit from full parameter tuning and non-linear decoding.",
"Edge Length Figure 5 shows offsets between gold and predicted head positions.",
"The majority of heads are predicted correctly with a ratio of 92.1% for BAP and 69.7% for DEPPROBE .",
"Both methods are less accurate in predicting long-distance edges with lengths of 150-250, resulting in offsets of ca. 100 (aggregated into the < and > bins in Figure 5).",
"Most likely, this is due to these edges' overall sparsity in the data (only 6.7% of edges cover a distance of more than 10 tokens) as well as their higher overall subjective difficulty.",
"Nonetheless, BAP is able to capture such dependencies more accurately as shown by its lower error rates for long edges compared to those of DEPPROBE .",
"In addition to very distant head nodes, BAP also seems to recover more of the nuanced edges in the [ 5 , 5] interval.",
"This range is particularly impactful for downstream performance as the edges in our target treebanks have a median length of 2 (mean length 3.62 with $\sigma$ = 5.70).",
"The structural probing loss (Equation 2) and the simple linear parametrization of the probe are able to capture a large number of these edges as evidenced by overall low error rates, but lack the necessary expressivity in order to accurately capture all cases.",
"Relations Looking at RelAcc for each category in the UD taxonomy (de Marneffe et al., 2014) in Figure 4 allows us to identify where higher parametrization and more complex decoding are required for high parsing performance.",
"While we again observe that performance on all relations is higher for BAP than for DEPPROBE , a large subset of the relations is characterized by comparable or equivalent performance.",
"These include simple punctuation ( punct ), but also the majority of function word relations such as aux , case , clf , det and mark as well as coordination (e.g. cc , conj ).",
"We attribute the high performance of DEPPROBE on these relations to the fact that the words used to express them typically stem from closed classes and consequently similar embeddings: e.g., determiners the/a/an (EN), case markers di/da (IT).",
"Relations involving open class words are also captured by the linear probe.",
"These include the modifiers advmod , amod and discourse as well as some nominal relations such as expl , nmod , nsubj and nummod .",
"As prior work has identified PoS information in untuned embeddings (Tenney et al., 2019), the modifiers are likely benefiting from the same embedding features.",
"The fact that DEPPROBE nonetheless identifies syntax-specific relations such as nsubj , and to a lesser degree obj and obl , indicates the presence of context-dependent syntactic information in addition to PoS.",
"The larger the set of possible words for a relation, the more difficult it is to capture with the probe.",
"The functional cop (copula) relation provides an informative example: In English (and related languages), it is almost exclusively assigned to the verb be resulting in 85% RelAcc, while in non-European languages such as Japanese it can be ascribed to a larger set which often overlaps with other relations (e.g. aux ) resulting in 65% RelAcc.",
"BAP adapts to each language by tuning all parameters while DEPPROBE , using fixed embeddings, reaches competitive scores on European languages, but performs worse in non-European settings (details in Appendix B).",
"Besides capturing larger variation in surface forms, BAP also appears to benefit from higher expressivity when labeling clausal relations such as ccomp and csubj.",
"These relations are often characterized not only by surface form variation, but also by PoS variation of head/child words and overlap with other relation types (e.g. clausal subjects stem from verbs or adjectives), making them difficult to distinguish in untuned embeddings.",
"Simultaneously, they often span longer edges compared to determiners or other function words.",
"Another relation of particular importance is root as it determines the direction of all edges predicted by DEPPROBE .",
"An analysis of the 14% RelAcc difference to BAP reveals that both methods most frequently confuse root with relations that fit the word's PoS, e.g. NOUN roots with nsubj or nmod .",
"For the majority PoS VERB (70% of all root relations), we further observe that DEPPROBE predicts twice as many xcomp and parataxis confusions compared to BAP, likely attributable to their root-similar function in subclauses.",
"Since their distinction hinges on context, the full parser, which also tunes the contextual encoder, is better equipped to differentiate between them.",
"The last category in which BAP outperforms DEPPROBE includes rare, treebank-specific relations such as reparandum (reference from a corrected word to an erroneous one).",
"Again, the larger number of tunable parameters in addition to the non-linear decoding procedure of the full parser enable it to capture more edge cases while DEPPROBE 's linear approach can only approximate a local optimum for any relations which are represented non-linearly.",
"Efficiency When using a probe for performance prediction, it is important to consider its computational efficiency over the full parser's fine-tuning procedure.",
"In terms of tunable parameters, DEPPROBE has 36% fewer parameters than DIRPROBE and three orders of magnitude fewer parameters than BAP.",
"In practice, this translates to training times in the order of minutes instead of hours.",
"Despite its simple $O(n^2)$ decoding procedure compared to dynamic programming-based graph-decoding algorithms ($O(n^3)$), DEPPROBE is able to extract full dependency trees which correlate highly with downstream performance while maintaining high efficiency (Section 4.4).",
"With DEPPROBE , we have introduced a novel probing procedure to extract fully labeled and directed dependency trees from untuned, contextualized embeddings.",
"Compared to prior approaches which extract structures lacking labels, edge directionality or both, our method retains a simple linear parametrization which is in fact more lightweight and does not require complex decoders (Section 3).",
"To the best of our knowledge, this is the first linear probe which can be used to estimate LAS from untuned embeddings.",
"Using this property, we evaluated the predictive power of DEPPROBE on cross-lingual parsing with respect to the transfer performance of a fully fine-tuned biaffine attention parser.",
"Across the considered 169 language pairs, DEPPROBE is surprisingly effective: Its LAS correlates significantly ($p < 0.001$) and most highly compared with unlabeled probes or competitive language feature baselines, choosing the best source treebank in 94% of all cases (Section 4).",
"Leveraging the linearity of the probe to analyze structural and relational subspaces in mBERT embeddings, we find that dependency relation information is particularly important for parsing performance and cross-lingual transferability, compared to both tree depth and structure.",
"DEPPROBE , which models structure and relations, is able to recover many functional and syntactic relations with competitive accuracy to the full BAP (Section 5).",
"Finally, the substantially higher efficiency of DEPPROBE with respect to time and compute make it suitable for accurate parsing performance prediction.",
"As contemporary performance prediction methods lack formulations for graphical tasks and handcrafted features such as lang2vec are not available in all transfer settings (e.g. document domains, MLM encoder choice), we see linear approaches such as DEPPROBE as a valuable alternative.",
"We would like to thank the NLPnorth group for insightful discussions on this work, in particular Elisa Bassignana and Mike Zhang.",
"Additional thanks to ITU's High-performance Computing Cluster team.",
"Finally, we thank the anonymous reviewers for their helpful feedback.",
"This research is supported by the Independent Research Fund Denmark (Danmarks Frie Forskningsfond; DFF) grant number 9063-00077B."
] | [
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other"
] |
[
"Multilingual neural machine translation has shown the capability of directly translating between language pairs unseen in training, i.e. zero-shot translation.",
"Despite being conceptually attractive, it often suffers from low output quality.",
"The difficulty of generalizing to new translation directions suggests the model representations are highly specific to those language pairs seen in training.",
"We demonstrate that a main factor causing the language-specific representations is the positional correspondence to input tokens.",
"We show that this can be easily alleviated by removing residual connections in an encoder layer.",
"With this modification, we gain up to 18.5 BLEU points on zero-shot translation while retaining quality on supervised directions.",
"The improvements are particularly prominent between related languages, where our proposed model outperforms pivot-based translation.",
"Moreover, our approach allows easy integration of new languages, which substantially expands translation coverage.",
"By thorough inspections of the hidden layer outputs, we show that our approach indeed leads to more language-independent representations.",
"A multilingual neural machine translation (NMT) system encapsulates several translation directions in a single model (Firat et al., 2017; Johnson et al., 2017).",
"These multilingual models have been shown to be capable of directly translating between language pairs unseen in training (Johnson et al., 2017; Ha et al., 2016).",
"Zero-shot translation as such is attractive both practically and theoretically.",
"Compared to pivoting via an intermediate language, the direct translation halves inference-time computation and circumvents error propagation.",
"Code and scripts available in: https://github.",
"Considering data collection, zero-shot translation does not require parallel data for a potentially quadratic number of language pairs, which is sometimes impractical to acquire especially between low-resource languages.",
"Using less supervised data in turn reduces training time.",
"From a modeling perspective, zero-shot translation calls for language-agnostic representations, which are likely more robust and can benefit low-resource translation directions.",
"Despite the potential benefits, achieving high-quality zero-shot translation is a challenging task.",
"Prior works (Arivazhagan et al., 2019; Zhang et al., 2020a; Rios et al., 2020) have shown that standard systems tend to generate poor outputs, sometimes in an incorrect target language.",
"It has been further shown that the encoder-decoder model captures spurious correlations between language pairs with supervised data (Gu et al., 2019).",
"During training, the model only learns to encode the inputs in a form that facilitates translating the supervised directions.",
"The decoder, when prompted for zero-shot translation to a different target language, has to handle inputs distributed differently from what was seen in training, which inevitably degrades performance.",
"Ideally, the decoder could translate into any target language it was trained on given an encoded representation independent of input languages.",
"In practice, however, achieving a language-agnostic encoder is not straightforward.",
"In a typical Transformer encoder (Vaswani et al., 2017), the output has a strong positional correspondence to input tokens.",
"For example in the English sentence in Figure 1, encoder outputs h 1 , 2 , 3 correspond to a , big , cat respectively.",
"While this property is essential for tasks such as sequence tagging, it hinders the creation of language-independent representations.",
"Even assuming that the input embeddings were fully mapped on a lexical level (e.g. cat and gato have the same embedding vector), the resulting encoder outputs are still language-specific due to the word order differences.",
"In this light, we propose to relax this structural constraint and offer the model some freedom of word reordering already in the encoder.",
"Our contributions are as follows: We show that the positional correspondence to input tokens hinders zero-shot translation.",
"We achieve considerable gains on zero-shot translation quality by only removing residual connections once in a middle encoder layer.",
"Our proposed model allows easy integration of new languages, which enables zero-shot translation between the new language and all other languages previously trained on.",
"Based on a detailed analysis of the model's intermediate outputs, we show that our approach creates more language-independent representations both on the token and sentence level.",
"Zero-shot inference relies on a model's generalizability to conditions unseen in training.",
"In the context of zero-shot translation, the input should ideally be encoded into a language-agnostic representation, based on which the decoder can translate into any required target language, similar to the notion of an interlingua.",
"Nevertheless, the ideal of any input language, same representation cannot be easily fulfilled with a standard encoder, as we have shown in the motivating example in Figure 1.",
"We observe that the encoder output has a positional correspondence to input tokens.",
"Formally, given input token embeddings ( x 1 , . . . , x n ) , in the encoder output ( h 1 , . . . , h n ) , the i -th hidden state h i mostly contains information about x i .",
"While this structure is prevalent and is indeed necessary in many tasks such as contextual embedding and sequence tagging, it is less suitable when considering language-agnostic representations.",
"As the same sentence in different languages is likely to have varying length and word order, the same semantic meaning will be encoded into different hidden state sequences.",
"There are two potential causes of this positional correspondence: residual connections and encoder self-attention alignment.",
"We further hypothesize that, by modifying these two components accordingly, we can alleviate the positional correspondence.",
"Specifically, we set one encoder layer free from these constraints, so that it could create its own output ordering instead of always following a one-to-one mapping with its input.",
"In the original Transformer architecture from Vaswani et al. (2017), residual connections (He et al., 2016) are applied in every layer, for both the multihead attention and the feed-forward layer.",
"By adding the input embeddings to the layer outputs, the residual connections are devised to facilitate gradient flow to bottom layers of the network.",
"However, since the residual connections are present throughout all layers, they strictly impose a one-to-one alignment between the inputs and outputs.",
"For the encoder, this causes the outputs to correspond positionally to the input tokens.",
"We propose to relax this condition, such that the encoder outputs become less position-specific and hence less language-specific.",
"Meanwhile, to minimize the impact on the model architecture and ensure gradient flow, we limit this change to only one encoder layer, and only its multihead attention layer.",
"Figure 2(b) gives a visualization of this change in comparison to the original encoder in Figure 2(a).",
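The residual removal described above can be sketched in a few lines. The following is a minimal single-head NumPy illustration of one self-attention sub-layer, not the authors' actual implementation; layer normalization, multiple heads, and the feed-forward sub-layer are omitted for brevity:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_sublayer(x, Wq, Wk, Wv, residual=True):
    """One single-head self-attention sub-layer on x: (timesteps, d).

    With residual=True (the standard Transformer), output position i is
    dominated by x[i], enforcing a one-to-one positional correspondence.
    residual=False is the proposed change, applied in only one middle
    encoder layer, which frees the layer to reorder its outputs.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(x.shape[-1]))  # (timesteps, timesteps)
    out = attn @ v
    return x + out if residual else out
```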
"Besides the residual connections, another potential reason for the positional correspondence is the encoder self-attention alignment.",
"Via the self-attention transform, each position is a weighted sum from all input positions.",
"While the weights theoretically can distribute over all input positions, they are often concentrated locally, particularly with output position i focusing on input position i .",
"Previous works on various sequence tasks (Yang et al., 2020; Zhang et al., 2020b) have shown heavy weights on the diagonal of the encoder self-attention matrices.",
"In this light, the motivation of our method starts with the formation of the self-attention weight matrix: score(Q, K) = QK^T, where Q and K are the query and key matrices.",
"This n × n matrix encapsulates the dot product of each position against all n positions.",
"Since the dot product is used as a similarity measure, we hypothesize that when Q and K are similar, the matrix will have heavy weights on the diagonal, thereby causing the positional correspondence.",
"Indeed, Q and K are likely similar since they are projections from the same input.",
"We therefore propose to reduce this similarity by replacing the projection base of the self-attention query with a set of sinusoidal positional encodings.",
"Moreover, to avoid possible interaction with positional information retained in K, we use a wavelength for this set of sinusoidal encodings that is different from the one added onto the encoder input embeddings.",
"Figure 2(c) contrasts our position-based attention query with the original model in Figure 2(a), where the key, query, and value are all projected from the input to the self-attention layer.",
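A minimal NumPy sketch of the position-based attention query follows. This is an illustration of the technique, not the authors' code; the query wavelength of 100 matches the setting described in the experiments:

```python
import numpy as np

def sinusoidal_encoding(n_pos, d, wavelength=10000.0):
    """Standard sinusoidal positional encodings, parameterized by the
    base wavelength so the query can use a different one (100) from
    the encodings added to the input embeddings (10000)."""
    pos = np.arange(n_pos)[:, None]
    i = np.arange(d // 2)[None, :]
    angle = pos / wavelength ** (2 * i / d)
    enc = np.zeros((n_pos, d))
    enc[:, 0::2] = np.sin(angle)
    enc[:, 1::2] = np.cos(angle)
    return enc

def position_based_scores(x, Wq, Wk, query_wavelength=100.0):
    """Self-attention scores QK^T / sqrt(d) where Q is projected from
    positional encodings instead of the input x, reducing the Q/K
    similarity that concentrates attention weight on the diagonal."""
    n, d = x.shape
    q = sinusoidal_encoding(n, d, query_wavelength) @ Wq
    k = x @ Wk
    return q @ k.T / np.sqrt(d)  # (n, n), before softmax
```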
"Our experiments cover high- and low-resource languages and different data conditions.",
"We choose an English-centered setup, where we train on X–English parallel data and test zero-shot translation between all non-English languages.",
"This scenario is particularly difficult for zero-shot translation, as half of the target-side training data is in English.",
"Indeed, recent works (Fan et al., 2020; Rios et al., 2020) have outlined downsides of the English-centered configuration.",
"Nevertheless, intrigued by the potential of covering N² translation directions by training on only 2N directions, we still explore this scenario.",
"Our datasets originate from three sources: IWSLT 2017 (Cettolo et al., 2017), Europarl v7 (Koehn, 2005), and PMIndia (Haddow and Kirefu, 2020).",
"The IWSLT and Europarl data are taken from the MMCR4NLP corpus (Dabre and Kurohashi, 2017).",
"An overview of the datasets is in Table 1.",
"To investigate the role of training data diversity, we construct two conditions for Europarl, where one is fully multiway aligned, and the other has no multiway alignment at all.",
"Both are subsets of the full dataset with 1M parallel sentences per direction.",
"Moreover, we study the challenging case of PMIndia, with little training data, distinct writing systems, and a large number of agglutinative languages that are especially difficult to translate into.",
"Table 2 outlines the languages in our experiments.",
"Training Details By default we use Transformer (Vaswani et al., 2017) with 5 encoder and decoder layers.",
"For the Europarl datasets with more training data, we enlarge the model to 8 encoder and decoder layers.",
"To control the output language, we use a target-language-specific begin token as well as language embeddings concatenated with the decoder word embeddings, similar to Pham et al. (2019).",
"We use 8 attention heads, an embedding size of 512, an inner size of 2048, a dropout rate of 0.2, and a label smoothing rate of 0.1.",
"We use the learning rate schedule from Vaswani et al. (2017) with 8,000 warmup steps.",
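The learning rate schedule from Vaswani et al. (2017) can be written out explicitly. A minimal sketch, with d_model = 512 and 8,000 warmup steps following the settings stated above:

```python
def transformer_lr(step, d_model=512, warmup=8000):
    """Inverse-square-root schedule from Vaswani et al. (2017):
    lr = d_model^-0.5 * min(step^-0.5, step * warmup^-1.5),
    i.e. linear warmup for `warmup` steps, then ~1/sqrt(step) decay."""
    step = max(step, 1)  # guard against step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```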
"The source and target word embeddings are shared.",
"Furthermore, in the decoder, the parameters of the projection from hidden states to the vocabulary are tied with the transposition of the word lookup table.",
"Moreover, we include variational dropout (Gal and Ghahramani, 2016) as a comparison, since a previous work on zero-shot translation (Pham et al., 2019) used it instead of the standard element-wise dropout.",
"With variational dropout, all timesteps in a layer output share the same mask.",
"This differs from standard dropout, where each element in each timestep is dropped independently at the same dropout rate.",
"We hypothesize that this technique helps reduce the positional correspondence with input tokens by preventing the model from relying on specific word orders.",
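As a concrete illustration of the shared-mask behavior, here is a minimal NumPy sketch of variational dropout; this illustrates the technique, not the authors' training code:

```python
import numpy as np

def variational_dropout(x, p, rng):
    """Variational dropout (Gal and Ghahramani, 2016) on a sequence
    x of shape (timesteps, features): a single Bernoulli mask over the
    feature dimension is shared by ALL timesteps, so a dropped feature
    is missing for the entire sequence.  Standard dropout would instead
    sample an independent mask per element."""
    keep = (rng.random(x.shape[-1]) >= p).astype(x.dtype)
    return x * keep / (1.0 - p)  # inverted scaling, broadcast over time
```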
"We train for 64 epochs and average the weights of the 5 best checkpoints ordered by dev loss.",
"By default, we only include the supervised translation directions in the dev set.",
"The only exception is the Europarl-full case, where we also include the zero-shot directions in dev set for early stopping.",
"When analyzing model hidden representations through classification performance (Subsections 5.1 and 5.2), we freeze the trained encoder-decoder weights and train the classifier for 5 epochs.",
"The classifier is a linear projection from the encoder hidden dimension to the number of classes, followed by softmax activation.",
"As the classification task is lightweight and convergence is fast, we reduce the warmup steps to 400 while keeping the learning rate schedule unchanged.",
"The concatenation of the language embedding and the decoder word embedding is then projected down to the embedding dimension to form the input embedding to the decoder.",
"Our proposed modifications are applied in the self-attention layer of a middle encoder layer.",
"Specifically, we choose the third layer of the 5-layer models and the fifth layer of the 8-layer models, respectively.",
"We use Residual to indicate residual removal and Query the position-based attention query.",
"For the projection basis of the attention query, we use positional encodings with a wavelength of 100.",
"Zero-Shot vs. Pivoting We compare zero-shot translation performance with pivoting, i.e., directly translating the unseen direction X→Y vs. using English as an intermediate step, as in X→English→Y. The pivoting is done by the baseline multilingual model, which we expect to have similar performance to separately trained bilingual models.",
"For a fair comparison, in the Europarl-full case, pivoting is done by a baseline model trained till convergence with only supervised dev data rather than the early-stopped one.",
"For the languages with Latin script, we first apply the Moses tokenizer and truecaser, and then learn byte pair encoding (BPE) using subword-nmt (Sennrich et al., 2016).",
"For the Indian languages, we use the IndicNLP library 3 and SentencePiece (Kudo and Richardson, 2018) for tokenization and BPE respectively.",
"We choose 40K merge operations and only use tokens with minimum frequency of 50 in the training set.",
"For IWSLT, we use the official tst2017 set.",
"For PMIndia, as the corpus does not come with dev and test sets, we partition the dataset ourselves by taking a multiway subset of all languages, resulting in 1,695 sentences in the dev and test set each.",
"For Europarl, we use the test sets in the MMCR4NLP corpus (Dabre and Kurohashi, 2017).",
"The outputs are evaluated by sacreBLEU 4 (Post, 2018).",
"To simulate the case of later adding a new language, we learn a new BPE model for the new language and keep the previous model unchanged.",
"Due to the increased number of unique tokens, the vocabulary of the previously trained model is expanded.",
"In this case, for the model weights tied to the word lookup table size, we initialize the new entries as the average of the existing embeddings perturbed by random noise.",
"The IndicNLP library: https://github.com/anoopkunchukuttan/indic_nlp_library",
"We use BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a+version.1.4.12 by default.",
"On PMIndia, we use the SPM tokenizer (tok.spm instead of tok.13a) for better tokenization of the Indic languages.",
"At the time of publication, the argument tok.spm is only available as a pull request to sacreBLEU: https://github.com/mjpost/sacrebleu/pull/118 .",
"We applied the pull request locally to use the SPM tokenizer.",
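The initialization of the newly added embedding rows described above can be sketched as follows. This is a NumPy illustration; the noise scale 0.01 is an assumed value, not given in the text:

```python
import numpy as np

def expand_embeddings(emb, n_new, noise_std=0.01, seed=0):
    """Expand a word lookup table for newly added vocabulary entries:
    each new row is the mean of the existing embeddings plus small
    Gaussian noise, leaving the original rows untouched."""
    rng = np.random.default_rng(seed)
    mean = emb.mean(axis=0, keepdims=True)
    new_rows = mean + noise_std * rng.standard_normal((n_new, emb.shape[1]))
    return np.concatenate([emb, new_rows], axis=0)
```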
"Our approach substantially improves zero-shot translation quality, as summarized in Table 3.",
"The first observation is that the modification of residual connections is essential for zero-shot performance.",
"We gain 6.9 and up to 18.5 BLEU points over the baseline on IWSLT and Europarl (Rows 1 to 4), respectively.",
"When inspecting the model outputs, we see that the baseline often generates off-target translation in English, in line with observations from prior works (Arivazhagan et al., 2019; Zhang et al., 2020a).",
"Our proposed models are not only consistent in generating the required target languages in zero-shot conditions, but also show competitive performance to pivoting via English.",
"The effects are particularly prominent between related languages.",
"As shown in Table 4, on Europarl, zero-shot translation outperforms pivoting when translating between languages from the same families.",
"Due to the large number of languages, we report the BLEU scores averaged over all directions here, and refer the readers to the appendix for detailed results.",
"We also experimented with: 1) removing the residual in more layers, but observed a large negative impact on convergence; and 2) replacing the residual connections by mean-pooled sentence embeddings, but the gains on zero-shot directions were smaller than with removing the residual connections.",
"This is an attractive property, especially when computational resources are limited at inference time.",
"In the very challenging case of PMIndia (Row 5 ), while removing residual does improve the zero-shot performance, the score of 2.3 indicates that the outputs are still far from being useful.",
"Nonetheless, we are able to remedy this by further regularization as we will present in Subsection 4.1.",
"Contrary to the large gains by removing residual connections, the attention query modification is not effective when combined with residual removal.",
"This suggests that the primary source of position-specific representation is the residual connections.",
"Moreover, by contrasting Rows 2 and 3 of Table 3, we show the effect of training data diversity.",
"In real life, parallel data from different language pairs is often to some degree multiway.",
"Multiway data could provide an implicit bridging that facilitates zero-shot translation.",
"With non-overlapping data, gains can come from training with a larger variety of sentences.",
"Given these two opposing hypotheses, our results suggest that the diverse training data is more important for both supervised and zero-shot performance.",
"With non-overlapping data, we first observe improved supervised translation performance by around 1.5 points for all three model configurations (Baseline, Residual, Residual+Query).",
"Meanwhile, the zero-shot score also increases from 26.1 to 26.7 points with our model (Residual).",
"The baseline, on the contrary, drops from 11.3 to 8.2 points.",
"This suggests that our model can better utilize the diverse training data than the baseline under zero-shot conditions.",
"In Subsection 3.2, we hypothesized that variational dropout helps reduce position-specific representation.",
"Table 5 shows the outcome of replacing the standard dropout by this technique.",
"First, variational dropout also improves zero-shot performance over the baseline, yet not as strongly as residual removal.",
"On IWSLT and Europarl, there is no additive gain by combining both techniques.",
"On PMIndia, however, combining our model and variational dropout is essential for achieving reasonable zero-shot performance, as shown by the increase from 2.4 to 14.3 points.",
"Why is the picture different on PMIndia?",
"We identify two potential reasons:",
"1) the low lexical overlap among the languages (8 different scripts in the 9 Indian languages);",
"2) the extreme low-resource condition (30K sentences per translation direction on average).",
"To understand this phenomenon, we create an artificial setup based on IWSLT with 1) no lexical overlap, by appending a language tag before each token; and 2) an extremely low-resource condition, by taking a subset of 30K sentences per translation direction.",
"The scores in Table 6 show the increasing benefit of variational dropout given very low amount of training data and shared lexicon.",
"We interpret this through the lens of generalizable representations: With low data amount or lexical overlap, the model tends to represent its input in a highly language-specific way, hence hurting zero-shot performance.",
"We also tried mapping the 9 Indian languages into the Devanagari script, but got worse zero-shot performance compared to the current setup.",
"So far our model has shown promising zero-shot performance.",
"Here we extend the challenge of zero-shot translation by integrating a new language.",
"Specifically, we finetune a trained English-centered many-to-many system with a new language using a small amount of X_new–English parallel data.",
"At test time, we perform zero-shot translation between X_new and all non-English languages previously involved in training.",
"This practically simulates the scenario of later acquiring parallel data between a low-resource language and the central bridging language in an existing system.",
"After finetuning with the new data, we can potentially increase translation coverage by 2 N directions, with N being the number of languages originally in training.",
"We finetune a trained system on IWSLT (Row 1 in Table 3) using a minimal amount of de–en data with 14K sentences.",
"When finetuning, we include the original X_old–en training data, as otherwise the model would heavily overfit.",
"This procedure is relatively lightweight, since the model has already converged on the original training data.",
"In Table 7, our model outperforms the baseline on zero-shot translation, especially when translating from the new language (X_new).",
"When inspecting the outputs, we see the baseline almost always translates into the wrong language (English), causing the low score of 1.8.",
"We hypothesize that the baseline overfits more on the supervised direction (X_new→en), where it achieves the higher score of 18.5.",
"In contrast, our model is less susceptible to this issue and consistently stronger under zero-shot conditions.",
"To see beyond BLEU scores, we first analyze how much position- and language-specific information is retained in the encoder hidden representations before and after applying our approaches.",
"We then study circumstances where zero-shot translation tends to outperform its pivoting-based counterpart.",
"To validate whether the improvements in zero-shot performance indeed stem from less positional correspondence to input tokens, we assess the difficulty of recovering input positional information before and after applying our proposed method.",
"Specifically, we train a classifier to predict the input token IDs (which word it is) or position IDs (the word's absolute position in the sentence) based on encoder outputs.",
"Such prediction tasks have been used to analyze linguistic properties of encoded representation (Adi et al., 2017).",
"Our classifier operates on each timestep and uses a linear projection from the embedding dimension to the number of classes, i.e. number of unique tokens in the vocabulary or number of maximum timesteps.",
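The probing classifier is just a per-timestep linear projection; a minimal sketch with hypothetical names, where the trained weights W and b are assumed given:

```python
import numpy as np

def probe_logits(h, W, b):
    """Per-timestep linear probe over encoder outputs.
    h: (timesteps, d_model), W: (d_model, n_classes), b: (n_classes,).
    The class set is either the vocabulary (token IDs) or the maximum
    number of timesteps (position IDs)."""
    return h @ W + b

def probe_accuracy(h, labels, W, b):
    """Fraction of timesteps whose argmax prediction matches the label."""
    return (probe_logits(h, W, b).argmax(axis=-1) == labels).mean()
```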
"Table 8 compares the classification accuracy of the baseline and our model.",
"First, the baseline encoder output has an exact one-to-one correspondence to the input tokens, as evidenced by the nearly perfect accuracy when recovering token IDs.",
"This task becomes much more difficult under our model.",
"We see a similar picture when recovering the position IDs.",
"We also try to recover the position IDs based on the outputs from each layer.",
"As shown in Figure 3, the accuracy drops sharply at the third layer, where the residual connection is removed.",
"This shows that the devised transition point at a middle encoder layer is effective.",
"To test whether our model leads to more language-independent representations, we assess the similarity of encoder outputs on the sentence and token level using the two following methods:",
"SVCCA The singular vector canonical correlation analysis (SVCCA; Raghu et al., 2017) measures similarity of neural network outputs, and has been used to assess representational similarity in NMT (Kudugunta et al., 2019).",
"As SVCCA operates on fixed-size inputs, we meanpool the encoder outputs and measure similarity on a sentence level.",
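Since SVCCA needs fixed-size inputs, the variable-length encoder outputs are mean-pooled into one sentence vector. A small sketch; the optional padding mask is an assumption added for completeness, not mentioned in the text:

```python
import numpy as np

def meanpool(h, mask=None):
    """Mean-pool encoder outputs h of shape (timesteps, d_model) into a
    single sentence-level vector for SVCCA.  `mask` (1 for real tokens,
    0 for padding) excludes padded positions from the average."""
    if mask is None:
        return h.mean(axis=0)
    m = np.asarray(mask, dtype=float)[:, None]
    return (h * m).sum(axis=0) / m.sum()
```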
"Language Classification Accuracy Since more similar representations are more difficult to distinguish, poor performance of a language classifier indicates high similarity.",
"Based on a trained model, we learn a token-level linear projection from the encoder outputs to the number of classes (languages).",
"Findings As shown in Table 9, our model consistently achieves higher SVCCA scores and lower classification accuracy than the baseline, indicating more language-independent representations.",
"When zooming into the difficulty of classifying the languages, we further notice much higher confusion (therefore similarity) between related languages.",
"For instance, Figure 4 shows the confusion matrix when classifying the 8 source languages in Europarl.",
"After residual removal, the similarity is much higher within the Germanic and Romance family.",
"This also corresponds to cases where our model outperforms pivoting (Table 4).",
"Moreover, we compare the SVCCA scores after each encoder layer, as shown in Figure 5.",
"Confirming our hypotheses, the model outputs are much more similar after the transition layer, as shown by the sharp increase at layer 3.",
"This contrasts with the baseline, where similarity increases nearly linearly.",
"Given these findings and the previous analyses in Subsection 5.1, we conclude that our devised changes in a middle encoder layer allow higher cross-lingual generalizability in the top layers while retaining language-specific bottom layers.",
"In Section 4 we have shown that, between related languages, zero-shot translation surpasses pivoting performance.",
"Here we manually inspect some pivoting translation outputs (nl→en→de) and compare them to zero-shot outputs (nl→de).",
"In general, we observe that the translations without pivoting are much more similar to the original sentences.",
"For instance in Table 4, when pivoting, the Dutch sentence geven het voorbeeld (give the example) is first translated to set the example, then to setzen das Beispiel (set the example) in German, which is incorrect as the verb setzen (set) cannot go together with the noun Beispiel (example).",
"The zero-shot output, on the other hand, directly translates geven (give; Dutch) to geben (give; German), resulting in a more natural pairing with Beispiel (example).",
"With this example, we intend to showcase the potential of bypassing the pivoting step and better exploiting language similarity.",
"In our main experiments, all proposed modifications take place in a middle encoder layer.",
"After comparing the effects of residual removal in each of the encoder layers, our first observation is that the bottom encoder layer should remain fully position-aware.",
"Removing the residual connections in the first encoder layer degrades zero-shot performance by 2.8 BLEU on average on IWSLT.",
"Secondly, leaving out residual connections in the top encoder layers (the fourth or fifth of the five layers) slows down convergence.",
"When keeping the number of training epochs unchanged from our main experiments, it comes with a loss of 0.4 BLEU on the supervised directions.",
"This is likely due to the weaker gradient flow to the bottom layers.",
"The two observations together support our choice of using the middle encoder layer as a transition point.",
"While we use fixed trigonometric positional encodings in our main experiments, we also validate our findings with learned positional embeddings on the IWSLT dataset.",
"First, the baseline still suffers from off-target zero-shot translation (average BLEU scores on supervised directions: 29.6; zero-shot: 4.8).",
"Second, removing the residual connection in a middle layer is also effective in this case (supervised: 29.1; zero-shot: 17.1).",
"These findings suggest that our approach is robust to the form of positional embedding.",
"Although learned positional embeddings are likely more language-agnostic from seeing more languages, we still present source sentences as sequences of tokens; residual connections present in all layers would therefore still enforce a one-to-one mapping to the input tokens.",
"This condition allows our motivation and approach to remain applicable.",
"Initial works on multilingual translation systems already showed some zero-shot capability (Johnson et al., 2017; Ha et al., 2016).",
"Since then, several works improved zero-shot translation performance by controlling or learning the level of parameter sharing between languages (Lu et al., 2018; Platan-ios et al., 2018).",
"Recently, models with full parameter sharing have gained popularity, with massively multilingual systems showing encouraging results (Aharoni et al., 2019; Zhang et al., 2020a; Fan et al., 2020).",
"Besides advantages such as compactness and ease of deployment, the tightly-coupled model components also open up new questions.",
"One question is how to form language-agnostic representations at a suitable abstraction level.",
"In this context, one approach is to introduce auxiliary training objectives to encourage similarity between the representations of different languages (Arivazhagan et al., 2019; Pham et al., 2019).",
"In this work we took a different perspective: Instead of introducing additional objectives, we relax some of the pre-defined structure to facilitate language-independent representations.",
"Another line of work on improving zero-shot translation utilizes monolingual pretraining (Gu et al., 2019; Ji et al., 2020) or synthetic data for the zero-shot directions generated by backtranslation (Gu et al., 2019; Zhang et al., 2020a).",
"With both approaches, the zero-shot directions must be known upfront in order to train on the corresponding languages.",
"In comparison, our adaptation procedure offers more flexibility, as the first training step remains unchanged regardless of which new language is later finetuned on.",
"This could be suitable to the practical scenario of later acquiring data for the new language.",
"Our work is also related to adaptation to new languages.",
"While the existing literature mostly focused on adapting to one or multiple supervised training directions (Zoph et al., 2016; Neubig and Hu, 2018; Zhou et al., 2019; Murthy et al., 2019; Bapna and Firat, 2019), our focus in this work is to rapidly expand translation coverage via zero-shot translation.",
"While our work concentrates on an English-centered data scenario, another promising direction to combat zero-shot conditions is to enrich available training data by mining parallel data between non-English languages (Fan et al., 2020; Freitag and Firat, 2020).",
"On a broader scope of sequence-to-sequence tasks, Dalmia et al. (2019) enforced encoder-decoder modularity for speech recognition.",
"The goal of modular encoders and decoders is analogous to our motivation for zero-shot translation.",
"In this work, we show that the positional correspondence to input tokens hinders zero-shot translation.",
"Specifically, we demonstrate that:",
"1) the encoder outputs retain word orders of source languages;",
"2) this positional information reduces cross-lingual generalizability and therefore zero-shot translation quality;",
"3) the problems above can be easily alleviated by removing the residual connections in one middle encoder layer.",
"With this simple modification, we achieve improvements up to 18.5 BLEU points on zero-shot translation.",
"The gain is especially prominent between related languages, where our proposed model outperforms pivot-based translation.",
"Our approach also enables integration of new languages with little parallel data.",
"Similar to interlingua-based models, by adding two translation directions, we can increase the translation coverage by 2 N language pairs, where N is the original number of languages.",
"In terms of model representation, we show that the encoder outputs under our proposed model are more language-independent both on a sentence and token level.",
"This work is supported by a Facebook Sponsored Research Agreement.",
"We thank Yuqing Tang for helpful comments, and Ngoc Quan Pham for sharing the training details of (Pham et al., 2019).",
"We proposed approaches to improve zero-shot translation, which is especially suitable to low-resource scenarios with no training data available between some languages.",
"We also validated our approaches on actual low-resource languages.",
"However, as the models are trained on single domains, when facing out-of-domain test sentences, they could suffer from hallucination, i.e. produce translations unrelated to the input sentences."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"objective",
"other",
"objective",
"method",
"abstain",
"other",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain"
] |
[
"The Lottery Ticket Hypothesis suggests that an over-parametrized network consists of lottery tick-ets, and training a certain collection of them (i.e., a subnetwork) can match the performance of the full model.",
"In this paper, we study such a collection of tickets, which is referred to as winning tickets, in extremely over-parametrized models, e.g., pre-trained language models.",
"We observe that at certain compression ratios, the generalization performance of the winning tickets can not only match but also exceed that of the full model.",
"In particular, we observe a phase transition phenomenon: As the compression ratio increases, generalization performance of the winning tickets first improves then deteriorates after a certain threshold.",
"We refer to the tickets on the threshold as super tickets.",
"We further show that the phase transition is task and model dependent as the model size becomes larger and the training data set becomes smaller, the transition becomes more pronounced.",
"Our experiments on the GLUE benchmark show that the super tickets improve single task fine-tuning by 0 .",
"9 points on BERT-base and 1 .",
"0 points on BERT-large, in terms of task-average score.",
"We also demonstrate that adaptively sharing the super tickets across tasks benefits multi-task learning 1 .",
"The Lottery Ticket Hypothesis (LTH, Frankle and Carbin (2018)) suggests that an over-parameterized network consists of lottery tickets, and training a certain collection of them (i.e., a subnetwork) can 1) match the performance of the full model; and 2)",
"Work was done at Microsoft Azure AI.",
"1 Our codes are available at https://github.com/cliang1453/super-structured-lottery-tickets .",
"outperform randomly sampled subnetworks of the same size (i.e., random tickets).",
"The existence of such a collection of tickets, which is usually referred to as winning tickets, indicates the potential of training a smaller network to achieve the full model's performance.",
"LTH has been widely explored in across various fields of deep learning (Frankle et al., 2019; Zhou et al., 2019; You et al., 2019; Brix et al., 2020; Movva and Zhao, 2020; Girish et al., 2020).",
"Aside from training from scratch, such winning tickets have demonstrated their abilities to transfer across tasks and datasets (Morcos et al., 2019; Yu et al., 2019; Desai et al., 2019; Chen et al., 2020a).",
"In natural language processing, Chen et al. (2020b); Prasanna et al. (2020) have shown existence of the winning tickets in pre-trained language models.",
"These tickets can be identified when fine-tuning the pre-trained models on downstream tasks.",
"As the pre-trained models are usually extremely over-parameterized (e.g., BERT Devlin et al. (2019), GPT-3 Brown et al. (2020), T5 Raffel et al. (2019)), previous works mainly focus on searching for a highly compressed subnetwork that matches the performance of the full model.",
"However, behavior of the winning tickets in lightly compressed subnetworks is largely overlooked.",
"In this paper, we study the behavior of the winning tickets in pre-trained language models, with a particular focus on lightly compressed subnetworks.",
"We observe that generalization performance of the winning tickets selected at appropriate compression ratios can not only match, but also exceed that of the full model.",
"In particular, we observe a phase transition phenomenon (Figure 1): The test accuracy improves as the compression ratio grows until a certain threshold (Phase I); Passing the threshold, the accuracy deteriorates, yet is still better than that of the random tickets (Phase II).",
"In Phase III, where the model is highly compressed, B i a s V a r i a n ce Percent of Weight Remaining Phase I Phase II Phase IIIB i a s V a r i a n ce Percent of Weight Remaining Phase I Phase II Phase III 0.8 0.6 0.4 1.0 Percent of Weight Remaining Phase I Phase II Phase III Winning Random A cc u r ac y 0.8 0.6 0.4 1.0 B i a s V a r i a n ce Percent of Weight Remaining Phase I Phase II Phase III Percent of Weight Remaining Phase I Phase II Phase III WinningRandom A cc u r ac y 0.8 0.6 0.4 1.0 0.8 0.6 0.4 1.0 Percent of Weight Remaining Phase I Phase II Phase III Figure 1: Illustrations of the phase transition phenomenon.",
"We interpret the phase transition in the context of trade-offs between model bias and variance (Fried-man et al., 2001, Chapter 7).",
"It is well understood that an expressive model induces a small bias, and a large model induces a large variance.",
"We classify the tickets into three categories: non-expressive tickets, lightly expressive tickets, and highly expressive tickets.",
"The full model has a strong expressive power due to over-parameterization, so that its bias is small.",
"Yet its variance is relatively large.",
"In Phase I, by removing non-expressive tickets, variance of the selected subnetwork reduces, while model bias remains unchanged and the expressive power sustains.",
"Accordingly, generalization performance improves.",
"We enter Phase II by further increasing the compression ratio.",
"Here lightly expressive tickets are pruned.",
"Consequently, model variance continues to decrease.",
"However, model bias increases and overturns the benefit of the reduced variance.",
"Lastly for Phase III, in the highly compressed region, model bias becomes notoriously large and reduction of the variance pales.",
"As a result, training breaks down and generalization performance drops significantly.",
"We conduct systematic experiments and analyses to understand the phase transition.",
"Our experiments on multiple natural language understanding (NLU) tasks in the GLUE (Wang et al., 2018) benchmark show that the super tickets can be used to improve single task fine-tuning by 0 .",
"9 points over BERT-base (Devlin et al., 2019) and 1 .",
"0 points over BERT-large, in terms of task-average score.",
"Moreover, our experiments show that the phase transition phenomenon is task and model dependent.",
"It becomes more pronounced as a larger model is used to fit a task with less training data.",
"In such a case, the set of super tickets forms a compressed network that exhibits a large performance gain.",
"The existence of super tickets suggests potential benefits to applications, such as Multi-task Learning (MTL).",
"In MTL, different tasks require different capacities to achieve a balance between model bias and variance.",
"However, existing methods do not specifically balance the bias and variance to accommodate each task.",
"In fact, the fine-tuning performance on tasks with a small dataset is very sensitive to randomness.",
"This suggests that model variance in these tasks are high due to over-parameterization.",
"To reduce such variance, we propose a tickets sharing strategy.",
"Specifically, for each task, we select a set of super tickets during single task fine-tuning.",
"Then, we adaptively share these super tickets across tasks.",
"Our experiments show that tickets sharing improves MTL by 0 .",
"9 points over MT-DNNBASE (Liu et al., 2019) and 1 .",
"0 points over MT-DNNLARGE , in terms of task-average score.",
"Tickets sharing further benefits downstream fine-tuning of the multi-task model, and achieves a gain of 1 .",
"0 task-average score.",
"In addition, the multi-task model obtained by such a sharing strategy exhibits lower sensitivity to randomness in downstream fine-tuning tasks, suggesting a reduction in variance.",
"We summarize our contributions as follows: Our result is the first to identify the phase transition phenomenon in pruning large neural language models.",
"Our result is the first to show that pruning can improve the generalization when the models are lightly compressed, which has been overlooked by previous works.",
"Our analysis paves the way for understanding the connection between model compression and generalization.",
"Motivated by our observed phase transition, we further propose a new pruning approach for multi-task fine-tuning of neural language models.",
"The Transformer (Vaswani et al., 2017) encoder is composed of a stack of identical Transformer layers.",
"Each layer consists of a multi-head attention module (MHA) followed by a feed-forward module (FFN), with a residual connection around each.",
"The vanilla single-head attention operates as Att ( Q, K, V ) = Softmax (cid:18) QK (cid:62) d (cid:19) V, where Q, K, V R l d are d -dimensional vector representations of l words in sequences of queries, keys and values.",
"In MHA, the h -th attention head is parameterized by W Qh , W Kh , W Vh R d d h as H h ( q , x , W { Q,K,V } h ) = Att ( q W Qh , x W Kh , x W Vh ) , where q R l d and x R l d are the query and key/value vectors.",
"In MHA, H independently parameterized attention heads are applied in parallel, and the outputs are aggregated by W Oh R d h d : MHA ( q , x )= H (cid:88) h H h ( q , x , W { Q,K,V } h ) W Oh .",
"LTH (Frankle and Carbin, 2018) has been widely explored in various applications of deep learning (Brix et al., 2020; Movva and Zhao, 2020; Girish et al., 2020).",
"Most of existing results focus on finding unstructured winning tickets via iterative magnitude pruning and rewinding in randomly initialized networks (Frankle et al., 2019; Renda et al., 2020), where each ticket is a single parameter.",
"Recent works further investigate learning dynamics of the tickets (Zhou et al., 2019; Frankle et al., 2020) and efficient methods to identify them (You et al., 2019; Savarese et al., 2020).",
"Besides training from scratch, researchers also explore the existence of winning tickets under transfer learning regimes for over-parametrized pre-trained models across various tasks and datasets (Morcos et al., 2019; Yu et al., 2019; Desai et al., 2019; Chen et al., 2020a).",
"For example, Chen et al. (2020b); Prasanna et al. (2020) have shown the existence of winning tickets when fine-tuning BERT on downstream tasks.",
"There is also a surge of research exploring whether certain structures, e.g., channels in convolutional layers and attention heads in Transformers, exhibit properties of the lottery tickets.",
"Compared to unstructured tickets, training with structured tickets is memory efficient (Cao et al., 2019).",
"Liu et al. (2018); Prasanna et al. (2020) suggest that there is no clear evidence that structured winning tickets exist in randomly initialized or pre-trained weights.",
"Prasanna et al. (2020) observe that, in highly compressed BERT (e.g., the percent of weight remaining is around 50% ), all tickets perform equally well.",
"However, Prasanna et al. (2020) have not investigated the cases where the percent of weight remaining is over 50% .",
"We identify winning tickets in BERT through structured pruning of attention heads and feed-forward layers.",
"Specifically, in each Transformer layer, we associate mask variables h to each attention head and to the FFN (Prasanna et al., 2020): MHA ( Q, x ) = H (cid:88) h h H h ( Q, x , W { Q,K,V } h ) W Oh , FFN ( z ) = FFN ( z ) .",
"Here, we set h , { 0 , 1 } , and a 0 value indicates that the corresponding structure is pruned.",
"We adopt importance score (Michel et al., 2019) as a gauge for pruning.",
"In particular, the importance score is defined as the expected sensitivity of the model outputs with respect to the mask variables.",
"Specifically, in each Transformer layer, I h MHA = E x D x (cid:12)(cid:12)(cid:12)(cid:12) L ( x ) h (cid:12)(cid:12)(cid:12)(cid:12) , IFFN = E x D x (cid:12)(cid:12)(cid:12)(cid:12) L ( x ) (cid:12)(cid:12)(cid:12)(cid:12) , where L is a loss function and D x is the data distribution.",
"In practice, we compute the average over the training set.",
"We apply a layer-wise (cid:96) 2 normalization on the importance scores of the attention heads (Molchanov et al., 2016; Michel et al., 2019).",
"The importance score is closely tied to expressive power.",
"A low importance score indicates that the corresponding structure only has a small contribution towards the output.",
"Such a structure has low expressive power.",
"On the contrary, a large importance score implies high expressive power.",
"We compute the importance scores for all the mask variables in a single backward pass at the end of fine-tuning.",
"We perform one-shot pruning of the same percent of heads and feed-forward layers with the lowest importance scores.",
"We conduct pruning multiple times to obtain subnetworks, or winning tickets, at different compression ratios.",
"We adopt the weight rewinding technique in Renda et al. (2020): We reset the parameters of the winning tickets to their values in the pre-trained weights, and subsequently fine-tune the subnetwork with the original learning rate schedule.",
"The super tickets are selected as the winning tickets with the best rewinding validation performance.",
"In multi-task learning, the shared model is highly over-parameterized to ensure a sufficient capacity for fitting individual tasks.",
"Thus, the multi-task model inevitably exhibits task-dependent redundancy when being adapted to individual tasks.",
"Such redundancy induces a large model variance.",
"We propose to mitigate the aforementioned model redundancy by identifying task-specific super tickets to accommodate each task's need.",
"Specifically, when viewing an individual task in isolation, the super tickets can tailor the multi-task model to strike an appealing balance between the model bias and variance (recall from Section 3 that super tickets retain sufficient expressive power, yet keep the model variance low).",
"Therefore, we expect that deploying super tickets can effectively tame the model redundancy for individual tasks.",
"Given the super tickets identified by each task, we exploit the multi-task information to reinforce fine-tuning.",
"Specifically, we propose a tickets sharing algorithm to update the parameters of the multitask model: For a certain network structure (e.g., an attention head), if it is identified as super tickets by multiple tasks, then its weights are jointly updated by these tasks; if it is only selected by one specific task, then its weights are updated by that task only; otherwise, its weights are completely pruned.",
"See Figure 2 for an illustration.",
"In more detail, we denote the weight parameters in the multi-task model as .",
"Suppose there are N tasks.",
"For each task i { 1 , . . . , N } , we denote i = { ih,(cid:96) } H,Lh =1 ,(cid:96) =1 (cid:83) { i(cid:96) } L(cid:96) =1 as the collection of the mask variables, where (cid:96) is the layer index and h is the head index.",
"Then the parameters to be updated in task i are denoted as i = M ( , i ) , where M ( , i ) masks the pruned parameters according to i .",
"We use stochastic gradient descent-type algorithms to update i .",
"Note that the task-shared and task-specific parameters are encoded by the mask variable i .",
"The detailed algorithm is given in Algorithm 1.",
"Tickets sharing has two major difference compared to Sparse Sharing (Sun et al., 2020): 1) Sun et al. (2020) share winning tickets, while our strategy focuses on super tickets, which can better generalize and strike a sensible balance between model bias and variance.",
"2) In tickets sharing, tickets are structured and chosen from pre-trained weight parameters.",
"It does not require Multi-task Warmup , which is indispensable in Sun et al. (2020) to stabilize the sharing among unstructured tickets selected from randomly initialized weight parameters.",
"General Language Understanding Evaluation (GLUE, Wang et al. (2018)) is a standard benchmark for evaluating model generalization performance.",
"It contains nine NLU tasks, including question answering, sentiment analysis, text similarity Algorithm 1 Tickets Sharing Input: Pre-trained base model parameters .",
"and textual entailment.",
"Details about the benchmark are deferred to Appendix A.1.1.",
"We fine-tune a pre-trained BERT model with task-specific data to obtain a single task model.",
"We append a task-specific fully-connected layer to BERT as in Devlin et al. (2019).",
"BERT-base/large followed by a task-specific layer.",
"SuperT BASE/LARGE is initialized with the chosen set of super tickets in BERT-base/large followed by a task-specific layer.",
"Specifically, we prune BERT-base/large in unit of 10% heads and 10% feed-forward layers (FFN) at 8 different sparsity levels (10% heads and 10% FFN, 20% heads and 20% FFN, etc).",
"Among them, the one with the best rewinding validation result is chosen as the set of super tickets.",
"We randomly sample 10% GLUE development set for tickets selection.",
"Our implementation is based on the MT-DNN code base 3 .",
"We use Adamax (Kingma and Ba, 2014) as our optimizer.",
"We tune the learning rate in { 5 10 5 , 1 10 4 , 2 10 4 } and batch size in { 8 , 16 , 32 } .",
"We train for a maximum of 6 epochs with early-stopping.",
"All training details are summarized in Appendix A.1.2.",
"and 2 show the averaged evaluation results on the GLUE development and test sets, respectively.",
"We remark that the gain of SuperT BASE/LARGE over ST-DNN BASE/LARGE is statistically significant.",
"All the results 4 have passed a paired student t-test with p-values less than 0 .",
"05 .",
"More validation statistics are summarized in Appendix A.1.3.",
"1) In all the tasks, SuperT consistently achieves better generalization than ST-DNN.",
"The task-averaged improvement is around 0 .",
"9 over ST-DNNBASE and 1 .",
"0 over ST-DNNLARGE .",
"2) Performance gain of the super tickets is more significant in small tasks.",
"For example, in Table 1, we obtain 3 .",
"3 points gain on RTE (2.5k data), but only 0 .",
"4 / 0 .",
"3 on QQP (364k data) in the SuperT BASE experiments.",
"Furthermore, from Figure 3, note that the super tickets are more heavily compressed in small tasks, e.g., for SuperT BASE , 83% weights remaining for RTE, but 93% for QQP.",
"These observations suggest that for small tasks, model variance is large, and removing nonexpressive tickets reduces variance and improves generalization.",
"For large tasks, model variance is low, and all tickets are expressive to some extent.",
"3) Performance of the super tickets is related to model size.",
"Switching from SuperT BASE to SuperT LARGE , the percent of weights remaining shrinks uniformly across tasks, yet the generalization gains persist (Figure 3).",
"This suggests that in large models, more non-expressive tickets can be pruned without performance degradation.",
"Phase transitions are shown in Figure 4.",
"We plot the evaluation results of the winning, the random, and the losing tickets under 8 sparsity levels using BERT-base and BERT-large.",
"The winning tickets contain structures with the highest importance scores.",
"The losing tickets are selected reversely, i.e., the structures with the lowest importance scores are selected, and high-importance structures are pruned.",
"The random tickets are sampled uniformly across the network.",
"We plot the averaged scores over 5 trails using different random seeds 5 .",
"Phase transitions of all the GLUE tasks are in Appendix A.5.",
"We summarize our observations: 1) The winning tickets are indeed the winners.",
"In Phase I and early Phase II, the winning tickets perform better than the full model and the random tickets.",
"This demonstrates the existence of struc-5 Except for MNLI, where we plot 3 trails as the there are less variance among trails.",
"tured winning tickets in lightly compressed BERT models, which Prasanna et al. (2020) overlook.",
"2) Phase transition is pronounced over different tasks and models.",
"Accuracy of the winning tickets increases up till a certain compression ratio (Phase I); Passing the threshold, the accuracy decreases (Phase II), until its value intersects with that of the random tickets (Phase III).",
"Note that Phase III agrees with the observations in Prasanna et al. (2020).",
"Accuracy of the random tickets decreases in each phase.",
"This suggests that model bias increases steadily, since tickets with both low and high expressive power are discarded.",
"Accuracy of the losing tickets drops significantly even in Phase I, suggesting that model bias increases drastically as highly expressive tickets are pruned.",
"3) Phase transition is more pronounced in large models and small tasks.",
"For example, in Figure 4, the phase transition is more noticeable in BERT-large than in BERT-base, and is more pronounced in RTE (2.5k) and MRPC (3.7k) than in SST (67k) and MNLI (393k).",
"The phenomenon becomes more significant for the same task when we only use a part of the data, e.g., Figure 5 vs. Figure 4 (bottom left).",
"We adopt the MT-DNN architecture proposed in Liu et al. (2020).",
"The MT-DNN model consists of a set of task-shared layers followed by a set of task-specific layers.",
"The task-shared layers take in the input sequence embedding, and generate shared semantic representations by optimizing multi-task objectives.",
"Our implementation is based on the MT-DNN code base.",
"We follow the same training settings in Liu et al. (2020) for multi-task learning, and in Section 5.2 for downstream fine-tuning.",
"More details are summarized in Appendix A.2.",
"MT-DNN BASE/LARGE .",
"An MT-DNN model refined through multi-task learning, with task-shared layers initialized by pre-trained BERT-base/large.",
"MT-DNN BASE/LARGE + ST Fine-tuning.",
"A single task model obtained by further fine-tuning MT-DNN on an individual downstream task.",
"Ticket-Share BASE/LARGE .",
"An MT-DNN model refined through the ticket sharing strategy, with task-shared layers initialized by the union of the super tickets in pre-trained BERT-base/large.",
"Ticket-Share BASE/LARGE + ST Fine-tuning.",
"A fine-tuned single-task Ticket-Share model.",
"Table 3 summarizes experimental results.",
"The fine-tuning results are averaged over 5 trails using different random seeds.",
"We have several observations: 1) Ticket-Share BASE and Ticket-Share LARGE achieve 0 .",
"9 and 1 .",
"0 gain in task-average score over MT-DNNBASE and MT-DNNLARGE , respectively.",
"In some small tasks (RTE, MRPC), Ticket-Share achieves better or on par results compared to MT-DNN+Fine-tuning.",
"This suggests that by balancing the bias and variance for different tasks, the multitask model's variance is reduced.",
"In large tasks (QQP, QNLI and MNLI), Ticket-Share behaves equally well with the full model.",
"This is because task-shared information is kept during pruning and still benefits multi-task learning.",
"2) Ticket-Share BASE +Fine-tuning and Ticket-Share LARGE +Fine-tuning achieve 1 .",
"0 and 0 .",
"7 gains in task-average score over MT-DNNBASE +Fine-tuning and MT-DNNLARGE +Fine-tuning, respectively.",
"This suggests that reducing the variance in the multi-task model benefits fine-tuning downstream tasks.",
"To demonstrate that super tickets can quickly generalize to new tasks/domains, we conduct few-shot domain adaptation on out-of-domain NLI datasets.",
"We briefly introduce the target domain datasets.",
"The data and training details are summarized in Appendix A.3.1 and A.3.2, respectively.",
"SNLI .",
"The Stanford Natural Language Inference dataset (Bowman et al., 2015) is one of the most widely used entailment dataset for NLI.",
"It contains 570 k sentence pairs, where the premises are drawn from the captions of the Flickr30 corpus and hypotheses are manually annotated.",
"SciTail is a textual entailment dataset derived from a science question answering (SciQ) dataset (Khot et al., 2018).",
"The hypotheses are created from science questions, rendering SciTail challenging.",
"We consider domain adaptation on both single task and multi-task super tickets.",
"Specifically, we adapt SuperT BASE and ST-DNNBASE from MNLI to SNLI/SciTail, and adapt the shared em-beddings generated by Ticket-Share BASE and by MT-DNNBASE to SNLI/SciTail.",
"We adapt these models to 0 .",
"1% , 1% , 10% and 100% SNLI/SciTail training sets 6 , and evaluate the transferred models on SNLI/SciTail development sets.",
"Table 4 shows the domain adaptation evaluation results.",
"As we can see, SuperT and Ticket-Share can better adapt to SNLI/SciTail than ST-DNN and MT-DNN, especially under the few shot setting.",
"6 We use the subsets released in MT-DNN code base.",
"Sensitivity to Random Seed.",
"To better demonstrate that training with super tickets effectively reduces model variance, we evaluate models' sensitivity to changes in random seeds during single task fine-tuning and multi-task downstream fine-tuning.",
"In particular, we investigate fitting small tasks with highly over-parametrized models (vari-ance is often large in these models, see Section 5 and 6).",
"As shown in Table 5, SuperT LARGE and Ticket-Share LARGE induce much smaller standard deviation in validation results.",
"Experimental details and further analyses are deferred to Appendix A.4.",
"Tickets Importance Across Tasks.",
"We analyze the importance score of each ticket computed in different GLUE tasks.",
"For each ticket, we compute the importance score averaged over tasks as the Ticket Importance , and the proportion of the task-specific importance score out of the sum of all tasks' scores as the Task Share , as illustrated in Figure 6. We observe that many tickets exhibit almost equal Task Share s for over 5 out of 8 tasks (Fig-ure",
"6(a)(b)).",
"While these tickets contribute to the knowledge sharing in the majority of tasks, they are considered non-expressive for tasks such as SST-2 (see Figure",
"6(a)(c)(d)).",
"This explains why SST-2 benefits little from tickets sharing.",
"Furthermore, a small number of tickets are dominated by a single task, e.g., CoLA (Figure",
"6(c)), or dominated jointly by two tasks, e.g., CoLA and STS-B (Figure",
"6(d)).",
"This suggests that some tickets only learn task-specific knowledge, and the two tasks may share certain task-specific knowledge.",
"Structured Lottery Tickets.",
"LTH hypothesizes that a subset of unstructured parameters can be trained to match the full model's performance.",
"Instead, we question whether a subset of structured weight matrices, e.g., FFN layers and attention heads, can also be trained to match the full model's performance.",
"This question is more practically important than the unstructured one: training and inference on structured matrices are better optimized for hardware acceleration.",
"Our results give a positive answer to this question, while previous works show that the structured tickets do not exist in highly compressed models (Prasanna et al., 2020).",
"Searching Better Generalized Super Tickets.",
"We select winning tickets according to the sensitivity of the model outputs with respect to the mask variables of each structure (Michel et al., 2019; Prasanna et al., 2020), as this measure is closely tied to the structure's expressive power (Section 3).",
"In addition, we conduct an one-shot pruning for computational simplicity.",
"We leave other importance measures and pruning schedules, which may help identifying better generalized super tickets, for future works (Voita et al., 2019; Behnke and Heafield, 2020; Wang et al., 2019; Fan et al., 2019; Zhou et al., 2020; Sajjad et al., 2020).",
"Searching Super Tickets Efficiently.",
"Determining the compression ratio of the super tickets requires rewinding models at multiple sparsity levels.",
"To leverage super tickets in practice, a potential direction of research is to find heuristics to determine this ratio prior or early-on in training.",
"We leave this for future works.",
"We study the behaviors of the structured lottery tickets in pre-trained BERT.",
"We observe that the generalization performance of the winning tickets exhibits a phase transition phenomenon, suggesting pruning can improve generalization when models are lightly compressed.",
"Based on the observation, we further propose a tickets sharing strategy to improve multi-task fine-tuning.",
"Our analysis paves the way for understanding the connection between model compression and generalization.",
"This paper studies the behavior of the structured lottery tickets in pre-trained language models.",
"Our investigation neither introduces any social/ethical bias to the model nor amplifies any bias in the data.",
"We do not foresee any direct social consequences or ethical issues.",
"Furthermore, our proposed method improves performance through model compression, rendering it energy efficient."
] | [
"abstain",
"method",
"result",
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain"
] |
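The records above describe the paper's pruning recipe: each attention head gets an importance score E|∂L/∂ξ_h|, scores are ℓ2-normalized per layer (Michel et al., 2019), and the lowest-scoring structures are pruned in one shot at a chosen compression ratio. A minimal sketch in plain Python, assuming the raw scores have already been computed; the function names and toy inputs are illustrative, not the authors' implementation:

```python
import math

def normalize_layerwise(scores):
    """Layer-wise L2-normalize head importance scores (as in Michel et al., 2019)."""
    out = []
    for layer in scores:
        norm = math.sqrt(sum(s * s for s in layer)) or 1.0
        out.append([s / norm for s in layer])
    return out

def one_shot_prune(scores, keep_ratio):
    """One-shot prune the lowest-importance heads, keeping a keep_ratio fraction.

    scores[l][h] is the raw importance of head h in layer l; returns 0/1 masks.
    """
    normed = normalize_layerwise(scores)
    flat = sorted((s, l, h) for l, layer in enumerate(normed)
                  for h, s in enumerate(layer))
    n_prune = int(len(flat) * (1 - keep_ratio))
    pruned = {(l, h) for _, l, h in flat[:n_prune]}
    return [[0 if (l, h) in pruned else 1 for h in range(len(layer))]
            for l, layer in enumerate(scores)]
```

Running this at several `keep_ratio` values and rewinding the surviving structures to their pre-trained weights would correspond to the sweep over 8 sparsity levels described in the records.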
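The tickets-sharing rule sketched in the records (Algorithm 1) says: a structure selected as a super ticket by several tasks is updated jointly by those tasks, one selected by a single task is updated by that task alone, and one selected by no task is pruned. A toy sketch with hypothetical scalar weights per structure, assuming per-task gradients are given (this is not the authors' MT-DNN code):

```python
def tickets_sharing_step(theta, task_masks, task_grads, lr=0.1):
    """One update step of the (sketched) tickets-sharing rule.

    theta: {structure_id: weight} -- hypothetical scalar weight per structure.
    task_masks: {task: set of structure_ids chosen as that task's super tickets}.
    task_grads: {task: {structure_id: gradient}} from that task's loss.
    """
    new_theta = {}
    for sid, w in theta.items():
        owners = [t for t in task_masks if sid in task_masks[t]]
        if not owners:
            new_theta[sid] = 0.0  # selected by no task: pruned
        else:
            # jointly updated by every task that selected this structure
            g = sum(task_grads[t].get(sid, 0.0) for t in owners)
            new_theta[sid] = w - lr * g
    return new_theta
```

In the real setting each `structure_id` would be an attention head or FFN block and the weights tensors, but the ownership logic is the same.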