[ "In this paper we present a method to learn word embeddings that are resilient to misspellings.", "Existing word embeddings have limited applicability to malformed texts, which contain a non-negligible amount of out-of-vocabulary words.", "We propose a method combining FastText with subwords and a supervised task of learning misspelling patterns.", "In our method, misspellings of each word are embedded close to their correct variants.", "We train these embeddings on a new dataset we are releasing publicly.", "Finally, we experimentally show the advantages of this approach on both intrinsic and extrinsic NLP tasks using public test sets.", "Word embeddings constitute a building block of many practical applications across NLP and related disciplines.", "Techniques such as Word2Vec (Mikolov et al., 2013a,b) and GloVe (Pennington et al., 2014) have been extensively used in practice.", "One of their drawbacks, however, is that they cannot provide embeddings for words that have not been observed at training time, i.e. Out-Of-Vocabulary (OOV) words.", "In real-world tasks, the input text is often generated by people and misspellings, a common source of OOV words, are frequent (e.g. (Cucerzan and Brill, 2004) report that misspellings appear in up to 15% of web search queries).", "As a consequence, the quality of downstream applications of word embeddings in real-world scenarios diminishes.", "Simply allowing the inclusion of misspellings into corpora and vocabularies in existing methodologies might not provide satisfactory results.", "The sparsity of misspellings would most likely prevent This work was carried out when the author was working as an employee at Facebook London.", "their embeddings from demonstrating any interesting properties.", "Trying to balance the representation of misspellings with the representation of correctly spelled variants in training data by artificially introducing misspelled variants for every word in the corpus would on the other hand cause up to an exponential growth in the size of the training data, making training of the models infeasible.", "To address this deficiency, we propose Misspelling Oblivious (word) Embeddings ( MOE ), a new model combining FastText (Bojanowski et al., 2017) with a supervised task which embeds misspellings close to their correct variants.", "We carry out experiments on well established tasks and on their variants adapted to the misspellings problem.", "We also propose new methods of evaluating embeddings specifically designed to capture their quality on misspelled words.", "We train MOE embeddings on a new dataset we are releasing publicly.", "Finally, we experimentally show the advantages of this approach on both intrinsic and extrinsic NLP tasks using public test sets.", "Summarizing, we propose the following contributions: a novel problem and a non-trivial solution to building word embeddings resistant to misspellings; a novel evaluation method specifically suitable for evaluating the effectiveness of MOE ; a dataset of misspellings 1 to train MOE .", "The reminder of this paper is structured as follows.", "Section 2 gives an overview of the word embeddings literature.", "In Section 3.1 we introduce Word2Vec and FastText models.", "We introduce the MOE model in Section 3.2.", "Section 4 contains the descriptions of datasets we trained on and section 5 contains the description of experiments we conducted and their results.", "In Section 6 we present our conclusions and plans for further research.", "One of the first works to 
One of the first works to introduce the concept of a distributed representation for symbolic data was Hinton (1986). Later, the Information Retrieval community proposed techniques for embedding words into a vector space; Latent Semantic Indexing (Deerwester et al., 1990) was one of the most influential works in this area. Bengio et al. (2003) introduced the first neural language model which jointly learned word embeddings. Although such a language model outperformed the baselines, it was not practical because of its long training time requirements. Collobert and Weston (2008) proposed new neural architectures for word embeddings and showed that pre-trained word embeddings can be very valuable for some downstream NLP tasks. Word2Vec (Mikolov et al., 2013a,b) became very popular both because of its effectiveness and its ability to train models on very large text corpora efficiently. Levy and Goldberg (2014) showed that Word2Vec's skip-gram with negative sampling model (SGNS) is implicitly equivalent to word co-occurrence matrix factorization. Besides neural approaches, Pennington et al. (2014) proposed an SVD-based architecture which gained a lot of attention because it made it possible to effectively consider the popularity of each word in the model definition.

FastText (Bojanowski et al., 2017) is a popular, recent proposal in the area of word embeddings. It introduces subword-level features into the Word2Vec framework, which enables building embeddings for OOV words (see details in Section 3.1). An alternative approach, also capable of yielding representations for OOV words, is MIMICK (Pinter et al., 2017). MIMICK learns a function from input words to embeddings by minimizing the distance between embeddings produced by a character-based approach and the pre-trained embeddings. As opposed to MOE, MIMICK does not support misspellings explicitly, and it requires a set of pre-trained embeddings as input. We consider MIMICK a viable alternative to FastText which deserves future work exploring its performance on misspelled text.

FastText builds on the skip-gram with negative sampling (SGNS) architecture, proposed as a part of the Word2Vec framework. In this section we briefly discuss the major additions to SGNS introduced by FastText. Let $V$ be a vocabulary of words and $T = w_1, w_2, \ldots, w_{|T|}$ be a text corpus, represented as a sequence of words from $V$. We define the context of a word $w_i \in V$ as $C_i = \{w_{i-l}, \ldots, w_{i-1}, w_{i+1}, \ldots, w_{i+l}\}$ for some $l$ set as a hyperparameter.
In the SGNS model, a word $w_i$ is represented by a single embedding vector $v_i$, equivalent to the input vector of a simple feed-forward neural network, trained by optimizing the following loss function:

$$L_{W2V} := \sum_{i=1}^{|T|} \sum_{w_c \in C_i} \Big[ \ell(s(w_i, w_c)) + \sum_{w_n \in N_{i,c}} \ell(-s(w_i, w_n)) \Big] \qquad (1)$$

where $\ell$ denotes the logistic loss function $\ell(x) = \log(1 + e^{-x})$ and $N_{i,c}$ is a set of negative samples drawn for the current word $w_i$ and its context $w_c \in C_i$. Here $s$ is the scoring function, which for SGNS is defined as the dot product $v_i^T u_c$, where $v_i$ is an input vector associated with the word $w_i$ and $u_c$ is an output vector associated with the word $w_c$. Therefore, $s(w_i, w_c) = v_i^T u_c$.

In FastText, we additionally embed subwords (also referred to as character $n$-grams) and use them to construct the final representation of $w_i$. Formally, given hyperparameters $m$ and $M$ denoting the minimum and maximum length of an $n$-gram respectively, the FastText model embeds all possible character $n$-grams of the word such that $m \le n \le M$. E.g., given $m = 3$, $M = 5$ and the word banana, the set of $n$-grams we consider is {ban, ana, nan, bana, anan, nana, banan, anana}. Now, let $G_{w_i}$ denote the set of all $n$-grams of a word $w_i \in V$ plus the word itself (e.g., $G_{banana}$ is the set defined in the example above plus the word banana itself). Given $G_{w_i}$, the FastText scoring function for a word $w_i$ and a context $w_c$ is defined as follows:

$$s_{FT}(w_i, w_c) := \sum_{g \in G_{w_i}} v_g^T u_c \qquad (2)$$

Therefore, the representation of $w_i$ is expressed through the sum of the representations of each of the $n$-grams derived from $w_i$, plus the representation of $w_i$ itself. FastText optimizes the loss function in Eq. 1, but uses the scoring function $s_{FT}$ defined in Eq. 2. Extensive experimentation has shown that FastText improves over the original Word2Vec skip-gram model. The loss function of FastText will be referred to as $L_{FT}$ throughout the rest of this work.
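A minimal sketch of the subword machinery just described (the function names, and the dictionaries `v` and `u` standing in for the embedding tables, are ours; real FastText hashes $n$-grams into a fixed number of buckets instead of storing them in a dict):

```python
def char_ngrams(word, m=3, M=5):
    """Unique character n-grams of `word` with m <= n <= M (excluding the full word)."""
    return {word[i:i + n] for n in range(m, M + 1)
            for i in range(len(word) - n + 1)} - {word}

def s_ft(word, context, v, u):
    """Eq. 2: sum the subword input vectors v_g and dot with the output vector u_c."""
    G = char_ngrams(word) | {word}          # G_{w_i}: n-grams plus the word itself
    return sum(v[g] @ u[context] for g in G if g in v)

print(sorted(char_ngrams("banana"), key=lambda g: (len(g), g)))
# ['ana', 'ban', 'nan', 'anan', 'bana', 'nana', 'anana', 'banan']
```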
As was shown empirically in the FastText paper, the $n$-grams which impact the final representation of a word the most correspond to morphemes. Based on this observation, we hypothesize that although FastText can capture morphological aspects of text, it may not be particularly resistant to misspellings, which can also occur within the dominant morphemes.

In this section, we present the architecture of our model, MOE, or Misspelling Oblivious (word) Embeddings. MOE holds the fundamental properties of FastText and Word2Vec while giving explicit importance to misspelled words.

Loss Function. The loss function of MOE is a weighted sum of two loss functions: $L_{FT}$ and $L_{SC}$. $L_{FT}$ is the loss function of FastText, which captures semantic relationships between words. $L_{SC}$, the spell correction loss, aims to map embeddings of misspelled words close to the embeddings of their correctly spelled variants in the vector space. We define $L_{SC}$ as follows:

$$L_{SC} := \sum_{(w_m, w_e) \in M} \Big[ \ell(s(w_m, w_e)) + \sum_{w_n \in N_{m,e}} \ell(-s(w_m, w_n)) \Big] \qquad (3)$$

where $M$ is a set of pairs of words $(w_m, w_e)$ such that $w_e \in V$ is the expected (correctly spelled) word and $w_m$ is its misspelling, and $N_{m,e}$ is a set of random negative samples from $V \setminus \{w_m, w_e\}$. $L_{SC}$ makes use of the logistic function $\ell(x) = \log(1 + e^{-x})$ introduced in Section 3.1. The scoring function $s$ is defined as follows:

$$s(w_m, w_e) = \sum_{g \in \hat{G}_{w_m}} v_g^T v_e \qquad (4)$$

where $\hat{G}_{w_m} := G_{w_m} \setminus \{w_m\}$, i.e., the score is the dot product between the subword representation of $w_m$ and the input vector of $w_e$. The term $\ell(s(w_m, w_e))$ enforces predictability of $w_e$ given $w_m$; intuitively, optimizing $L_{SC}$ pushes the representation of a misspelling $w_m$ closer to the representation of the expected word $w_e$. It is also worth mentioning that the embeddings for $w_m$ and $w_e$ share the same parameter set.

The complete loss function of MOE, $L_{MOE}$, is defined as follows:

$$L_{MOE} := (1 - \alpha) L_{FT} + \alpha \frac{|T|}{|M|} L_{SC} \qquad (5)$$

Optimizing the loss functions $L_{FT}$ and $L_{SC}$ concurrently is not a straightforward task, because the two loss functions iterate over two different datasets: the text corpus $T$ and the misspellings dataset $M$. The optimization process should be agnostic to the sizes of $T$ and $M$ in order to prevent results from being severely affected by those sizes. Therefore, we scale $L_{SC}$ with the coefficient $|T|/|M|$; this way, the importance of a single Stochastic Gradient Descent (SGD) update for $L_{FT}$ becomes equivalent to a single SGD update for $L_{SC}$. Moreover, $\alpha$ is the hyperparameter which sets the importance of the spell correction loss $L_{SC}$ with respect to $L_{FT}$, thus making MOE a generalization of FastText.
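In code, the combination in Eq. 5 is a one-liner; a hedged sketch (per-example losses and their SGD scheduling are elided, and the numerically stable form of the logistic loss is our choice):

```python
import math

def logistic_loss(x):
    """l(x) = log(1 + exp(-x)), written to avoid overflow for large |x|."""
    return math.log1p(math.exp(-x)) if x > 0 else -x + math.log1p(math.exp(x))

def moe_loss(l_ft, l_sc, alpha, corpus_size, misspellings_size):
    """Eq. 5: convex combination of L_FT and the |T|/|M|-scaled loss L_SC."""
    return (1 - alpha) * l_ft + alpha * (corpus_size / misspellings_size) * l_sc
```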
As mentioned in Section 3.2, MOE jointly optimizes two loss functions, each of which iterates over a separate dataset: a corpus of text for the FastText loss $L_{FT}$, and a set of (misspelling, correction) pairs for the spell correction loss $L_{SC}$. In this section, we briefly discuss how we obtain each of these datasets.

We use an English Wikipedia dump (dumps.wikimedia.org) as the text corpus $T$ to optimize $L_{FT}$; the baseline FastText model is also trained on this dataset. Matt Mahoney's perl script (http://mattmahoney.net/dc/textdata) is used for pre-processing the raw Wikipedia dump. After pre-processing, the training corpus consists of $|T|$ = 4,341,233,424 words. When generating the vocabulary $V$ based on the corpus, we apply a frequency threshold of 5. After deduplication and thresholding, the size of the vocabulary for our corpus is $|V|$ = 2,746,061 words. We also apply progressive subsampling of frequent words in order to not assign too much importance to highly frequent words.

The misspellings dataset $M$ consists of a set of pairs $(w_m, w_e)$, where $w_e \in V$ represents a (presumably correctly spelled) word from the vocabulary and $w_m$ is a misspelling of $w_e$. Given the size of $V$, we opt for generating misspellings in an automated fashion, using an in-house misspellings generation script. The script is based on a simple error model which captures the probability of typing a character $p_m$ when a character $p_e$ is expected (note that it is possible to have $p_m = p_e$), given a context of previously typed characters. The script is capable of generating misspellings of a targeted edit distance for an input word $w_i$. In the remainder of this section, we discuss the details of the script implementation.

Error model. In order to create the error model, we first mine the query logs of a popular search engine (https://www.facebook.com) and identify cases where a query was manually corrected by a searcher. We then pivot on the modified character, and for each such modification we save a triplet $(c, p_m, p_e)$, where $p_m$ is the pivot character before modification, $p_e$ is the target character after modification, and $c$ represents up to 3 characters preceding the pivot in the original query. E.g., given a query hello worjd corrected to hello world, we would generate four triplets: $[(wor, j, l), (or, j, l), (r, j, l), (\epsilon, j, l)]$, where $\epsilon$ represents an empty word. Similarly, we create triplets by pivoting on characters which have not been modified. After processing all available logs, we count each unique triplet. For each unique pair $(c, p_m)$ of a context and a pivot, we then create a target list consisting of all possible targets $p_e$, each associated with a probability calculated from the counts, and we sort each target list in order of decreasing probability.

Injecting misspellings. Consider a word $w_i \in V$ that we want to misspell. For each character $p \in w_i$, we take its longest possible context $c$ (up to 3 characters) and look up the target list corresponding to $(c, p)$. We then proceed along the target list, summing up the probabilities of subsequent targets until the sum is greater than or equal to a randomly selected target probability $tp \in [0.0, 1.0]$, and we choose the corresponding target $t$ as a replacement for $p$ (note that in the majority of cases $t = p$). We repeat this process for every word in $V$. In order to respect the real distribution of words in the text corpus $T$, we set the number of misspellings generated for each word $w_i \in V$ equal to the square root of the number of appearances of $w_i$ in $T$. The total size of the misspellings dataset generated in this fashion is $|M|$ = 20,068,964 pairs. We make the dataset of misspellings publicly available at https://bitbucket.org/bedizel/moe.
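A sketch of the sampling step just described; the toy error-model table and all data structures are our stand-ins for the mined ones, and we assume the longest available context is used without backoff:

```python
import random

# hypothetical error model: (context, pivot) -> [(target, prob), ...], sorted by
# decreasing probability; the real table is mined from query-log corrections.
ERROR_MODEL = {
    ("wor", "j"): [("j", 0.90), ("l", 0.08), ("k", 0.02)],
}

def replace_char(context, pivot):
    """Walk the target list until the cumulative probability reaches tp."""
    targets = ERROR_MODEL.get((context, pivot))
    if not targets:
        return pivot
    tp, cum = random.random(), 0.0
    for target, prob in targets:
        cum += prob
        if cum >= tp:
            return target
    return targets[-1][0]

def misspell(word):
    """Per-character replacement with up to 3 characters of left context."""
    return "".join(replace_char(word[max(0, i - 3):i], p)
                   for i, p in enumerate(word))
```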
In this section, we describe the experimental setup used for training our models and the experiments we conducted. We use FastText (https://fasttext.cc/) as the baseline for comparison, since it can generate embeddings for OOV words, which makes it potentially suitable for dealing with misspellings. We train the baseline model using the default hyperparameters provided by the authors: we consider character $n$-grams of lengths between $m = 3$ and $M = 6$, and we use 5 negative samples for each positive sample. Training MOE requires optimizing the two loss functions $L_{FT}$ and $L_{SC}$ jointly. For optimizing $L_{FT}$, we use the same parameters as in the baseline; additionally, to optimize $L_{SC}$, we experiment with 5 negative samples per positive sample. We sweep over a range of values for the coefficient $\alpha$ combining the two losses: $\{0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.95, 0.99\}$. Both FastText and MOE are trained using Stochastic Gradient Descent with a linearly decaying learning rate for 5 epochs, learning vectors with 300 dimensions.

We evaluate the performance of MOE on the following tasks: (intrinsic) Word Similarity, Word Analogy, and Neighborhood Validity; (extrinsic) POS Tagging of English sentences. We report the overlap between the misspellings seen at training time and the misspellings present in the tests in Table 1.

5.2 Intrinsic Evaluation

We evaluate MOE on two classic intrinsic tasks, namely Word Similarity and Word Analogy, and on a novel intrinsic task evaluating the distance between the vector embeddings of misspellings and their correctly spelled variants.

Word Similarity. In the word similarity task, we evaluate how well the embeddings generated by MOE capture the semantic similarity between two words. For this purpose, we use two datasets: (i) WS353 (Finkelstein et al., 2001) and (ii) Rare Words (RW) (Luong et al., 2013). Both datasets contain pairs of words $w_a$ and $w_b$, annotated with a real value in the range $[0, 10]$ representing the degree of similarity between $w_a$ and $w_b$ as perceived by human judges. In order to evaluate how resilient our method is to spelling errors, for each pair of words $(w_a, w_b)$ in the dataset, we provide a respective pair of misspellings $(m_a, m_b)$. The misspellings are mined from the search query logs of a real-world online search service; when the desired misspellings are not available in the logs, we synthetically generate them using the same script we used to generate the set $M$ (see Section 4 for details).

We create 3 misspelled variants of both the WS353 and RW datasets. In each variant we limit the ratio between the edit distance (Levenshtein, 1966) of the word and its misspelling, $d_e(w_i, m_i)$, and the length of the word by a constant $r \in \{0.125, 0.250, 0.375\}$, with $r = 0$ representing the original dataset. More precisely, for each $r$ we look for a misspelling which satisfies the condition $d_e(w_i, m_i) = \lfloor r \cdot len(w_i) \rfloor$. Effectively, if a word is too short to satisfy the condition, we preserve the original word (then $w_i = m_i$). The histograms in Figure 1 show the actual distribution of edit distances and lengths of words. [Figure 1: Distribution of edit distances $d_e(w_i, m_i)$ and lengths of words $len(w_i)$ for the WS353 variants (top) and the RW variants (bottom).] As expected, edit distance increases steeply with the increase of the $r$ value. Edit distances are higher for the RW dataset, since the average length of words in RW is higher than in WS353. Also, we observe that for $r = 0.125$, a significant portion of the words is not changed.

We conduct experiments for different values of the hyperparameter $\alpha$, which sets the trade-off between $L_{FT}$ and $L_{SC}$, i.e., the importance assigned to the semantic loss and to the misspelling loss. In the experiments, the results corresponding to $\alpha = 0$ represent our baseline, FastText, since for $\alpha = 0$ the loss $L_{MOE}$ is equal to $L_{FT}$. We measure Spearman's rank correlation (Spearman, 1904) between the distance of an input pair of words and the human judgment score, both for the original and for the misspelled pair.
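A sketch of this evaluation loop (the dataset loading and the embedding lookup `embed` are stubbed out; `scipy` is assumed to be available):

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def word_similarity_eval(pairs, human_scores, embed):
    """Spearman correlation between model similarities and human judgments.

    pairs: (w_a, w_b) tuples, possibly the misspelled variants (m_a, m_b);
    human_scores: judgments in [0, 10]; embed: word -> vector, e.g. a MOE lookup.
    """
    model_scores = [cosine(embed(a), embed(b)) for a, b in pairs]
    return spearmanr(model_scores, human_scores).correlation
```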
Figure 2 demonstrates the results of the word similarity task. [Figure 2: Experimental results (Spearman rank correlation as a function of $\alpha$) for the word similarity task on WS353 (left) and RW (right).] We observe that MOE improves over FastText for the WS353 variants with $r = 0.25$ and $r = 0.375$, and degrades performance when $r = 0$ and $r = 0.125$, where the majority of the words is not changed (see Figure 1 for the edit distance distribution). As we expected, larger values of $\alpha$, corresponding to more attention given to misspellings during training, result in improvements on highly misspelled datasets. For the RW dataset (Figure 2), we observe that for all values of $r$, MOE improves over the FastText baseline when we set $\alpha = 0.05$. More specifically, when $r \in \{0, 0.125\}$ and $\alpha < 0.1$, the proposed method improves over the baseline; when the amount of misspellings is higher, i.e., $r \in \{0.25, 0.375\}$, MOE improves over the baseline for all values of $\alpha$. These results suggest that FastText may be a good baseline for dealing with low edit distance misspellings, but that our model is better at capturing semantic relationships on higher edit distance misspellings. This is in line with our hypothesis presented in Section 3.2.

Word Analogy. In addition to word similarity, we also test the performance of MOE on the popular word analogy task introduced by Mikolov et al. (2013a). This task attempts to measure how good the embedding model is at preserving relationships between words. A single test sample from the word analogy dataset consists of four words $A, B, C, D$, forming two pairs $A, B$ and $C, D$ standing in analogous relationships ("$A$ is to $B$ like $C$ is to $D$"). There are two types of relationships: (i) syntactic, related to the structure of words, and (ii) semantic, related to their meanings. banana, bananas, cat, cats is an example of a syntactic test sample: in both pairs, the fact that the second word is the plural of the first constitutes the relationship. Athens, Greece, Berlin, Germany is an example of a semantic test sample: the relationship being tested is that between the capital of a country and the country itself.

In addition to analyzing the canonical variant of the word analogy test, we also introduce a modification suited specifically to the misspellings use-case. Given a line $A, B, C, D$ from the original analogies dataset, we misspell the first pair of words, obtaining a line $A', B', C, D$, where $A'$ is a misspelling of $A$ and $B'$ is a misspelling of $B$. We want to test whether the misspelled pair $A', B'$ preserves the relationship of the pair $C, D$. When generating misspellings, we use a procedure similar to the one used for word similarity, creating one variant of the misspelled dataset constrained to the edit distance ratio $r = 0.25$.
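The paper does not spell out its retrieval rule for this task, so the sketch below assumes the conventional vector-offset (3CosAdd) evaluation:

```python
import numpy as np

def analogy_accuracy(quads, embed, vocab_matrix, vocab_index):
    """Predict D from B - A + C over the vocabulary; count exact top-1 hits.

    quads: (A, B, C, D) tuples (possibly with A, B misspelled);
    vocab_matrix: |V| x dim array; vocab_index: word -> row number.
    """
    normed = vocab_matrix / np.linalg.norm(vocab_matrix, axis=1, keepdims=True)
    correct = 0
    for a, b, c, d in quads:
        query = embed(b) - embed(a) + embed(c)
        scores = normed @ (query / np.linalg.norm(query))
        for w in (a, b, c):                 # exclude the three query words
            if w in vocab_index:
                scores[vocab_index[w]] = -np.inf
        correct += vocab_index.get(d, -1) == int(np.argmax(scores))
    return correct / len(quads)
```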
Experimental results for the canonical version of the word analogy task, presented in Figure 3, show that MOE performs worse than FastText on the semantic analogy task, but better than the baseline on the syntactic analogy task. The results for the misspelled variant of the task show that the overall performance of both the baseline and MOE is worse than on the canonical variant. For low values of $\alpha \in \{0.01, 0.05\}$, MOE outperforms the baseline on the semantic task, achieving an over 67% better score than FastText for $\alpha = 0.01$. MOE outperforms the baseline on the syntactic task for all tested values of $\alpha$, improving by over 80% for $\alpha = 0.75$; for $\alpha = 0.01$, which achieved the best semantic result, the improvement on the syntactic task is over 33%.

The trends that we observe in both the canonical and the misspelled variants of the word analogy task seem to validate our choice of loss function for the MOE model. It is clear that the FastText component of the loss is indispensable for learning the semantic relationships between words; in fact, it is the only component of the loss function which attempts to learn these relationships. Therefore, decreasing its importance (by increasing the value of $\alpha$) is reflected in a decay of the semantic analogies score. The spell-correction component of the loss function, on the other hand, leverages the relationship between correctly spelled words and their misspellings. As a side effect, it also adds additional subword information into the model, which explains our good performance on the syntactic analogies task. As our results on the misspelled variant of the task show, we improve over the baseline in understanding analogies on misspelled words, which was one of the design principles of MOE.

Neighborhood Validity. One of the explicit objectives of MOE is to embed misspellings close to their correct variants in the vector space. In order to validate this hypothesis, we check where in the neighborhood of a misspelling the correct word is situated. Formally, for a pair $(w_m, w_e)$ of a misspelling and its correction, we pick the $k$ nearest neighbors of the misspelling $w_m$ in the embedding space, using cosine similarity as the distance metric. We then evaluate the position of the correct word $w_e$ within the neighborhood of $w_m$ using two metrics: we use MRR (Voorhees et al., 1999) to score the neighborhood of the embeddings of misspellings (assigning a score of 0 if the correct word is not present), and we compute the neighborhood coverage, defined as the percentage of misspellings for which the neighborhood contains the correct version.
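A sketch of these two metrics (whether $w_m$ itself is excluded from its own neighborhood is a detail we gloss over here):

```python
import numpy as np

def neighborhood_validity(pairs, embed, vocab_matrix, vocab_index, k=10):
    """MRR and coverage of the correction w_e among the k nearest neighbors of w_m."""
    normed = vocab_matrix / np.linalg.norm(vocab_matrix, axis=1, keepdims=True)
    reciprocal_ranks, hits = [], 0
    for w_m, w_e in pairs:
        q = embed(w_m)
        sims = normed @ (q / np.linalg.norm(q))
        top_k = np.argsort(-sims)[:k]
        rank = [r for r, idx in enumerate(top_k, start=1)
                if idx == vocab_index.get(w_e, -1)]
        reciprocal_ranks.append(1.0 / rank[0] if rank else 0.0)
        hits += bool(rank)
    return float(np.mean(reciprocal_ranks)), hits / len(pairs)  # (MRR, coverage)
```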
The test set contains 5,910 pairs $(w_m, w_e)$ sampled from a collection of data coming from a real-world online service (www.facebook.com). Figure 4 shows the experimental results for the Neighborhood Validity task; recall that $\alpha = 0$ denotes the FastText baseline. The test results confirm our hypothesis: MRR increases when more importance is given to the $L_{SC}$ component of the loss, for every size of the neighborhood $k \in \{5, 10, 50, 100\}$. A similar trend can be observed for neighborhood coverage. We conclude that, on average, we are more likely to surface the correction using MOE than with FastText. What is more, whenever we are able to surface the correct version of a misspelled word, its position in the ranking is higher for MOE than for the FastText baseline.

POS Tagging. Finally, we evaluate MOE on a Part-of-Speech (POS) tagging task (http://universaldependencies.org/conll17/data.html). To assess the impact of misspellings, we artificially inject misspellings into the dataset. We train MOE on three different dataset variants: a non-misspelled dataset, to verify that MOE does not jeopardize performance on correct words; a dataset where 10% of the words contain a misspelling, to simulate a realistic environment where some of the words are misspelled; and finally a dataset where 100% of the words contain misspellings, to simulate a highly distorted environment. We use a state-of-the-art POS tagger (Ma and Hovy, 2016) consisting of a Conditional Random Fields (CRF) model, where the embeddings of the words in a sentence constitute the observations and the tags to assign constitute the latent variables; the model adds a dependency on both layers of a Bi-LSTM component to the tag variables in the CRF. We evaluate the F1 score of the system for the three dataset variants described above, testing two different representations as input to the CRF: FastText (our baseline) and MOE embeddings. Our results are reported in Table 2, which crosses the training conditions (original, 100% misspelled, 10% misspelled) with the test conditions (original, 100% misspelled).

We make the following observations based on the results of our experiments. Firstly, in the two extreme cases, a 100% misspelled test set with correct training data and a correct test set with 100% misspelled training data, MOE improves F1 by 2 and 3.5 points respectively with respect to the FastText baseline. When the test data is 100% misspelled, MOE always beats the baseline, by up to 2.3 points of F1; moreover, in this case the loss in F1 with respect to the case where both the training and the test data are correct is much smaller than when the training data does not contain misspellings. To be remarked is the F1 score difference in the more realistic case of training data that is 10% misspelled: here MOE attains a considerable improvement of 2.3 F1 points. Finally, MOE does not reduce the effectiveness of the CRF POS tagger with respect to the FastText baseline when neither the training nor the test set is misspelled. All in all, we have shown that MOE does not affect the effectiveness of the POS tagger on correctly spelled words and considerably improves its quality on misspellings.

One of the most urgent issues with word embeddings is that they are often unable to deal with malformed words, which is a big limitation in real-world applications. In this work, we proposed a novel model called MOE, which aims to solve a long-standing problem: generating high quality, semantically valid embeddings for misspellings. In the experiments, on the neighborhood validity task, we showed that MOE maps the embeddings of misspellings close to the embeddings of the corresponding correctly spelled words. Moreover, we showed that MOE performs significantly better than the FastText baseline on the word similarity task when misspellings are involved. For the canonical versions of the word similarity tasks, where misspellings are not involved, we showed that MOE does not significantly worsen quality on the WS353 dataset, and improves over the baseline on the RW dataset. In the word analogy task, MOE preserves the quality of the semantic analogies at a level similar to the baseline, while improving on the syntactic analogies.
In the variant of the test where misspellings are involved, MOE outperforms the baseline on both semantic and syntactic questions. Finally, we have shown that MOE does not affect the effectiveness of the POS tagger in the case of correctly spelled words, and considerably improves the quality of the POS tagger on misspellings. In the future, we will test different ways of training embeddings for misspellings, including the extension of the same technique to multi-lingual embeddings. We will also test deep architectures for combining the $n$-grams in misspellings, to better capture the various interdependencies between $n$-grams and the correct versions of words. Finally, we will assess the robustness of both character-based (Kim et al., 2016) and context-dependent embeddings (Devlin et al., 2018; Peters et al., 2018) with respect to misspellings.
[ "method", "abstain", "objective", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "result", "objective", "method", "abstain", "abstain", "abstain", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "other", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "result", "result", "result", "abstain", "abstain", "result", "abstain", "method", "abstain" ]
[ "Despite the prominence of neural abstractive summarization models, we know little about how they actually form summaries and how to understand where their decisions come from.", "We propose a two-step method to interpret summarization model decisions.", "We first analyze the model's behavior by ablating the full model to categorize each decoder decision into one of several generation modes: roughly, is the model behaving like a language model, is it relying heavily on the input, or is it somewhere in between?", "After isolating decisions that do depend on the input, we explore interpreting these decisions using several different attribution methods.", "We compare these techniques based on their ability to select content and reconstruct the model's predicted token from perturbations of the input, thus revealing whether highlighted attributions are truly important for the generation of the next token.", "While this machinery can be broadly useful even beyond summarization, we specifically demonstrate its capability to identify phrases the summarization model has memorized and determine where in the training pipeline this memorization happened, as well as study complex generation phenomena like sentence fusion on a per-instance basis.", "Transformer-based neural summarization models (Liu and Lapata, 2019; Stiennon et al., 2020; Xu et al., 2020b; Desai et al., 2020), especially pretrained abstractive models like BART (Lewis et al., 2020) and PEGASUS (Zhang et al., 2020), have made great strides in recent years.", "These models demonstrate exciting new capabilities in terms of abstraction, but little is known about how these models work.", "In particular, do token generation decisions leverage the source text, and if so, which parts?", "Or do these decisions arise based primarily on knowledge from the language model (Jiang et al., 2020; Carlini et al., 2020), learned during pre-training or fine-tuning?", "Having tools to analyze these models is crucial to identifying and forestalling problems in generation, such as toxicity (Gehman et al., 2020) or factual errors (Kryscinski et al., 2020; Goyal and Durrett, 2020, 2021).", "Although interpreting classification models for NLP has been widely studied from perspectives like feature attribution (Ribeiro et al., 2016; Sundararajan et al., 2017) and influence functions (Koh and Liang, 2017; Han et al., 2020), summarization specifically introduces some additional elements that make these techniques hard to apply directly.", "First, summarization models make sequential decisions from a very large state space.", "Second, encoder-decoder models have a special structure, featuring a complex interaction of decoder-side and encoder-side computation to select the next word.", "Third, pre-trained LMs blur the distinction between relying on implicit prior knowledge or explicit instance-dependent input.", "This paper aims to more fully interpret the stepwise prediction decisions of neural abstractive summarization models.", "1 First, we roughly bucket generation decisions into one of several modes of generation.", "After confirming that the models we use are robust to seeing partial inputs, we can probe the model by predicting next words with various model ablations : a basic language model with no input (LM ), a summarization model with no input (S ), with part of the document as input (S part ), and with the full document as input (S full ).", "These ablations tell us when the decision is context-independent (generated in an LM-like way), when it is heavily 
We map these regions in Figure 2 and can use these maps to coarsely analyze model behavior. For example, 17.6% of the decisions on XSum are in the lower-left corner (LM-like), which means they do not rely much on the input context. (Code and visualization are available at https://github.com/jiacheng-xu/sum-interpret.)

[Figure 1: Overview of the two-step method on an XSum example (input article: "Speaking at a rally for Tory candidate Zac Goldsmith, the prime minister warned about the dangers of a Labour victory for the capital's economy. ..."). The ablation stage compares LM and full-model predictions, where a higher difference means higher dependence on context; the attribution stage highlights the document tokens that impacted the prediction the most (according to integrated gradients).]

Second, we focus on more fine-grained attribution of decisions that arise when the model does rely heavily on the source document. We carefully examine interpretations based on several prior techniques, including occlusion (Zeiler and Fergus, 2014), attention, integrated gradients (Sundararajan et al., 2017), and input gradients (Hechtlinger, 2016). In order to evaluate and compare these methods, we propose a comprehensive evaluation based on presenting counterfactual, partial inputs to quantitatively assess these models' performance with different subsets of the input data. Our two-stage analysis framework allows us to (1) understand how each individual decision depends on context and prior knowledge (Sec 3), (2) find suspicious cases of memorization and bias (Sec 4), and (3) locate the source evidence for context-dependent generation (Sec 5). The framework can also be used to understand more complex decisions like sentence fusion (Sec 6).

A seq2seq neural abstractive model first encodes an input document with $m$ sentences $(s_1, \ldots, s_m)$ and $n$ tokens $(w_1, w_2, \ldots, w_n)$, then generates a sequence of tokens $(y_1, \ldots, y_T)$ as the summary. At each time step $t$ in the generation phase, the model encodes the input document and the decoded summary prefix, and predicts the distribution over tokens $p(y_t \mid w_1, w_2, \ldots, w_n, y_{<t})$. We investigate the English-language CNN/DM (Hermann et al., 2015) and XSum (Narayan et al., 2018) datasets, which are commonly used to fine-tune pre-trained language models like BART, PEGASUS, and T5. As shown in past work (Narayan et al., 2018; Chen et al., 2020b; Xu et al., 2020a), XSum has significantly different properties from CNN/DM, so these datasets will show a range of model behaviors. We primarily use the development sets for our analysis.

We focus on BART (Lewis et al., 2020), a state-of-the-art pre-trained model for language modeling and text summarization. Specifically, we adopt 'bart-large' as the language model M_LM, 'bart-large-xsum' as the summarization model M_SUM for XSum, and 'bart-large-cnn' for CNN/DM, made available by Wolf et al. (2019).
(2019).", "BART features separate LM and summarization model sharing the same subword tokenization method.", "2 Our approach focuses on teasing apart these different modes of decisions.", "We first run the full model to get the predicted summary ( y 1 , , y T ) .", "We then analyze the distribution placed by the full model S full to figure out what contributes towards the generation of the next token.", "2 Our analysis can generalize to other pre-trained models, but past work has shown BART and PEGASUS to be roughly similar in terms of behavior (Xu et al., 2020a), so we do not focus on this here.", "the ablation stage, we compare the predictions of different model and input configurations.", "The goal of this stage is to coarsely determine the mode of generation.", "Here, for and Khan are generated in an LM-like way: the model already has a strong prior that Sadiq should be Sadiq Khan and the source article has little impact on this decision.", "Cameron , by contrast, does require the source in order to be generated.", "And mayoral is a complex case, where the model is not strictly copying this word from anywhere in the source, but instead using a nebulous combination of information to generate it.", "In the attribution stage, we interpret such decisions which require more context using a more fine-grained approach.", "Given the predicted prefix (like David ), target prediction (like Cameron ), and the model, we use attribution techniques like integrated gradients (Sundararajan et al., 2017) or LIME (Ribeiro et al., 2016) to track the input which contributes to this prediction.", "The configurations we use are listed in Table 1 and defined as follows:", "LM is a pre-trained language model only taking the decoded summary prefix as input.", "We use this model to estimate what a pure language model will predict given the prefix.", "We denote the prediction distribution as PLM = P ( y t | y <t ; MLM ) .", "S is the same BART summarization model as S full , but without the input document as the input.", "That is, it uses the same parameters as the full model, but with no input document fed in.", "We use the prediction of this model to estimate how strong an effect the in-domain training data has, but still treating the model as a decoder-only language model.", "It is denoted as P = P ( y t | y <t ; MSUM ) .", "Figure 1 shows how this can effectively identify cases like Khan that surprisingly do not rely on the input document.", "S part is a further step closer to the full model: this is the BART summarization model conditioned on the decoder prefix and part of the input document, denoted as P part = P ( y t | y <t , { s i } ; MSUM ) where { w i } is a subset of tokens of the input document.", "The selected content could be a continuous span, or a sentence, or a concatenation of several spans or sentences.", "Although MSUM is designed and trained to condition on input document, we find that the model also works well with no input, little input and incomplete sentences.", "As we will show later, there are many cases that this scheme successfully explains; we formalize our assumption as follows: Assumption 1 If the model executed on partial input nearly reproduces the next word distribution of the full model, then we view that partial context as a sufficient (but perhaps not necessary) input to explain the model's behavior.", "Here we define partial input as either just the decoded summary so far or the summary and partial context.", "In practice, we see two things.", "First, when considering just 
Although M_SUM is designed and trained to condition on an input document, we find that the model also works well with no input, little input, and incomplete sentences. As we will show later, there are many cases that this scheme successfully explains; we formalize our assumption as follows:

Assumption 1. If the model executed on partial input nearly reproduces the next-word distribution of the full model, then we view that partial context as a sufficient (but perhaps not necessary) input to explain the model's behavior.

Here we define partial input as either just the decoded summary so far, or the summary plus partial context. In practice, we see two things. First, when considering just the decoder context (i.e., behaving as an LM), the partial model may reproduce the full model's behavior (e.g., Khan in Figure 1). We do not focus on explaining these cases in further detail; while conceivably the actual conditional model might internally be doing something different (a risk noted by Rudin (2019)), this proves the existence of a decoder-only proxy model that reproduces the full model's results, which is a criterion used in past work (Li et al., 2020). Second, when considering partial inputs, the model frequently requires one or two specific sentences to reproduce the full model's behavior, suggesting that the given contexts are both necessary and sufficient.

Because these analyses involve using the model on data significantly different from that which it was trained on, we want another way to quantify the importance of a word, span, or sentence. This brings us to our second assumption:

Assumption 2. In order to say that a span of the input or decoder context is important to the model's prediction, it should be the case that this span is demonstrated to be important in counterfactual settings. That is, modified inputs to the model that include this span should yield closer predictions than those that don't.

This criterion depends on the set of counterfactuals we use. Rather than just word removal (Ribeiro et al., 2016), we use a more comprehensive set of counterfactuals (Miller, 2019; Jacovi and Goldberg, 2020) to quantify the importance of input tokens.

Throughout this work, we rely on measuring the distance between distributions over tokens. Although KL divergence is a popular choice, we found it to be very unstable given the large vocabulary size: two distributions that are completely different can have very large KL values. We instead use the $L_1$ distance between the two distributions: $D(P, Q) = \sum_i |p_i - q_i|$. This is similar to using the Earth Mover's Distance (Rubner et al., 1998) over these two discrete distributions, with an identity transportation flow, since the distributions are defined over the same set of tokens.
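The map coordinates for a single decoding step then follow directly (continuing the hypothetical sketch above):

```python
def l1_dist(p, q):
    """D(P, Q) = sum_i |p_i - q_i|; ranges from 0 (identical) to 2 (disjoint)."""
    return (p - q).abs().sum().item()

x = l1_dist(p_lm, p_full)     # D(P_LM, P_full): x-axis of the maps in Figure 2
y = l1_dist(p_empty, p_full)  # D(P_emptyset, P_full): y-axis
```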
Based on Assumption 1, we can take a first step towards understanding these models using the partial models described in Section 2.3. Previous work (See et al., 2017; Song et al., 2020) has studied model behavior based on externally visible properties of the model's generation, such as identifying novel words, differentiating copying from generation, and prediction confidence, which provides some insight into model behavior (Xu et al., 2020a). However, these analyses focus more on shallow comparison of the input document, the generated summary, and the reference summary, and do not focus as strongly on the model itself.

We propose a new way of mapping the prediction space, with maps for XSum and CNN/DM shown in Figure 2. Each point in a map is a single subword token being generated by the decoder on the development set at inference time; that is, each point corresponds to a single invocation of the model. This analysis does not depend on the reference summary at all. The $x$-axis of the map shows the distance between LM∅ and S_full, using the metric defined in Section 2.4, which ranges from 0 to 2; it captures how much the generic pre-trained language model agrees with the full model's predictions. The $y$-axis shows the distance between S∅ and S_full. Other choices of partial models for the axes are possible (or more axes), but we believe these two show the important factors. The histograms on the sides of each map show counts for each vertical or horizontal slice.

Modes of decisions. We break these maps into a few coarse regions based on the axis values, giving the coordinates of the bottom-left and upper-right corner of each region. These values were chosen by inspection, and the precise boundaries have little effect on our analysis, as many of the decisions fall into the corners or along the sides.

LM ([0, 0], [0.5, 0.5]) contains the cases where LM∅ and S∅ both agree with S_full. These decisions are easily made using only decoder information, even without training or knowledge of the input document; they follow from the constraints of language models, including function words, common entities, or idioms. The Context (CTX) region covers the bulk of the remaining cases, where neither decoder-only model can model these decisions and the model must rely on the input document.

FT ([1.5, 0], [2, 0.5]) captures cases where the fine-tuned decoder-only model is a close match but the pre-trained model is not. This happens more often on XSum and reflects memorization of training summaries, as we discuss later.

PT ([0, 1.5], [0.5, 2]) is the least intuitive case, where LM∅ agrees with S_full but S∅ does not; that is, fine-tuning a decoder-only model causes it to work less well. This happens more often on CNN/DM and reflects memorization of data in the pre-training corpus.
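A sketch of the bucketing using the corner coordinates above (treating CTX as the central catch-all is our reading of the region layout):

```python
def region(x, y):
    """Assign a decoding step to a coarse generation mode from its map coordinates."""
    if x <= 0.5 and y <= 0.5:
        return "LM"    # both decoder-only ablations agree with the full model
    if x >= 1.5 and y <= 0.5:
        return "FT"    # only the fine-tuned decoder-only model agrees
    if x <= 0.5 and y >= 1.5:
        return "PT"    # only the pre-trained LM agrees
    if x >= 0.5 and y >= 0.5:
        return "CTX"   # neither agrees: the decision depends on the input document
    return "gap"       # the ~8% of cases between the named regions
```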
While the map highlights some useful trends, there are many examples that do rely heavily on the context which we would like to further analyze. Some examples depend on the context in a sophisticated way, but other tokens, like parts of named entities or noun phrases, are simply copied from the source article. Highlighting this contrast, we additionally subdivide the cases by how they depend on the context, conducting a sentence-level presence probing experiment to further characterize the generation decisions. For a document with $m$ sentences, we run the S_part model conditioned on each of the sentences in isolation, obtaining a sequence of scalars $P_{sent} = (P_{part}(s_i);\ i \in [1, m])$. We define CTX-Hd (context-hard) cases as ones where $\max(P_{sent})$ is low; that is, where no single sentence can yield the token, as in the case of sentence fusion. These also reflect cases of high entropy for S_full, where any perturbation of the input may cause a big distribution shift. The first, second, and third quartiles of $\max(P_{sent})$ are [0.69, 0.96, 1.0] on XSum and [0.95, 1.0, 1.0] on CNN/DM.

To roughly characterize the words generated in different regions of the map, Table 2 shows the percentage of examples falling into each region and the top 3 POS tags for each region on the XSum map:

Table 2: Percentage of examples falling into each region and the top POS tags for each region in the XSum map.
Cat | Freq(%) | Top 3 POS tags w/ Freq(%)
LM  | 17.6%   | ADP 28.6%, DET 21.1%, NOUN 13.5%
CTX | 69.6%   | NOUN 20.3%, VERB 15.9%, PROPN 15.6%
PT  | 2.5%    | PROPN 37.0%, NOUN 13.0%, ADP 13.0%
FT  | 2.1%    | AUX 31.6%, NOUN 23.7%, PROPN 15.8%
All | 100.0%  | NOUN 18.9%, PROPN 14.3%, ADP 13.9%

From the frequency of these categories, we can tell that more than two-thirds of the decisions belong to the Context category, and 17.6% of cases are in LM, the second-largest category. In the LM region, ADP and DET account for nearly half of the data points, confirming that these are largely function words. Nouns are still prevalent, accounting for 13.5% of the category; after observing the data, we found that these points represent commonsense knowledge or common nouns or entities, like Nations following United or Obama following Barack, where the model generates the token without relying on the input. Around 8% of cases fall into the gaps between these categories. Only 2.5% and 2.1% of the generations fall into PT and FT, respectively. These are small but significant cases, as they clearly show the biases from the pre-training corpus and the fine-tuning corpus; we now describe the effects we observe.

One benefit of mapping the predictions is to detect predictions that are suspiciously likely given one language model but not the other, specifically those in the PT and FT regions. CNN/DM has more cases falling into PT than XSum, so we focus on CNN/DM for PT and on XSum for FT.

PT: Bias from the Pre-training Corpus. The data points falling into the PT area are those where the LM∅ prediction is similar to the S_full prediction but the S∅ prediction is very different from S_full. We present a set of representative examples from the PT region of the CNN/DM map in Table 3. For the first example, match is assigned high probability by LM∅ and S_full, but not by the no-input summarization model: the prefix is "Danny Welbeck was named man of the" and the relevant context reads "..., the booming PA system kicked in and proclaimed that Danny Welbeck was England's man of the match." The cases in this table exhibit a suspiciously high probability assigned to the correct answer by the base LM: its confidence about Kylie Jenner vs. Kylie Min(ogue) is uncalibrated with what the true probabilities seem likely to be to our human eyes.

One explanation which we investigate is whether the validation and test sets of benchmark datasets like CNN/DM are contained in the pre-training corpus, which could teach the base LM these patterns. Several web crawls have been used for different models, including C4 (Raffel et al., 2020), OpenWebText (Radford et al., 2019), and CC-News (Liu et al., 2019). Due to corpus availability, we only check OpenWebText, which, as part of C4, is used for models like GPT-2, PEGASUS, and T5.
According to Hermann et al. (2015), the validation and test sets of CNN/DM come from March and April 2015, respectively. We extract the March to May 2015 dump of OpenWebText and find that 4.46% (512 out of 11,490) of test examples and 3.31% (442 out of 13,368) of validation examples are included in OpenWebText. (This is an approximation, since we cannot precisely verify the pre-training datasets for each model, but it is more likely to be an underestimate than an overestimate: we only extract pre-training documents from cnn.com and dailymail.co.uk over a limited time range, so we may fail to detect snippets of reference summaries that show up in other time ranges of the scrape or in other news sources, whether through plagiarism or re-publishing.) Our matching criterion is more than three 7-gram word overlaps between the pre-training document and the reference summaries from the dataset; upon inspection, over 90% of the cases flagged by this criterion contained large chunks of the reference summary.
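A sketch of this overlap check (the whitespace tokenization is our simplification):

```python
def ngrams(tokens, n=7):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def likely_contained(reference_summary, pretrain_doc, n=7, min_overlap=3):
    """Flag a reference summary sharing more than `min_overlap` 7-grams
    with a pre-training document."""
    shared = ngrams(reference_summary.split(), n) & ngrams(pretrain_doc.split(), n)
    return len(shared) > min_overlap
```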
This suggests that the pre-trained language model has likely memorized certain articles and their summaries. Other factors could be at play: other types of knowledge in the language model (Petroni et al., 2019; Shin et al., 2020; Talmor et al., 2020), such as key entity co-occurrences, could be contributing to these cases as well and simply be forgotten during fine-tuning. However, as an analysis tool, ablation suggested a hypothesis about data overlap which we were able to partially confirm, which supports its utility for understanding summarization models.

FT: Bias from Fine-tuning Data. We now examine the data points falling into the bottom-right corner of the map, where the fine-tuned LM matches the full model more closely than the pre-trained LM does. In Table 4, we present some model-generated bigrams found in the FT region of XSum and compare the frequency of these patterns in the XSum and CNN/DM training data. Not every generation instance of these bigrams falls into the FT region, but many do. Table 4 shows the relative probabilities of these counts in XSum and CNN/DM, showing that these cases are all very common in XSum training summaries; the aggregate over all decisions in this region (the last line) shows this pattern as well. These can suggest larger patterns: the first three come from the common phrase in our series of letters from African journalists (which starts 0.5% of summaries in XSum). Other stylistic markers, such as ways of writing currency, are memorized too.

As shown in Table 2, more than two-thirds of generation steps actually do rely heavily on the context. Here, we focus specifically on identifying which aspects of the input are important in cases where the input does heavily influence the decision, using attribution methods. Each of the methods we explore scores each word $w_i$ in the input document with a score $a_i$; the score can be a normalized distribution or a probability value ranging from 0 to 1. For each method, we rank the tokens in descending order by score.

To confirm that the highlighted tokens are meaningfully used by the model when making its predictions, we propose an evaluation protocol based on a range of counterfactual modifications of the input document, taking care to make these compatible with the nature of subword tokenization. Our evaluation focuses on the following question: given a budget of tokens or sentences, how well does the model reconstruct the target token $y_t$ when shown the important content selected by the attribution method? Our metric is the cross-entropy loss of predicting the model-generated next token given different subsets of the input. (The full model is not a strict bound on this metric: restricting the model to only see salient content could actually increase the probability of what was generated. However, because we have limited ourselves to CTX examples and are aggregating across a large corpus, we do not observe this in our metrics.)

Methods based on adding or removing single tokens have been used for evaluation before (Nguyen, 2018). However, for summarization, showing the model partial or ungrammatical inputs in the source may significantly alter the model's behavior. To address this, we evaluate under four conditions, where in each case the model has a specific budget:

1. DISPTOK selects n tokens as the input.
2. RMTOK shows the document with n tokens masked instead of deleted. (We do not directly remove the tokens because this typically makes the sentence ungrammatical; token masks are a more natural type of input to models that are pre-trained with these sorts of masks anyway.)
3. DISPSENT selects n sentences as the input, based on cumulative attribution over the sentence.
4. RMSENT removes n sentences from the document.

Table 5 shows examples of these methods applied to the examples from Figure 1; they highlight the impact of key tokens in certain generation cases, but not all.

Table 5: Examples of DISPTOK and RMTOK.
Target  | w_attr   | DISPTOK n=0 | n=1  | RMTOK n=0 | n=1
Cameron | minister | 0.01        | 0.90 | 0.99      | 0.99
for     | Labour   | 0.96        | 0.94 | 0.98      | 0.91
mayoral | 100      | 0.01        | 0.01 | 0.57      | 0.57
S(adiq) | Khan     | 0.01        | 0.01 | 0.97      | 0.38
Khan    | Jeremy   | 0.99        | 0.99 | 0.99      | 0.99

We describe the details of how we feed or mask the tokens in the TOK conditions in Appendix C. The sentence-level methods are guaranteed to return grammatical input; token-based evaluation is more precise, which helps locate the exact feature token, but the trade-off is that the input is not fully natural.

We use two baseline methods: Random, which randomly selects tokens or sentences to display or remove, and Lead, which selects tokens or sentences by document position, along with several attribution methods from prior work. Occlusion (Zeiler and Fergus, 2014) involves iteratively masking every single token (or removing each sentence) in the document and measuring how the prediction probability of the target token changes.
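A sketch of token-level occlusion, reusing the hypothetical `next_token_dist` helper from Section 2 (mask handling and subword alignment are simplified):

```python
def occlusion_attribution(article_tokens, prefix, target_id, mask="<mask>"):
    """Score each source token by how much masking it drops the target's probability."""
    base = next_token_dist(m_sum, " ".join(article_tokens), prefix)[target_id]
    scores = []
    for i in range(len(article_tokens)):
        occluded = article_tokens[:i] + [mask] + article_tokens[i + 1:]
        prob = next_token_dist(m_sum, " ".join(occluded), prefix)[target_id]
        scores.append((base - prob).item())
    return scores
```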
"Integrated Gradients (Sundararajan et al., 2017) computes gradients of the model input at a number of points interpolated between a reference baseline (typically an all-MASK input) and the actual input.", "This computes a path integral of the gradient.", "6 Note that we do not directly remove the tokens because this approach typically makes the sentence ungrammatical.", "Token masks are a more natural type of input to models that are pre-trained with these sorts of masks anyway.", "Attribution Aggregation for Sentence-level Evaluation: We have described the six methods we use for token-level evaluation.", "To evaluate these methods on the sentence-level benchmark, we aggregate the attributions in each sentence: $\text{attr}(s_i) = \sum_{j=0}^{d} \text{attr}(w_j)/d$ (an average over the sentence's tokens).", "Hence we can obtain a ranking of sentences by their aggregated attribution score.", "In Figure 3, we show the token-level and sentence-level comparison of the attribution methods on the CTX examples in XSum.", "IntGrad is the best technique overall, with InpGrad achieving similar performance.", "Interestingly, occlusion underperforms other techniques when more tokens are removed, despite our evaluation being based on occlusion; this indicates that single-token occlusion is not necessarily the strongest attribution method.", "We also found that all of these give similar results, regardless of whether they present the model with a realistic input (sentence removal) or potentially ungrammatical or unrealistic input (isolated tokens added/removed).", "Our evaluation protocol shows better performance from gradient-based techniques.", "The combination of four settings tests a range of counterfactual inputs to the model and increases our confidence in these conclusions.", "We now present a case study of the sort of analysis that can be undertaken using our two-stage interpretation method.", "We conduct an analysis driven by sentence fusion, a particular class of CTX-Hd cases.", "Sentence fusion is an exciting capability of abstractive models that has been studied previously (Barzilay and McKeown, 2005; Thadani and McKeown, 2013; Lebanoff et al., 2019, 2020).", "We broadly identify cases of cross-sentence information fusion by first finding cases in CTX-Hd where $\max(P_{\text{sent}}) < 0.5$, but two sentences combined enable the model to predict the word.", "We search over all $\binom{m}{2}$ combinations of sentences ($m$ is the total number of sentences) and run the $S_{part}$ model on each pair of sentences.", "We identify 16.7% and 6.0% of cases in CNN/DM and XSum, respectively, where conditioning on a pair of sentences increases the probability of the model's generation by at least 0.5 over any sentence in isolation.", "In Table 6, we show two examples of sentence fusion on XSum in this category, additionally analyzed using the DISPSENT attribution method.", "In the first example, typical in XSum, the model has to predict the event name UCI without actually seeing it.", "The model's reasoning appears distributed over the document: it consults entity and event descriptions like world champion and France, perhaps to determine this is an international event.", "In the second example, we see the model again connects several pieces of information.", "The generated text is factually incorrect: the horse is retiring, and not Dujardin.", "Nevertheless, this process tells us some things that are going wrong (the model disregards the horse in the generation process), and could potentially be useful for fine-grained factuality evaluation using recent techniques (Tian et al., 2019; Kryscinski et al., 2020; Goyal and Durrett, 2020; Maynez et al., 2020).",
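The pairwise fusion search described above can be sketched as follows; `score_fn` is a stand-in we assume for running the source-only partial model and reading off the probability of the generated token:

```python
from itertools import combinations

def find_fusion_pairs(sentences, score_fn, delta=0.5):
    """Flag cross-sentence fusion: sentence pairs that raise the probability
    of the generated token by at least `delta` over any single sentence."""
    best_single = max(score_fn([s]) for s in sentences)
    hits = []
    for s1, s2 in combinations(sentences, 2):
        p = score_fn([s1, s2])
        if p - best_single >= delta:
            hits.append((s1, s2, p))
    return hits

# toy score_fn: the token is predictable only from sentences A and C jointly
toy = ["A", "B", "C"]
score_fn = lambda sel: 0.9 if set(sel) == {"A", "C"} else 0.2
print(find_fusion_pairs(toy, score_fn))   # [('A', 'C', 0.9)]
```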
"The majority of the fusion cases we investigated actually reflect content selection at the beginning of the generation.", "Other cases we observe fall more cleanly into classic sentence fusion or draw on coreference resolution.", "Model interpretability for NLP has been intensively studied in the past few years (Ribeiro et al., 2016; Alvarez-Melis and Jaakkola, 2018; Jacovi et al., 2018; Chen et al., 2020a; Jacovi and Goldberg, 2020; DeYoung et al., 2020; Pruthi et al., 2020; Ye et al., 2021).", "However, many of these techniques are tailored to classification tasks like sentiment.", "For post-hoc interpretation of generation, most work has studied machine translation (Ma et al.; Li et al., 2020; Voita et al., 2020).", "Li et al. (2020) focus on evaluating explanations by finding surrogate models that are similar to the base MT model; this is similar to our evaluation approach in Section 5, but involves an extra distillation step.", "Compared to Voita et al. (2020), we are more interested in highlighting how and why changes in the source article will change the summary (counterfactual explanations).", "To analyze summarization more broadly, Xu et al. (2020a) provide a descriptive analysis of models via uncertainty.", "Previous work (Kedzie et al., 2018; Zhong et al., 2019; Kryscinski et al., 2019) has conducted comprehensive examinations of the limitations of summarization models.", "Filippova (2020) ablates model input to control the degree of hallucination.", "Miao et al. (2021) improve the training of MT by comparing the predictions of an LM and an MT model.", "Finally, this work has focused chiefly on abstractive summarization models.", "We believe interpreting extractive (Liu and Lapata, 2019) or compressive (Xu and Durrett, 2019; Xu et al., 2020b; Desai et al., 2020) models would be worthwhile to explore and could leverage similar attribution techniques, although ablation does not apply as discussed here.", "We recommend a few methodological takeaways that can generalize to other conditional generation problems as well.", "First, use ablation to analyze generation models.", "While removing the source forms inputs not strictly on the data manifold, ablation was remarkably easy, robust, and informative in our analysis.", "Constructing our maps only requires querying three models, with no retraining required.", "Second, to understand an individual decision, use feature attribution methods on the source only.", "Including the target context often muddies the interpretation since recent words are always relevant, but looking at attributions over the source and target together doesn't accurately convey the model's decision-making process.", "Finally, to probe attributions more deeply, consider adding or removing various sets of tokens.", "The choice of counterfactuals to explain is an ill-posed problem, but we view the set used here as realistic for this setting (Ye et al., 2021).", "Taken together, our two-step framework allows us to identify generation modes and attribute generation decisions to the input document.", "Our techniques shed light on possible sources of bias and can be used to explore phenomena such as sentence fusion.", "We believe these pave the way for future studies of targeted phenomena, including fusion, robustness, and bias in text generation, through the lens of these interpretation techniques.", "Thanks to the members of the UT TAUR lab for helpful discussion, especially Tanya Goyal, Yasumasa Onoe, and Xi Ye for constructive suggestions.",
"This work was partially supported by a gift from Salesforce Research and a gift from Amazon.", "Thanks as well to the anonymous reviewers for their helpful comments." ]
[ "method", "objective", "objective", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "method", "method", "other", "abstain", "method", "abstain", "objective", "result", "abstain", "other", "other", "method", "other", "method", "method", "method", "other", "method", "objective", "objective", "method", "method", "other", "other", "other", "other", "method", "method", "method", "other", "method", "method", "other", "other", "method", "other", "other", "other", "other", "method", "objective", "method", "method", "other", "method", "other", "other", "method", "objective", "other", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "result", "abstain", "abstain", "result", "result", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "method", "abstain", "result", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "objective", "method", "other", "other", "other" ]
[ "This paper seeks to model human language by the mathematical framework of quantum physics.", "With the well-designed mathematical formulations in quantum physics, this framework unifies different linguistic units in a single complex-valued vector space, e.g. words as particles in quantum states and sentences as mixed systems.", "A complex-valued network is built to implement this framework for semantic matching.", "With well-constrained complex-valued components, the network admits interpretations to explicit physical meanings.", "The proposed complex-valued network for matching (CNM) 1 achieves comparable performances to strong CNN and RNN baselines on two benchmarking question answering (QA) datasets.", "There is a growing concern on the interpretability of neural networks.", "Along with the increasing power of neural networks comes the challenge of interpreting the numerical representation of network components into human-understandable language.", "Lipton (2018) points out two important factors for a model to be interpretable, namely post-hoc interpretability and transparency .", "The former refers to explanations of why a model works after it is executed, while the latter concerns self-explainability of components through some mechanisms in the designing phase of the model.", "We seek inspirations from quantum physics to build transparent and post-hoc interpretable networks for modeling human language.", "The emerging research field of cognition suggests that there exist quantum-like phenomena in human cognition (Aerts and Sozzo, 2014), especially language understanding (Bruza et al., 2008).", "Intuitively, a Equal Contribution Corresponding Author 1 https://github.com/wabyking/qnn.git sentence can be treated as a physical system with multiple words (like particles), and these words are usually polysemous (superposed) and correlated (entangled) with each other.", "Motivated by these existing works, we aim to investigate the following Research Question (RQ).", "Towards this question, we build a novel quantum-theoretic framework for modeling language, in an attempt to capture the quantum-ness in the cognitive aspect of human language.", "The framework models different linguistic units as quantum states with the adoption of quantum probability (QP), which is the mathematical framework of quantum physics that models uncertainly on a uniform Semantic Hilbert Space (SHS).", "Complex values are crucial in the mathematical framework of characterizing quantum physics.", "In order to preserve physical properties, the linguistic units have to be represented as complex vectors or matrices.", "This naturally gives rise to another research question: RQ2 : Can we benefit from the complex-valued representation of human language in a real natural language processing (NLP) scenario?", "To this end, we formulate a linguistic unit as a complex-valued vector, and link its length and direction to different physical meanings: the length represents the relative weight of the word while the direction is viewed as a superposition state.", "The superposition state is further represented in an amplitude-phase manner, with amplitudes corresponding to the lexical meaning and phases implicitly reflecting the higher-level semantic aspects such as polarity, ambiguity or emotion.", "In order to evaluate the above framework, we implement it as a complex-valued network (CNM) for semantic matching.", "The network is applied to the question answering task, which is the most typical matching task that aims at selecting 
"In order to facilitate local matching with n-grams of a sentence pair, we design a local matching scheme in CNM.", "Most state-of-the-art QA models are mainly based on Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) and many variants thereof (Wang and Nyberg, 2015; Yang et al., 2016; Hu et al., 2014; Tan et al., 2015).", "However, with the opaque structures of convolutional kernels and recurrent cells, these models are hard for humans to understand.", "We argue that our model is advantageous in terms of interpretability.", "Our proposed CNM is transparent in that it is designed in alignment with quantum physics.", "Experiments on benchmark QA datasets show that CNM has comparable performance to strong CNN and RNN baselines, whilst admitting post-hoc interpretations in human-understandable language.", "We therefore answer RQ1 by claiming that it is possible to model human language with the proposed quantum-theoretical framework in this paper.", "Furthermore, an ablation study shows that the complex-valued word embedding performs better than its real counterpart, which allows us to answer RQ2 by claiming that we benefit from the complex-valued representation of natural language on the QA task.", "Quantum probability provides a sound explanation for the phenomena and concepts of quantum mechanics, by formulating events as subspaces in a vector space with projective geometry.", "Quantum superposition is one of the fundamental concepts in quantum physics, which describes the uncertainty of a single particle.", "In the micro world, a particle like a photon can be in multiple mutually exclusive basis states simultaneously, with a probability distribution.", "In a two-dimensional example, the two basis vectors are denoted as $|0\rangle$ and $|1\rangle$.", "2 We here adopt the widely used Dirac notation of quantum probability, in which a unit vector $\vec{u}$ and its transpose $\vec{u}^{T}$ are denoted as a ket $|u\rangle$ and a bra $\langle u|$, respectively.", "Superposition is used to model a general state, which is a linear combination of basis vectors with complex-valued weights, such that $|\psi\rangle = \alpha_0 |0\rangle + \alpha_1 |1\rangle$ (1), where $\alpha_0$ and $\alpha_1$ are complex scalars satisfying $0 \le |\alpha_0|^2 \le 1$, $0 \le |\alpha_1|^2 \le 1$ and $|\alpha_0|^2 + |\alpha_1|^2 = 1$.", "It follows that $|\psi\rangle$ is defined over the complex field.", "When $\alpha_0$ and $\alpha_1$ are non-zero values, the state $|\psi\rangle$ is said to be a superposition of the states $|0\rangle$ and $|1\rangle$, and the scalars $\alpha_0$ and $\alpha_1$ denote the probability amplitudes of the superposition.", "The uncertainty of an ensemble system with multiple particles is encapsulated as a mixed state, represented by a positive semi-definite matrix with unit trace called a density matrix: $\rho = \sum_{i}^{m} p_i |\psi_i\rangle\langle\psi_i|$, where $\{|\psi_i\rangle\}_{i=0}^{m}$ are pure states as in Eq. 1.",
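A minimal numpy sketch of this machinery (the dimension, seed, and helper names are our own): it builds a mixed state from random pure states and checks the unit-trace, Hermitian, and non-negative-measurement properties discussed here:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pure_state(n):
    """A random complex unit vector |psi> in an n-dimensional space."""
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

n, m = 4, 3
p = rng.dirichlet(np.ones(m))                       # mixture weights, sum to 1
states = [random_pure_state(n) for _ in range(m)]
rho = sum(w * np.outer(s, s.conj()) for w, s in zip(p, states))

assert np.isclose(np.trace(rho).real, 1.0)          # unit trace
assert np.allclose(rho, rho.conj().T)               # Hermitian (rho = rho^dagger)

x = random_pure_state(n)                            # a measurement direction
p_x = (x.conj() @ rho @ x).real                     # <x| rho |x>, Gleason-style
assert p_x >= 0
print(round(p_x, 4))
```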
"In order to infer the probabilistic properties of $\rho$ in the state space, Gleason's theorem (Gleason, 1957; Hughes, 1992) is used to calculate the probability of observing $x$ through the projection measurement $|x\rangle\langle x|$, a rank-one projector given by the outer product of $|x\rangle$ with itself.", "The measured probability $p_x(\rho)$ is a non-negative real-valued scalar, since both $\rho$ and $|x\rangle\langle x|$ are Hermitian.", "The unit trace property guarantees $\sum_{x \in X} p_x(\rho) = 1$ for $X$ being a set of orthogonal basis states.", "Based on the density matrix representation for documents in information retrieval (Van Rijsbergen, 2004; Sordoni et al., 2013), Zhang et al. (2018a) built a neural network with density matrices for question answering.", "This Neural Network based Quantum Language Model (NNQLM) embeds a word as a unit vector and a sentence as a real-valued density matrix.", "The distance between a pair of density matrices is obtained by extracting features of their matrix multiplication in two ways: NNQLM-I directly takes the trace of the resulting matrix, while NNQLM-II applies convolutional structures on top of the matrix to determine whether the pair of sentences match or not.", "NNQLM is limited in that it does not make proper use of the full potential of the probabilistic properties of density matrices.", "By treating density matrices as ordinary real vectors (NNQLM-I) or matrices (NNQLM-II), the full potential of complex-valued formulations is largely ignored.", "Meanwhile, adding convolutional layers on top of a density matrix is more of an empirical workaround than an implementation of a theoretical framework.", "In contrast, our complex-valued matching network is built on top of a quantum-theoretical framework for natural language.", "In particular, we introduce an indirect way to measure the distance between two density matrices through trainable measurement operations, which takes advantage of the probabilistic properties of density matrices and also provides a flexible matching score driven by training data.", "Here we introduce the Semantic Hilbert Space $\mathcal{H}$ defined on a complex vector space $\mathbb{C}^n$, and three different linguistic units on the space, namely sememes, words and word combinations.", "The concept of semantic measurement is introduced last.", "Sememes.", "We assume $\mathcal{H}$ is spanned by a set of orthogonal basis states $\{|e_j\rangle\}_{j=1}^{n}$ for sememes, which are the minimum semantic units of word meanings in language universals (Goddard and Wierzbicka, 1994).", "The unit state $|e_j\rangle$ can be seen as a one-hot vector, i.e., the $j$-th element in $|e_j\rangle$ is one while the other elements are zero, in order to obtain a set of orthogonal unit states.", "Semantic units with larger granularities are built on the set of sememe basis states.", "Words.", "Words are composed of sememes in superposition.", "Each word $w$ is a superposition over all sememes $\{|e_j\rangle\}_{j=1}^{n}$, or equivalently a unit-length vector on $\mathcal{H}$: $|w\rangle = \sum_{j=1}^{n} r_j e^{i\phi_j} |e_j\rangle$ (3), where $i$ is the imaginary unit with $i^2 = -1$.", "In the above expression, $\{r_j\}_{j=1}^{n}$ are non-negative real-valued amplitudes satisfying $\sum_{j=1}^{n} r_j^2 = 1$, and $\phi_j \in [-\pi, \pi]$ are the corresponding complex phases.", "In comparison to Eq. 1, $\{r_j e^{i\phi_j}\}$ are the polar-form representation of the complex-valued scalars $\{\alpha_j\}$.",
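A small sketch of Eq. 3's polar-form construction (the helper name and the toy amplitudes and phases are our assumptions; with one-hot sememe basis vectors, the state is just its coefficient vector):

```python
import numpy as np

def word_state(r, phi):
    """|w> = sum_j r_j * exp(i * phi_j) |e_j|>  (Eq. 3), with the sememe
    basis taken as one-hot vectors."""
    r, phi = np.asarray(r, dtype=float), np.asarray(phi, dtype=float)
    assert np.isclose((r ** 2).sum(), 1.0)   # amplitudes: squares sum to 1
    return r * np.exp(1j * phi)

r = np.sqrt([0.5, 0.3, 0.2])                 # toy 3-sememe amplitudes
phi = [0.0, np.pi / 4, -np.pi / 2]           # toy phases in [-pi, pi]
w = word_state(r, phi)
print(np.linalg.norm(w))                     # 1.0: a unit vector on the SHS
```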
"Word Combinations.", "We view a combination of words (e.g. a phrase, an n-gram, a sentence or a document) as a mixed system composed of the individual words, and its representation is computed as follows: $\rho = \sum_{j}^{m} \frac{1}{m} |w_j\rangle\langle w_j|$ (4), where $m$ is the number of words and $|w_j\rangle$ is the word superposition state in Eq. 3, allowing multiple occurrences.", "Eq. 4 produces a density matrix for the semantic composition of words.", "It also describes a non-classical distribution over the set of sememes: the complex-valued off-diagonal elements describe the correlations between sememes, while the diagonal entries (guaranteed to be real) correspond to a standard probability distribution.", "The off-diagonal elements give our framework the potential to model interactions between the sememe basis states, which are usually considered mutually independent.", "Semantic Measurements.", "The high-level features of a sequence of words are extracted through measurements on its mixed state.", "Given a density matrix $\rho$ of a mixed state, a rank-one projector $P$, which is the outer product of a unit complex vector, i.e. $P = |x\rangle\langle x|$, is applied as a measurement projector.", "It is worth mentioning that $|x\rangle$ could be any pure state in this Hilbert space (not only limited to a specific word $w$).", "The measured probability is computed by Gleason's theorem as in Eq. 2.", "We implement an end-to-end Complex-valued Network for Matching (CNM) on the Semantic Hilbert Space.", "Fig. 1 shows the overall structure of the proposed CNM (Figure 1: Architecture of the Complex-valued Network for Matching).", "Each component of the network is further discussed in this section.", "On the Semantic Hilbert Space, each word $w$ is embedded as a complex-valued vector $\vec{w}$.", "Here we link its length and direction to different physical meanings: the length of a vector represents the relative weight of the word, while the vector direction is viewed as a superposition state.", "Each word $w$ is normalized into a superposition state $|w\rangle$ with a word-dependent weight $\pi(w)$: $|w\rangle = \frac{\vec{w}}{\|\vec{w}\|}$, $\pi(w) = \|\vec{w}\|$ (5), where $\|\vec{w}\|$ denotes the L2-norm length of $\vec{w}$.", "$\pi(w)$ is used to compute the relative weight of a word in a local context window, which we will elaborate on in Section 4.2.", "A sentence is modeled as a combination of the individual words in it.", "NNQLM (Zhang et al., 2018a) models a sentence as a global mixture of all words, which implicitly assumes a global interaction among all sentence words.", "This seems to be unreasonable in practice, especially for a long text segment such as a paragraph or a document, where the interaction between the first word and the last word is often negligible.", "Therefore, we address this limitation by proposing a local mixture of words, which tends to capture the semantic relations between neighboring words and de-emphasize long-range word dependencies.", "As shown in Fig. 2 (Figure 2: Architecture of the local mixture component), a sliding window is applied and a density matrix is constructed for a local window of length $l$ (e.g. 3).",
3).", "Therefore, a sentence is composed of a sequence of density matrices for l -grams.", "The representation of a local l -gram window is obtained by an improved approach over Eq.", "4.", "In Eq.", "4, each word is assigned with the same weight, which does not hold from an empirical point of view.", "In this study, we take the L 2 -norm of the word vector as the relative weight in a local context window for a specific word, which could be updated during training.", "To some extent, L 2 -norm is a measure of semantic richness of a word, i.e. the longer the vector the richer the meaning.", "The Figure 2: Architecture of local mixture component.", "density matrix of an l -gram is computed as follows: = l (cid:88) i p ( w i ) | w i (cid:105) (cid:104) w i | , (6) where the relative importance of each word p ( w i ) in an l -gram is the soft-max normalized word-dependent weight: p ( w i ) = e ( wi ) (cid:80) lj e ( wj ) , where ( w i ) is the word-dependent weight.", "By converting word-dependent weights to a probability distribution, a legal density matrix is produced, because (cid:80) li p ( w i ) = 1 gives tr ( ) = 1 .", "Moreover, the weight of a word also depends on its neighboring words in a local context.", "In quantum information, there have been works trying to estimate a quantum state from the results of a series of measurements ( Rehacek et al., 2001; Lvovsky, 2004).", "Inspired by these works, we introduce trainable measurements to extract density matrix features and match a pair of sentences.", "Suppose a pair of sentences with length L are represented as two sets of density matrices { 1 j } Lj =1 and { 2 j } Lj =1 respectively.", "The same set of K semantic measurement operators {| v k (cid:105)} Kk =1 are applied to both sets, producing a pair of k byL probability matrix p 1 and p 2 , where p 1 jk = (cid:104) v k | 1 j | v k (cid:105) and p 2 jk = (cid:104) v k | 2 j | v k (cid:105) for k { 1 , ..., K } and j { 1 , ..., L } .", "A classical vector-based distances between p 1 and p 2 can be computed as the matching score of the sentence pair.", "By involving a set of semantic measurements, the properties of density matrix are taken into consideration in computing the density matrix distance.", "We believe that this way of computing density matrix distance is both theoretically sound and applicable in practice.", "The trace inner product of density matrices (Zhang et al., 2018a) breaks the basic axioms of metric, namely the non-negativity, identity of indiscernibles and triangle inequality.", "The CNN-based feature extraction (Zhang et al., 2018a) for density matrix multiplication loses the property of density matrix as a probability distribution.", "Nielsen and Chuang (2010) introduced three measures namely trace distance, fidelity, and VN-divergence.", "However, it is computationally costly to compute these metrics and propagate the loss in an end-to-end training framework.", "We set the measurements to be trainable so that the matching of question and answering can be integrated into the whole neural network, and identify the discriminative semantic measurements in a data-driven manner.", "From the perspective of linear discriminant analysis (LDA) (Fisher, 1936), this approach is intended to find a group of finite discriminative projection directions for a better di-vision of different classes, but in a more sound framework inspired by quantum probability with complex-valued values.", "From an empirical point of view, the data-driven measurements make it flexible to match two 
sentences.", "The experiments were conducted on two benchmarking question answering datasets for question answering (QA), namely TREC QA (Voorhees and Tice, 2000) and WikiQA (Yang et al., 2015).", "TREC QA is a standard QA dataset in the Text REtrieval Conference (TREC).", "WikiQA is released by Microsoft Research on open domain question answering.", "On both datasets, the task is to select the most appropriate answer from the candidate answers for a question, which requires a ranking of candidate answers.", "After removing the questions with no correct answers, the statistics of the cleaned datasets are given in the Tab.", "1. Two common rank-based metrics, namely mean average precision (MAP) and mean reciprocal rank (MRR), are used to measure the performance of models.", "We conduct a comprehensive comparison across a wide range of models.", "On TREC QA the experimented models include Bigram-CNN (Yu et al., 2014), three-layered Long Short-term Memory (LSTM) in combination with BM25 (LSTM-3L-BM25) (Wang and Nyberg, 2015), attention-based neural matching model (aNMM) (Yang et al., 2016), Multi-perspective CNN (MP-CNN) (He et al., 2015), CNTN (Qiu and Huang, 2015), attention-based LSTM+CNN model (LSTM-CNN-attn) (Tang et al., 2015) and pairwise word interaction modeling (PWIM) (He and Lin, 2016).", "On WikiQA dataset, we involve the following models into comparison: Bigram-CNN (Yu et al., 2014), CNN with word count information (CNN-Cnt) (Yang et al., 2015), QA-BILSTM (Santos et al., 2016), BILSTM with attentive pooling (AP-BILSTM) (Santos et al., 2016), and LSTM with attention (LSTM-attn) (Miao et al., 2015).", "On both datasets, we report the results of quantum language model (Sordoni et al., 2013) and two models NNQLM-I, NNQLM-II by (Zhang et al., 2018a) for comparison.", "The parameters in the network are = { R, , {| v i (cid:105)} ki =1 } , in which R and denote the lookup tables for amplitudes and complex phases of each word, and {| v i (cid:105)} ki =1 denotes the set of semantic measurements.", "We use 50-dimension complex word embedding.", "The amplitudes are initialized with 50-dimension Glove vectors (Penning-ton et al., 2014) and L2-norm regularized during training.", "The phases are randomly initialized under a normal distribution of [ , ] .", "The semantic measurements {| v i (cid:105)} ki =1 } are initialized with orthogonal real-valued one-hot vectors, and each measurement is constrained to be of unit length during training.", "We perform max pooling over the sentence dimension on the measurement probability matrices, resulting in a k -dim vector for both a question and an answer.", "We concatenate the vectors for l = 1 , 2 , 3 , 4 for questions and answers, and the larger size of windows are also tried.", "We will use a longer sliding window in datasets with longer sentences.", "The cosine similarity is used as the distance metric of measured probabilities.", "We use triplet hinge loss and set the margin = 0 .", "1 .", "A dropout layer is built over the embedding layer and measurement probabilities with a dropout rate of 0.9.", "A grid search is conducted over the parameter pools to explore the best parameters.", "The parameters under exploration include { 0 .", "01 , 0 .", "05 , 0 .", "1 } for the learning rate, { 1 e 5 , 1 e 6 , 1 e 7 , 1 e 8 } for the L2-normalization of complex word embeddings, { 8 , 16 , 32 } for batch size, and { 50 , 100 , 300 , 500 } for the number of semantic measurements.", "The proposed CNM has a limited scale of parameters.", "Apart from the 
"Although we use both an amplitude part and a phase part for the word embeddings, a lower embedding dimension is adopted, namely 50.",
"Table 2: Experiment results on the TREC QA dataset (MAP / MRR): Bigram-CNN 0.5476 / 0.6437; LSTM-3L-BM25 0.7134 / 0.7913; LSTM-CNN-attn 0.7279 / 0.8322; aNMM 0.7495 / 0.8109; MP-CNN 0.7770 / 0.8360; CNTN 0.7278 / 0.7831; PWIM 0.7588 / 0.8219; QLM 0.6780 / 0.7260; NNQLM-I 0.6791 / 0.7529; NNQLM-II 0.7589 / 0.8254; CNM 0.7701 / 0.8591; improvement over NNQLM-II: 1.48% / 4.08%.",
"Therefore, our network scales better than the advanced models on the CNN or LSTM basis.", "Tab. 2 and 3 show the experiment results on TREC QA and WikiQA respectively, where bold values are the best performances out of all models.", "Our model achieves 3 best performances out of the 4 metrics on TREC QA and WikiQA, and performs slightly worse than the best-performing models on the remaining metric.", "This illustrates the effectiveness of our proposed model from a general perspective.", "Specifically, CNM outperforms most CNN- and LSTM-based models, which have more complicated structures and relatively larger parameter scales.", "Also, CNM performs better than the existing quantum-inspired QA models, QLM and NNQLM, on both datasets, which means that the quantum-theoretical framework gives rise to better-performing models.", "Moreover, a significant improvement over NNQLM-I is observed on these two datasets, supporting our claim that the trace inner product is not an effective distance metric for two density matrices.", "An ablation test is conducted to examine the influence of each component of our proposed CNM.", "The following models are implemented in the ablation test.", "FastText-MaxPool adopts max pooling over the word embeddings, just like FastText (Joulin et al., 2016).", "CNM-Real replaces the word embeddings and measurements with their real counterparts.", "CNM-Global-Mixture adopts a global mixture of the whole sentence, in which a sentence is represented as a single density matrix, leading to a probability vector as the measurement result.", "CNM-trace-inner-product replaces the trainable measurements with the trace inner product, like NNQLM.", "For the real-valued models, we double the embedding dimension, in order to eliminate the impact of the parameter scale on the performance.", "Due to limited space, we only report the ablation test results on TREC QA; WikiQA shows similar trends.", "The test results in Tab. 4 demonstrate that each component plays a crucial role in the CNM model.", "In particular, the comparison with CNM-Real and FastText-MaxPool shows the effectiveness of introducing complex-valued components, the increase in performance over CNM-Global-Mixture reveals the superiority of the local mixture, and the comparison with CNM-trace-inner-product confirms the usefulness of trainable measurements.", "This section investigates the research questions proposed in Sec. 1.", "For RQ1, we explain the physical meaning of each component in terms of transparency (Sec. 6.1), and design some case studies for post-hoc interpretability (Sec. 6.2).",
6.2).", "For RQ2, we argue that the complex-valued representation can model different aspects of semantics and naturally address the non-linear semantic compositionality, as discussed in Sec. 6.3.", "CNM aims to unify many semantic units with different granularity e.g. sememes, words, phrases (or N-gram) and document in a single complex-valued vector space, as shown in Tab.", "5.", "In particular, we formulate atomic sememes as a group of complete orthogonal basis states and words as superposition states over them.", "A linguistic unit with larger-granularity e.g. a word phrase or a sentence is represented as a mixed system over the words (with a density matrix, i.e. a positive semi-definite matrix with unit trace).", "More importantly, trainable projection measurements are used to extract high-level representation for a word phrase or a sentence.", "Each measurement is also directly embedded in this unified Hilbert space, as a specific unit state (like words), thus making it easily understood by the neighbor words near this specific state.", "The corresponding trainable components in state-of-art neural network architectures, namely, kernels in CNN and cells in RNN, are represented as arbitrary real-valued without any constraints, lead to difficulty to be understood.", "The post-hoc Interpretability is shown in three groups of case studies, namely word weight scheme, matching pattern, and discriminative semantic measurements.", "tant ones.", "The importance of words is based on the L2-norm of its learned amplitude embedding according to Eq.", "5.", "It is consistent with the intuition that, the important words are more about specific topics or discriminative nouns, while the unimportant words include meaningless numbers or super-high frequency words.", "Note that some special form (e.g. plural form in the last row ) of words are also identified as unimportant words, since we commonly did not stem the words.", "Tab.", "7 shows the match schema with local sliding windows.", "In a local context window, we visualize the relative weights (i.e. the weights after normalized by softmax ) for each word with darkness degrees.", "The table illustrates that our model is capable of identifying true matched local windows of a sentence pair.", "Even some words are replaced with similar forms (e.g. commit and committing in the last case) or meanings (e.g. 
"From an empirical point of view, our model outperforms other models in situations where specific matching patterns are crucial to the sentence meaning, such as when two sentences share some unordered bag-of-words combinations.", "To some extent, it is robust up to the replacement of words with similar ones in the Semantic Hilbert Space.", "The semantic measurements are performed through rank-one projectors $\{|x\rangle\langle x|\}$.", "From a classical point of view, each projector is associated with a superposition of fundamental sememes, which is not necessarily linked to a particular word.", "Since the similarity metric in the Semantic Hilbert Space can be used to indicate semantic relatedness, we rely on the nearby words of the learned measurement projectors to understand what they may refer to.", "Essentially, we identified the 10 most similar words to each measurement based on the cosine similarity metric.",
"Table 8: Selected learned measurements for TREC QA (selected neighborhood words per measurement vector): 1: andes, nagoya, inter-american, low-caste; 2: cools, injection, boiling, adrift; 3: andrews, paul, manson, bair; 4: historically, 19th-century, genetic, hatchback; 5: missile, exile, rebellion, darkness.",
"Tab. 8 shows some of the most similar words for 5 measurements, randomly chosen from the k = 10 trainable measurements for the TREC QA dataset.", "It can be seen that the first three selected measurements were about positions, movement verbs, and people's names, while the rest were about the topics of history and rebellion, respectively.", "Even though a clear explanation of the measurements is not available, we are still able to roughly understand their meaning in the proposed data-driven approach.", "In CNM, each word is naturally embedded as a complex vector, composed of a complex phase part, a unit amplitude part, and a scalar-valued length.", "We argue that the amplitude part (i.e. the square root of a probabilistic weight) corresponds to the classical word embedding with the lexical meaning, while the phase part implicitly reflects higher-level semantic aspects such as polarity, ambiguity or emotion.", "The scalar-valued length is considered the relative weight in a mixed system.", "The ablation study in Sec. 5.4 confirms that the complex-valued word embedding performs better than the real word embedding, which indicates that we benefit from the complex-valued embedding on the QA task.", "From a mathematical point of view, the complex-valued word embedding and the other complex-valued components form a new Hilbert vector space for modelling language, with new definitions of addition and multiplication, as well as a new inner product operation.", "For instance, addition in the word meaning combination is defined as $z = z_1 + z_2 = r_1 e^{i\phi_1} + r_2 e^{i\phi_2} = \sqrt{r_1^2 + r_2^2 + 2 r_1 r_2 \cos(\phi_2 - \phi_1)}\; e^{i \arctan\left(\frac{r_1 \sin\phi_1 + r_2 \sin\phi_2}{r_1 \cos\phi_1 + r_2 \cos\phi_2}\right)}$ (7), where $z_1$ and $z_2$ are the values of the corresponding element of two different word vectors $|w_1\rangle$ and $|w_2\rangle$, respectively.", "Both the amplitude and complex phase of $z$ are obtained through a nonlinear combination of the phases and amplitudes of $z_1$ and $z_2$.", "A classical linear addition gives $z = r_1 + r_2$, which can be viewed as a degenerate case of the complex-valued addition with the phase information removed ($\phi_1 = \phi_2 = 0$ in the example).",
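A quick numerical check of Eq. 7 (we use arctan2 rather than a bare arctan for quadrant correctness, and the sample values are arbitrary):

```python
import numpy as np

def polar_add(r1, phi1, r2, phi2):
    """Closed-form polar addition from Eq. 7; arctan2 replaces the bare
    arctan so the resulting phase lands in the correct quadrant."""
    r = np.sqrt(r1**2 + r2**2 + 2 * r1 * r2 * np.cos(phi2 - phi1))
    phi = np.arctan2(r1 * np.sin(phi1) + r2 * np.sin(phi2),
                     r1 * np.cos(phi1) + r2 * np.cos(phi2))
    return r, phi

r, phi = polar_add(0.8, 0.3, 0.5, -1.2)
direct = 0.8 * np.exp(1j * 0.3) + 0.5 * np.exp(1j * -1.2)
assert np.isclose(r * np.exp(1j * phi), direct)   # matches direct addition
print(r, phi)
```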
"Towards the interpretable matching issue, we proposed two research questions to investigate the possibility of language modelling within the mathematical framework of quantum physics.", "To this end, we designed a new framework to model all linguistic units in a unified Hilbert space with well-defined mathematical constraints and explicit physical meanings.", "We implemented the above framework as a neural network and demonstrated its effectiveness on the question answering (QA) task.", "Due to its well-designed components, our model is advantageous in terms of interpretability, both transparency and post-hoc interpretability, and also shows the potential of using complex-valued components in NLP.", "Despite the effectiveness of the current network, we would like to further explore the phase part of the complex-valued word embeddings to directly link it to concrete semantics such as word sentiment or word position.", "Another possible direction is to borrow other quantum concepts to capture the interaction and non-interaction between word semantics, such as the Fock Space (Sozzo, 2014), which considers both interacting and non-interacting entities in different Hilbert spaces.", "Furthermore, a deeper and more robust quantum-inspired neural architecture in a higher-dimensional Hilbert space, like that of Zhang et al. (2018b), is also worth investigating to achieve stronger performance with better explanatory power.", "We thank Sagar Uprety, Dawei Song, and Prayag Tiwari for helpful discussions.", "Peng Zhang and Peter Bruza gave us constructive comments to improve the paper.", "The GPU computing resources are partly supported by Beijing Ultrapower Software Co., Ltd and Jianquan Li.", "Anne Elize R. V. Lima helped us redraw the figures using her talent.", "The three authors are supported by the Quantum Access and Retrieval Theory (QUARTZ) project, which has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 721321." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "objective", "objective", "objective", "result", "objective", "abstain", "abstain", "other", "other", "other", "other", "other" ]
[ "Natural Language Inference is a challenging task that has received substantial attention, and state-of-the-art models now achieve impressive test set performance in the form of accuracy scores.", "Here, we go beyond this single evaluation metric to examine robustness to semantically-valid alterations to the input data.", "We identify three factors insensitivity , polarity and unseen pairs and compare their impact on three SNLI models under a variety of conditions.", "Our results demonstrate a number of strengths and weaknesses in the models' ability to generalise to new in-domain instances.", "In particular, while strong performance is possible on unseen hypernyms, unseen antonyms are more challenging for all the models.", "More generally, the models suffer from an insensitivity to certain small but semantically significant alterations, and are also often influenced by simple statistical correlations between words and training labels.", "Overall, we show that evaluations of NLI models can benefit from studying the influence of factors intrinsic to the models or found in the dataset used.", "The task of Natural Language Inference (NLI) 1 has received a lot of attention and has elicited models which have achieved impressive results on the Stanford NLI (SNLI) dataset (Bowman et al., 2015).", "Such results are impressive due to the linguistic knowledge required to solve the task (LoBue and Yates, 2011; Maccartney, 2009).", "However, the ever-growing complexity of these models inhibits a full understanding of the phenomena that they capture.", "1 Also known as Recognizing Textual Entailment.", "As a consequence, evaluating these models purely on test set performance may not yield enough insight into the complete repertoire of abilities learned and any possible abnormal behaviors (Kummerfeld et al., 2012; Sammons et al., 2010).", "A similar case can be observed in models from other domains; take as an example an image classifier that predicts based on the image's background rather than on the target object (Zhao et al., 2017; Ribeiro et al., 2016), or a classifier used in social contexts that predicts a label based on racial attributes (Crawford and Calo, 2016).", "In both examples, the models exploit a bias (an undesired pattern hidden in the dataset) to enhance accuracy.", "In such cases, the models may appear to be robust to new and even challenging test instances; however, this behavior may be due to spurious factors, such as biases.", "Assessing to what extent the models are robust to these contingencies just by looking at test accuracy is, therefore, difficult.", "In this work we aim to study how certain factors affect the robustness of three pre-trained NLI models (a conditional encoder, the DAM model (Parikh et al., 2016), and the ESIM model (Chen et al., 2017)).", "We call these target factors insensitivity (not recognizing a new instance), polarity (a word-pair bias), and unseen pairs (recognizing the semantic relation of new word pairs).", "We became aware of these factors based on an exploration of the models' behavior, and we hypothesize that these factors systematically influence the behavior of the models.", "In order to systematically test if the above factors affect robustness, we propose a set of challenging instances for the models: We sample a set of instances from SNLI data, we apply a transformation on this set that yields a new set of instances, and we test both how well the models 1975 classify these new instances and whether the target factors influence the models' 
behavior.", "The transformation (swapping a pair of words between premise and hypothesis sentences) is intended to yield both easy and difficult instances to challenge the models, but easy for a human to annotate them.", "We draw motivation to study the robustness of NLI models from previous work on evaluating complex models (Isabelle et al., 2017; White et al., 2017).", "Furthermore, we base our approach on the discipline of behavioral science which provides methodologies for analyzing how certain factors influence the behavior of subjects under study (Epling and Pierce, 1986).", "We aim to answer the research questions: How robust is the predictive behavior of the pre-trained models under our transformation to input data?", "Do the target factors (insensitivity, polarity, and unseen pairs) influence the prediction of the models?", "Are these factors common across models?", "Our results show that the models are robust mainly where the semantics of the new instances do not change significantly with respect to the sampled instances and thus the class labels remain unaltered;", "i.e., the models are insensitive to our transformation to input data.", "However, when the class labels change, the models significantly drop accuracy.", "In addition, the models exploit a bias, polarity, to stay robust when facing new instances.", "We also find that the models are able to cope with unseen word pairs under a hypernym relation, but not with those under an antonym relation, suggesting their inability to learn a symmetric relation.", "Previous works in ML and NLP have analyzed different aspects of complex models using a variety of approaches; for example, understanding input-output relationships by approximating the local or global behavior of the model using an interpretable model (Ribeiro et al., 2016; Craven and Shavlik, 1996), or analyzing the output of the model under lesions of its internal mechanism (Li et al., 2016).", "Another line of work has analyzed the robustness of NLP models both via controlled experiments to complement the information from the test set accuracy and test abilities of the models (Isabelle et al., 2017; B. 
"In addition, work has been done to uncover and diminish gender biases in datasets as captured by structured prediction models (Zhao et al., 2017) and word embeddings (Bolukbasi et al., 2016).", "However, to the best of our knowledge, there is no previous work studying the robustness of NLI models while analyzing the factors affecting their predictions.", "Previous work in behavioral science has focused on understanding how environmental factors influence behaviors in both human (Soman, 2001) and animal (Mench, 1998) subjects, with the objective of predicting behavioral patterns or analyzing environmental conditions.", "This methodology also helps to identify and understand abnormal behavior by collecting behavioral data, without the need to access any internal component of the subject (Birkett and Newton-Fisher, 2011).", "We base our approach on the discipline of behavioral science since some of our research questions and objectives align with those of this discipline; in addition, its methodology for studying how factors affect the subjects' behavior provides statistical guarantees.", "NLI, or RTE, is the task of inferring whether a natural language sentence (hypothesis) is entailed by another natural language sentence (premise) (Maccartney, 2009; Dagan et al., 2009; Dagan and Glickman, 2004).", "More formally, given a pair of natural language sentences $i = (premise, hypothesis)$, a model classifies which type of relation the sentences fall into from three possible classes: entailment, where the hypothesis is necessarily true given the premise; neutral, where the hypothesis may be true given the premise; and contradiction, where the hypothesis is necessarily false given the premise.", "Solving this task is challenging since it requires linguistic and semantic knowledge, such as co-reference, hypernymy, and antonymy (LoBue and Yates, 2011), as well as pragmatic knowledge and informal reasoning (Maccartney, 2009).", "Testing the influence of a factor on the subject's behavior can be done via statistical tests: a null hypothesis states no association between a target factor and the behavior, whereas the alternative hypothesis states an association (McDonald, 2014).", "The Stanford NLI dataset (Bowman et al., 2015) was created with the purpose of training deep neural models while providing human-annotated data.", "Each instance was created by providing a premise sentence, harvested from a pre-existing dataset, to a crowdsource worker who was instructed to produce three hypothesis sentences, one for each NLI class (entailment, neutral, contradiction).", "This process yielded a balanced dataset containing around 570K instances.", "Conditional Encoder: We use two bidirectional LSTMs; the first LSTM encodes the premise sentence into a fixed-size vector embedding by reading it sequentially word by word, while the second LSTM encodes the hypothesis sentence conditioned on the representation of the premise sentence.", "At the final layer we use a softmax over the class labels on top of a 3-layer MLP.", "All embeddings, of dimensionality $d = 100$, were randomly initialized and learned during training.", "Accuracy on SNLI's dev set is 0.782.", "Decomposable Attention Model: DAM (Parikh et al., 2016) consists of 2-layer multilayer perceptrons (MLPs) factorized into a 3-step process.", "First, a soft-alignment matrix is created for all the words in both the premise and hypothesis.", "Then, each word of the premise is paired with the soft-alignment representation of the hypothesis sentence and fed into an MLP, and similarly for each word in the hypothesis with the soft-alignment of the premise.", "The resulting representations are then aggregated: the vector representations of the premise are summed up, and the same is done for those of the hypothesis; the new representations are then fed to an MLP, followed by a linear layer and a softmax whose output is a class label.", "We use $d = 300$ dimensional GloVe embeddings (not updated at training time).", "All layers use the ReLU function.", "Accuracy on SNLI's dev set is 0.854.",
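Here is a hedged PyTorch sketch of the soft-alignment ("attend") step described above; the MLP shape and toy dimensions are our assumptions, not the released DAM code:

```python
import torch
import torch.nn.functional as F

def soft_align(a, b, mlp):
    """Decomposable-attention 'attend' step (Parikh et al., 2016):
    e_ij = F(a_i) . F(b_j); beta is the attention-weighted summary of b
    for each premise word, alpha the summary of a for each hypothesis word."""
    fa, fb = mlp(a), mlp(b)                 # (La, d), (Lb, d)
    e = fa @ fb.T                           # (La, Lb) alignment scores
    beta = F.softmax(e, dim=1) @ b          # (La, d)
    alpha = F.softmax(e, dim=0).T @ a       # (Lb, d)
    return beta, alpha

d = 300
mlp = torch.nn.Sequential(torch.nn.Linear(d, d), torch.nn.ReLU(),
                          torch.nn.Linear(d, d), torch.nn.ReLU())
a, b = torch.randn(7, d), torch.randn(5, d)   # premise / hypothesis embeddings
beta, alpha = soft_align(a, b, mlp)
print(beta.shape, alpha.shape)                # (7, 300), (5, 300)
```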
"Enhanced Sequential Inference Model: ESIM (Chen et al., 2017) performs inference in three stages.", "First, Input Encoding uses BiLSTMs to produce representations of each word in its context within the premise or hypothesis.", "Then, Local Inference Modelling constructs new word representations for each hypothesis (premise) word by summing over the BiLSTM hidden states of the premise (hypothesis) words, using weights from a soft attention matrix.", "Additionally, these representations are enhanced with element-wise products and differences of the original hidden state vectors and the new attention-based vectors.", "Finally, Inference Composition uses a BiLSTM, average and max pooling and an MLP output layer to produce the predicted labels.", "Accuracy on SNLI's dev set is 0.882.", "We test our main hypothesis (Section 1) by perturbing instances in a controlled, simple, and meaningful way.", "This alteration, at the instance level, yields new sets of instances which range from easy (the semantics and the label of the new instance are the same as those of the original instance) to challenging (both the semantics and the label of the new instance change with respect to those of the original instance), but all of them remain easy for a human to annotate.", "To examine how the models generalize from seen instances to transformed instances, we sample our original instances from the SNLI training set; we refer to these as control instances from now on.", "We then produce new instances which differ either minimally from the control instances, by changing only a single word in the premise and hypothesis, or more substantially, by copying the same sentence structure into the premise and hypothesis with a single word changed.", "In this way, we produce instances that contain only words seen at training time, within sentence structures also seen at training time.", "Thus, our evaluation sets are as in-domain as possible, and control for factors associated with novel sentential contexts and vocabulary.", "We first sample an instance from the SNLI dataset according to a given criterion, namely we look for a specific word pair in the instance; then, we apply our transformation to the word pair.", "This procedure generates a new instance.", "After that, the models label the new instance, and we statistically analyze which target factors influenced the models to respond in that way, via chi-square (McNemar's, independence, and homogeneity) tests (McDonald, 2014; Alpaydin, 2010).", "When the sample size is too small, we apply Yates' correction or a Fisher test.", "We use the StatsModels (Seabold and Perktold, 2010) and SciPy (Oliphant, 2007) packages.", "The level of significance is $p < 0.0001$, unless otherwise stated.", "2 We apply a Bonferroni correction.", "This procedure is applied in four experiments, where we study the effect of different word pairs (hypernyms, hyponyms, and antonyms) and the effect of two types of context words surrounding the word pairs, which we refer to as in situ and ex situ (explained in Section 5.3).",
"Given a set of word pairs of the form $W = (w_1, w_2)$, where $w_1$ and $w_2$ hold under a semantic relation $s \in \{$antonymy, hypernymy, hyponymy$\}$, we look through the training set for instances $i_k = (p_k, h_k)$, where $p_k$ and $h_k$ are the premise and hypothesis sentences, respectively, such that $w_1 \in p_k$ and $w_2 \in h_k$.", "For each instance $i_k$ we apply transformation $T$: we swap $w_1$ with $w_2$; this transformation yields an instance $i_m = (p_m, h_m)$ where $w_2 \in p_m$, $w_1 \in h_m$ and $w_1 \notin p_m$, $w_2 \notin h_m$.", "3 If a word $w_1$ or $w_2$ appears more than once, we replace all of its appearances with its corresponding pair, $w_2$ or $w_1$.", "An example of transformation $T$ on a contradiction instance $i_k$ is the following: (1) $p_k$: A soccer game occurring at sunset.", "$h_k$: A basketball game is occurring at sunrise.", "Here the word pair (sunset, sunrise) are antonyms.", "After applying transformation $T$, we obtain the new contradiction instance $i_m$: (2) $p_m$: A soccer game occurring at sunrise.", "$h_m$: A basketball game is occurring at sunset.", "Consider now the following instance $i_l$ (class label entailment): (3) $p_l$: A little girl hugs her brother on a footbridge in a forest.", "$h_l$: A pair of siblings are on a bridge.", "If we now apply transformation $T$ to the hypernym word pair (footbridge, bridge), we derive the new instance $i_n$ (class neutral): (4) $p_n$: A little girl hugs her brother on a bridge in a forest.", "$h_n$: A pair of siblings are on a footbridge.", "Since swapping word pairs under hypernymy or hyponymy relations may yield a different class label for the new instance, we manually annotate all the instances in the new sample, discarding those that are semantically incoherent.", "We consider two types of sentential context for the word pairs, namely in situ and ex situ.", "Examples of instances under the in situ condition are Examples 1, 2, 3, and 4 in Section 5.2.", "The name in situ refers to the fact that we analyze the effect of the transformation $T$ within the original context of the premise and hypothesis sentences.", "This allows us to control for confounding factors, such as sentence length and the order of the context words.", "We also consider an ex situ condition in which we remove the word pair from the original premise and hypothesis and analyze the effect of the transformation $T$ within a simplified sentential context which is the same in the premise and hypothesis.", "Specifically, we randomly select either the premise or the hypothesis context from the original instance and copy it into both positions.", "In this way, we obtain a sentence pair where the only difference between the premise and hypothesis is the word pair, which allows us to isolate the effect of this pair from its interaction with the surrounding context; this condition thus allows us to control for context words.", "This process yields a new set of instances, which we refer to as $E$.", "An example of an ex situ instance can be constructed from Example 1 (Section 5.2).", "If the premise sentence is selected, then after performing the procedure described above, the following sentence pair $e_k$ is generated: (5) $p_k$: A soccer game occurring at sunset.", "$h_k$: A soccer game occurring at sunrise.", "Given a sample $E$, we apply the transformation $T$ in order to generate a transformed sample $E_T$ where the word pairs are swapped, similar to the procedure applied in Section 5.2 to SNLI control instances in order to generate their transformed counterparts.", "In the latter case, we say that, given a sample of control instances $I$, we generate a transformed sample $I_T$.",
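A minimal sketch of transformation T (whitespace tokenization and the helper name are our assumptions); it reproduces Examples 1 and 2:

```python
def transform_T(premise, hypothesis, w1, w2):
    """Transformation T: swap w1 (in the premise) with w2 (in the
    hypothesis); all occurrences are replaced, as in footnote 3."""
    p_m = [w2 if t == w1 else t for t in premise]
    h_m = [w1 if t == w2 else t for t in hypothesis]
    return p_m, h_m

p = "A soccer game occurring at sunset .".split()
h = "A basketball game is occurring at sunrise .".split()
p_m, h_m = transform_T(p, h, "sunset", "sunrise")
print(" ".join(p_m))   # A soccer game occurring at sunrise .
print(" ".join(h_m))   # A basketball game is occurring at sunset .
```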
"Table 1: Accuracy scores of all models; each cell lists ESIM / DAM / CE.", "Exp 1, IA: whole sample 0.970 / 0.946 / 0.820; Subset 3: 0.900 / 0.900 / 0.750.", "Exp 1, I_TA1: whole sample 0.933 / 0.946 / 0.732; Subset 2: 0.600 / 0.500 / 0.400; Subset 3: 0.681 / 0.637 / 0.536.", "Exp 1, I_TA2: whole sample 0.721 / 0.771 / 0.645; Subset 1: 0.554 / 0.653 / 0.476.", "Exp 1, I_TA3: whole sample 0.722 / 0.745 / 0.646; Subset 1: 0.568 / 0.630 / 0.535.", "Exp 2, EA: whole sample 0.953 / 0.958 / 0.508; Subset 3: 0.400 / 0.500 / 0.450.", "Exp 2, ETA: whole sample 0.933 / 0.929 / 0.480; Subset 2: 0.575 / 0.500 / 0.175; Subset 3: 0.565 / 0.492 / 0.260.", "Exp 3, IH: whole sample 0.898 / 0.819 / 0.828; Subset 3: 0.836 / 0.701 / 0.733.", "Exp 3, ITH: whole sample 0.648 / 0.691 / 0.543; Subset 1: 0.315 / 0.509 / 0.271; Subset 2: 0.694 / 0.777 / 0.555; Subset 3: 0.719 / 0.697 / 0.586.", "Exp 4, EH: whole sample 0.771 / 0.849 / 0.742; Subset 3: 0.715 / 0.707 / 0.461.", "Exp 4, ETH: whole sample 0.576 / 0.788 / 0.534; Subset 1: 0.551 / 0.783 / 0.516; Subset 2: 0.527 / 0.666 / 0.472; Subset 3: 0.631 / 0.674 / 0.507.", "Exp: experiment number.", "Whole sample: accuracy scores on the whole sample.", "Subset 1: subset of transformed instances that have a different gold label with respect to the control instances they were generated from (gold label changes).", "Subset 2: subset of transformed instances that contain word pairs unseen at training time (unseen word pairs).", "Subset 3: subset of control or transformed instances containing word pairs whose polarity does not match the instance's gold label (polarity ≠ gold label).", "As an example of obtaining a transformed ex situ instance, we apply T to (sunset, sunrise) in Example 5 to obtain the new instance $e_m$: (6) $p_m$: A soccer game occurring at sunrise. $h_m$: A soccer game occurring at sunset.", "We note that for both conditions, in situ and ex situ, the same word pairs are swapped, so the differences are the surrounding context words and the factors being controlled.", "In each experiment we use two sets of instances in order to measure the robustness of the models and analyze our target factors: 1) the control instances, where the target word pair is in its original position, and 2) the transformed instances, generated after applying transformation T.", "The name of each set corresponds to the experimental setting it is used in.", "Samples used in in situ experiments are named I, and those used in ex situ experiments E.", "Subscripts distinguish both the type of word pairs (A for antonyms and H for hypernyms/hyponyms) and the type of set (control or transformed).", "For example, IA refers to the control in situ set whose instances contain antonym word pairs, whereas ETH refers to the ex situ transformed test set containing hypernym/hyponym swapped word pairs.", "We clarify:", "a) the sets IA and IH are sampled from the SNLI dataset;", "b) transformed test sets are generated from control sets containing control instances;", "c) we refer to the sets EA and EH as control test sets because the target word pairs are in their original position, and we apply T on them in order to obtain the transformed samples ETA and ETH, respectively.", "Details about the sets: in order to build set IA, we sample only contradiction instances (instances in EA are also contradictions).", "We use the antonym word pairs from (Mohammad et al., 2013) to yield the sets I_TA1 and ETA, which also only contain contradictions since the relation of antonymy is symmetric: the word pair (sunset, sunrise) holds in an antonymy relation regardless of the position of the words in premise and hypothesis sentences.", "We build two more sets, I_TA2 and I_TA3 (explained in Section 6.1).", "Sets IH, EH, ITH, and ETH contain instances with any class label.", "In order to generate sets ITH and ETH, we use the hypernym word pairs from (Baroni et al., 2012).", "We manually annotate these transformed sets and discard incoherent instances.", "Insensitivity is the name we give to the tendency of a model to predict the original label on a transformed instance that is similar to a control instance.",
"Thus a model would be insensitive if, for example, it incorrectly predicts the same class label for both the control instance in Example 3 and the transformed instance in Example 4 just because they closely resemble each other.", "A simple measure of the impact of this effect is to look at the accuracy on the subset of instances in which the gold label was changed by the transformation.", "We show this effect by statistically correlating the rate of correct predictions with changes in the labels predicted.", "Unseen word pairs are another factor we can use to evaluate robustness.", "In this case, we are interested in the subset of transformed instances where the swapped word pair is now in an order within premise and hypothesis that was unseen in the training data.", "An example is Example 2, which contains the unseen word pair (sunrise, sunset); i.e., no instance in the training set contains the word sunrise in the premise and the word sunset in the hypothesis.", "Poor performance on this subset reflects an inability to exploit the symmetry (antonym pairs) or anti-symmetry (hypernym pairs) of the word pairs involved.", "We show models' abilities to cope with unseen pairs by statistically associating the proportion of instances containing unseen pairs with incorrect prediction rates.", "Polarity is the name we give to the association between a word pair and the most frequent class it is found in across training instances.", "For example, we associate the word pair (sunset, sunrise) with polarity contradiction because it mainly appears in training instances with label contradiction.", "We define four main categories of polarity: neutral, contradiction, entailment, and none for unseen word pairs (we also define categories for when a word pair appears the same number of times in two classes, such as entailment-neutral, though these cases are rare).", "Accuracy on the subset of instances where polarity and gold label disagree is an indicator of the extent to which a model is influenced by this factor.", "For example, a model incorrectly predicting label entailment for the instance in Example 4 (class neutral) based on the polarity of class entailment of its word pair (bridge, footbridge) indicates that the model is influenced by this factor.", "We show this influence by statistically correlating labels predicted with polarities.", "Table 1 presents the performance of the models across the different test sets.", "In general, DAM and ESIM seem to be more robust than CE, with the latter's accuracy degrading to essentially random performance on the most challenging subsets.", "However, this general trend is reversed in a single row of the table.", "On ETH, ESIM shows performance comparable to CE.", "And on Subset 3 of IH, DAM appears to rely on a bias (polarity) in the same way as CE.", "Overall, all models are affected by the three target factors, dropping in performance by up to 0.25, 0.20, and 0.28 points for ESIM, DAM, and CE, respectively, just by virtue of our simple transformation of swapping words.", "In this experiment we use sets IA and I_TA1.", "Swapping antonyms seems to have no effect on the overall performance of the DAM model on I_TA1 when compared to IA, and little effect on ESIM.", "Thus these two models appear to be robust to this transformation.",
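A sketch of how the polarity factor defined above could be computed from the training set; the function and variable names are illustrative, not from the paper's released code.

```python
from collections import Counter, defaultdict

def compute_polarity(instances, word_pairs):
    """Map each (w1, w2) pair to the most frequent gold label among training
    instances with w1 in the premise and w2 in the hypothesis; pairs never
    seen in that order get polarity 'none'."""
    counts = defaultdict(Counter)
    pairs = set(word_pairs)
    for premise, hypothesis, label in instances:
        p_tokens, h_tokens = set(premise.split()), set(hypothesis.split())
        for w1, w2 in pairs:
            if w1 in p_tokens and w2 in h_tokens:
                counts[(w1, w2)][label] += 1
    return {pair: (counts[pair].most_common(1)[0][0] if counts[pair] else "none")
            for pair in pairs}

train = [("A soccer game occurring at sunset .",
          "A basketball game is occurring at sunrise .", "contradiction")]
print(compute_polarity(train, [("sunset", "sunrise"), ("sunrise", "sunset")]))
# {('sunset', 'sunrise'): 'contradiction', ('sunrise', 'sunset'): 'none'}
```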
"Nonetheless, further analysis does not support the conclusion that both models have learned that antonymy is symmetric, and we will show that this seemingly robust behavior is due to confounding factors and not due to inference abilities.", "Accuracy scores of the CE model reveal that it is much less robust to the antonym swap, with performance significantly dropping by roughly 10.5% according to a McNemar's test.", "Insensitivity.", "Because instances in I_TA1 are contradictions, we perform a proxy experiment to understand the models' sensitivity.", "From IA, we substitute one of the antonyms in each word pair (in each instance) with a hyponym, hypernym, or synonym of the other; we manually select these from WordNet such that each appears at least t = 10 times in the training set in either the premise sentences or the hypothesis sentences.", "Doing this on both the premise and hypothesis yields two new samples, I_TA2 and I_TA3, which we manually annotate.", "Examples of control (Example 7) and transformed (Example 8) instances are given below, showing the replacement of young, in the hypothesis, with aged, a synonym of elderly from the premise.", "This transformation changes the gold label from contradiction to neutral.", "Approximately half the sample yields such changes in gold label.", "(7) $p_k$: An elderly woman sitting on a bench. $h_k$: A young mother sits down.", "(8) $p_m$: An elderly woman sitting on a bench. $h_m$: An aged mother sits down.", "This transformation leads to a considerable drop in overall performance for all models when accuracy scores on sets I_TA2 and I_TA3 are compared to the accuracy on the control instances in IA: up to 0.175 (CE), 0.201 (DAM), and 0.24 (ESIM) points (Table 1).", "To test if insensitivity to the transformation is associated with these behaviors, we measure accuracy only on those instances that changed gold label (Subset 1 from the sets I_TA2 and I_TA3), where we see a further reduction in performance for all models.", "2-way tests of independence provide strong evidence for the insensitivity of the models (CE: $\chi^2(1) = 73.33$, DAM: $\chi^2(1) = 108.30$, ESIM: $\chi^2(1) = 175.34$).", "Table 2 shows the case for ESIM: most of its incorrect predictions are due to predicting the same label on both control and transformed instances when these two types of instances have different gold labels.", "Paradoxically, this effect works in the models' favour in the antonym swapping case (I_TA1) because all the gold labels remain contradiction.", "Thus ignoring the transformation avoids any loss in performance.", "Table 2: Contingency table for ESIM (predictions on transformed instances with different gold labels from those of the control instances): predicted label changed: 155 correct, 31 incorrect; predicted label unchanged: 8 correct, 100 incorrect.", "Unseen word pairs.", "The results in the column Subset 2 of I_TA1 (Table 1) suggest that performance on unseen word pairs is weak.", "However, only 40 instances within I_TA1 contain unseen antonym pairs; thus the impact of this result may be limited.", "2-way tests of homogeneity show that the difference in accuracy of predictions on instances containing seen or unseen word pairs is nonetheless significant for all models (CE: $\chi^2(1) = 19.46$, DAM: $\chi^2(1) = 74.16$, ESIM: $\chi^2(1) = 39.33$).", "In other words, the models struggle to recognize the reversed antonym pairs, even though they were all seen in their original order at training time.", "This effect can be seen, for example, in the contingency table for DAM in Table 3.",
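Such contingency tables can be fed directly into a 2-way test. The sketch below re-analyzes the Table 2 counts with SciPy as an illustration; the resulting statistic need not match the paper's reported value exactly, since that depends on the precise test variant and correction used.

```python
from scipy.stats import chi2_contingency

# Table 2 (ESIM): rows = predicted label changed / unchanged across the
# control-transformed pair, columns = correct / incorrect predictions.
table_2 = [[155, 31],
           [8, 100]]
chi2, p, dof, _ = chi2_contingency(table_2)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2e}")
```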
"Polarity.", "Only 11% of the instances in the transformed sample I_TA1 contain word pairs that have polarity other than contradiction.", "Thus, a model relying only on this factor could achieve an accuracy of 89%.", "Table 3: Contingency table for DAM (predictions distributed according to instances containing a seen or an unseen antonym word pair): correct predictions: 567 seen, 20 unseen; incorrect predictions: 13 seen, 20 unseen.", "We investigate if the predicted labels on instances in I_TA1 are associated with the polarity of the transformed word pair.", "For all models, independence tests are highly significant (CE: $\chi^2(6) = 30.69$, DAM: $\chi^2(6) = 101.26$, ESIM: $\chi^2(6) = 64.40$).", "Table 4 shows that the predictions of DAM change according to the polarity of the word pairs.", "For example, when the polarity is contradiction, around 98.5% of the predictions are contradictions; however, this figure changes when the polarity is neutral, where the rate of correct predictions (contradictions) falls to 80.7%, and a more dramatic fall is observed when the word pairs are unseen (polarity none), where only 50% of the predictions are correct.", "This is strong evidence that the models learned to rely on polarity.", "We note that a model with perfect accuracy on I_TA1 would lead to a statistic that does not reject the null hypothesis, showing in this case that the predictions are independent of polarity.", "In this experiment, we use samples EA and ETA.", "Swapping antonyms has little effect on the performance of all models, where the biggest drop comes from DAM (0.029 points).", "However, the CE model performs quite poorly on both samples (0.508 and 0.48 accuracy points on EA and ETA); this drop in performance, with respect to the in situ condition, suggests that the repeated sentence context is too different from the structure of the training instances for the CE model to generalize effectively.", "In this condition, we refrain from analyzing the effect of insensitivity, since doing so would require a transformation similar to that in the in situ condition, which might add an extra layer of change and make the results difficult to interpret.", "Unseen word pairs.", "Accuracy scores strongly suggest that the models are weak at dealing with unseen antonym pairs (Subset 2 of ETA in Table 1); drops in performance on this subset range from 0.315 up to 0.429 points across the three models.", "Tests of homogeneity show strong evidence of this weakness for all models (CE: $\chi^2(1) = 15.91$, DAM: $\chi^2(1) = 59.17$, ESIM: $\chi^2(1) = 44.72$).", "Comparing results on this subset with those of Subset 2 in I_TA1, we notice that ESIM and DAM keep similar behavior, but CE seems to be strongly affected by this context type.", "Polarity.", "All models perform poorly on the subset of instances where polarity disagrees with the gold label of the instance (Subset 3 of ETA), showing that the models' behavior relies on this bias.", "These results are highly significant (CE: $\chi^2(6) = 34.37$, DAM: $\chi^2(6) = 136.99$, ESIM: $\chi^2(6) = 103.47$).", "This is further evidence that the models get confused by a simple reversal of an antonym pair.", "We now study the effect on the robustness of the systems when we swap hypernym and hyponym word pairs in in situ instances.", "Whole sample accuracy scores in Table 1 significantly drop, according to McNemar's tests, by 0.25 (ESIM), 0.285 (CE), and 0.128 (DAM) points when we compare scores on control instances (IH) with those on transformed instances (ITH).", "We investigate the role of our target factors in these behaviors.",
"Insensitivity.", "Around 42% of the instances in ITH (Subset 1) have a different gold label from those in IH.", "On these instances, the models' results are severely impaired: the CE and ESIM performances drop to close-to-random (0.271 and 0.315), while DAM decreases by 0.18 points.", "All models' errors on this subset are strongly associated with failure to change the predicted class (CE: $\chi^2(1) = 90.73$, DAM: $\chi^2(1) = 101.52$, ESIM: $\chi^2(1) = 150.92$).", "In contrast to the case in Experiment 1, insensitivity acts to the detriment of the models' robustness when gold labels change after the transformation.", "Unseen word pairs.", "Whereas model performance was significantly worse on unseen antonym pairs, this effect is not obvious in the hyponym-hypernym results (Subset 2 of ITH).", "In fact, all models have a slightly higher accuracy on this subset than overall.", "Homogeneity tests find no evidence of an association between unseen word pairs and incorrect predictions for any model (CE: $\chi^2(1) = 0.00036$, p = 0.98; DAM: $\chi^2(1) = 0.98$, p = 0.32; ESIM: $\chi^2(1) = 0.178$, p = 0.67).", "This effect may be explained by the models exploiting information from word embeddings.", "It has been shown that word embeddings are able to capture hypernymy (Sanchez and Riedel, 2017); thus the models may use this information to generalize to unseen hypernym pairs.", "Polarity.", "We find very strong evidence for an association between polarity and the class label predicted on sample IH for all models (CE: $\chi^2(10) = 168.40$, DAM: $\chi^2(10) = 182.76$, ESIM: $\chi^2(10) = 157.76$).", "However, for sample ITH, only DAM keeps this strong correlation ($\chi^2(14) = 47.71$).", "In the case of CE, we find weak evidence in favour of this correlation on instances of ITH ($\chi^2(14) = 25.27$, p = 0.03).", "For ESIM we find no evidence of correlation ($\chi^2(14) = 22.72$, p = 0.06), thus we do not reject the null hypothesis.", "Polarity's influence can be observed in Subset 3 of IH (Table 1), where we observe a drop in accuracy for instances whose gold labels do not match the polarity of the word pairs, compared to the accuracy of the whole sample; this means that when the models have polarity as a cue, they improve performance.",
"All models' performance drops significantly (p < 0.01) after our transformation, by 0.208 (CE), 0.061 (DAM) and 0.195 (ESIM) points, where the performance of ESIM is comparable to that of CE on both samples, EH and ETH.", "Compared to the in situ condition, DAM's performance improves, in contrast to CE's and ESIM's behavior.", "Insensitivity.", "The drop in performance described above can be partially explained by insensitivity to changes in gold label, since around 93% of the instances in ETH changed gold label with respect to EH.", "We find strong statistical evidence for this hypothesis (CE: $\chi^2(1) = 175.19$, DAM: $\chi^2(1) = 158.62$, ESIM: $\chi^2(1) = 252.27$).", "However, in the case of DAM, this factor seems to play a small role in its behavior, as seen when we compare accuracy on Subset 1 with that of the whole transformed sample.", "Insensitivity seems to have a bigger influence on the models when the transformed instances are closer to the training set: accuracy scores on Subset 1 from ITH are smaller than those on Subset 1 from ETH.", "Unseen word pairs.", "Similar to the in situ condition, our homogeneity tests show no evidence for incorrect predictions being due to unseen word pairs (CE: $\chi^2(1) = 0.35$, p = 0.55; DAM: $\chi^2(1) = 2.43$, p = 0.11; ESIM: $\chi^2(1) = 0.183$, p = 0.66).", "We posit the same explanation as before: models may use hypernymy information contained in the embeddings.", "Polarity.", "We find a statistically strong correlation of the models' predictions with the polarity of the word pairs in the instances from both samples, EH (CE: $\chi^2(10) = 261.77$, DAM: $\chi^2(10) = 312.67$, ESIM: $\chi^2(10) = 176.38$) and ETH (CE: $\chi^2(14) = 56.52$, DAM: $\chi^2(14) = 258.09$, ESIM: $\chi^2(10) = 105.70$).", "This evidence indicates that all models use, to some extent, polarity as a feature for predicting class labels.", "Although all three models achieve strong results on the original SNLI development set (CE: 0.782, DAM: 0.854, ESIM: 0.882), each model exhibits particular weaknesses on the transformed training instances.", "Notably, all perform poorly on ITH instances in which the gold label is changed, with ESIM and CE performing below the level of chance.", "Thus, on these instances, the models tend to predict the label of the original unaltered training instance, and inference in this case is similar to nearest-neighbour prediction.", "On the other hand, much better performance is obtained for the DAM and ESIM models on ITH instances containing unseen word pairs, indicating these models have learned to infer hypernym/hyponym relations from information in the pre-trained word embeddings.", "In contrast, performance on the unseen word pairs in I_TA1 and ETA suggests that inferring antonymy from the embeddings is more difficult.", "Weak performance is seen again on the EA and ETA instances where the polarity of the antonym pair is not consistent with the gold label.", "For these cases, the only difference between premise and hypothesis is the antonym pair, and the models tend to fall back on predicting the most frequent gold label seen for that word pair.", "One result that remains anomalous is the overall performance of the ESIM model on the whole ETH sample.", "While this sample contains unseen word pairs and instances in which the gold label changes or is inconsistent with polarity, these effects do not by themselves explain the poor performance overall.", "Neither is this weakness explained by the ex situ structure, in which premise and hypothesis differ by only one word, as performance on the control ex situ sample, EH, is much stronger.",
"The effect, then, appears to be due to an interaction of the ex situ structure with the transformation.", "In the present work, we have limited ourselves to examining single influences independently.", "However, there are undoubtedly manifold interactions contributing to model performance.", "In fact, the complexities of these models (LSTMs, attention mechanisms and MLPs) are specifically intended to capture the interactions between the words in the premise and hypothesis.", "Further work is required to understand what these interactions are and how they contribute to performance.", "Fully uncovering these factors in current NLI datasets is a pre-requisite for the construction of more effective resources in the future.", "We thank Raul Ortiz Pulido and Erick Sanchez Carmona for insightful discussions, Pasquale Minervini for providing the implementations of DAM and ESIM, Pontus Stenetorp for providing valuable feedback on the manuscript, and Johannes Welbl for insightful comments.", "The first author was the recipient of a scholarship from CONACYT.", "This work was supported by an Allen Distinguished Investigator Award and the EU H2020 SUMMA project (grant agreement number 688139)." ]
[ "abstain", "abstain", "method", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "other", "other", "other", "method", "other", "other", "objective", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "result", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Large-scale pre-trained language models such as BERT have brought significant improvements to NLP applications.", "However, they are also notorious for being slow in inference, which makes them difficult to deploy in real-time applications.", "We propose a simple but effective method, DeeBERT, to accelerate BERT inference.", "Our approach allows samples to exit earlier without passing through the entire model.", "Experiments show that DeeBERT is able to save up to 40% inference time with minimal degradation in model quality.", "Further analyses show different behaviors in the BERT transformer layers and also reveal their redundancy.", "Our work provides new ideas to efficiently apply deep transformer-based models to downstream tasks.", "Code is available at https://github.com/castorini/ DeeBERT .", "Large-scale pre-trained language models such as ELMo (Peters et al., 2018), GPT (Radford et al., 2019), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), and RoBERTa (Liu et al., 2019) have brought significant improvements to natural language processing (NLP) applications.", "Despite their power, they are notorious for being enormous in size and slow in both training and inference.", "Their long inference latencies present challenges to deployment in real-time applications and hardware-constrained edge devices such as mobile phones and smart watches.", "To accelerate inference for BERT, we propose DeeBERT : D ynamic e arly e xiting for BERT .", "The inspiration comes from a well-known observation in the computer vision community: in deep convolutional neural networks, higher layers typically produce more detailed and finer-grained features (Zeiler and Fergus, 2014).", "Therefore, we Figure 1: DeeBERT model overview.", "hypothesize that, for BERT, features provided by the intermediate transformer layers may suffice to classify some input samples.", "DeeBERT accelerates BERT inference by inserting extra classification layers (which we refer to as off-ramps ) between each transformer layer of BERT (Figure 1).", "All transformer layers and offramps are jointly fine-tuned on a given downstream dataset.", "At inference time, after a sample goes through a transformer layer, it is passed to the following off-ramp.", "If the off-ramp is confident of the prediction, the result is returned; otherwise, the sample is sent to the next transformer layer.", "In this paper, we conduct experiments on BERT and RoBERTa with six GLUE datasets, showing that DeeBERT is capable of accelerating model inference by up to 40% with minimal model quality degradation on downstream tasks.", "Further analyses reveal interesting patterns in the models' transformer layers, as well as redundancy in both BERT and RoBERTa.", "BERT and RoBERTa are large-scale pre-trained language models based on transformers (Vaswani et al., 2017).", "Despite their groundbreaking power, there have been many papers trying to examine and exploit their over-parameterization.", "Michel et al. (2019) and Voita et al. (2019) analyze redundancy in attention heads.", "Q-BERT (Shen et al., 2019) uses quantization to compress BERT, and LayerDrop (Fan et al., 2019) uses group regularization to enable structured pruning at inference time.", "On the knowledge distillation side, TinyBERT (Jiao et al., 2019) and DistilBERT (Sanh et al., 2019) both distill BERT into a smaller transformer-based model, and Tang et al. (2019) distill BERT into even smaller non-transformer-based models.", "Our work is inspired by Cambazoglu et al. (2010), Teerapittayanon et al. 
(2017), and Huang et al. (2018), but mainly differs from previous work in that we focus on improving model efficiency with minimal quality degradation.", "DeeBERT modifies the fine-tuning and inference of BERT models, leaving pre-training unchanged.", "It adds one off-ramp for each transformer layer.", "An inference sample can exit earlier at an off-ramp, without going through the rest of the transformer layers.", "The last off-ramp is the classification layer of the original BERT model.", "We start with a pre-trained BERT model with n transformer layers and add n off-ramps to it.", "For fine-tuning on a downstream task, the loss function of the i-th off-ramp is $L_i(\mathcal{D}; \theta) = \frac{1}{|\mathcal{D}|} \sum_{(x,y) \in \mathcal{D}} H(y, f_i(x; \theta))$, (1) where $\mathcal{D}$ is the fine-tuning training set, $\theta$ is the collection of all parameters, $(x, y)$ is the feature-label pair of a sample, $H$ is the cross-entropy loss function, and $f_i$ is the output of the i-th off-ramp.", "The network is fine-tuned in two stages:", "1. Update the embedding layer, all transformer layers, and the last off-ramp with the loss function $L_n$.", "This stage is identical to BERT fine-tuning in the original paper (Devlin et al., 2019).", "2. Freeze all parameters fine-tuned in the first stage, and then update all but the last off-ramp with the loss function $\sum_{i=1}^{n-1} L_i$.", "The reason for freezing the parameters of the transformer layers is to keep the optimal output quality for the last off-ramp; otherwise, the transformer layers would no longer be optimized solely for the last off-ramp, generally worsening its quality.", "The way DeeBERT works at inference time is shown in Algorithm 1.", "We quantify an off-ramp's confidence in its prediction using the entropy of the output probability distribution $z_i$.", "When an input sample x arrives at an off-ramp, the off-ramp compares the entropy of its output distribution $z_i$ with a preset threshold S to determine whether the sample should be returned here or sent to the next transformer layer.", "It is clear from both intuition and experimentation that a larger S leads to a faster but less accurate model, and a smaller S leads to a more accurate but slower one.", "In our experiments, we choose S based on this principle.", "We also explored using ensembles of multiple layers instead of a single layer for the off-ramp, but this does not bring significant improvements.", "The reason is that predictions from different layers are usually highly correlated, and a wrong prediction is unlikely to be fixed by the other layers.", "Therefore, we stick to the simple yet efficient single output layer strategy.", "We apply DeeBERT to both BERT and RoBERTa, and conduct experiments on six classification datasets from the GLUE benchmark (Wang et al., 2018): SST-2, MRPC, QNLI, RTE, QQP, and MNLI.", "Our implementation of DeeBERT is adapted from the HuggingFace Transformers Library (Wolf et al., 2019).", "Inference runtime measurements are performed on a single NVIDIA Tesla P100 graphics card.", "Hyperparameters such as hidden-state size, learning rate, fine-tuning epochs, and batch size are kept unchanged from the library.", "There is no early stopping, and the checkpoint after full fine-tuning is chosen.", "We vary DeeBERT's quality-efficiency trade-off by setting different entropy thresholds S, and compare the results with other baselines in Table 1.
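The entropy-based exit rule (Algorithm 1) translates into a short loop. Below is a minimal sketch, assuming per-sample (batch size 1) inference as in the runtime measurements; the layer and off-ramp modules are placeholders, not the released DeeBERT code.

```python
import torch
import torch.nn.functional as F

def entropy(logits):
    # Entropy of the output distribution z_i, used as a confidence measure.
    probs = F.softmax(logits, dim=-1)
    return -(probs * torch.log(probs + 1e-12)).sum(dim=-1)

@torch.no_grad()
def deebert_forward(hidden, layers, off_ramps, threshold):
    """Run transformer `layers` with one off-ramp per layer; return the first
    off-ramp output whose entropy falls below `threshold` (the last off-ramp
    always returns, matching the original BERT classifier)."""
    for i, (layer, ramp) in enumerate(zip(layers, off_ramps)):
        hidden = layer(hidden)
        logits = ramp(hidden)
        if entropy(logits).item() < threshold or i == len(layers) - 1:
            return logits, i + 1  # prediction and (1-indexed) exit layer
```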
"Model quality is measured on the test set, and the results are provided by the GLUE evaluation server.", "Efficiency is quantified with wall-clock inference runtime (including both CPU and GPU runtime) on the entire test set, where samples are fed into the model one by one.", "For each run of DeeBERT on a dataset, we choose three entropy thresholds S based on quality-efficiency trade-offs on the development set, aiming to demonstrate two cases: (1) the maximum runtime savings with minimal performance drop (< 0.5%), and (2) the runtime savings with moderate performance drop (2%-4%).", "Chosen S values differ for each dataset.", "We also visualize the trade-off in Figure 2.", "Each curve is drawn by interpolating a number of points, each of which corresponds to a different threshold S.", "Since this only involves a comparison between different settings of DeeBERT, runtime is measured on the development set.", "From Table 1 and Figure 2, we observe the following patterns: despite differences in baseline performance, both models show similar patterns on all datasets: the performance (accuracy/F1 score) stays (mostly) the same until runtime saving reaches a certain turning point, and then starts to drop gradually.", "The turning point typically comes earlier for BERT than for RoBERTa, but after the turning point, the performance of RoBERTa drops faster than for BERT.", "The reason for this will be discussed in Section 4.4.", "Occasionally, we observe spikes in the curves, e.g., RoBERTa in SST-2, and both BERT and RoBERTa in RTE.", "We attribute this to possible regularization brought by early exiting and thus smaller effective model sizes, i.e., in some cases, using all transformer layers may not be as good as using only some of them.", "Compared with other BERT acceleration methods, DeeBERT has the following two advantages: instead of producing a fixed-size smaller model like DistilBERT (Sanh et al., 2019), DeeBERT produces a series of options for faster inference, which users have the flexibility to choose from according to their demands.", "Unlike DistilBERT and LayerDrop (Fan et al., 2019), DeeBERT does not require further pre-training of the transformer model, which is much more time-consuming than fine-tuning.", "As the measurement of runtime might not be stable, we propose another metric to capture efficiency, called expected saving, defined as $1 - \frac{\sum_{i=1}^{n} i \cdot N_i}{\sum_{i=1}^{n} n \cdot N_i}$, (2) where n is the number of layers and $N_i$ is the number of samples exiting at layer i.", "Intuitively, expected saving is the fraction of transformer layer execution saved by using early exiting.", "The advantage of this metric is that it remains invariant between different runs and can be analytically computed.", "For validation, we compare this metric with measured saving in Figure 3.", "Figure 4: Accuracy of each off-ramp for BERT-base and RoBERTa-base (on SST-2, MRPC, QNLI, RTE, QQP, and MNLI).",
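Equation 2 is straightforward to compute from the exit statistics; a small sketch:

```python
def expected_saving(exit_counts):
    """exit_counts[i] = number of samples exiting at off-ramp i+1
    (the last entry counts samples that run all n layers)."""
    n = len(exit_counts)
    executed = sum((i + 1) * c for i, c in enumerate(exit_counts))
    total = sum(n * c for c in exit_counts)
    return 1.0 - executed / total

# With a 12-layer model where half the samples exit at layer 6:
print(expected_saving([0, 0, 0, 0, 0, 50, 0, 0, 0, 0, 0, 50]))  # 0.25
```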
Overall, the curves show a linear relationship between expected savings and measured savings, indicating that our reported runtime is a stable measurement of DeeBERT's efficiency.", "In order to understand the effect of applying DeeBERT to both models, we conduct further analyses on each off-ramp layer.", "Experiments in this section are also performed on the development set.", "Output Performance by Layer.", "For each offramp, we force all samples in the development set to exit here, measure the output quality, and visualize the results in Figure", "4. From the figure, we notice the difference between BERT and RoBERTa.", "The output quality of BERT improves at a relatively stable rate as the index of the exit off-ramp increases.", "The output quality of RoBERTa, on the other hand, stays almost unchanged (or even worsens) for a few layers, then rapidly improves, and reaches a saturation point be-0 25 50 75 Runtime Savings (%) 75 80 85 90 95 A cc u r a cy ( % ) large: SST-2 BERT RoBERTa 0 10 20 Exit Layer 70 80 90 A cc u r a cy ( % ) large: SST-2 BERT RoBERTa 0 25 50 75 Runtime Savings (%) 82.5 85.0 87.5 90.0 92.5 F 1 S c o r e ( % ) large: MRPCBERT RoBERTa 0 10 20 Exit Layer 82 84 86 88 90 92 F 1 S c o r e ( % ) large: MRPCBERT RoBERTa Figure 5: Results for BERT-large and RoBERTa-large.", "fore BERT does.", "This provides an explanation for the phenomenon mentioned in Section 4.2: on the same dataset, RoBERTa often achieves more runtime savings while maintaining roughly the same output quality, but then quality drops faster after reaching the turning point.", "We also show the results for BERT-large and RoBERTa-large in Figure", "5. From the two plots on the right, we observe signs of redundancy that both BERT-large and RoBERTa-large share: the last several layers do not show much improvement compared with the previous layers (performance even drops slightly in some cases).", "Such redundancy can also be seen in Figure", "4. 
"Number of Exiting Samples by Layer.", "We further show the fraction of samples exiting at each off-ramp for a given entropy threshold in Figure 6.", "Entropy threshold S = 0 is the baseline, and all samples exit at the last layer; as S increases, gradually more samples exit earlier.", "Apart from the obvious, we observe additional, interesting patterns: if a layer does not provide better-quality output than previous layers, such as layer 11 in BERT-base and layers 2-4 and 6 in RoBERTa-base (which can be seen in Figure 4, top left), it is typically chosen by very few samples; popular layers are typically those that substantially improve over previous layers, such as layers 7 and 9 in RoBERTa-base.", "This shows that an entropy threshold is able to choose the fastest off-ramp among those with comparable quality, and achieves a good trade-off between quality and efficiency.", "We propose DeeBERT, an effective method that exploits redundancy in BERT models to achieve better quality-efficiency trade-offs.", "Experiments demonstrate its ability to accelerate BERT's and RoBERTa's inference by up to 40%, and also reveal interesting patterns of different transformer layers in BERT models.", "There are a few interesting questions left unanswered in this paper, which would provide interesting future research directions: (1) DeeBERT's training method, while maintaining good quality in the last off-ramp, reduces the model capacity available for intermediate off-ramps; it would be important to look for a method that achieves a better balance between all off-ramps.", "(2) The reasons why some transformer layers appear redundant and why DeeBERT considers some samples easier than others remain unknown; it would be interesting to further explore relationships between pre-training and layer redundancy, sample complexity and exit layer, and related characteristics.", "We thank anonymous reviewers for their insightful suggestions.", "We also gratefully acknowledge funding support from the Natural Sciences and Engineering Research Council (NSERC) of Canada.", "Computational resources used in this work were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "objective", "abstain", "result", "abstain", "other", "other", "other" ]
[ "Transformers are ubiquitous in Natural Language Processing (NLP) tasks, but they are dif-ficult to be deployed on hardware due to the intensive computation.", "To enable low-latency inference on resource-constrained hardware platforms, we propose to design Hardware-Aware Transformers (HAT) with neural architecture search.", "We first construct a large design space with arbitrary encoder-decoder attention and heterogeneous layers .", "Then we train a SuperTransformer that covers all candidates in the design space, and efficiently produces many SubTransformers with weight sharing.", "Finally, we perform an evolutionary search with a hardware latency constraint to find a specialized SubTransformer dedicated to run fast on the target hardware.", "Extensive experiments on four machine translation tasks demonstrate that HAT can discover efficient models for different hardware (CPU, GPU, IoT device).", "When running WMT'14 translation task on Raspberry Pi-4, HAT can achieve 3 speedup, 3.7 smaller size over baseline Transformer; 2.7 speedup, 3.6 smaller size over Evolved Transformer with 12,041 less search cost and no performance loss.", "HAT is open-sourced.", "Transformer (Vaswani et al., 2017) has been widely used in natural language processing tasks.", "By stacking multiple identical encoder/decoder layers with attention modules, it provides a significant performance improvement over previous convolutional or recurrent neural network models (Kim, 2014).", "Nevertheless, it is challenging to deploy Transformers on mobile devices due to the high computation cost.", "For instance, in order to translate a sentence with only 30 words, a Transformer-Big model needs to execute 13G FLOPs and takes 20 seconds on a Raspberry Pi.", "Such long latency will hurt the user experience on edge devices.", "Thus we SuperTransformer Evolutionary Search with Hardware Constraints Specialized Deployment is Efficient Hardware Latency Feedback SubTransformers 12 / 3 / 2019 (cid:81)(cid:82)(cid:88)(cid:81)_ (cid:86) e (cid:81) (cid:86) (cid:82) (cid:85) _1895988 .", "need hardware-efficient Transformers (Figure 1).", "There are two common pitfalls when evaluating the efficiency of a Transformer.", "(1) FLOPs does not reflect the measured latency .", "Although FLOPs is used as an metric for efficiency in prior arts (Howard et al., 2017; Wu et al., 2020), it is not a good latency proxy.", "As in Figure 2 ( Right ), models with the same FLOPs can result in very different measured latencies; (2) different hardware prefers different Transformer architecture.", "As in Table 1, the Transformer model optimized on one hardware is sub-optimal for another because latency is in-fluenced by different factors on different hardware platforms.", "For example, the embedding size has significant impact on the Raspberry Pi latency but hardly influences the GPU latency (Figure 2).", "Inspired by the success of Neural Architecture Search (NAS) (Bender et al., 2018; Guo et al., 2019; Pham et al., 2018; Cai et al., 2019a), we propose to search for H ardwareA ware T ransformers (HAT) On Titan xp MACs vs latency Change #Layers Change #Layers Change Hidden Dim Change Hidden Dim Change Embed Dim Change Embed Dim 159.171850 5.24E+01 159.171850 5.24E+01 159.171850 5.24E+01 192.514580 9.62E+01 194.561290 5.09E+01 194.564170 5.14E+01 225.857310 1.37E+02 226.018570 5.13E+01 229.956490 5.42E+01 259.200040 1.74E+02 257.475850 5.17E+01 265.348810 5.15E+01 292.542770 2.16E+02 288.933130 5.04E+01 300.741130 5.17E+01 325.885500 2.61E+02 320.390410 
"In this way, we do not need FLOPs as the latency proxy and can search specialized models for various hardware.", "We first construct a large search space with arbitrary encoder-decoder attention and heterogeneous Transformer layers.", "The traditional Transformer has an information bottleneck between the encoder and decoder.", "Arbitrary encoder-decoder attention breaks the bottleneck, allowing all decoder layers to attend to multiple and different encoder layers instead of only the last one.", "Thus low-level information from the encoder can also be used by the decoder.", "Motivated by
Figure 2, we introduce heterogeneous Transformer layers to allow different layers to have different architectures adapted to various hardware.", "To perform a low-cost search in such a large design space, we first train a Transformer supernet, the SuperTransformer, which contains many SubTransformers sharing the weights.", "We train all SubTransformers simultaneously by optimizing uniformly sampled SubTransformers from the SuperTransformer.", "The performance of a SubTransformer with weights inherited from the SuperTransformer provides a good relative performance approximation for different architectures trained from scratch.", "Unlike conventional NAS, we only need to pay the SuperTransformer training cost once and can evaluate all the models in the design space with it.", "Finally, we conduct an evolutionary search to find the best SubTransformer under the hardware latency constraint.", "Experiments show that HAT can be naturally combined with model compression techniques such as quantization and knowledge distillation.", "We evaluate HAT with the WMT'14 En-De, WMT'14 En-Fr, WMT'19 En-De, and IWSLT'14 De-En tasks on a Raspberry Pi ARM CPU, an Intel Xeon CPU, and an Nvidia TITAN Xp GPU.", "Compared with previous work (Vaswani et al., 2017; So et al., 2019; Gu et al., 2019; Wu et al., 2020), HAT achieves up to 3x speedup and 3.7x smaller size over Transformer-Big without loss of accuracy.", "With 12,041x less search cost, HAT outperforms the Evolved Transformer with 2.7x speedup and 3.6x smaller size.", "It also achieves up to 1.9x speedup over the Levenshtein and Lite Transformers with no BLEU score loss.", "With 4-bit quantization, HAT can further reach 25x model size reduction.", "HAT makes three contributions:", "(1) Hardware-Aware and Specialization.", "To the best of our knowledge, we are the first to directly involve hardware feedback in the model design to reduce NLP model latency for target hardware, instead of relying on proxy signals (FLOPs).", "For different hardware platforms, specialized models for low-latency inference are explored.", "(2) Low-cost Neural Architecture Search with a Large Design Space.", "We propose arbitrary encoder-decoder attention to break the information bottleneck, and heterogeneous layers to let each layer alter its capacity.", "A weight-shared SuperTransformer is trained to search for efficient models at a low cost.", "(3) Design Insights.", "Based on the search results, we reveal some design insights: attending to multiple encoder layers is beneficial for the decoder; GPUs prefer shallow and wide models while ARM CPUs prefer deep and thin ones.", "Figure 3: The HAT pipeline: a large design space is constructed with arbitrary encoder-decoder attention and heterogeneous layers; (1) train a weight-shared SuperTransformer by iteratively optimizing randomly sampled SubTransformers, which provides a performance proxy for SubTransformers; (2) collect (SubTransformer architecture, latency) data pairs on the target hardware; (3) train a latency predictor for each hardware to provide fast and accurate latency feedback; (4) perform an evolutionary search with a hardware latency constraint to find the model with the lowest validation loss; (5) finally, the searched model is trained from scratch to get the final performance.", "An overview of the HAT framework is shown in Figure 3.", "We first train a SuperTransformer with a large design space.", "Then, for a given hardware platform, we collect a dataset of (SubTransformer architecture, measured latency) pairs for different models, and train
a latency predictor.", "Finally, we conduct an evolutionary search with a latency constraint to find an efficient model specialized for the target hardware.", "We construct a large design space by breaking two conventions in Transformer design: (1) all decoder layers only attend to the last encoder layer; (2) all layers are identical.", "Arbitrary Encoder-Decoder Attention.", "Different encoder layers extract features at different abstraction levels.", "Conventionally, all the decoder layers only attend to the last encoder layer.", "This forms an information bottleneck that forces all the decoder layers to learn solely from the high abstraction level and ignore the low-level information.", "To break the bottleneck, we propose arbitrary encoder-decoder attention to learn the most suitable connections between the encoder and the decoder.", "Each decoder layer can choose multiple encoder layers to attend to.", "The key and value vectors from the chosen encoder layers are concatenated in the sentence length dimension (Figure 4) and fed to the encoder-decoder cross-attention module.", "The mechanism is efficient because it introduces no additional parameters.", "The latency overhead is also negligible: for example, with each decoder layer attending to two encoder layers, the latency of Transformer-Base on an Nvidia TITAN Xp GPU increases by only 0.4%.", "It improves the model capacity by allowing attention to different abstraction levels.", "Heterogeneous Transformer Layers.", "Previous Transformers repeat one architecture for all layers.", "In HAT, instead, different layers are heterogeneous, with different numbers of heads, hidden dim, and embedding dim.", "In attention layers, different heads are used to capture various dependencies.", "However, Voita et al.
(2019) show that many heads are redundant.", "We thereby make the attention head number elastic so that each attention module can decide its necessary number of heads.", "In the FFN layer, the input features are cast to a higher dimension (the hidden dim) and then cast back to the original dimension after the non-linear activation layer.", "Traditionally, the hidden dim is set to 2x or 4x the embedding dim, but this is sub-optimal since different layers need different capacities depending on the feature extraction difficulty.", "We hence make the hidden dim elastic.", "Moreover, we also support an elastic embedding dim for the encoder and decoder, though it is consistent inside the encoder/decoder.", "The numbers of encoder and decoder layers are also elastic, to learn the proper level of feature encoding and decoding.", "Other design choices, such as the length of the Q, K, V vectors in attention modules, can be naturally incorporated in our framework, which we leave for future work.", "It is critical to have a large design space in order to find high-performance models.", "However, training all the models and comparing their BLEU scores is infeasible.", "We thus propose the SuperTransformer, a supernet for performance approximation, which can judge the performance of a model without fully training it.", "The SuperTransformer is the largest model in the search space, with weight sharing (Pham et al., 2018; Liu et al., 2019; Cai et al., 2019a).", "Every model in the search space (a SubTransformer) is a part of the SuperTransformer.", "All SubTransformers share the weights of their common parts.", "For the elastic embedding dim, all SubTransformers share the front portion of the longest word embedding and the corresponding FC layer weights.", "As in Figure 5, for the elastic FFN hidden dim, the front part of the FC weights is shared.", "For the elastic head number in attention modules, the whole Q, K, V vectors (whose lengths are fixed in our design space) are shared, divided into as many parts as the number of heads.", "With elastic layer numbers, all SubTransformers share the first several layers.", "In SuperTransformer training, all possible SubTransformers are uniformly sampled, and the corresponding weights are updated.", "In practice, the SuperTransformer only needs to be trained for the same number of steps as a baseline Transformer model, which is fast and low-cost.", "After training, we can get a performance proxy for sampled models in the design space by evaluating the corresponding SubTransformers on the validation set, without training them.", "Given a latency requirement, we perform an evolutionary search to find a satisfactory SubTransformer.", "There are two ways to evaluate the hardware latency of a SubTransformer: (1) online measurement, in which we measure the models during the search process; (2) offline, where we train a latency predictor to provide the latency.", "We apply the offline method here because it is fast and accurate.", "For the online method, a single sampled SubTransformer requires hundreds of inferences to get an accurate latency, which lasts for minutes and slows down the search.", "For the offline method, we encode the architecture of a SubTransformer into a feature vector and predict its latency instantly with a multi-layer perceptron (MLP).", "Trained with thousands of real latency data points, the predictor yields high accuracy (Figure 6).", "Note that the predicted latency is only used in the search process; we report real measured latency in the experiment section.", "Compared with deducing a closed-form latency model for each hardware, the latency predictor method is more general and faster.", "We use an evolutionary algorithm to conduct the search process.
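Before turning to the search engine, the front-portion weight sharing described above can be sketched as slicing the SuperTransformer's largest weight matrices. A minimal PyTorch illustration for an FFN (not the released HAT code); slicing keeps autograd intact, so gradients from any sampled SubTransformer flow into the shared parameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# SuperTransformer FFN weights: largest embed dim 640, largest hidden dim 3072.
super_fc1 = nn.Linear(640, 3072)
super_fc2 = nn.Linear(3072, 640)

def sub_ffn_forward(x, embed_dim, hidden_dim):
    """FFN of a sampled SubTransformer: only the front portion of the shared
    SuperTransformer weights is used."""
    w1 = super_fc1.weight[:hidden_dim, :embed_dim]
    b1 = super_fc1.bias[:hidden_dim]
    w2 = super_fc2.weight[:embed_dim, :hidden_dim]
    b2 = super_fc2.bias[:embed_dim]
    return F.linear(F.relu(F.linear(x, w1, b1)), w2, b2)

x = torch.randn(2, 512)                 # a SubTransformer with embed dim 512
y = sub_ffn_forward(x, embed_dim=512, hidden_dim=1024)
print(y.shape)                          # torch.Size([2, 512])
```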
"As in Figure 3, the search engine queries the latency predictor for SubTransformer latency, and validates the loss on the validation set.", "The engine only adds SubTransformers with latency smaller than the hardware constraint to the population.", "We conduct experiments on four machine translation tasks: WMT'14 En-De, WMT'14 En-Fr, WMT'19 En-De, and IWSLT'14 De-En, consisting of 4.5M, 36.3M, 43.0M, and 160K pairs of training sentences, respectively.", "For WMT'14 En-De, we apply 32K source-target BPE vocabulary, train on WMT'16, validate on newstest2013 and test on newstest2014, replicating Wu et al. (2019b); For WMT'14 En-Fr, we use 40K source-target BPE vocabulary, validate on newstest2012&2013, and test on newstest2014, replicating Gehring et al. (2017).", "WMT'19 En-De adopts 49.6K source-target BPE vocabulary, validates on newstest2017, and tests on newstest2018, the same as Junczys-Dowmunt (2019).", "We use 10K joint BPE vocabulary in lower case for IWSLT'14 De-En (Grave et al., 2017).", "Baselines.", "Our baseline models are Transformer (Vaswani et al., 2017), Levenshtein Transformer (Gu et al., 2019), both with the Ott et al. (2019) implementation, Evolved Transformer (So et al., 2019) and Lite Transformer (Wu et al., 2020).", "Evaluation Metrics.", "For evaluation, we use beam four and length penalty 0.6 for WMT, and beam five for IWSLT (Vaswani et al., 2017).", "All BLEUs are calculated with case-sensitive tokeniza-tion 1 , but we also apply the compound splitting BLEU 2 for WMT, the same as Vaswani et al. (2017).", "We test the model with the lowest validation set loss for WMT and the last ten checkpoints averaged for IWSLT.", "We test the latency of the models by measuring translation from a source sentence to a target sentence with the same length.", "The length is the average output length on the test set 30 for WMT and 23 for IWSLT.", "For each model, we measure the latency for 300 times, remove the fastest and slowest 10% and then take the average of the rest 80%.", "We conduct experiments on three representative hardware platforms: Raspberry Pi-4 with an ARM Cortex-A72 CPU, Intel Xeon E5-2640 CPU, and Nvidia TITAN Xp GPU.", "SuperTransformer Setups.", "The SuperTransformer for WMT has the following design space: [512, 640] for embedding dim, [1024, 2048, 3072] for hidden dim, [4, 8] for the head number in all attention modules, [1, 2, 3, 4, 5, 6] for decoder layer number.", "Due to decoder auto-regression, encoder only accounts for less than 5% of the measured latency; thereby, we set the encoder layer number fixed as 6.", "For arbitrary encoder-decoder attention, each decoder can choose to attend to the last one, two, or three encoder layers.", "The SuperTransformer design space for IWSLT is the same as WMT except for [2048, 1024, 512] for hidden dim and [4, 2] for head number.", "We set the Q, K, V vector dim fixed as 512.", "The design space contains around 10 15 possible SubTransformers and covers a wide range of model size and latency (largest = 6 smallest).", "We train the SuperTransformers of WMT for 40K steps and 50K steps for IWSLT.", "Hardware-Aware Evolutionary Search Setups.", "The input of the latency predictor is a feature vector of SubTransformer architecture with ten elements: layer number, embed dim, average hidden dim, average self-attention heads, of both encoder and decoder; plus average encoder-decoder attention heads, and the average number of encoder layers each decoder layer attends.", "A dataset of 2000 (SubTransformer architecture, measured 
latency) samples is collected for each hardware platform and split into train:valid:test = 8:1:1.", "We normalize the features and latency, and train a three-layer MLP with 400 hidden dim and ReLU activation.", "We choose three layers because this is more accurate than a one-layer model, while more than three layers do not improve accuracy further.", "With the predictor, we conduct an evolutionary search for 30 iterations in the SuperTransformer, with a population of 125, a parent population of 25, a mutation population of 50 with 0.3 mutation probability, and a crossover population of 50 (see the search sketch after this section).", "Training Settings.", "Our training settings are in line with Wu et al. (2019b) and Wu et al. (2020).", "For WMT, we train for 40K steps with the Adam optimizer and a cosine learning rate (LR) scheduler (Kingma and Ba, 2015; Loshchilov and Hutter, 2017), where the LR is linearly warmed up from 10^-7 to 10^-3 and then cosine-annealed.", "For IWSLT, we train for 50K steps with an inverse square root LR scheduler.", "The baseline Transformers are trained with the same settings as the searched SubTransformers for fair comparisons.", "HAT consistently outperforms the baseline Transformers and achieves up to 3× faster inference and a 3.7× smaller size than Transformer-Big.", "Specific latency, BLEU, and SacreBLEU (Post, 2018) numbers are in Appendix Table 8.", "(Figure 8: inference latency and BLEU trade-offs of the WMT'19 and IWSLT'14 tasks on the Nvidia GPU, comparing HAT with layer number scaling and dimension scaling of the Transformer.)", "On the ARM CPU, HAT is 3× faster and 3.7× smaller than Transformer-Big with the same BLEU.", "On the Intel CPU, HAT achieves over a 2× speedup.", "On the Nvidia GPU, the blue dashed line is nearly vertical, indicating that dimension scaling can hardly reduce the latency.", "In this case, HAT can still find models with low latency and high performance.", "In Figures 7 and 8 and Appendix Table 8, we compare HAT with the Transformer baselines on four tasks.", "The embedding dims are 512 and 1024 for Transformer-Base and Transformer-Big, respectively.", "The hidden dims are 4× and 2× the embedding dim for WMT and IWSLT.", "The IWSLT models are smaller to prevent overfitting (Wu et al., 2019b).", "We obtain a series of baseline models with layer number scaling (yellow) and dimension scaling (blue).", "We set different latency constraints on the three hardware platforms to get a series of HAT models.", "HAT consistently outperforms the baselines by a large margin under different latency constraints.", "We further compare various aspects of HAT with the Transformer (Vaswani et al., 2017) and the Evolved Transformer (So et al., 2019) in Table 2.", "HAT achieves up to 1.6×, 3×, and 3.4× speedups with up to 1.4×, 3.7×, and 4× smaller sizes than the baselines.", "We report FLOPs for translating a 23-token sentence for IWSLT and a 30-token sentence for WMT.", "We show the overall GPU hours for training the SuperTransformer and the searched SubTransformer.", "We also calculate the cloud computing costs under different modes: preemptible is cheaper ($0.74/h) than on-demand ($2.48/h) (Strubell et al., 2019).", "HAT is highly affordable since its total GPU hours are over 12,000× fewer than the Evolved Transformer's, and even fewer than Transformer-Big's by virtue of the compact model size.", "In Table 3, we compare HAT with other recent models.", "We scale down all models to have BLEU scores similar to the Levenshtein Transformer for fair comparisons.", "We adopt the average iteration time of 2.88 for decoding (Gu et al., 2019), without limiting the length of 
the output sentence (12 tokens after decoding).", "(Table 2 columns: Hardware-Aware, Hetero. Layers, Latency, #Params, FLOPs (G), BLEU, GPU Hours, CO2e (lbs), Cloud Compute cost.)", "HAT runs 1.3× faster than the Transformer with higher BLEU, and 1.9× faster than the Levenshtein Transformer with a 0.7 higher BLEU.", "Under similar latency, HAT also outperforms the Lite Transformer.", "These results demonstrate HAT's effectiveness in low-latency scenarios.", "Our framework can also be adapted to speed up those models.", "Design Insights.", "For all HAT WMT models in Figure 7, 10% of all decoder layers attend to three encoder layers, and 40% attend to two encoder layers.", "(Table 4: the searched HAT compared with the largest SubTransformer in the design space. WMT'14 En-De: Largest 10.1s latency, 71M params, 28.1 BLEU; Searched HAT 6.9s, 48M, 28.4. WMT'14 En-Fr: Largest 10.1s, 71M, 41.4; Searched HAT 9.1s, 57M, 41.8.)", "That demonstrates the necessity of arbitrary encoder-decoder attention.", "In Appendix Figure 12, we visualize the models specialized for the different hardware mentioned in Table 1.", "We find that the GPU model is wide but shallow, while the Raspberry Pi model is deep but thin.", "The phenomenon echoes our latency profiling (Figure 2): GPU latency is insensitive to the embedding and hidden dims, but Raspberry Pi latency is highly sensitive.", "This guides manual design: on GPU, we can reduce the layer number and increase the dimensions to reduce latency while keeping high performance.", "Ablation Study.", "HAT achieves higher BLEU with 1.5× lower latency and a 1.5× smaller size compared with the largest SubTransformer (Table 4).", "This suggests that larger models do not always provide better performance, and demonstrates the effectiveness of HAT.", "We also compare the evolutionary search with random search (Figure 9).", "Evolutionary search can find models with lower losses than random search.", "SubTransformer Performance Proxy.", "All SubTransformers inside the SuperTransformer are uniformly sampled and thus equally trained, so the performance order is well preserved during training.", "We conduct experiments to show the effectiveness of the SubTransformer performance proxy, as in Table 5 and Appendix Figure 11.", 
"The BLEU scores of SubTransformers with inherited weights and with weights trained from scratch are very close.", "More importantly, they also have the same relative performance order.", "Therefore, we can rely on the proxy to search for high-performance model architectures, significantly reducing the search cost.", "Low Search Cost.", "As shown in Table 2 and Figure 10, the search cost of HAT is 12,041× lower than that of the Evolved Transformer.", "Although both use evolutionary search, the key difference is that the Evolved Transformer needs to train all individual models and sort their final performance to pick the top ones; on the contrary, HAT trains all models together inside the SuperTransformer and sorts their performance proxies to pick the top ones.", "The superior performance of HAT proves that the performance proxy is accurate enough to find good models.", "In the experiments above, the searched SubTransformers are trained from scratch in order to conduct fair comparisons with the baselines.", "In practice, we can also directly finetune the SubTransformers with the weights inherited from the SuperTransformer to further reduce the training cost.", "With 10K finetuning steps (1/4 of from-scratch training), the inherited SubTransformers can achieve similar or better performance than the from-scratch ones (Table 6).", "In this way, the training cost for a model under a new hardware constraint can be further reduced by 4×, since the SuperTransformer training cost is amortized among all searched models.", "Quantization Friendly.", "HAT is orthogonal to other model compression techniques such as quantization.", "We apply K-means quantization to HAT and further reduce the model size (see the sketch at the end of this section).", "We initialize the centroids uniformly in the range [min, max] of each weight matrix and run at most 300 iterations for each of them.", "Even without any finetuning, 4-bit quantization can reduce the model size by 25× with negligible BLEU loss compared to the Transformer-Big baseline (Table 7).", "Interestingly, the 8-bit model even has a 0.1 higher BLEU than the full-precision model, indicating the robustness of the searched HAT.", "Compared with the Transformer-Base 4-bit quantization baseline, which has a 24MB model size and a 38.9 BLEU score, HAT has a 2.2 higher BLEU with a similar model size.", "Knowledge Distillation Friendly.", "HAT is also orthogonal to knowledge distillation (KD), because HAT focuses on searching for an efficient architecture while KD focuses on better training a given architecture.", "We combine KD with HAT by distilling token-level knowledge (top-5 soft labels) from a high-performance SubTransformer to a low-performance SubTransformer on the WMT'14 En-De task.", "The teacher model has a BLEU of 28.5 and 49M parameters; the student model has 30M parameters.", "KD can improve the BLEU of the student model from 25.8 to 26.1.", "Transformer.", "The Transformer (Vaswani et al., 2017) has prevailed in sequence modeling (Ng et al., 2019; Junczys-Dowmunt, 2018).", "By stacking identical blocks, the model obtains a large capacity but incurs high latency.", "Recently, a research trend has been to modify the Transformer to improve its performance (Chen et al., 2018; Wu et al., 2019b; Sukhbaatar et al., 2019; Wang et al., 2019).", "Among them, Wu et al. (2019b) introduced a convolution-based module to replace the attention; Wang et al. (2019) proposed to train deep Transformers by propagating multiple layers together in the encoder.", "Zhang et al. (2018) and Kim et al. 
(2019) also proposed AAN and SSRU, respectively, to replace the attention mechanism.", "HAT is orthogonal to them and can be combined with them to search for efficient architectures with those new modules.", "Another trend is to apply non- or partially-autoregressive models to cut down the number of decoding iterations (Gu et al., 2019; Akoury et al., 2019; Wei et al., 2019; Gu et al., 2018).", "Although they reduce latency, they sometimes suffer from low performance.", "Bapna et al. (2018) explored using learned linear combinations of encoder outputs as decoder inputs, while HAT concatenates the outputs without linear combinations, thus better preserving the low-level information.", "Wu et al. (2020) investigated mobile settings for NLP tasks and proposed a multi-branch Lite Transformer.", "However, it relied on FLOPs for efficient model design, which is an inaccurate proxy for hardware latency (Figure 2).", "There are also works (Kim and Rush, 2016; Junczys-Dowmunt et al., 2018; Kim et al., 2019; Yan et al., 2020) using Knowledge Distillation (KD) to obtain small student models.", "Our method is orthogonal to KD and can be combined with it to further improve efficiency.", "There are also hardware accelerators (Ham et al., 2020; Zhang et al., 2020) for the attention and fully-connected layers in the Transformer to achieve efficient processing.", "Neural Architecture Search.", "In the computer vision community, there has been increasing interest in automating efficient model design with Neural Architecture Search (NAS) (Zoph and Le, 2017; Zoph et al., 2018; Pham et al., 2018; He et al., 2018).", "Some works applied black-box optimization such as evolutionary search (Wang et al., 2020b) and reinforcement learning (Cai et al., 2019b; He et al., 2018; Wang et al., 2018, 2020a; Mao et al., 2019); some leveraged backpropagation with differentiable architecture search (Liu et al., 2019).", "Some also incorporated hardware constraints into the optimization, such as MNasNet (Tan et al., 2019), ProxylessNAS (Cai et al., 2019b), FBNet (Wu et al., 2019a), and APQ (Wang et al., 2020b).", "To reduce the NAS cost, supernet-based methods (Pham et al., 2018; Bender et al., 2018; Guo et al., 2019) apply a proxy for sub-network performance and adopt search algorithms to find good sub-networks.", "For NLP tasks, the benefits of architecture search have not been fully investigated.", "Recently, So et al. 
(2019) proposed the Evolved Transformer to search for architectures under model size constraints and surpassed the original Transformer baselines.", "However, it suffered from a very high search cost (over 250 GPU years), making it unaffordable to search specialized models for various hardware and tasks.", "In addition, hardware latency feedback was not taken into account for better case-by-case specialization.", "Since different hardware has distinct architectures and features (Cong et al., 2018), feedback from the hardware is critical for efficient NLP.", "We propose the Hardware-Aware Transformers (HAT) framework to solve the challenge of efficiently deploying Transformer models on various hardware platforms.", "We conduct hardware-aware neural architecture search in an ample design space with an efficient weight-shared SuperTransformer, consuming four orders of magnitude less cost than the prior Evolved Transformer, and discover high-performance low-latency models.", "We hope HAT can open up an avenue towards efficient Transformer deployments for real-world applications.", "We thank NSF Career Award #1943349, MIT-IBM Watson AI Lab, Semiconductor Research Corporation (SRC), Intel, and Facebook for supporting this research." ]
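The K-means weight quantization referenced above is simple enough to sketch. Below is a hypothetical NumPy implementation matching the described recipe (centroids initialized uniformly in [min, max] of each weight matrix, at most 300 iterations); the function name and toy matrix are illustrative, not the authors' code.

```python
# Hypothetical sketch of per-matrix K-means weight quantization:
# uniform centroid init in [min, max], at most 300 iterations, no finetuning.
import numpy as np

def kmeans_quantize(weight: np.ndarray, bits: int = 4, max_iter: int = 300):
    flat = weight.ravel()
    k = 2 ** bits
    centroids = np.linspace(flat.min(), flat.max(), k)  # uniform init
    for _ in range(max_iter):
        # Assign each weight to its nearest centroid, then recompute means.
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        new = np.array([flat[assign == j].mean() if np.any(assign == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    # The model then stores `bits`-bit codes plus a tiny per-matrix codebook.
    return centroids[assign].reshape(weight.shape), assign.reshape(weight.shape)

weight = np.random.randn(64, 64).astype(np.float32)
dequantized, codes = kmeans_quantize(weight, bits=4)
```

Replacing 32-bit floats with 4-bit codes cuts per-matrix storage roughly 8×; combined with HAT's already compact architecture, this is consistent with the reported 25× overall size reduction versus Transformer-Big.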
[ "abstain", "objective", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "other" ]
[ "Translation-based methods for grammar correction that directly map noisy, ungrammatical text to their clean counterparts are able to correct a broad range of errors; however, such techniques are bottlenecked by the need for a large parallel corpus of noisy and clean sentence pairs.", "In this paper, we consider synthesizing parallel data by noising a clean monolingual corpus.", "While most previous approaches introduce perturbations using features computed from local context windows, we instead develop error generation processes using a neural sequence transduction model trained to translate clean examples to their noisy counterparts.", "Given a corpus of clean examples, we propose beam search noising procedures to synthesize additional noisy examples that human evaluators were nearly unable to discriminate from nonsynthesized examples.", "Surprisingly, when trained on additional data synthesized using our best-performing noising scheme, our model approaches the same performance as when trained on additional nonsynthesized data.", "Correcting noisy, ungrammatical text remains a challenging task in natural language processing.", "Ideally, given some piece of writing, an error correction system would be able to fix minor typographical errors, as well as grammatical errors that involve longer dependencies such as nonidiomatic phrasing or errors in subject-verb agreement.", "Existing methods, however, are often only able to correct highly local errors, such as spelling errors or errors involving articles or prepositions.", "Classifier-based approaches to error correction are limited in their ability to capture a broad range of error types (Ng et al., 2014).", "Machine translation-based approachesthat instead trans-Noise (Decode) Denoise \"New Orleans\" \"NLP\" \"new Orleens\"\"nlp\"", "late noisy, ungrammatical sentences to clean, corrected sentencescan flexibly handle a large variety of errors; however, such approaches are bottlenecked by the need for a large dataset of source-target sentence pairs.", "To address this data sparsity problem, we propose methods for synthesizing noisy sentences from clean sentences, thus generating an additional artificial dataset of noisy and clean sentence pairs.", "A simple approach to noise clean text is to noise individual tokens or bigrams, for example by replacing each token with a random draw from the unigram distribution.", "This type of approach, however, tends to generate highly unrealistic noise and fails to capture phrase-level phenomena.", "Other rule-based approaches fail to capture a diverse set of error types.", "We consider a method inspired by the backtranslation procedure for machine translation (Sennrich et al., 2015).", "Our method combines a neural sequence transduction trained on a seed corpus of clean noisy pairs with beam search 619 noising procedures to produce more diversity in the decoded outputs.", "This technique addresses two issues with existing synthesis techniques for grammar correction:", "1. By using a neural model trained end-to-end on a large corpus of noisy and clean sentences, the model is able to generate rich, diverse errors that better capture the noise distribution of real data.", "2. 
By encouraging diversity through applying noise to hypotheses during decoding, we avoid what we refer to as the one-to-many problem, where decoding from a model trained on clean-to-noisy examples results in overly clean output, since clean subphrases still form the majority of noisy examples.", "We perform experiments using several noising methods to validate these two claims, yielding gains on two benchmarks.", "Our main empirical result is that, starting with only clean news data and models trained on a parallel corpus of roughly 1.3 million sentences, we can train models with additional synthesized data that nearly match the performance of models trained on 3 million nonsynthesized examples.", "Noising.", "While for images there are natural noising primitives such as rotations, small translational shifts, and additive Gaussian noise, similar primitives are not as well developed for text data.", "Similarly, while denoising autoencoders for images have been shown to help with representation learning (Vincent et al., 2010), similar methods for learning representations are not well developed for text.", "Some recent work has proposed noising, in the form of dropping or replacing individual tokens, as a regularizer when training sequence models, where it has been demonstrated to have a smoothing effect on the softmax output distribution (Bowman et al., 2015; Xie et al., 2017; Dai and Le, 2015; Kumar et al., 2015).", "Grammar correction.", "Recent work by Chollampatt and Ng (2018) has achieved impressive performance on the benchmarks we consider using convolutional encoder-decoder models.", "Previous work using data synthesis for grammatical error correction (GEC) has introduced errors by examining the distribution of error types, then applying errors according to those distributions together with lexical or part-of-speech features based on a small context window (Brockett et al., 2006; Felice, 2016).", "While these methods can introduce many possible edits, they are not as flexible as our approach inspired by the backtranslation procedure for machine translation (Sennrich et al., 2015).", "This is important as neural language models not explicitly trained to track long-range linguistic dependencies can fail to capture even simple noun-verb errors (Linzen et al., 2016).", "Recently, in the work perhaps most similar to ours, Rei et al. 
(2017) propose using statistical machine translation and backtranslation along with syntactic patterns for generating errors, albeit for the error detection task.", "Neural machine translation.", "Recent end-to-end neural network-based approaches to machine translation have demonstrated strong empirical results (Sutskever et al., 2014; Cho et al., 2014).", "Building on these strong results for machine translation, we use neural encoder-decoder models with attention (Bahdanau et al., 2014) for both our data synthesis (noising) and grammar correction (denoising) models.", "Although many recent works on NMT have focused on improving the neural network architecture, the model architecture is orthogonal to the contributions in this work, where we instead focus on data synthesis.", "In parallel to our work, work on machine translation without parallel corpora has also explored applying noise, by swapping adjacent words, to avoid copying when pretraining autoencoders (Lample et al., 2017; Artetxe et al., 2017).", "Diverse decoding.", "Key to the data generation procedure we describe is adding noise to the scores of hypotheses during beam search; otherwise, decoded outputs tend to contain too few errors.", "This is inspired by work in dialogue, in which neural network models tend to produce common, overly generic responses such as \"I don't know\" (Sordoni et al., 2015; Serban et al., 2015).", "To mitigate this issue, Li et al. (2015) and others have proposed methods to increase the diversity of neural network outputs.", "We adopt an approach similar to Li et al. (2015) to generate noisier hypotheses during decoding.", "We first briefly describe the neural model we use, then detail the noising schemes we apply when synthesizing examples.", "In order to generate noisy examples, as well as to translate ungrammatical examples to their corrected counterparts, we need to choose a sequence transduction model.", "Based on their strong empirical performance, we use neural network-based models for this work.", "Our method uses two neural encoder-decoder models:", "1. The first is the noising model, which, given a clean sentence, is used to generate a noised version of that sentence.", "This model is trained on a seed corpus of parallel clean-noisy sentence pairs.", "2. The second is the denoising model, which, given a noisy, ungrammatical sentence, generates the clean, corrected sentence.", "For both models, we use the same convolutional encoder-decoder to model $p(Y \mid X) = \prod_{t=1}^{T_Y} p(y_t \mid X, y_{1:t-1}; \theta)$, where $X = (x_1, x_2, \ldots, x_{T_X})$ is the source sequence and $Y = (y_1, y_2, \ldots, y_{T_Y})$ the corresponding target sequence, and we minimize the training loss $\ell(\theta) = -\sum_{t=1}^{T_Y} \log p(y_t \mid X, y_{1:t-1}; \theta)$, thus maximizing the log-likelihood.", "The model architecture we use is similar to that described by Kalchbrenner et al. (2016) and Gehring et al. 
(2017).", "Gated convolutions are applied with maskingto avoid peeking at future inputs when training using teacher forcingsuch that they form an autoregressive network similar to a recurrent neural network with gated hidden units.", "This architecture was selected so that training steps could be parallelized across the time dimension through the use of convolutions.", "However, we emphasize that the architecture is not a focus of this paper, and we would expect that RNN architectures with LSTM cells would achieve similar results.", "For simplicity and to avoid handling out-of-vocabulary words, we use character-level tok-enization.", "Figure 2 illustrates the model architecture.", "The amount of parallel data is often the limiting factor in the performance of neural network systems.", "In order to obtain more parallel examples for the grammar correction task, we take clean text Y and apply noise, yielding noisy text Y , then train a denoising model to map from Y back to Y .", "The noising process used to generate Y greatly affects final performance.", "First, we consider noising methods which we use as our baselines, as well as the drawbacks for each method.", "appending clean examples : We first consider simply appending clean examples with no noise applied to both the source and the target.", "The aim is for the decoder to learn a better language model when trained on additional clean text, similar to the motivation described in Dai and Le (2015).", "However, for the models we consider, the attention mechanism allows copying of source to target.", "Thus the addition of examples where source and target are identical data may also cause the model to become too conservative with edits and thus reduce the recall of the system.", "First, for every character in each word we sample deletions, followed by transpositions.", "Then we sample deletions and transpositions for every word in the sentence.", "Deletion and transposition probabilities were selected such 621 How What ar e i s i s ar e Expansi on St ep Sel ect i on St ep + + = = ar e i s i s ar e you he up you + + = = you he up you i s i s you he scor es penal t y Figure 3: Illustration of random noising with beam width", "that overall character and word-level edit distances roughly matched the edit distances between clean and noisy examples in our parallel seed corpus.", "While this method is fast to apply, it tends to produce highly unrealistic errors leading to a mismatch between the synthesized and real parallel data.", "reverse noising : For reverse noising, we simply train a reverse model from Y X using our parallel noisy-clean corpus and run standard beam search to generate noisy targets Y from clean inputs Y .", "However, we find vanilla reverse noising tends to be too conservative.", "This is due to the one-to-many problem where a clean sentence has many possible noisy outputs which mostly consist of clean phrases.", "The output then contains far fewer errors on average than the original noisy text.", "To address the drawback of the reverse noising scheme, we draw inspiration from ideas for increasing diversity of outputs in dialogue (Li et al., 2016).", "During the beam search procedure, we add noise to the scores of hypotheses on the beam to encourage decoding to stray from the greedy output.", "Recall that during beam search, we iteratively grow a set of hypotheses H = { h 1 , h 2 , . . . 
only keeping the top hypotheses after each step of decoding according to some scoring function $s(h)$.", "Extending the reverse noising scheme, the beam search noising schemes we consider are: rank penalty noising: We directly apply the method of Li et al. (2016).", "At every step of the search procedure, siblings from the same parent are penalized by adding $k\beta_{\mathrm{rank}}$ to their scores, where $k$ is their rank (in descending log-likelihood) amongst their siblings and $\beta_{\mathrm{rank}}$ is a penalty hyperparameter corresponding to some log-probability.", "top penalty noising: Only the top (most probable) hypothesis $h_{\mathrm{top}}$ of the beam is penalized by adding $\beta_{\mathrm{top}}$ to its score $s(h_{\mathrm{top}})$.", "random noising: Every hypothesis is penalized by adding $r\beta_{\mathrm{random}}$ to its score, where $r$ is drawn uniformly from the interval $[0, 1]$.", "For a sufficiently large $\beta_{\mathrm{random}}$, this leads to a random shuffling of the ranks of the hypotheses according to their scores.", "An illustration of the random noising algorithm is shown in Figure 3.", "Note that although rank penalty noising should encourage hypotheses whose parents have similar scores to remain on the beam, it can also tend to leave the hypothesis from greedy decoding on the beam in the case where softmax output distributions are highly peaked.", "This is much more of an issue for tasks that involve significant copying of source to target, such as grammar correction.", "Note also that random noising can yield more diverse outputs than top penalty noising, depending on the probability with which each is applied.", "All of the beam search noising methods described are intended to increase the diversity and the amount of noise in the synthesized outputs $\tilde{Y}$.", "By performing beam search noising, we can produce errors such as those shown in Table 4.", 
"3.3 Denoising.", "Once noised data has been generated, denoising simply involves using a neural sequence transduction model to backtranslate the noised text to the original clean text.", "For denoising, during decoding we apply length normalization as well as a coverage penalty to the scoring function $s(h)$ (Wu et al., 2016).", "The final scoring function also incorporates a 5-gram language model trained on a subset of Common Crawl, estimated with Kneser-Ney smoothing using KenLM (Heafield, 2011).", "We incorporate the language model during final reranking by modifying the score of a completed hypothesis $s(h)$ to be $s_{\mathrm{LM}}(h) = s(h) + \lambda \log p_{\mathrm{LM}}(h)$, where $\lambda$ is a hyperparameter and $p_{\mathrm{LM}}(h)$ is given by the language model.", "To determine the effectiveness of the described noising schemes, we synthesize additional data using each and evaluate the performance of models trained using the additional data on two benchmarks.", "Datasets.", "For training our sequence transduction models, we combine the publicly available English Lang-8 dataset, a parallel corpus collected from a language learner forum, with training data from the CoNLL 2014 challenge (Mizumoto et al., 2011; Ng et al., 2014).", "We refer to this as the base dataset.", "Junczys-Dowmunt and Grundkiewicz (2016) additionally scraped 3.3M pairs of sentences from Lang-8.", "Although this larger dataset, which we call the expanded dataset, is not typically used when comparing performance on grammar correction benchmarks, we use it to compare performance when training on additional synthesized data versus nonsynthesized data.", "For clean text to be noised, we use the LDC New York Times corpus for 2007, which yields roughly 1 million sentences.", "A summary of the data used for training is given in Table 1.", "We use the CoNLL 2013 evaluation set as our development set in all cases (Ng et al., 2013).", "Our test sets are the CoNLL 2014 evaluation set and the JFLEG test set (Ng et al., 2014; Napoles et al., 2017).", "Because CoNLL 2013 has only a single set of gold annotations while CoNLL 2014 has two, performance metrics tend to be significantly higher on CoNLL 2014.", "We report precision, recall, and $F_{0.5}$ score, which is standard for the task, as precision is valued over recall.", "On JFLEG, we report results with the GLEU metric (similar to BLEU) developed for the dataset.", "Training and decoding details.", "All models are trained using stochastic gradient descent with annealing based on validation perplexity on a small held-out subset of the Lang-8 corpus.", "We apply both dropout and weight decay regularization.", "We observed that performance tended to saturate after 30 epochs.", "Decoding is done with a beam size of 8; in early experiments, we did not observe significant gains with larger beam sizes (Koehn and Knowles, 2017).", "Results for the CoNLL 2013 (dev) and 2014 (test) datasets, both with and without language model reranking, are given in Table 2.", 
"In general, adding noised data helps, while simply adding clean data leads the model to be too conservative.", "Overall, we find that the random noising scheme yields the most significant gain, of 4.5 $F_{0.5}$-score.", "Surprisingly, we find that augmenting the base dataset with synthesized data generated with random noising yields nearly the same performance as using only nonsynthesized examples.", "To determine whether this might be due to overfitting, we reduced the dropout rate when training on the expanded dataset, but did not observe better results.", "The random noising scheme achieves the best performance, while the top noising scheme matches the best performance on the development set but not the test set.", "We believe this is due to a mismatch between the CoNLL 2013 dev and 2014 test sets.", "Since the 2013 dev set has only a single annotator, methods are encouraged to target higher recall, such that the top noising scheme was optimized for precision over recall.", "To check this, we ran decoding on CoNLL 2014 using the best dev settings with no language model, and found that the top noising scheme yielded an $F_{0.5}$-score of 45.2, behind only random (47.1) and ahead of token (42.0) and reverse (43.9) noising.", "Overall, we find the data synthesis method we describe yields large gains in recall.", "For completeness, we also compare to other state-of-the-art systems, such as the phrase-based machine translation system of Junczys-Dowmunt and Grundkiewicz (2016), who performed parameter tuning with sparse and dense features by cross-validation on the CoNLL 2014 training set.", "Chollampatt and Ng (2018) achieve even higher state-of-the-art results using the neural machine translation model of Gehring et al. (2017) along with improvements to the reranking procedure.", "Recently, Napoles et al. 
(2017) introduced the JFLEG dataset, intended to evaluate the fluency of grammar correction systems rather than simply the precision and recall of edits.", "The proposed evaluation metric is GLEU, a variant of BLEU score.", "Most results for this task were reported with hyperparameter settings from the CoNLL task; hence we report results with the best settings on our CoNLL 2013 dev set.", "Results are shown in Table 3 (comparisons taken from https://github.com/keisks/jfleg).", "Token noising performs surprisingly well; we suspect this is because a significant portion of errors in the JFLEG dataset are spelling errors, as demonstrated by the strong gains in performance from using a spelling checker reported by Chollampatt and Ng (2018).", "Our experiments illustrate that synthesized parallel data can yield large gains on the grammar correction task.", "However, what factors make for an effective data synthesis technique?", "We consider the properties of the noising scheme and the corresponding data that lead to better performance.", "First, we manually compare each of the different noising methods to evaluate how realistic the introduced errors are.", "This is reminiscent of the generative adversarial network setting (Goodfellow et al., 2014), where the generator seeks to produce samples that fool the discriminator.", "Here the discriminator is a human evaluator who, given the clean sentence $Y$, tries to determine which of two sentences $X$ and $\tilde{Y}$ is the true noisy sentence, and which is the synthesized sentence.", "To be clear, we do not train with a discriminator; the beam search noising procedures we proposed are alone intended to yield convincing errors.", "For each noising scheme, we took 100 $(X, Y)$ pairs from the development set (500 randomly chosen pairs combined), then generated $\tilde{Y}$ from $Y$.", "We then shuffled the examples and the order of $X$ and $\tilde{Y}$ such that the identity of $X$ and $\tilde{Y}$, as well as the noising scheme used to generate $\tilde{Y}$, were unknown (hence the human labelers cannot favor a particular scheme unless it can be distinguished from $Y$).", "Given $Y$, the task for human evaluators is to predict whether $X$ or $\tilde{Y}$ was the synthesized example.", "For every example, we had two separate evaluators label the sentence they thought was synthesized.", "We chose to do this labeling task ourselves (blind to system) since we were familiar with the noising schemes used to generate the examples, which should reduce the number of misclassifications.", "Results are shown in Figure 4, and examples of the evaluation task are provided in Table 4.", 
"5.2 Noise Frequency and Diversity.", "Comparing the performance of the different noising methods on the CoNLL 2014 dataset to the human evaluation in the previous section, we see that generating errors which match the real distribution tends to result in higher performance, as seen from the poor performance of token noising relative to the other methods.", "Injecting the appropriate amount of noise is important as well, as seen from the improved performance when using beam search noising to increase the diversity of outputs, and the absence of any performance gain when simply adding clean text.", "We observe that token noising, despite matching the frequency of errors, fails to generate realistic errors (Figure 4).", "On the other hand, reverse noising yields significantly more convincing errors, but the edit distance between synthesized examples is significantly lower than in real data (Figure 5).", "A combination of sufficient amounts of noise and rich, diverse errors appears to lead to better model performance.", "Mismatches in the distribution of error types can often severely impact the performance of data synthesis techniques for grammar correction (Felice, 2016).", "For example, only synthesizing article or preposition errors based on rules may improve the performance for those two error types, but may hurt overall performance.", "In contrast, the approaches we consider, with the exception of token noising, are fully data-driven, and hence we would expect gains across all the different error types.", "We observe this is the case for random noising, as shown in Figure 6.", "Domain adaptation can yield significant differences in performance for dissimilar domains (such as those of the datasets used in our experiments) (Daume III, 2009).", "The Lang-8, CoNLL, and JFLEG datasets contain online forum data and essay data from English learners.", "The $n$-gram language model is estimated using Common Crawl data from the web.", "The clean data which we noise is collected from a news corpus.", "Yet each dataset yields significant gains.", "This suggests that at current levels of system performance, data sparsity remains the key data issue, more so than domain adaptation.", "It is also possible that the LDC New York Times data is better matched to the CoNLL essay data than the Lang-8 forum data, and that this in part accounts for the large gains we observe from training on synthesized data.", "In this work, we address one of the key issues in developing translation-based grammar correction systems: the need for a large corpus of parallel data.", "We propose synthesizing parallel data by noising clean text, where instead of applying noise based on finite context windows, we train a reverse model and apply noise during the beam search procedure to synthesize noisy examples that human evaluators were nearly unable to distinguish from real examples.", "Our experiments suggest that the proposed data synthesis technique can yield almost as strong results as training with additional nonsynthesized data.", "Hence, we hope that parallel data becomes less of a bottleneck, and more emphasis can be placed on developing better models that can capture the longer dependencies and structure in the text.", "We thank the anonymous reviewers for their helpful feedback, as well as Steven Tan for comments on an early draft." ]
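The token-level noising baseline referenced above is easy to sketch. Below is a hypothetical implementation; the probability values are illustrative placeholders, whereas the paper tunes them so that character- and word-level edit distances match those of the parallel seed corpus.

```python
# Hypothetical sketch of token noising: per-character deletions and
# transpositions, followed by per-word deletions and transpositions.
import random

def noise_word(word, p_del=0.03, p_swap=0.03):
    chars = [c for c in word if random.random() > p_del]  # char deletions
    for i in range(len(chars) - 1):                       # char transpositions
        if random.random() < p_swap:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def token_noise(sentence, p_del=0.02, p_swap=0.02):
    words = [noise_word(w) for w in sentence.split()]
    words = [w for w in words if w and random.random() > p_del]  # word deletions
    for i in range(len(words) - 1):                              # word transpositions
        if random.random() < p_swap:
            words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

print(token_noise("Day after day , I get up at 8 o'clock ."))
```

As the human evaluation suggests, noise generated this way matches real error frequencies but not their character, which is why it underperforms the learned noising schemes on CoNLL.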
[ "abstain", "method", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "result", "method", "result", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "abstain", "objective", "abstain", "method", "other", "other", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "result", "objective", "objective", "objective", "objective", "other" ]
[ "We present InferWiki, a Knowledge Graph Completion (KGC) dataset that improves upon existing benchmarks in inferential ability, assumptions, and patterns.", "First, each testing sample is predictable with supportive data in the training set.", "To ensure it, we propose to utilize rule-guided train/test generation, instead of conventional random split.", "Second, InferWiki initiates the evaluation following the open-world assumption and improves the inferential difficulty of the closed-world assumption, by providing manually annotated negative and unknown triples.", "Third, we include various inference patterns (e.g., reasoning path length and types) for comprehensive evaluation.", "In experiments, we curate two settings of InferWiki varying in sizes and structures, and apply the construction process on CoDEx as comparative datasets.", "The results and empirical analyses demonstrate the necessity and high-quality of InferWiki.", "Nevertheless, the performance gap among various inferential assumptions and patterns presents the difficulty and inspires future research direction.", "Our datasets can be found in https://github.", "com/TaoMiner/inferwiki .", "Knowledge Graph Completion (KGC) aims to predict missing links in KG by inferring new knowledge from existing ones.", "Attributed to its reasoning ability, KGC models are crucial in alleviating the KG's incompleteness issue and benefiting many downstream applications, such as recommendation (Cao et al., 2019b) and information extraction (Hu et al., 2021; Cao et al., 2020a).", "However, the KGC performance on existing benchmarks are still unsatisfactory 0.51 Hit Ratio@1 and 187 Mean Rank of the top-ranked model (Wang et al., 2019) on the widely used FB15k237 (Toutanova and Chen, 2015).", "Do we have a slow progress of Head Predicate Tail Test David location ?", "models (Akrami et al., 2020)?", "Or should we blame for the low-quality of benchmarks?", "In this paper, we re-think the task of KGC and construct a new benchmark dubbed InferWiki that highlights three fundamental objectives: Test triples should be inferential : this is the essential requirement of KGC.", "Each test triple should have supportive samples in the train set.", "However, we observe two major issues of current KGC datasets: unpredictable and meaningless test triples, which may hinder evaluating and advancing state-of-the-arts.", "As shown in Table 1, the first example of inferring the location for David (i.e., Florida) is even impossible for humans not to mention machines merely based on his birthplace and nationality (i.e., Atlanta and USA).", "In contrast, the second one is predictable but meaningless to find the missing month from a list of months within a year.", "The above cases are very common in existing datasets, e.g., YAGO3-10 (Dettmers et al., 2018) and CoDEx (Safavi and Koutra, 2020), mainly due to their construction process: first collecting a high-frequency subset of entities and then randomly splitting their triples into train/test.", "In this setting, KGC models may be overor under-estimated, as we are even unsure if a human can perform better.", "Test triples may be inferred positive, negative, or unknown .", "Following open-world assumption: what is not observed in KG is not necessar-FB15k237 WN18RR YAGO3-10 CoDEx-m Kinship Country InferWiki16k/64k Source FreeBase WordNet YAGO Wikidata Artificial Wikidata Inferential (cid:55) (cid:55) (cid:55) (cid:55) (cid:51) (cid:51) (cid:51) (cid:51) #Entity 14,541 40,943 123,182 17,050 104 272 16,288 64,718 
"However, existing benchmarks generate unseen triples as negatives (i.e., the closed-world assumption), because KGs contain only positive triples.", "They usually randomly corrupt the head or tail entity in a triple, sometimes with type constraints (Li et al., 2019a).", "This leads to trivial evaluation (almost 100% accuracy in triple classification (Safavi and Koutra, 2020)).", "Besides, the lack of unknown test triples ignores a critical inference capacity and may cause false negative errors in knowledge-driven tasks (Kotnis and Nastase, 2017).", "Inference has various patterns.", "Concentrating on limited patterns in evaluation may bring in severe bias.", "The domain-specific datasets Kinship (Kemp et al., 2006) and Country (Bouchard et al., 2015) only focus on a few relations and are nearly solved (Das et al., 2017).", "The general-domain WN18RR (Dettmers et al., 2018) contains prevalent symmetry relation types, which incorrectly boosts the performance of RotatE (Abboud et al., 2020).", "Clearly, limited patterns lead to unfair comparisons among KGC models.", "To this end, we curate an inferential KGC dataset extracted from Wikidata and establish the benchmark with two settings varying in size and structure: InferWiki64k and InferWiki16k.", "Instead of a random split, we mine rules via AnyBURL (Meilicke et al., 2019) to guide train/test generation (a sketch of this split follows below).", "All test triples are thus guaranteed to be inferential from the training data.", "To avoid rule leakage, we utilize two sets of triples: a large set for high-quality rule extraction and a small set for the train/test split.", "Moreover, we infer unseen triples and manually annotate them with positive, negative, and unknown labels to improve the difficulty of evaluation following both the closed-world and open-world assumptions.", "For inference patterns, we include and balance triples with different reasoning path lengths, relation types, and patterns (e.g., symmetry and composition).", "Our contributions can be summarized as follows: we summarize three principles of KGC (inferential ability, assumptions, and patterns) and construct a rule-guided dataset.", "We highlight the importance of negatives and unknowns, and initiate open-world evaluation.", "We conduct extensive experiments to establish the benchmark.", "The results and deep analyses verify the necessity and challenge of InferWiki, providing insights for future research.", "We can roughly classify current KGC datasets into two groups: inferential and non-inferential datasets.", "The first group is usually manually curated to ensure that each testing sample can be inferred from the training data through reasoning paths, but these datasets only focus on specific relations, such as Families (Garcia-Duran et al., 2015), Kinship (Kemp et al., 2006), and Country (Bouchard et al., 2015).", "Their limited scale and inference patterns make them insufficiently challenging.", "HolE (Nickel et al., 2016) achieves 99.7% AUC-PR on the Country dataset.", "The second group of datasets is automatically derived from public KGs, with positive triples randomly split into train/test, leading to a risk of testing samples being non-inferential from the training data.", 
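As a preview of the rule-guided generation detailed in the following sections, here is a minimal sketch of grounding a two-atom Horn rule to split triples into premises (train) and conclusions (test); the function names and toy data are illustrative, and the real pipeline additionally extends paths and filters by rule confidence.

```python
# Hypothetical sketch of rule-guided train/test generation: ground a Horn
# rule like spouse(x, y) AND father(x, z) -> mother(y, z) on a triple set,
# putting premise triples in train and conclusion triples in test.
from itertools import product

def ground_rule(rule, triples):
    """rule: ((r1, r2), r_conclusion) over variables (x, y) and (x, z)."""
    (r1, r2), rc = rule
    by_rel = {}
    for h, r, t in triples:
        by_rel.setdefault(r, []).append((h, t))
    train, test = set(), set()
    for (x1, y), (x2, z) in product(by_rel.get(r1, []), by_rel.get(r2, [])):
        if x1 == x2:  # the shared variable x is satisfied
            train.add((x1, r1, y))
            train.add((x1, r2, z))
            test.add((y, rc, z))  # conclusion triple, kept for test/annotation
    return train, test

triples = [("LeBron", "spouse", "Savannah"), ("LeBron", "father", "Bronny")]
rule = (("spouse", "father"), "mother")
train, test = ground_rule(rule, triples)
print(test)  # {('Savannah', 'mother', 'Bronny')}
```

Because every test triple is emitted only as the conclusion of a grounded rule whose premises are placed in train, inferability is guaranteed by construction rather than hoped for after a random split.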
"Popular datasets include FB15k-237 (Toutanova and Chen, 2015), WN18RR (Dettmers et al., 2018), and YAGO3-10 (Dettmers et al., 2018).", "CoDEx (Safavi and Koutra, 2020) questions the scope and difficulty of the above datasets and thus proposes a comprehensive dataset with manually verified hard negatives.", "In fact, inference is an important ability for intelligence.", "Various fields study how inference is done in practice, ranging from logic to cognitive psychology.", "Inference helps people make reliable predictions, which is also an expected ability of AI models.", "Indeed, once deployed, a model may have to make a prediction when there is no evidence in the training set.", "But, instead of an unreliable guess, we highlight the ability to know the unknown, a.k.a. the open-world assumption.", "Therefore, we aim to curate a large-scale inferential benchmark, InferWiki, including various inference patterns and testing samples (i.e., positive, negative, and unknown), for better evaluation.", "We list the statistics in Table 2.", "We describe our dataset construction, which comprises four steps: data preprocessing, rule mining, rule-guided train/test generation, and inferred test labeling.", "We then give a detailed analysis.", "More and more studies utilize Wikidata as a knowledge resource due to its high quality and large quantity.", "We utilize the September 2019 English dump in our experiments.", "Data preprocessing aims to define the relation vocabulary and extract two sets of triples from Wikidata: a large one for rule mining, $T_r$, and a relatively small one for dataset generation, $T_d$.", "The reason for using two sets is to avoid the leakage of rules.", "In other words, some rules that are frequent on the large set may be very rare on the small set.", "The different distributions prevent rule mining methods from easily achieving high performance.", "Besides, more triples can improve the quality of the mined rules.", "In contrast, the relatively small set is enough for efficient KGC training and evaluation.", "In specific, we first extract all triples that consist of two entity items and one relation with English labels.", "We then remove the repeated triples and obtain 40,199,175 triples with 7,734,841 entities and 1,170 different relation types.", "Considering rule mining efficiency, we reduce the relation vocabulary by (1) manually filtering out meaningless relations, such as movie ID or film rating, (2) removing the relations instanceOf and subClassOf following existing benchmarks (Toutanova and Chen, 2015), and (3) selecting the most frequent 500 relation types.", "We focus on the most frequent 800,000 entities, which results in 8,632,777 triples as the large set for rule mining.", "To obtain the small set for dataset construction, we further select the most frequent 120,000 entities and 300 relations, which results in 1,283,246 triples.", "Note that we also infer new triples and label them as positive, negative, or unknown later.", "Developing advanced rule mining models is not the focus of this paper, and several mature tools are available online, such as AMIE+ (Galarraga et al., 2015) and AnyBURL (Meilicke et al., 2019).", "We utilize AnyBURL in our experiments due to its efficiency and effectiveness.", "Given a set of triples (i.e., the large set $T_r$), this step aims to automatically learn rules $F = \{(f_p, \lambda_p)\}_{p=1}^{P}$, where $f_p$ denotes a Horn rule, e.g., $\mathrm{spouse}(x, y) \wedge \mathrm{father}(x, z) \rightarrow \mathrm{mother}(y, z)$, and $\lambda_p \in [0, 1]$ denotes the confidence of $f_p$.", "For each rule $f_p$, the left side of $\rightarrow$ is called the 
premise, and the right side is called the conclusion, where the conclusion contains a single atom and the premise is a conjunction of several atoms in the Horn rule scheme.", "We can ground specific entities to replace $x, y, z$ in $f_p$, which denotes an inferential relationship between the premise and conclusion triples.", "For example, given spouse(LeBron James, Savannah Brinson) and father(LeBron James, Bronny James), we may infer a new triple mother(Savannah Brinson, Bronny James).", "Of course, not all of the mined rules are reasonable.", "To alleviate the negative impact of unreasonable rules, we rely on more data (a large set of triples) and keep high-confidence rules only.", "In particular, we follow the suggested configuration of AnyBURL.", "We run it for 500 seconds to ensure that all triples are traversed at least once and obtain 251,317 rules; the 168,996 of them whose confidence meets $\lambda_p > 0.1$ are selected as the rule set to guide dataset construction.", "Different from existing benchmarks, InferWiki provides inferential testing triples with supportive data in the training set.", "Moreover, it aims to include as many inference patterns as possible, and these patterns should be evenly distributed to avoid biased evaluation.", "Thus, this step has four objectives: rule-guided split, path extension, negative supplement, and inference pattern balance.", "Rule-guided Split grounds the mined rules $F$ on the triples $T_d$ to obtain premise triples and corresponding conclusion triples.", "All premise triples form a training set, and all conclusion triples form a test set.", "Thus, the test triples are naturally guaranteed to be inferential.", "For correctness, all premise triples must exist in the given triple set $T_d$, while conclusion triples are not necessarily in $T_d$ and may be generated for further annotation (i.e., Section 3.4).", "For example, given a rule $\mathrm{spouse}(x, y) \wedge \mathrm{father}(x, z) \rightarrow \mathrm{mother}(y, z)$, we traverse all of the given triples and find the entities LeBron James, Savannah Brinson, and Bronny James, which meet the premise.", "We then add the premise triples spouse(LeBron James, Savannah Brinson) and father(LeBron James, Bronny James) into the training set, and generate the conclusion triple mother(Savannah Brinson, Bronny James) for testing, whether it is given or not.", "Path Extension aims to increase the inference path patterns by (1) adding more reasoning paths for the same testing triple, and (2) elongating paths by replacing those premise triples that themselves have reasoning paths.", "For example, we replace father(LeBron James, Bronny James) with two triples that can infer it: father(LeBron James, Bryce James) and brother(Bronny James, Bryce James).", "The original path is then extended by one hop.", "Correspondingly, we define the confidence of an extended path as the product of the confidences of all involved rules.", "Longer paths challenge long-distance reasoning ability.", "Negative Supplement generates negative triples when we cannot annotate the same number of negatives as positive triples.", "Otherwise, we would face an imbalance issue.", "Following conventions, we randomly corrupt the head or tail entities in a positive triple with the following constraints: (1) the relation of the positive triple is exclusive, e.g., placeOfBirth, if the ratio of head to tail entities is smaller than a threshold (we choose 
1.2 heuristically in experiments); otherwise, the corrupted negative triple may actually be positive, leading to false negative errors.", "(2) We choose positive triples from the test set for corruption to increase the difficulty: the model has to correctly infer the corresponding positive triple from the training data and then classify the corrupted triple as negative through the conflict.", "In particular, for non-exclusive relation types, most of their corrupted results should be unknown following the open-world assumption.", "The inferred test set covers such cases, which will be discussed in Section 3.4.", "Inference Pattern Balance aims to balance various inference patterns, including path lengths, relation types, and relation patterns (i.e., symmetry, inversion, hierarchy, composition, and others).", "This is because concentrating on some patterns may lead to severe bias and unfair comparisons between KGC models (Zhang et al., 2020).", "We first count the frequencies of testing triples according to path lengths, relation types, and patterns, respectively.", "For each of them, we rank the counts and choose the highest-ranked groups of triples as frequent ones, instead of setting a threshold.", "We then carefully remove some frequent triples at random, until the new distributions reach an accepted range (checked by humans).", "Different from existing datasets, InferWiki aims to include positive, negative, and unknown testing triples, to evaluate models under two types of assumptions: the open-world assumption and the closed-world assumption.", "The main difference between them is whether unknown triples are regarded as negatives.", "That is, the open-world evaluation is a three-class classification problem (i.e., positive, negative, and unknown).", "The closed-world evaluation targets only positive and negative triples, and we can simply relabel unknown triples as negatives without changing the test set.", "So far, we have two test sets: one is generated via rule guidance, and the other contains the supplemented negatives.", "This section aims to label the generated triples.", "First, we automatically label the triples as positive if they exist in Wikidata.", "Then, we manually annotate the remaining 4,053 triples.", "The annotation guideline can be found in Appendix B. 
Note that all of the unknowns are factually incorrect but not inferential.", "To assess the quality of annotations, we verify a random selection of 300 test triples ( 100 for each label).", "The annotators agree with our labels 84 .", "3 % of the time.", "We further investigate the disagreements by relabeling 100 samples.", "85% of the time, humans prefer an unknown, while automatic labeling tends to assign them with positive or negative labels.", "This suggests the inferential difference between humans and machines the capacity of knowing unknown.", "Finally, we remove the entities that are not in any of the grounded paths and their triples.", "We randomly select half of the test set as valid.", "This forms InferWiki64k .", "We further extract a dense subset InferWiki16k by filtering out the positive triples whose confidence is smaller than 0 .", "6 .", "Correspondingly, negative/unknown triples are reduced to keep balance.", "The statistics is listed in Table 2.", "Table 3 shows positive, negative, and unknown examples of InferWiki and their (possible) supportive training data.", "For positives, their paths seem reasonable and vary in length, relation types, and patterns.", "The 7-hop path of the sibling example is even difficult for a human.", "For negatives and unknowns, they are indeed incorrect and more challenging.", "There are no direct contradicted triples in the train set the model is encouraged to reason related triples and justify if there is a confliction (i.e., negative) or not (i.e., unknown).", "Nevertheless, there are two minor issues.", "First, some unreasonable paths may corrupt the predictability.", "We thus increase the rule confidence threshold > 0 .", "6 for InferWiki16k and manually annotate uncertain test triples for the correctness of labels.", "More advanced rule mining models can improve the construction pipeline.", "We leave it in the future.", "Second, does unknown triples have a bias on certain relation types?", "The answer is yes but not exactly.", "As shown in Table 3, the relation connectsW ith is involved in both positive and unknown triples, which is also determined by the paths.", "Next, we analyze the relation patterns and path length distribution through comparisons with existing KGC datasets.", "Due to the different construction pipelines, existing datasets are difficult to offer quantitative statistics.", "We thus apply our pipeline on CoDEx (Safavi and Koutra, 2020).", "Only inferential test triples remain, and the training set keeps unchanged, namely CoDEx-m-infer, which reduces the test and valid positives from 20,622 Figure 1: Distribution of paths in relation patterns.", "to 7,050.", "This agree with the original paper that reports 20.56% triples are symmetry or compositional through AMIE+ analysis.", "We find more paths due to more extensive rules extracted from a large set of triples.", "This also demonstrates the necessity of rule-guided train/test generation most test triples are not guaranteed inferential when using random split.", "Relation Pattern Following convention, we count reasoning paths for various patterns: symmetry, inversion, hierarchy, composition, and others, whose detailed explanations and examples can be found in Appendix C. 
If a triple has multiple paths, we count all of them.", "As Figure 1 shows, we can see that (1) there are no inversion and only a few symmetry and hierarchy patterns in CoDEx-m, as most current datasets remove them to avoid train/test leakage.", "But, we argue that learning and remembering such patterns are also an essential capacity of inference.", "It just needs to control their numbers for a fair comparison.", "(2) The patterns of InferWiki is more evenly distributed.", "Note that the patterns Figure 2: Comparison of paths in different lengths.", "of symmetry, inversion, and hierarchy refer to 1-hop paths, while composition and others refer to multi-hop paths.", "So, the total number of the former three is almost the same as that of the latter two, to balance paths with varying lengths, which will be discussed next.", "Path Length Distribution The reasoning paths can ensure test triples' predictability but may not be the shortest ones, as there may be undiscovered paths connecting two entities.", "Thus, our statistics concerning path length offer a conservative analysis and give an upper bound.", "For a test triple with multiple paths, we count the shortest one.", "As shown in Figure 2, we can see that InferWiki has more long-distance paths, while CoDEx-m-infer normally concentrates on maximum 3-hop reasoning paths.", "In specific, the maximum path length of InferWiki is 9 (4 before path extension) and the average length is 2 .", "9 ( 1 . 5 before path extension).", "Further analysis of relation, entity and neighbor distributions can be found in Appendix D&E.", "Although we carefully design the construction of inferWiki, there are still two types of limitations: rule biases and dataset errors, that can to be addressed along with the development of KG techniques in the future.", "In terms of rule biases, AnyBURL may be over-estimated due to its role in the construction.", "Although we utilize two triple sets to avoid rule leakage, their overlap may still bring unfair performance gain to AnyBURL.", "We consider synthesize several rule mining results to improve InferWiki in the next version.", "In terms of dataset errors, first, to balance positive and negative triples in the larger InferWiki64k, we follow conventions to randomly sample a portion of negatives.", "These negatives may be unknown if following open-world assumption.", "We manually assess the randomly sampled negatives and find a 15.7% error rate.", "Therefore, we conduct open-world experiments on the smaller InferWiki16k, all of whose testing negatives are verified by humans.", "The second type of errors is due to unreasonable rules for dataset split, which is caused by prediction errors of existing rule mining models.", "However, there is no suitable evaluation in this field to provide quantitative analysis.", "Our ongoing work aims to develop an automatic evaluation for path rationality to improve the mining quality, and thus facilitate our inferential pipeline.", "We benchmark performance on InferWiki for the tasks: (1) Link Prediction , the task of predicting the missing head/tail entity for a given query triple (?, r, t) or (h, r, ?).", "Models are encouraged to rank correct entities higher than others in the vocabulary.", "We adopt the filtering setting (Bor-des et al., 2013) that excludes those entities, if the predicted triples have been seen in the train set.", "Mean reciprocal rank (MRR) and hits@k are standard metrics for evaluation.", "(2) Triple Classification aims to predict a label for each given triple (h, r, 
t).", "The label following open-world assumption is trinary y { 1 , 0 , 1 } and becomes binary y { 1 , 1 } when adopting closed-world assumption all 0 -label triples are re-labeled with 1 , since our unknown triples are factually negative yet non-inferential from training data.", "Since KGC models output real-value scores for triples, we classify scores into labels by choosing one or two thresholds per relation type on valid.", "Accuracy, precision, recall, and F1 are measurements.", "For comprehensive comparison, we choose three types of representative models as baselines: (1) Knowledge Graph Embedding models, including TransE (Bordes et al., 2013), ComplEx (Trouil-lon et al., 2016), RotatE (Sun et al., 2019), ConvE (Dettmers et al., 2018), and TuckER (Bal-azevic et al., 2019), (2) multihop reasoning model Multihop (Lin et al., 2018), and (3) rule-based AnyBURL (Meilicke et al., 2019).", "Note that the latter two are specially designed for link prediction.", "The detailed implementation including parameters and thresholds can be found in Appendix F. 4.3 Triple Classification Results Table 4 shows micro scores for triple classification.", "well around 90% F1 scores.", "This is consistent with recent findings that triple classification is a nearly solved task (around 98% F1 scores) (Safavi and Koutra, 2020).", "Nevertheless, the lower performance demonstrates the difficulty of our curated datasets, mainly due to the manually annotated hard negatives of InferWiki (and CoDEx).", "Impacts of Hard Negatives Figure 3 presents the accuracy on InferWiki16k regarding various types of triples: positive, random supplemented negatives, and annotated negatives (including relabeled unknowns).", "We can see that (1) random negative triples are indeed trivial for all of baseline models, which motivates the necessity of harder negative triples to push this research direction forward, (2) positive triples are slightly difficult to judge than random negatives, and (3) the accuracy significantly drops on annotation negatives.", "This is mainly because most annotated triples are actually unknown they are factually incorrect, but there are no obvious abnormal patterns.", "Such non-inferential cases may underestimate KGC models.", "Since most baselines fail in judging unknown as negative, we now investigate them following open-world assumption to see their ability in recog-Acc", "nizing unknown triples.", "Table 5 shows the macro performance 3 on InferWiki16k.", "We can see that all of the baseline models perform worse than those under the closed-world assumption.", "On one hand, the trinary classification is intuitively more difficult than binary classification.", "On the other hand, it is a rather straightforward method to search two decision thresholds one between positive and unknown and the other between unknown and negative.", "This motivates us future works on advanced models to represent KG, which should also be able to detect the limitation and boundaries of given KG.", "It is a fundamental capacity of inference to respond I do not know, to avoid false negatives in downstream applications.", "Figure 4 presents a detailed analysis of each model regarding their search thresholds.", "We can see that although their best performance seems not bad, the worst scores are only around 10%.", "That is, they are very sensitive to thresholds.", "Besides, most of the time, the average F1 scores of ComplEx, RotatE, and TuckER are around 20%, while transE achieves higher scores.", "Maybe that is the reason why it is 
still the most widely used KGC method.", "ConvE stably outperforms other baselines, no matter in terms of best, worst, or average performance.", "Table 6 shows the average scores for head and tail prediction.", "We can see that (1) AnyBURL performs the best most of the time, but the performance gap is not significant.", "This is mainly due to its role in 3 Micro performance is only applicable to binary classification, while open-world evaluation is trinary.", "dataset construction, although we utilize two sets of triples to minimize rule leakage.", "Actually, inference of rules may be more important than we thought to improve the reliability and interpretability of knowledge-driven models.", "This also motivates us to incorporate rule knowledge into KGC training for advanced reasoning ability (Guo et al., 2018; Li et al., 2019b).", "(2) KGC models perform better on InferWiki16k than InferWiki64k, due to the higher structure density and rule confidence.", "(3) Models have higher hit@10 and lower hit@1 on InferWiki than other datasets (e.g., CoDEx).", "This agrees with an intuition that most entities are irrelevant, making it trivial to judge these corrupted triples as in triple classification.", "And, only a small portion of entities is difficult to predict, which requires strong inference ability.", "Besides, hit@1 varies a lot, so that we can better compare among models.", "Figure 5 presents Hit@1 curves for tail prediction regarding varying path length on InferWiki64k 4 .", "We can see an overall downwards trend along with the increasing path length.", "Meanwhile, the large fluctuation may be due to two possible reasons: (1) as discussed in Section 3.5, the inferential paths ensure the predictability, but may not be the shortest ones.", "This thus offers a conservative 4 Multihop is designed for tail prediction, and Hit@1 on InferWiki64k is more distinct for following ablation study.", "analysis and give an upper bound of the performance concerning k-hop paths.", "Our paths are of high coverage and quality compared with existing datasets, which either conduct case study or postprocess datasets via rule mining.", "(2) Relation types and patterns also have significant impacts.", "Shorter paths contain more long-tail relations, and longer paths tend to cover many common relations.", "This improves the difficulty of shorter paths and makes longer paths easier.", "We present the Hit@1 tail prediction on InferWiki64k regarding relation patterns in Table 7. We can see that symmetry and inversion are not well-solved, which should be considered into evaluation but limited in scale.", "TransE performs worse on symmetry and inversion relations, consistent with the analysis in Abboud et al. 
(2020).", "Even if ComplEx and RotatE can capture such patterns, they fail to rank corresponding entities at the top.", "Embedding-based models perform well on hierarchy relations, even outperforms AnyBURL.", "For compositional relations, it is still quite challenging and worthwhile further investigation.", "(a) Triple classification (F1).", "(b) Link Prediction (MRR).", "CoDEx-m.", "The two datasets share the same training set.", "The only difference lies in how we obtain the test triples, either using our proposed pipeline (CoDEx-m-infer) or randomly (CoDEx-m).", "Thus, the results reflect the impacts of inferential guarantee for dataset construction and demonstrate the necessity to avoid over-estimation or underestimation of the inferential ability of KGC models.", "We report the performance on CoDEx-m from the original paper (Safavi and Koutra, 2020).", "We can see that all of models perform better with inferential path guarantee on CoDEx-m-infer than CoDEx-m, except ComplEx for link prediction.", "This is because rule guidance elimites those noninferential testing triples, making the task easier.", "Nevertheless, the scores on hard cases are actually decreased (as discussed in Figure 3 and Table 7).", "Models are excepted a stronger reasoning ability among several related entities, instead of trivially filtering out massive irrelevant entities.", "This also demonstrates the necessity of InferWiki to avoid overor underestimation of the inferential ability of KGC models learning new knowledge from existing ones.", "We illustrate the most frequent relation types and their distribution of InferWiki64k and InferWiki16k in Figure 8. We can see that InferWiki has a diverse relation types that are not limited to specific domains.", "Besides, the triples of each relation type are well balanced.", "We established a benchmark with three types of seven KGC models on two tasks of triple classification and link prediction.", "The results present a detailed analysis regarding various inference patterns, which demonstrates the necessity of an inferential guarantee for better evaluation and the difficulty of new open-world triple classification.", "In the future, we are interested in cross-KGs inference and transfer (Cao et al., 2019a), and investigating how to inject knowledge into deep learning architectures, such as for information extraction (Tong et al., 2020) or text generation (Cao et al., 2020b).", "This research was conducted in collaboration with SenseTime.", "This work is partially supported by A*STAR through the Industry Alignment Fund Industry Collaboration Projects Grant, by NTU (NTUACE2020-01) and Ministry of Education (RG96/20), and by the National Research Foundation, Prime Minister's Office, Singapore under its Energy Programme (EP Award No. NRF2017EWT-EP003-023) administrated by the Energy Market Authority of Singapore.", "This work is partially supported by Singapore MOE AcRF T1." ]
[ "result", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "objective", "result", "method", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other" ]
[ "We study the learning of a matching model for dialogue response selection.", "Motivated by the recent finding that models trained with random negative samples are not ideal in real-world scenarios, we propose a hierarchical curriculum learning framework that trains the matching model in an easy-to-difficult scheme.", "Our learning framework consists of two complementary curricula: (1) corpus-level curriculum (CC); and (2) instance-level curriculum (IC).", "In CC, the model gradually increases its ability in finding the matching clues between the dialogue context and a response candidate.", "As for IC, it progressively strengthens the model's ability in identifying the mismatching information between the dialogue context and a response candidate.", "Empirical studies on three benchmark datasets with three state-of-the-art matching models demonstrate that the proposed learning framework significantly improves the model performance across various evaluation metrics.", "Building intelligent conversation systems is a longstanding goal of artificial intelligence and has attracted much attention in recent years (Shum et al., 2018; Kollar et al., 2018).", "An important challenge for building such conversation systems is the response selection problem, that is, selecting the best response to a given dialogue context from a set of candidate responses (Ritter et al., 2011).", "To tackle this problem, different matching models are developed to measure the matching degree between a dialogue context and a response candidate (Wu et al., 2017; Zhou et al., 2018; Lu et al., 2019; Gu et al., 2019).", "Despite their differences, The main body of this work was done during internship at Tencent Inc.", "The first two authors contributed equally.", "Yan Wang is the corresponding author.", "Dialogue Context Between Two Speakers A and B A : Would you please recommend me a good TV series to watch during my spare time?", "B : Absolutely!", "Which kind of TV series are you most interested in?", "A : My favorite type is fantasy drama.", "B : I think both Game of Thrones and The Vampire Diaries are good choices.", "Positive Response P1 : Awesome, I believe both of them are great TV series!", "I will first watch Game of Thrones.", "( Easy )", "P2 : Cool!", "I think I find the perfect thing to kill my weekends.", "( Difficult )", "Negative Response N1 : This restaurant is very expensive.", "( Easy )", "N2 : Iain Glen played Ser Jorah Mormont in the HBO fantasy series Game of Thrones.", "( Difficult )", "most prior works train the model with data constructed by a simple heuristic.", "For each context, the human-written response is considered as positive (i.e., an appropriate response) and the responses from other dialogue contexts are considered as negatives (i.e., inappropriate responses).", "In practice, the negative responses are often randomly sampled and the training objective ensures that the positive response scores are higher than the negative ones.", "Recently, some researchers (Li et al., 2019; Lin et al., 2020) have raised the concern that randomly sampled negative responses are often too trivial (i.e., totally irrelevant to the dialogue context).", "Models trained with trivial negative responses may fail to handle strong distractors in real-world scenarios.", "Essentially, the problem stems from the ignorance of the diversity in context-response matching degree.", "In other words, all random responses are treated as equally negative regardless of their different distracting strengths.", "For example, Table 
1 shows a conversation between two speakers and two negative responses (N1, N2) are presented.", "For N1, one can easily dispel its appropriateness as it unnaturally diverges from the TV show topic.", "On the other hand, N2 is a strong distractor as it overlaps significantly with the context (e.g., fantasy series and Game of Thrones ).", "Only with close observation we can find that N2 does not maintain the coherence of the discussion, i.e., it starts a parallel discussion about an actor in Game of Thrones rather than elaborating on the enjoyable properties of the TV series.", "In addition, we also observe a similar phenomenon on the positive side.", "For different training context-response pairs, their pairwise relevance also varies.", "In Table 1, two positive responses (P1, P2) are provided for the given context.", "For P1, one can easily confirm its validity as it naturally replies the context.", "As for P2, while it expatiates on the enjoyable properties of the TV series, it does not exhibit any obvious matching clues (e.g., lexical overlap with the context).", "Therefore, to correctly identify P2, its relationship with the context must be carefully reasoned by the model.", "Inspired by the above observations, in this work, we propose to employ the idea of curriculum learning (CL) (Bengio et al., 2009).", "The key to applying CL is to specify a proper learning scheme under which all training examples are learned.", "By analyzing the characteristics of the concerned task, we tailor-design a hierarchical curriculum learning (HCL) framework.", "Specifically, our learning framework consists of two complementary curriculum strategies, corpus-level curriculum (CC) and instance-level curriculum (IC), covering the two distinct aspects of response selection.", "In CC, the model gradually increases its ability in finding matching clues through an easy-to-difficult arrangement of positive context-response pairs.", "In IC, we sort all negative responses according to their distracting strength such that the model's capability of identifying the mismatching information can be progressively strengthened.", "Notably, our learning framework is independent to the choice of matching models.", "For a comprehensive evaluation, we evaluate our approach on three representative matching models, including the current state of the art.", "Results on three benchmark datasets demonstrate that the proposed learning framework leads to remarkable performance improvements across all evaluation metrics.", "In a nutshell, our contributions can be summarized as: (1) We propose a hierarchical curriculum learning framework to tackle the task of dialogue response selection; and (2) Empirical results on three benchmark datasets show that our approach can significantly improve the performance of various strong matching models, including the current state of the art. 2 Background Given a dataset D = { ( c i , r i ) } |D| i =1 , the learning of a matching model s ( , ) is to correctly identify the positive response r i conditioned on the dialogue context c i from a set of negative responses R i .", "The learning objective is typically defined as L s = m (cid:88) j =1 max { 0 , 1 s ( c i , r i )+ s ( c i , R ij ) } , (1) where m is the number of negative responses associated with each training context-response pair.", "In most existing studies (Wu et al., 2017; Zhou et al., 2018; Gu et al., 2019), the training negative responses R i are randomly selected from the dataset D .", "Recently, Li et al. (2019) and Lin et al. 
(2020) proposed different approaches to strengthen the training negatives.", "In testing, for any context-response ( c, r ) , the models give a score s ( c, r ) that reflects their pairwise matching degree.", "Therefore, it allows the user to rank a set of response candidates according to the scores for response selection.", "We propose a hierarchical curriculum learning (HCL) framework for training neural matching models.", "It consists of two complementary curricula: (1) corpus-level curriculum (CC); and (2) instance-level curriculum (IC).", "Figure 1 illustrates the relationship between these two strategies.", "In CC (3.2), the training context-response pairs with lower difficulty are presented to the model before harder pairs.", "This way, the model gradually increases its ability to find the matching clues contained in the response candidate.", "As for IC (3.3), it controls the difficulty of negative responses that associated with each training context-response pair.", "Starting from easier negatives, the model progressively strengthens its ability to identify the mismatching information (e.g., semantic incoherence) in the response candidate.", "The following gives a detailed description of the proposed approach.", "Given the dataset D = { ( c i , r i ) } |D| i =1 , the corpus-level curriculum (CC) arranges the ordering of different training context-response pairs.", "The model first learns to find easier matching clues from the pairs with lower difficulty.", "As the training progresses, harder cases are presented to the model to learn less obvious matching signals.", "Two examples are shown in the left part of Figure 1.", "For the easier pair, the context and the positive response are semantically coherent as well as lexically overlapped (e.g., TV series and Game of Thrones ) with each other and such matching clues are simple for the model to learn.", "As for the harder case, the positive response can only be identified via numerical reasoning, which makes it harder to learn.", "Difficulty Function.", "To measure the difficulty of each training context-response pair ( c i , r i ) , we adopt a pre-trained ranking model G ( , ) (3.4) to calculate its relevance score as G ( c i , r i ) .", "Here, a higher score of G ( c i , r i ) corresponds to a higher relevance between c i and r i and vice versa.", "Then, for each pair ( c i , r i ) D , its corpus-level difficulty d cc ( c i , r i ) is defined as d cc ( c i , r i ) = 1 .", "Pacing Function.", "In training, to select the training context-response pairs with appropriate difficulty, we define a corpus-level pacing function, p cc ( t ) , which controls the pace of learning from easy to hard instances.", "In other words, at time step t , p cc ( t ) represents the upper limit of difficulty and the model is only allowed to use the training instances ( c i , r i ) whose corpus-level difficulty score d cc ( c i , r i ) is lower than p cc ( t ) .", "In this work, we propose a simple functional form for p cc ( t ) 1 as p cc ( t ) = (cid:40) 1 .", "where p cc (0) is a predefined initial value.", "At the training warm up stage (first T steps), we learn a basic matching model with a easy subset of the training data.", "In this subset, the difficulty of all samples are lower than p cc ( t ) .", "After p cc ( t ) becomes 1 .", "0 (at time step T ), the corpus-level curriculum is completed and the model can then freely access the entire dataset.", "In Figure", "2(a), we give an illustration of the corpus-level curriculum.", "As a complement of CC, the 
instance-level curriculum (IC) controls the difficulty of negative responses.", "For an arbitrary training context-response 1 More sophisticated designs for the function p cc ( t ) are possible, but we do not consider them in this work.", "pair ( c i , r i ) , while its associated negative responses can be any responses r j (s.t. j (cid:54) = i ) in the training set, the difficulties of different r j are diverse.", "Some examples are presented in the right part of Figure 1.", "We see that the negative responses with lower difficulty are always simple to spot as they are often obviously off the topic.", "As for the harder negatives, the model need to identify the fine-grained semantic incoherence between them and the context.", "The main purpose of IC is to select negative responses with appropriate difficulty based on the state of the learning process.", "At the beginning, the negative responses are randomly sampled from the entire training set, so that most of them are easy to distinguish.", "As the training evolves, IC gradually increases the difficulty of negative responses by sampling them from the responses with higher difficulty (i.e., from a harder subset of the training data).", "In this way, the model's ability in finding the mismatching information is progressively strengthened and will be more robust when handling those strong distractors in real-world scenarios.", "Difficulty Function.", "Given a specific training instance ( c i , r i ) , we define the difficulty of an arbitrary response r j (s.t. j (cid:54) = i ) as its rank in a sorted list of relevance score in descending order, d ic ( c i , r j ) = sort r j D ,j (cid:54) = i ( G ( c i , r j )) .", "In this formula, the response r h with the highest relevance score, i.e., r h = max r j D ,j (cid:54) = i G ( c i , r j ) , has a rank of 1, thus d ic ( c i , r h ) = 1 .", "For the response r l with the lowest relevance score, i.e., r l = min r j D ,j (cid:54) = i G ( c i , r j ) , has a rank of |D| , thus d ic ( c i , r l ) = |D| .", "Here, a smaller rank means the corresponding negative response is more relevant to the context c i , thus it is more difficult for the model to distinguish.", "Pacing Function.", "Similar to CC, in IC, the pace of learning from easy to difficult negative responses is controlled by an instance-level pacing function, p ic ( t ) .", "It adjusts the size of the sampling space (in log scale) from which the negative responses are sampled from.", "Given a training instance ( c i , r i ) , at time step t , the negative examples are sampled from the responses r j (s.t. 
j (cid:54) = i ) whose rank is smaller than 10 p ic ( t ) ( d ic ( c i , r j ) 10 p ic ( t ) ), i.e., the negative responses are sampled from a subset of the training data which consists of the top10 p ic ( t ) relevant responses in relation to c i .", "The smaller the p ic ( t ) is, the harder the sampled negatives will be.", "In this work, we define the function p ic ( t ) as p ic ( t ) = (cid:40) ( k 0 k T ) T ( T t ) + k T if t T, k T if t > T, where T is the same as the one in the corpus-level pacing function p cc ( t ) .", "k 0 = log |D| 10 , meaning that, at the start of training, the negative responses are sampled from the entire training set D .", "k T is a hyperparameter and it is smaller than k 0 .", "After p ic ( t ) becomes k T (at step T ), the instance-level curriculum is completed.", "For the following training steps, the size of the sampling space is fixed at 10 k T .", "An example of p ic ( t ) is depicted in Figure", "2(b).", "Model Training.", "Our learning framework jointly employs the corpus-level and instance-level curriculum.", "For each training step, we construct a batch of training data as follows: First, we select the positive context-response pairs according to the corpus-level pacing function p cc ( t ) .", "Then, for each instance in the selected batch, we sample its associated negative examples according to the instance-Algorithm 1: Hierarchical Curriculum Learning Input : Dataset, D = { ( c i , r i ) } |D| i =1 ; model trainer, T , that takes batches of training data as input to optimize the matching model; corpus-level difficulty and pacing function, d cc and p cc ; instance-level difficulty and pacing function, d ic and p ic ; number of negative responses, m ; 1 for train step t = 1 , ... do 2 Uniformly sample one batch of context-response pairs, B t , from all ( c i , r i ) D , such that d cc ( c i , r i ) p cc ( t ) , as shown in Figure", "level pacing function p ic ( t ) .", "Details of our learning framework are presented in Algorithm 1.", "Fast Ranking Model.", "As described in Eq.", "(2) and (3), our framework requires a ranking model G ( , ) that efficiently measures the pairwise relevance of millions of possible context-response combinations.", "In this work, we construct G ( , ) as an non-interaction matching model with dual-encoder structure such that we can precompute all contexts and responses offline and store them in cache.", "For any context-response pair ( c, r ) , its pairwise relevance G ( c, r ) is defined as G ( c, r ) = E c ( c ) TE r ( r ) , (4) where E c ( c ) and E r ( r ) are the dense context and response representations produced by a context encoder E c ( ) and a response encoder E r ( ) 2 .", "Offline Index.", "After training the ranking model on the same response selection dataset D using the in-batch negative objective (Karpukhin et al., 2020), we compute the dense representations of all contexts and responses contained in D .", "Then, as described in Eq.", "(4), the relevance scores of all possible combinations of the contexts and responses in D can be easily computed through the dot product between their representations.", "After this step, we can compute the corpus-level and instance-level difficulty of all possible combinations and cache them in memory for a fast access in training.", "Dialogue Response Selection.", "Early studies in this area devoted to the response selection for single-turn conversations (Wang et al., 2013; Tan et al., 2016; Su et al., 2020).", "Recently, researchers turned to the scenario of multi-turn 
conversations and many sophisticated neural network architectures have been devised (Wu et al., 2017; Gu et al., 2019; Zhou et al., 2018; Gu et al., 2020).", "There is an emerging line of research studying how to improve existing matching models with better learning algorithms.", "Wu et al. (2018) proposed to adopt a Seq2seq model as weak teacher to guide the training process.", "Feng et al. (2019) designed a co-teaching framework to eliminate the training noises.", "Similar to our work, Li et al. (2019) proposed to alleviate the problem of trivial negatives by sampling stronger negatives.", "Lin et al. (2020) attempted to create negative examples with a retrieval system and a pre-trained generation model.", "In contrast to their studies, we not only enlarge the set of negative examples but also arrange the negative examples in an easy-to-diffuclt fashion.", "Curriculum Learning.", "Curriculum Learning (Bengio et al., 2009) is reminiscent of the cognitive process of human being.", "Its core idea is first learning easier concepts and then gradually transitioning to more complex concepts based on some predefined learning schemes.", "Curriculum learning (CL) has demonstrated its benefits in various machine learning tasks (Spitkovsky et al., 2010; Ilg et al., 2017; Li et al., 2017; Svetlik et al., 2017; Liu et al., 2018; Platanios et al., 2019).", "Recently, Penha and Hauff (2020) employed the idea of CL to tackle the response selection task.", "However, they only apply curriculum learning for the positive-side response selection, while ignoring the diversity of the negative responses.", "Douban Dataset.", "This dataset (Wu et al., 2017) consists of multi-turn Chinese conversation data crawled from Douban group 3 .", "The size of training, validation and test set are 500k, 25k and 1k.", "In the test set, each dialogue context is paired with 10 candidate responses.", "Following previous works, 3 https://www.douban.com/group Model Douban Ubuntu E-Commerce MAP MRR P @ 1 R 10 @ 1 R 10 @ 2 R 10 @ 5 R 2 @ 1 R 10 @ 1 R 10 @ 2 R 10 @ 5 R 10 @ 1 R 10 @ 2 R 10 @ 5 RNN 0.390 0.422 0.208 0.118 0.223 0.589 0.768 0.403 0.547 0.819 0.325 0.463 0.775 CNN 0.417 0.440 0.226 0.121 0.252 0.647 0.848 0.549 0.684 0.896 0.328 0.515 0.792 LSTM 0.485 0.527 0.320 0.187 0.343 0.720 0.901 0.638 0.784 0.949 0.365 0.536 0.828 BiLSTM 0.479 0.514 0.313 0.184 0.330 0.716 0.895 0.630 0.780 0.944 0.355 0.525 0.825 MV-LSTM 0.498 0.538 0.348 0.202 0.351 0.710 0.906 0.653 0.804 0.946 0.412 0.591 0.857 Match-LSTM 0.500 0.537 0.345 0.202 0.348 0.720 0.904 0.653 0.799 0.944 0.410 0.590 0.858 DL2R 0.488 0.527 0.330 0.193 0.342 0.705 0.899 0.626 0.783 0.944 0.399 0.571 0.842 Multi-View 0.505 0.543 0.342 0.202 0.350 0.729 0.908 0.662 0.801 0.951 0.421 0.601 0.861 DUA 0.551 0.599 0.421 0.243 0.421 0.780 -0.752 0.868 0.962 0.501 0.700 0.921 DAM 0.550 0.601 0.427 0.254 0.410 0.757 0.938 0.767 0.874 0.969 0.526 0.727 0.933 MRFN 0.571 0.617 0.448 0.276 0.435 0.783 0.945 0.786 0.886 0.976 -IOI 0.573 0.621 0.444 0.269 0.451 0.786 0.947 0.796 0.894 0.974 0.563 0.768 0.950 SMN 0.529 0.569 0.397 0.233 0.396 0.724 0.926 0.726 0.847 0.961 0.453 0.654 0.886 MSN 0.587 0.632 0.470 0.295 0.452 0.788 -0.800 0.899 0.978 0.606 0.770 0.937 SA-BERT 0.619 0.659 0.496 0.313 0.481 0.847 0.965 0.855 0.928 0.983 0.704 0.879 0.985 SMN+HCL 0.575 0.620 0.446 0.281 0.452 0.807 0.947 0.777 0.885 0.981 0.507 0.723 0.935 MSN+HCL 0.620 0.668 0.507 0.321 0.508 0.841 0.969 0.826 0.924 0.989 0.642 0.814 0.968 SA-BERT+HCL 0.639 0.681 0.514 0.330 0.531 0.858 0.977 0.867 
0.940 0.992 0.721 0.896 0.993 Table 2: Experimental results of different models trained with our approach on Douban, Ubuntu, and E-Commerce datasets.", "we report the results of Mean Average Precision (MAP), Mean Reciprocal Rank (MRR) and Precision at Position 1 (P@1).", "In addition, we also report the results of R 10 @ 1, R 10 @ 2, R 10 @5, where R n @ k means recall at position k in n candidates.", "Ubuntu Dataset.", "This dataset (Lowe et al., 2015) contains multi-turn dialogues collected from chat logs of the Ubuntu Forum.", "The training, validation and test size are 500k, 50k and 50k.", "Each dialogue context is paired with 10 response candidates.", "Following previous studies, we use R 2 @ 1, R 10 @ 1, R 10 @ 2 and R 10 @ 5 as evaluation metrics.", "E-Commerce Dataset.", "This dataset (Zhang et al., 2018) consists of Chinese conversations between customers and customer service staff from Taobao 4 .", "The size of training, validation and test set are 500k, 25k and 1k.", "In the test set, each dialogue context is paired with 10 candidate responses.", "R n @ k are employed as the evaluation metrics.", "In the experiments, we compare our approach with the following models that can be summarized into three categories.", "Single-turn Matching Models.", "This type of models treats all dialogue context as a single long utterance and then measures the relevance score between the context and response candidates, including RNN (Lowe et al., 2015), CNN (Lowe et al., 2015), LSTM (Lowe et al., 2015), Bi-LSTM 4 www.taobao.com (Kadlec et al., 2015), MV-LSTM (Wan et al., 2016) and Match-LSTM (Wang and Jiang, 2016).", "Multi-turn Matching Models.", "Instead of treating the dialogue context as one single utterance, these models aggregate information from different utterances in more sophisticated ways, including DL2R (Yan et al., 2016), Multi-View (Zhou et al., 2016), DUA (Zhang et al., 2018), DAM (Zhou et al., 2018), MRFN (Tao et al., 2019a), IOI (Tao et al., 2019b), SMN (Wu et al., 2017) and MSN (Yuan et al., 2019).", "BERT-based Matching Models.", "Given the recent advances of pre-trained language models (De-vlin et al., 2019), Gu et al. 
(2020) proposed the SA-BERT model which adapts BERT for the task of response selection and it is the current state-of-the-art model on the Douban and Ubuntu dataset.", "For all experiments, we set the value of p cc (0) in the corpus-level pacing function p cc ( t ) as 0 .", "3 , meaning that all models start training with the context-response pairs whose corpus-level difficulty is lower than 0 .", "3 .", "For the instance-level pacing function p ic ( t ) , the value of k T is set as 3 , meaning that, after IC is completed, the negative responses of each training instance are sampled from the top10 3 relevant responses.", "In the experiments, each matching model is trained for 40 , 000 steps with a batch size of 128, and we set the T in both p cc ( t ) and p ic ( t ) as half of the total training steps, i.e., T = 20 , 000 .", "To build the context and CC ICSMN MSN SA-BERTP @ 1 R 10 @ 1 R 10 @ 2 P @ 1 R 10 @ 1 R 10 @ 2 P @ 1 R 10 @ 1 R 10 @ 2 0.402 0.238 0.410 0.474 0.298 0.462 0.499 0.315 0.493 (cid:88) 0.422 0.253 0.429 0.482 0.305 0.479 0.504 0.320 0.511 (cid:88) 0.441 0.271 0.444 0.499 0.315 0.492 0.511 0.325 0.524 (cid:88) (cid:88) 0.446 0.281 0.452 0.507 0.321 0.508 0.514 0.330 0.531 Table 3: Ablation study on Douban dataset using different combinations of the proposed curriculum strategies.", "response encoders in the ranking model G ( , ) , we use a 3 -layer transformers with a hidden size of 256 .", "We select two representative models (SMN and MSN) along with the state-of-the-art SA-BERT to test the proposed learning framework.", "To better simulate the true testing environment, the number of negative responses ( m in Eq.", "(1)) is set to be 5. 6 Result and Analysis 6.1 Main Results Table 2 shows the results on Douban, Ubuntu, and E-Commerce datasets, where X+HCL means training the model X with the proposed learning HCL.", "We can see that HCL significantly improves the performance of all three matching models in terms of all evaluation metrics, showing the robustness and universality of our approach.", "We also observe that, by training with HCL, a model (MSN) without using pre-trained language model can even surpass the state-of-the-art model using pre-trained language model (SA-BERT) on Douban dataset.", "These results suggest that, while the training strategy is under-explored in previous studies, it could be very decisive for building a competent response selection model.", "To reveal the individual effects of CC and IC, we train different models on Douban dataset by removing", "removing either CC or IC.", "The experimental results are shown in Table 3, from which we see that both CC and IC make positive contributions to the overall performance when used alone.", "Only utilizing IC leads to larger improvements than only using CC.", "This observation suggests that the ability of identifying the mismatching information is a more important factor for the model to achieve its optimal performance.", "However, the optimal performance is achieved when CC and IC are combined, indicating that CC and IC are complementary to each other.", "Next, we compare our approach with other learning strategies proposed recently (Li et al., 2019; Penha and Hauff, 2020; Lin et al., 2020).", "We use Semi, CIR, and Gray to denote the approaches in Li et al. (2019), Penha and Hauff (2020), and Lin et al. (2020) respectively, where Gray is the current state of the art.", "We conduct experiments on Douban and Ubuntu datasets and the experimental results of three matching models are listed in Table 4. 
From the results, we can see that our approach consistently outperforms other learning strategies in all settings.", "The performance gains of our approach are even more remarkable given its simplicity; it does not require running additional generation models (Lin et al., 2020) or re-scoring negative samples at different epochs (Li et al., 2019).", "In this part, we study how the key hyper-parameters affect the performance of HCL, including the initial difficulty of CC, p cc (0) , and the curriculum length of IC, k T .", "5 In addition, we also investigate the effect of different ranking model choices.", "Initial Difficulty of CC.", "We run sensitivity analysis experiments on Douban dataset with the SMN model by tuning p cc (0) in the corpus-level pacing function p cc ( t ) .", "The results of P @ 1 and R 10 @ 2 in terms of p cc (0) and k T are shown in Figure", "3(a).", "We observe that when p cc (0) is small (i.e., p cc (0) 0 .", "3 ), the model performances are relatively similar.", "When p cc (0) approaches to 1.0, the results drop significantly.", "It concurs with our expectation that, in CC, the model should start learning with training context-response pairs of lower difficulty.", "Once p cc (0) becomes 1.0, the CC is disabled, resulting the lowest model performances.", "Curriculum Length of IC.", "Similair to p cc (0) , we also run sensitivity analysis experiments by tuning k T in the instance-level pacing function p ic ( t ) and Figure", "3(b) shows the results.", "We observe that 5 Our experiments show that other hyper-parameter settings have little impact on the model performance.", "a too small or too large KT results in performance degradation.", "When k T is too small, after IC is completed, the negative examples are only sampled from a very small subset of the training data that consists of responses with high relevance.", "In this case, the sampled responses might be false negatives that should be deemed as positive cases.", "Thus, learning to treat those responses as true negatives could harm the model performance.", "On the other hand, as k T increases, the effect of IC becomes less obvious.", "When k T = log 500 k 10 ( |D| = 500 k ), IC is completely disabled, leading to the further decrease of model performances.", "Ranking Model Architecture.", "Lastly, we examine the effect of the choice of the ranking model architecture.", "We build two ranking model variants by replacing the Transformers module E c ( ) and E r ( ) in Eq.", "(4) with other modules.", "For the first case, we use 3 -layer BiLSTM with a hidden size of 256.", "For the second one, we use BERT-base (Devlin et al., 2019) model.", "Then, we train the matching models using the proposed HCL but with different ranking models as the scoring basis.", "The results on Douban dataset are shown in Table 5. We first compare the performance of different ranking models by directly using them to select the best response.", "The results are shown in the Ranking Model row of Table 5. 
Among all three variants, BERT performs the best but it is still less accurate than these sophisticated matching models.", "Second, we study the effect of different ranking models on the matching model performance.", "We see that, for different matching models, Transformers and BERT perform comparably but the results from BiLSTM are much worse.", "This further leads to a conclusion that, while the choice of ranking model does have impact on the overall results, the improvement of the ranking model does not necessarily lead to the improvement of matching models once the ranking model achieves certain accuracy.", "In this work, we propose a novel hierarchical curriculum learning framework for training response selection models for multi-turn conversations.", "During training, the proposed framework simultaneously employs corpus-level and instance-level curricula to dynamically select suitable training data based on the state of the learning process.", "Extensive experiments and analysis on two benchmark datasets show that our approach can significantly improve the performance of various strong matching models on all evaluation metrics.", "The authors wish to thank Jialu Xu and Sihui Wang for their insightful discussions and support.", "Many thanks to our anonymous reviewers for their suggestions and comments.", "We honor and support the ACL code of Ethics.", "Dialogue response selection aims to build a retrieval-based dialogue system which better interacts with users.", "The selection of the best response does not involve any bias towards to the participants.", "All datasets used in this work are from previously published works, and in our view, do not have any attached privacy or ethical issues." ]
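The two pacing schedules of the hierarchical curriculum framework above reduce to a few lines of code. This sketch assumes the linear forms given in the text (p_cc ramps the difficulty ceiling from p_cc(0) to 1.0 over the first T steps; p_ic shrinks the log-scale sampling space from k_0 = log10|D| down to k_T); all names are illustrative:

import math

def p_cc(t, T, p0=0.3):
    # Corpus-level pacing: the difficulty ceiling ramps linearly from
    # p_cc(0) to 1.0 over the first T steps, then stays at 1.0.
    return 1.0 if t > T else p0 + (1.0 - p0) * t / T

def p_ic(t, T, k0, kT=3.0):
    # Instance-level pacing: negatives are drawn from the top-10^p_ic(t)
    # ranked responses; the exponent shrinks linearly from k0 to kT.
    return kT if t > T else (k0 - kT) / T * (T - t) + kT

D_size = 500_000                       # |D| for Douban / E-Commerce in the paper
k0, T = math.log10(D_size), 20_000     # T = half of the 40,000 training steps
for t in (0, 10_000, 20_000, 40_000):
    print(t, round(p_cc(t, T), 3), int(10 ** p_ic(t, T, k0)))
# At t=0 negatives come from all ~500k responses; from t=T on, from the top 10^3.

Note that p_ic controls a rank cutoff rather than a probability, which is why it lives on a log scale: halving the exponent shrinks the candidate pool multiplicatively, steadily concentrating sampling on the strongest distractors.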
[ "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "abstain", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "result", "other", "other", "abstain", "abstain", "abstain", "method" ]
[ "Abstractive conversation summarization has received much attention recently.", "However, these generated summaries often suffer from insufficient, redundant, or incorrect content, largely due to the unstructured and complex characteristics of human-human interactions.", "To this end, we propose to explicitly model the rich structures in conversations for more precise and accurate conversation summarization, by first incorporating discourse relations between utterances and action triples ( WHODOING-WHAT ) in utterances through structured graphs to better encode conversations, and then designing a multi-granularity decoder to generate summaries by combining all levels of information.", "Experiments show that our proposed models outperform state-of-the-art methods and generalize well in other domains in terms of both automatic evaluations and human judgments.", "We have publicly released our code at https://github.com/ GT-SALT/Structure-Aware-BART .", "Online interaction has become an indispensable component of everyday life and people are increasingly using textual conversations to exchange ideas, make plans, and share information.", "However, it is time-consuming to recap and grasp all the core content within every complex conversation (Gao et al., 2020; Feng et al., 2020).", "As a result, how to organize massive everyday interactions into natural, concise, and informative text, i.e., abstractive conversation summarization, starts to gain importance.", "Significant progress has been made on abstractive summarization for structured document via pointer generator (See et al., 2017), reinforcement methods (Paulus et al., 2018; Huang et al., 2020a) and pre-trained models (Liu and Lapata, 2019; Lewis et al., 2020; Zhang et al., 2019).", "Despite the huge success, it is challenging to directly apply document models to summarize conversations, due to Figure 1: An example of discourse relation graph", "a set of inherent differences between conversations and documents (Gliwa et al., 2019).", "First, speaker interruptions like repetitions, false-starts, and hesitations are frequent in conversations (Sacks et al., 1978), and key information resides in different portions of a conversation.", "These unstructured properties pose challenges for models to focus on salient contents that are necessary for generating both abstractive and informative summaries.", "Second, there is more than one speaker in conversations and people interact with each other in different language styles (Zhu et al., 2020b).", "The complex interactions among multiple speakers make it harder for models to identify and associate speakers with correct actions so as to generate factual summaries.", "In order to summarize the unstructured and complex conversations, a growing body of research has been conducted, such as transferring document summarization methods to conversation settings (Shang et al., 2018; Gliwa et al., 2019), adopting hierarchical models (Zhao et al., 2019; Zhu et al., 2020b), or incorporating conversation structures like topic segmentation (Liu et al., 2019b; Li et al., 2019; Chen and Yang, 2020), dialogue acts (Goo and Chen, 2018), and conversation stages (Chen and Yang, 2020).", "However, current approaches still face challenges in terms of succinctness and faithfulness, as most prior studies", "(i) fail to explicitly model dependencies between utterances which can help identify salient portions of conversations (Bui et al., 2009), and", "(ii) lack structured representations (Huang et al., 2020a) to learn the 
associations between speakers, actions and events.", "We argue that these rich linguistic structures associated with conversations are key components towards generating abstractive and factual conversation summaries.", "To this end, we present a structure-aware sequence-to-sequence model, in which we equip abstractive conversation summarization models with rich conversation structures through two types of graphs: discourse relation graph and action graph .", "Discourse relation graphs are constructed based on dependency-based discourse relations (Kirschner et al., 2012; Stone et al., 2013; Asher et al., 2016; Qin et al., 2017) between intertwined utterances, where each Elementary Discourse Unit (EDU) is one single utterance and they are linked through 16 different types of relations (Asher et al., 2016).", "As shown in Figure", "1(a), highly related utterances are linked based on discourse relations like Question Answer Pairs , Comment and Explanation .", "Explicitly modeling these utterances relations in conversations can aid models in recognizing key content for succinct and informative summarization.", "Action graphs are constructed as the WHO-DOING-WHAT triplets in conversations which express socially situated identities and activities (Gee, 2014).", "For instance, in Figure", "1(b), the action graph provides explicit information between Simon , fetch , and tissues for the utterance it is Simon who will fetch the tissues , making models less likely to generate summaries with wrong references (e.g., Helen will fetch the tissues ).", "propose to utilize discourse relation graphs and action graphs to better encode conversations for conversation summarization.", "(2) We design structure-aware sequence-to-sequence models to combine these structured graphs and generate summaries with the help of a novel multi-granularity decoder.", "(3) We demonstrate the effectiveness of our proposed methods through experiments on a large-scale conversation summarization dataset, SAMSum (Gliwa et al., 2019).", "(4) We further show that our structure-aware models can generalize well in new domains such as debate summarization.", "Document Summarization Compared to extractive document summarization (Gupta and Lehal, 2010; Narayan et al., 2018; Liu and Lapata, 2019), abstractive document summarization is generally considered more challenging and has received more attention.", "Various methods have been designed to tackle abstractive document summarization like sequence-to-sequence models (Rush et al., 2015), pointer generators (See et al., 2017), reinforcement learning methods (Paulus et al., 2018; Huang et al., 2020a) and pre-trained models (Lewis et al., 2020; Zhang et al., 2019).", "To generate faithful abstractive document summaries (Maynez et al., 2020), graph-based models were introduced recently such as extracting entity types (Fernandes et al., 2018; Fan et al., 2019), leveraging knowledge graphs (Huang et al., 2020a; Zhu et al., 2020a) or designing extra fact correction modules (Dong et al., 2020).", "Inspired by these graph-based methods, we also construct action graphs for generating more factual conversation summaries.", "Conversation Summarization Extractive dialogue summarization (Murray et al., 2005) has been studied extensively via statistical machine learning methods such as skip-chain CRFs (Galley, 2006), SVM with LDA models (Wang and Cardie, 2013), and multi-sentence compression algorithms (Shang et al., 2018).", "Such methods struggled with generating succinct, fluent, and natural 
summaries, especially when the key information needs to be aggregated from multiple first-person point-of-view utterances (Song et al., 2020).", "Abstractive conversation summarization overcomes these issues by designing hierarchical models (Zhao et al., 2019; Zhu et al., 2020b), incorporating commonsense knowledge (Feng et al., 2020), or leveraging conversational structures like dialogue acts (Goo and Chen, 2018), key point sequences (Liu et al., 2019a), topic segments (Liu et al., 2019b; Li et al., 2019) and stage developments (Chen and Yang, 2020).", "Some recent research has also utilized discourse relations as input features in classifiers to detect important content in conversations (Murray et al., 2006; Bui et al., 2009; Qin et al., 2017).", "However, current models still have not explicitly utilized the dependencies between different utterances, making it hard for models to leverage long-range dependencies and exploit these salient utterances.", "Moreover, less attention has been paid to identifying the actions of different speakers and how they interact with or refer to each other, leading to unfaithful summarization with incorrect references or wrong reasoning (Gliwa et al., 2019).", "To fill these gaps, we propose to explicitly model actions within utterances, and relations between utterances in conversations, in a structured way, by using discourse relation graphs and action graphs and further combining these through relational graph encoders and multi-granularity decoders for abstractive conversation summarization.", "To generate abstractive and factual summaries from unstructured conversations, we propose to model structural signals in conversations by first constructing discourse relation graphs and action graphs (Section 3.1), then encoding the graphs together with conversations (Section 3.2), and incorporating these different levels of information in the decoding stage through a multi-granularity decoder (Section 3.3) to summarize given conversations.", "The overall architecture is shown in Figure 2.", "[Figure 2: Model architecture.]",
"3.1 Structured Graph Construction. This section describes how to construct the discourse relation graphs and action graphs.", "Formally, for a given conversation $C = \{u_0, \ldots, u_m\}$ with $m$ utterances, we construct a discourse relation graph $G^D = (V^D, E^D)$, where $V^D$ is the set of nodes representing Elementary Discourse Units (EDUs) and $E^D$ is the adjacency matrix that describes the relations between EDUs, and an action graph $G^A = (V^A, E^A)$, where $V^A$ is the set of nodes representing WHO, DOING and WHAT arguments and $E^A$ is the adjacency matrix that links WHO-DOING-WHAT triples.", "Discourse Relation Graph: Utterances from different speakers do not occur in isolation; instead, they are related within the context of discourse (Murray et al., 2006; Qin et al., 2017), which has been shown effective for dialogue understanding like identifying the decisions in multi-party dialogues (Bui et al., 2009) and detecting salient content in email conversations (McKeown et al., 2007).", "Although current attention-based neural models are supposed to, or might implicitly, learn certain relations between utterances, they often struggle to focus on many informative utterances (Chen and Yang, 2020; Song et al., 2020) and fail to address long-range dependencies (Xu et al., 2020), especially when there are frequent interruptions.", "As a result, explicitly incorporating the discourse relations will help neural summarization models better encode the unstructured conversations and concentrate on the most salient utterances to generate more informative and less redundant summaries.", "To do so, we view each utterance as an EDU and use the discourse relation types defined in Asher et al. (2016).", "We first pre-train a discourse parsing model (Shi and Huang, 2019) on a human-annotated multiparty dialogue corpus (Asher et al., 2016), with a 0.775 F1 score on link predictions and a 0.557 F1 score on relation classifications, which are comparable to the state-of-the-art results (Shi and Huang, 2019).", "We then utilize this pre-trained parser to predict the discourse relations within conversations in our SAMSum corpus (Gliwa et al., 2019).", "After prediction, there are 138,554 edges identified in total, with 8.48 edges per conversation.", "The distribution of these predicted discourse relation types is: Comment (19.3%), Clarification Question (15.2%), Elaboration (2.3%), Acknowledgement (8.4%), Continuation (10.1%), Explanation (2.8%), Conditional (0.2%), Question Answer Pair (21.5%), Alternation (0.3%), Q-Elab (2.5%), Result (5.5%), Background (0.4%), Narration (0.4%), Correction (0.4%), Parallel (0.9%), and Contrast (1.0%).", "Then, for each conversation, we construct a discourse relation graph $G^D = (V^D, E^D)$, where $V^D[k]$ represents the $k$-th utterance.", "$E^D[i][j] = r$ if there is a link from the $i$-th utterance to the $j$-th one with discourse relation $r$.", "Action Graph: The who-doing-what triples from utterances can provide explicit visualizations of speakers and their actions, the key to understanding the concrete details that happen in conversations (Moser, 2001; Gee, 2014; Sacks et al., 1978).", "Simply relying on neural models to identify this information from conversations often fails to produce factual characterizations of these concrete details (Cao et al., 2018; Huang et al., 2020a).", "To this end, we extract WHO-DOING-WHAT triples from utterances and construct action graphs for conversation summarization (Chen et al., 2019; Huang et al., 2020b,a).",
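For concreteness, the following is a minimal sketch (our own illustration, not the authors' released code) of how the two graphs defined above could be stored in memory; the class and method names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class DiscourseGraph:
    """G^D: one node per utterance (EDU); edges[(i, j)] = r stores a typed link."""
    num_utterances: int
    edges: Dict[Tuple[int, int], str] = field(default_factory=dict)

    def add_link(self, i: int, j: int, relation: str) -> None:
        # A predicted link from the i-th utterance to the j-th one with relation r.
        self.edges[(i, j)] = relation

@dataclass
class ActionGraph:
    """G^A: nodes are WHO/DOING/WHAT arguments; adjacent triple arguments are linked."""
    nodes: List[str] = field(default_factory=list)
    edges: List[Tuple[int, int]] = field(default_factory=list)

    def add_triple(self, who: str, doing: str, what: str) -> None:
        ids = []
        for arg in (who, doing, what):
            if arg not in self.nodes:
                self.nodes.append(arg)
            ids.append(self.nodes.index(arg))
        # Connect WHO-DOING and DOING-WHAT, the adjacent arguments of the triple.
        self.edges.append((ids[0], ids[1]))
        self.edges.append((ids[1], ids[2]))

# Usage with the running example from Figure 1:
g_d = DiscourseGraph(num_utterances=3)
g_d.add_link(0, 1, "Question Answer Pair")
g_a = ActionGraph()
g_a.add_triple("Simon", "fetch", "tissues")
```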
"Specifically, we first transform the first-person point-of-view utterances into their third-person point-of-view forms based on simple rules: (i) substituting first/second-person pronouns with the names of the current speaker or surrounding speakers, and (ii) replacing third-person pronouns based on coreference clusters in conversations detected by Stanford CoreNLP (Manning et al., 2014).", "For example, an utterance 'I'll bring it to you tomorrow' from Amanda to Jerry will be transformed into 'Amanda'll bring cakes to Jerry tomorrow'.", "Then we extract WHO-DOING-WHAT (subject-predicate-object) triples from the transformed conversations using an open information extraction (OpenIE) system (Angeli et al., 2015). [1: https://github.com/philipperemy/Stanford-OpenIE-Python]", "We then construct the action graph $G^A = (V^A, E^A)$ from the extracted triples by taking arguments (WHO, DOING, or WHAT) as nodes in $V^A$, and connecting them with an edge $E^A[i][j] = 1$ if they are adjacent in one WHO-DOING-WHAT triple.", "Given a conversation and its corresponding discourse relation graph and action graph, we utilize an utterance encoder and two graph encoders to obtain its hidden representations, as shown in Figure 2(a).", "We initialize our utterance encoder $F_U(\cdot)$ with a pre-trained encoder, i.e., BART-base (Lewis et al., 2020), and encode the tokens $\{x_{i,0}, \ldots, x_{i,l}\}$ in an utterance $u_i$ into its hidden representations $\{h^U_{i,0}, \ldots, h^U_{i,l}\} = F_U(u_i)$.", "Node Initialization: For the discourse relation graph, we employ the output embeddings of the special tokens $x_{i,0}$ from the utterance encoder, i.e., $h^U_{i,0}$, to initialize the $i$-th node $v^D_i$ in $G^D$.", "We use a one-hot embedding layer to encode the relations $E^D[i][j] = e^D_{i,j}$ between utterances $i$ and $j$.", "For the action graph, we first utilize $F_U(\cdot)$ to encode each token in the nodes $v^A_i$ and then average their output embeddings as their initial representations.", "Structured Graph Attention Network: Based on the Graph Attention Network (Velickovic et al., 2018), we utilize these relations between nodes to encode each node $v^D_i$ in $G^D$ or $v^A_i$ in $G^A$ through: $\alpha_{ij} = \frac{\exp(\sigma(a^T [W v_i \| W v_j \| W_e e_{i,j}]))}{\sum_{k \in N_i} \exp(\sigma(a^T [W v_i \| W v_k \| W_e e_{i,k}]))}$, $\quad h_i = \sigma(\sum_{j \in N_i} \alpha_{ij} W v_j)$.", "$W$, $W_e$ and $a$ are trainable parameters.", "$[\cdot \| \cdot]$ denotes the concatenation of two vectors.", "$\sigma$ is the activation function, and $N_i$ is the set containing node $i$'s neighbours in $G$.",
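The structured graph attention computation above can be sketched as a single relation-aware layer. Below is a simplified single-head PyTorch re-implementation under our own assumptions (LeakyReLU as the activation $\sigma$); it is an illustration of the equations, not the code released with the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuredGATLayer(nn.Module):
    """One relation-aware graph attention layer (single head, for clarity)."""
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)       # W
        self.w_e = nn.Embedding(num_relations, dim)    # W_e: relation embedding
        self.a = nn.Linear(3 * dim, 1, bias=False)     # a^T [Wv_i || Wv_j || W_e e_ij]
        self.act = nn.LeakyReLU(0.2)                   # stands in for sigma

    def forward(self, v, edge_index, edge_type):
        # v: [num_nodes, dim]; edge_index: [2, num_edges] (src -> dst); edge_type: [num_edges]
        wv = self.w(v)
        src, dst = edge_index                          # dst plays the role of node i
        e = self.w_e(edge_type)
        scores = self.act(self.a(torch.cat([wv[dst], wv[src], e], dim=-1))).squeeze(-1)
        # Softmax over each node's incoming neighbours (the set N_i).
        alpha = torch.zeros_like(scores)
        for node in dst.unique():
            mask = dst == node
            alpha[mask] = F.softmax(scores[mask], dim=0)
        # h_i = sigma( sum_{j in N_i} alpha_ij * W v_j )
        h = torch.zeros_like(wv)
        h.index_add_(0, dst, alpha.unsqueeze(-1) * wv[src])
        return self.act(h)

# Toy usage: 3 utterance nodes, 2 typed discourse links (0->1, 2->1).
layer = StructuredGATLayer(dim=8, num_relations=16)
v = torch.randn(3, 8)
edge_index = torch.tensor([[0, 2], [1, 1]])
edge_type = torch.tensor([7, 3])
print(layer(v, edge_index, edge_type).shape)  # torch.Size([3, 8])
```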
"Different levels of encoded representations are then aggregated via our multi-granularity decoder to generate summaries, as shown in Figure 2(b).", "With the $s-1$ previously generated tokens $y_1, \ldots, y_{s-1}$, our decoder $G(\cdot)$ predicts the $s$-th token via: $y = G(y_{1:s-1}, F_U(C), F_D(G^D), F_A(G^A))$ (4) and $P(y_s \mid y_{<s}, C, G^D, G^A) = \mathrm{Softmax}(W_p y)$ (5).", "To better incorporate the information in the constructed graphs, different from the traditional pretrained BART model (Lewis et al., 2020), we improve the BART transformer decoder with two extra cross attentions (Discourse Attention and Action Attention) added to each decoder layer, which attend to the encoded node representations in the discourse relation graphs and action graphs.", "In each decoder layer, after performing the original cross attentions over every token in the utterances $\{h^U_{i,0:l}\}$ and getting the utterance-attended representation $x^U$, the multi-granularity decoder then conducts cross attentions over the nodes $\{h^D_{0:m}\}$ and $\{h^A_{0:n}\}$ encoded by the graph encoders, in parallel, to obtain the discourse-attended representation $x^D$ and the action-attended representation $x^A$.", "These two attended vectors are then combined into a structure-aware representation $x^S$ through a feed-forward network, for further forward passing in the decoder.", "To alleviate the negative impact of randomly initialized graph encoders and cross attentions over graphs on the pre-trained BART decoder at early stages, and to accelerate the learning of the newly-introduced modules during training, we apply ReZero (Bachlechner et al., 2020) to the residual connection after attending to graphs in each decoder layer: $x^S = x^U + \alpha x^S$ (6), where $\alpha$ is a trainable parameter instead of a fixed value of 1, which modulates updates from the cross attentions over graphs.", "Training: During training, we seek to minimize the cross entropy and use the teacher-forcing strategy (Bengio et al., 2015): $L = -\sum_l \log P(y_l \mid y_{<l}, C, G^D, G^A)$ (7).", "4 Experiments. 4.1 Datasets. We trained and evaluated our models on a conversation summarization dataset, SAMSum (Gliwa et al., 2019), covering messenger-like conversations about daily topics, such as arranging meetings and discussing events.", "We also showed the generalizability of our models on the Argumentative Dialogue Summary Corpus (ADSC) (Misra et al., 2015), a debate summarization corpus.", "The data statistics of the two datasets are shown in Table 1, with the discourse relation type distributions in the Appendix.", "Table 1: Statistics of the used datasets, including the total number of conversations (# Conv) and the average number of participants, turns, discourse edges and action triples per conversation.
Dataset | Split | # Conv | # Participants | # Turns | # Discourse Edges | # Action Triples
SAMSum | Train | 14732 | 2.40 | 11.17 | 8.47 | 6.72
SAMSum | Val | 818 | 2.39 | 10.83 | 8.34 | 6.48
SAMSum | Test | 819 | 2.36 | 11.25 | 8.63 | 6.81
ADSC | Full | 45 | 2.00 | 7.51 | 6.51 | 37.20", "Pointer Generator (See et al., 2017): We followed the settings in Gliwa et al. (2019) and used special tokens to separate each utterance.", "Transformer (Vaswani et al., 2017): We trained transformer seq2seq models following OpenNMT (Klein et al., 2017).", "D-HGN (Feng et al., 2020) incorporated commonsense knowledge from ConceptNet (Liu and Singh, 2004) for dialogue summarization.", "BART (Lewis et al., 2020): We utilized BART, and separated utterances by a special token.", "Multi-View Seq2Seq (Chen and Yang, 2020) utilized topic and stage views on top of BART for summarizing conversations.", "Here we implemented it based on BART-base models.", "We used the BART-base model to initialize our sequence-to-sequence model for training in all experiments.", "For parameters in the original BART encoder/decoder, we followed the default settings and set the learning rate to 3e-5 with 120 warm-up steps.", "For the graph encoders, we set the number of hidden dimensions to 768, the number of attention heads to 2, the number of layers to 2, and the dropout rate to 0.2.", "For the graph cross attentions added to the BART decoder layers, we set the number of attention heads to 2.", "The ReZero weights $\alpha$ in the residual connections were initialized with 1.",
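As a sketch of how the ReZero-weighted combination in Eq. (6) could be wired into one decoder layer, consider the module below; the attention sub-modules, the fusion layer, and the default $\alpha$ initialization are our assumptions for illustration (the paper's experiments compare initializing $\alpha$ with 0 vs. 1).

```python
import torch
import torch.nn as nn

class StructureAwareCombiner(nn.Module):
    """Combines utterance-, discourse-, and action-attended states (Eq. 6)."""
    def __init__(self, dim: int, alpha_init: float = 1.0):
        super().__init__()
        self.discourse_attn = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
        self.action_attn = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
        self.ffn = nn.Linear(2 * dim, dim)                  # fuses x^D and x^A into x^S
        self.alpha = nn.Parameter(torch.tensor(alpha_init))  # ReZero weight

    def forward(self, x_u, h_d, h_a):
        # x_u: utterance-attended states [B, T, dim];
        # h_d / h_a: encoded discourse / action graph nodes [B, N, dim].
        x_d, _ = self.discourse_attn(x_u, h_d, h_d)
        x_a, _ = self.action_attn(x_u, h_a, h_a)
        x_s = self.ffn(torch.cat([x_d, x_a], dim=-1))
        return x_u + self.alpha * x_s                        # x^S = x^U + alpha * x^S

combiner = StructureAwareCombiner(dim=16)
out = combiner(torch.randn(1, 5, 16), torch.randn(1, 4, 16), torch.randn(1, 6, 16))
print(out.shape)  # torch.Size([1, 5, 16])
```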
"The learning rate for parameters in the newly added modules was 3e-4 with 60 warm-up steps.", "All experiments were performed on a GeForce RTX 2080Ti (11GB memory).", "We evaluated all models with ROUGE scores (Lin and Och, 2004) [3: We followed fairseq and used https://github.com/pltrdy/rouge to calculate ROUGE scores; note that different tools may result in different ROUGE scores.], and report ROUGE-1, ROUGE-2, and ROUGE-L in Table 2.", "We found that, compared to simple sequence-to-sequence models (Pointer Generator and Transformer), incorporating extra information such as commonsense knowledge from ConceptNet (D-HGN) increased the ROUGE metrics.", "When equipped with pre-trained models and simple conversation structures such as topics and conversation stages, Multi-View Seq2Seq boosted ROUGE scores.", "Incorporating discourse relation graphs or action graphs helped summarization performance, suggesting the effectiveness of explicitly modeling the relations between utterances and the associations between speakers and actions within utterances.", "Combining the two different structured graphs produced better ROUGE scores than previous state-of-the-art methods and our base models, with an increase of 2.0% on ROUGE-1, 4.3% on ROUGE-2, and 1.2% on ROUGE-L compared to our base model, BART.", "This indicates that our structure-aware models with discourse and action graphs could help abstractive conversation summarization, and that these two graphs complemented each other in generating better summaries.", "Human Evaluation: We conducted a human evaluation to qualitatively evaluate the generated summaries.", "Specifically, we asked annotators from Amazon Mechanical Turk to score a set of 100 randomly sampled generated summaries from the ground-truth, BART and our structured models, using a Likert scale from 1 (worst) to 5 (best) in terms of factualness (e.g., associates actions with the right actors), succinctness (e.g., does not contain redundant information), and informativeness (e.g., covers the most important content) (Feng et al., 2020; Huang et al., 2020a).", "To increase annotation quality, we required turkers to have a 98% approval rate and at least 10,000 approved tasks for their previous work.", "Each message was rated by three workers.", "The scores for each summary were averaged.", "The Intra-Class Correlation was 0.543, showing moderate agreement (Koo and Li, 2016).", "As shown in Table 4, S-BART, which utilized structured information from discourse relation graphs and action graphs, generated significantly better summaries with respect to factualness, succinctness, and informativeness.", "This might be because the incorporation of structured information such as discourse relations helped S-BART recognize the salient parts in conversations, and thus improved succinctness and informativeness over BART.", "Modeling the connections between speakers and actions greatly helped generate more factual summaries than the baselines, e.g., with an increase of 0.27 from BART to S-BART w. Action.", "To investigate the generalizability of our structure-aware models, we then tested the S-BART model trained on the SAMSum corpus directly on the debate summarization domain (the ADSC corpus (Misra et al., 2015)) in a zero-shot setting.", "Besides the differences in topics, utterances in debate conversations were generally longer and included more action triples (37.20 vs. 6.81, as shown in Table 1), with fewer participants per conversation.", "The distribution of discourse relation types also differed a lot across the two domains. [4: The detailed distributions are shown in the Appendix.]", "[Table 3: ROUGE scores (R-1, R-2, R-L) of S-BART with different graph types on ADSC.]", "As shown in Table 3, our single-graph models S-BART w. Discourse and S-BART w. Action boosted ROUGE scores compared to BART, suggesting that utilizing structures can also increase the generalizability of conversation summarization methods.", "However, contrary to the in-domain results in Table 2, action graphs led to much larger gains than discourse graphs.", "This indicated that, when the domain shifts, action triples were the most robust in zero-shot setups; differences in discourse relation distributions could limit such generalization.", "Consistent with the in-domain scenarios, our S-BART w. Discourse & Action achieved better results, with an increase of 66.2% on ROUGE-1, 373.4% on ROUGE-2, and 82.2% on ROUGE-L over BART.", "The Quality of Discourse Relation Graphs: We showed how the quality of discourse relation graphs affected the performance of conversation summarization in Table 5.", "Specifically, we compared the ROUGE scores of S-BART using our constructed discourse relation graphs (S-BART w. Discourse Graph) and S-BART using randomly generated discourse relation graphs (S-BART w. Random Graph), where both the connections between nodes and the relation types were randomized.", "The number of edges in the two graphs was kept the same.", "We found that S-BART with our discourse graphs outperformed S-BART with random graphs.", "Different Ways to Combine Graphs: We experimented with different ways to combine discourse relation graphs and action graphs in our S-BART w. Discourse & Action, and presented the results in Table 6.", "Here, the parallel strategy performed cross attentions on the different graphs separately and then combined the attended results with feed-forward networks, as discussed in Section 3.3; the sequential strategy performed cross attentions on the two graphs in a specific order (from discourse relation graphs to action graphs, or vice versa).", "We found that the parallel strategy showed better performance, and the sequential ones did not introduce gains compared to S-BART with single graphs.", "This demonstrates that discourse relation graphs and action graphs were both important and provided different signals for abstractive conversation summarization.", "Visualizing ReZero Weights: We further tested our structure-aware BART with two ReZero settings: (i) initializing $\alpha$ from 0 and (ii) initializing $\alpha$ from 1, and found that initializing from 1 brought more performance gains (see Appendix).", "We then visualized the average $\alpha$ over the different decoder layers after training in Figure 3, and observed that (i) when $\alpha$ was initialized with 1, the final $\alpha$ was much larger than in the setting where $\alpha$ was initialized with 0, which might be because the randomly initialized modules barely received supervision at early stages and therefore contributed less to BART; (ii) compared to discourse graphs, action graphs received higher weights after training in both initialization settings, suggesting that the information from structured action graphs might be harder for the end-to-end BART models to capture; (iii) utilizing both graphs simultaneously led to higher ReZero weights, further validating the effectiveness of combining discourse relation graphs and action graphs and their complementary properties.", "To inspect when our summarization models could help conversation summarization, we visualized the average number of discourse edges and the average number of action triples in three sets of conversations in Table 7: (i) Similar:
examples where S-BART generated similar ROUGE scores (the differences were less than 0.1) compared to BART; (ii) Increase: examples where S-BART resulted in higher ROUGE scores (the differences were larger than 1.0) compared to BART; (iii) Challenging: examples where both S-BART and BART showed low ROUGE scores (ROUGE-1 < 20.0, ROUGE-2 < 10.0, ROUGE-L < 10.0).", "When the structures in conversations were simpler (fewer discourse edges and fewer action triples than the average), BART showed similar performance to S-BART.", "As the structures of conversations became more complex, with more discourse relations and more action mentions, S-BART outperformed BART, as it explicitly incorporated these structured graphs.", "However, both BART and S-BART struggled when there were many more interactions beyond certain thresholds, calling for better mechanisms to model structures in conversations for generating better summaries.", "In this work, we introduced a structure-aware sequence-to-sequence model for abstractive conversation summarization by incorporating discourse relations between utterances, and the connections between speakers and actions within utterances.", "Experiments and ablation studies on the SAMSum corpus showed the effectiveness of these structured graphs in aiding the task of conversation summarization, via both quantitative and qualitative evaluation metrics.", "Results in zero-shot settings on the ADSC corpus further demonstrated the generalizability of our structure-aware models.", "In the future, we plan to extend our current conversation summarization models to various application domains such as emails, debates, and podcasts, and to conversations that might involve longer utterances and more participants in an asynchronous way.", "We would like to thank the anonymous reviewers for their helpful comments, and the members of the Georgia Tech SALT group for their feedback.", "This work is supported in part by grants from Google, Amazon and Salesforce." ]
[ "abstain", "abstain", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "other" ]
[ "Unsupervised neural machine translation (UNMT) has recently achieved remarkable results for several language pairs.", "However, it can only translate between a single language pair and cannot produce translation results for multiple language pairs at the same time.", "That is, research on multilingual UNMT has been limited.", "In this paper, we empirically introduce a simple method to translate between thirteen languages using a single encoder and a single decoder, making use of multilingual data to improve UNMT for all language pairs.", "On the basis of the empirical findings, we propose two knowledge distillation methods to further enhance multilingual UNMT performance.", "Our experiments on a dataset with English translated to and from twelve other languages (including three language families and six language branches) show remarkable results, surpassing strong unsupervised individual baselines while achieving promising performance between non-English language pairs in zero-shot translation scenarios and alleviating poor performance in low-resource language pairs.", "Recently, neural machine translation (NMT) has been adapted to the unsupervised scenario in which NMT is trained without any bilingual data.", "Unsupervised NMT (UNMT) (Artetxe et al., 2018; Lample et al., 2018a) requires only monolingual corpora.", "UNMT achieves remarkable results by using a combination of diverse mechanisms (Lample et al., 2018b) such as an initialization with bilingual word embeddings, denoising auto-encoder (Vin-cent et al., 2010), back-translation (Sennrich et al., 2016a), and shared latent representation.", "More recently, Lample and Conneau (2019) achieves better Haipeng Sun was an internship research fellow at NICT when conducting this work.", "UNMT performance by introducing the pretrained language model.", "However, conventional UNMT can only translate between a single language pair and cannot produce translation results for multiple language pairs at the same time (Wang et al., 2020).", "Multilingual UNMT (MUNMT) translating multiple languages at the same time can save substantial training time and resources.", "Moreover, the performance of MUNMT in similar languages can promote each other.", "Research on MUNMT has been limited and there are only a few pioneer studies.", "For example, Xu et al. (2019) and Sen et al. 
(2019) proposed multilingual schemes that jointly train multiple languages with multiple decoders.", "However, the performance of their MUNMT is much worse than our re-implemented individual baselines (shown in Tables 2 and 3), and the scale of their studies is modest (i.e., 4-5 languages).", "In this paper, we empirically introduce a unified framework to translate among thirteen languages (including three language families and six language branches) using a single encoder and single decoder, making use of multilingual data to improve UNMT for all languages.", "On the basis of these empirical findings, we propose two knowledge distillation methods, i.e., self-knowledge distillation and language branch knowledge distillation, to further enhance MUNMT performance.", "Our experiments on a dataset with English translated to and from twelve other languages show remarkable results, surpassing strong unsupervised individual baselines.", "This paper primarily makes the following contributions: We propose a unified MUNMT framework to translate between thirteen languages using a single encoder and single decoder.", "This paper is the first step toward multilingual UNMT training on a large set of European languages.", "We propose two knowledge distillation methods for MUNMT; our proposed knowledge distillation methods consider linguistic knowledge in the specific translation task.", "Our proposed MUNMT system achieves state-of-the-art performance on the thirteen languages.", "It also achieves promising performance in zero-shot translation scenarios and alleviates poor performance in low-resource language pairs.", "UNMT can be decomposed into four components: cross-lingual language model pretraining, a denoising auto-encoder, back-translation, and shared latent representations.", "For UNMT, two monolingual corpora $X^1 = \{X^1_i\}$ and $X^2 = \{X^2_i\}$ in two languages $L_1$ and $L_2$ are given.", "$|X^1|$ and $|X^2|$ are the numbers of sentences in the monolingual corpora $\{X^1_i\}$ and $\{X^2_i\}$, respectively.", "A cross-lingual masked language model, which can encode two monolingual sentences into a shared latent space, is first trained.", "The pretrained cross-lingual encoder is then used to initialize the whole UNMT model (Lample and Conneau, 2019).", "Compared with previous bilingual embedding pretraining (Artetxe et al., 2018; Lample et al., 2018a; Yang et al., 2018; Lample et al., 2018b; Sun et al., 2019), this pretraining can provide much more cross-lingual information, causing the UNMT model to achieve better performance and faster convergence.", "Noise, obtained by randomly performing local substitutions and word reorderings (Vincent et al., 2010; Hill et al., 2016; He et al., 2016), is added to the input sentences to improve the model's learning ability and regularization.", "Consequently, the input data are continuously modified and are different at each epoch.", "The denoising auto-encoder objective function can be minimized by encoding a noisy sentence and reconstructing it with the decoder in the same language: $L_D = \sum_{i=1}^{|X^1|} -\log P_{L_1 \to L_1}(X^1_i \mid C(X^1_i)) + \sum_{i=1}^{|X^2|} -\log P_{L_2 \to L_2}(X^2_i \mid C(X^2_i))$ (1), where $\{C(X^1_i)\}$ and $\{C(X^2_i)\}$ are noisy sentences produced by the noise function $C(\cdot)$.", "Back-translation (Sennrich et al., 2016a) plays a key role in achieving unsupervised translation that relies only on monolingual corpora in each language.", "The pseudo-parallel sentence pairs $\{(M^2(X^1_i), X^1_i)\}$ and $\{(M^1(X^2_i), X^2_i)\}$ produced by the model in the previous iteration are used to train the new translation model.",
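As an illustration of the noise function $C(\cdot)$ assumed by Eq. (1), the sketch below applies random word dropping and light local reordering; the drop probability and shuffle window are illustrative values, not settings from the paper.

```python
import random

def add_noise(tokens, drop_prob=0.1, shuffle_window=3, seed=None):
    """Return a noisy copy of a sentence: random word dropping plus
    local reordering, as used by the denoising auto-encoder objective."""
    rng = random.Random(seed)
    # Randomly drop tokens (keep at least one).
    kept = [t for t in tokens if rng.random() > drop_prob] or tokens[:1]
    # Light shuffle: each token moves at most `shuffle_window` positions.
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda p: p[0])]

print(add_noise("the cat sat on the mat".split(), seed=0))
```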
model.", "Therefore, the back-translation objective function can be optimized by minimizing: LB = | X 1 | (cid:88) i =1 logP L 2 L 1 ( X 1 i | M 2 ( X 1 i )) + | X 2 | (cid:88) i =1 logP L 1 L 2 ( X 2 i | M 1 ( X 2 i )) , (2) where PL 1 L 2 and PL 2 L 1 denote the translation probability across the two languages.", "Encoders and decoders are (partially) shared between L 1 and L 2 .", "Therefore, L 1 and L 2 must use the same vocabulary.", "The entire training of UNMT needs to consider back-translation between the two languages and their respective denoising processes.", "In summary, the entire UNMT model can be optimized by minimizing: L all = LD + LB .", "Motivated by Lample and Conneau (2019), we construct a multilingual masked language model, using a single encoder.", "For each language, the language model is trained by encoding the masked input and reverting it with this encoder.", "This pretrained multilingual language model is used to initialize the full set of parameters of MUNMT.", "We have established a MUNMT model on N languages with a single encoder and single decoder.", "We denote a sentence in language L j as X ji .", "For example, L 1 indicates English.", "| X j | is the number of sentences in the corpus X j = { X ji } .", "As Figure 1 shows, the entire training process of the MUNMT model is performed through the denoising and back-translation mechanisms, between English and non-English language pairs, by minimizing: LMUNMT = LMD + LMB , (4) where LMD denotes the denoising function and LMB denotes the back-translation function.", "In the denoising training, noise (in the form of random token deletion and swapping) is introduced into the input sentences for any language L j .", "The denoising auto-encoder, which encodes a noisy version and reconstructs it with the decoder in the same language, is optimized by minimizing: LMD = N (cid:88) j =1 | X j | (cid:88) i =1 logP L j L j ( X ji | C ( X ji )) , (5) where { C ( X ji ) } is a set of noisy sentences for language L j .", "j", "In this paper, we primarily focus on the translation from English to other languages or from other languages to English.", "This is because most test dataset contains English.", "In the process of back-translation training, we only conduct back-translation from language L 1 (English) to other languages and back-translation from other languages to language L 1 .", "For any non-English language L j , the pseudo-parallel sentence pairs { ( M j ( X 1 i ) , X 1 i ) } and { ( M 1 ( X ji ) , X ji ) } are obtained by the previous model in the L 1 L j Algorithm 1 The SKD algorithm Input: Monolingual training data X 1 , X 2 , , XN ; The pretrained model 0 ; Number of steps K 1: Initialize 0 2: while Step q max step K do 3: for j = 1 ; j < N ; j + + do 4: Sample batch { X ji } from X j 5: Compute denoising loss LMD 6: Update optimizer( LMD ) 7: end for 8: for j = 2 ; j < N ; j + + do 9: Sample batch { X 1 i } from X 1 10: Compute back-translation loss LMB 11: Randomly select another language L z and compute distillation loss LSKD 12: Update optimizer( LMB + LSKD ) 13: Sample batch { X ji } from X j 14: Compute back-translation loss LMB 15: Randomly select another language L z and compute distillation loss LSKD 16: Update optimizer( LMB + LSKD ) 17: end for 18: end while and L j L 1 direction, respectively.", "Therefore, the back-translation objective function can be optimized on these pseudo-parallel sentence pairs by minimizing: LMB = N (cid:88) j =2 | X 1 | (cid:88) i =1 logP L j L 1 ( X 1 i | M j ( X 1 i )) + N 
"To further enhance the performance of our proposed MUNMT described in Section 3, we propose two knowledge distillation methods: self-knowledge distillation (Algorithm 1) and language branch knowledge distillation (Algorithm 2).", "Figure 2 illustrates the architecture of MUNMT and the proposed knowledge distillation methods.", "Generally, during UNMT training, an objective function $L_{KD}$ is added to enhance the generalization ability of the MUNMT model.", "The general objective function becomes $L'_{MB} = (1 - \alpha) L_{MB} + \alpha T^2 L_{KD}$, where $\alpha$ is a hyper-parameter that adjusts the weight of the two loss functions during back-translation.", "$T$ denotes the temperature used on the softmax layer.", "If the temperature is higher, the probability distribution obtained would be softer (Hinton et al., 2015).", "On the basis of the existing architecture of MUNMT, we introduce self-knowledge distillation (SKD) (Hahn and Choi, 2019) during back-translation, to enhance the generalization ability of the MUNMT model, as shown in Figure 2(a).", "Unlike Hahn and Choi (2019)'s method, which uses two soft target probabilities based on the word embedding space, we make full use of multilingual information via self-knowledge distillation.", "During back-translation, only language $L_j$ sentences $M^j(X^1_i)$ are generated before training the MUNMT model in the $L_j \to L_1$ direction.", "However, other languages, which have substantial multilingual information, are not used during this training.", "Motivated by this, we propose to introduce another language $L_z$ (randomly chosen but distinct from $L_1$ and $L_j$) during this training.", "We argue that the translations of the source sentences through different paths, $L_1 \to L_j \to L_1$ and $L_1 \to L_z \to L_1$, should be similar.", "The MUNMT model matches not only the ground-truth output of the language $L_j$ sentences $M^j(X^1_i)$, but also the soft probability output of the language $L_z$ sentences $M^z(X^1_i)$.", "The opposite direction is similar.", "Therefore, this MUNMT model is optimized by minimizing the objective function: $L'_{MB} = (1 - \alpha) L_{MB} + \alpha T^2 L_{SKD}$, with $L_{SKD} = \sum_{j=2}^{N} \sum_{i=1}^{|X^1|} \mathrm{KL}(X^1(M^j(X^1_i)), X^1(M^z(X^1_i))) + \sum_{j=2}^{N} \sum_{i=1}^{|X^j|} \mathrm{KL}(X^j(M^1(X^j_i)), X^j(M^z(X^j_i)))$ (8), where $\mathrm{KL}(\cdot)$ denotes the KL divergence.", "It is computed over the full output distributions to keep these two probability distributions similar.", "For any language $L_j$, $X^1(M^j(X^1_i))$ and $X^1(M^z(X^1_i))$ denote the softened $L_1$ sentence probability distributions after encoding $M^j(X^1_i)$ and $M^z(X^1_i)$, respectively.", "$M^j(X^1_i)$ and $M^z(X^1_i)$ were generated by the previous model in the $L_1 \to L_j$ and $L_1 \to L_z$ directions, respectively.", "$X^j(M^1(X^j_i))$ and $X^j(M^z(X^j_i))$ denote the softened $L_j$ sentence probability distributions after encoding $M^1(X^j_i)$ and $M^z(X^j_i)$, respectively.", "$M^1(X^j_i)$ and $M^z(X^j_i)$ were generated by the previous model in the $L_j \to L_1$ and $L_j \to L_z$ directions, respectively.", "Note that zero-shot translation was used to translate language $L_j$ to language $L_z$.", "The direction $L_j \to L_z$ was not trained during MUNMT training.",
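A sketch of the self-knowledge distillation term of Eq. (8) for one direction, assuming PyTorch conventions for temperature-softened distributions and KL divergence; the orientation of the KL term is one reasonable reading of the equation.

```python
import torch
import torch.nn.functional as F

def skd_loss(logits_via_j, logits_via_z, temperature=2.0):
    """KL between the model's output distributions for the same source
    sentence reconstructed through two paths, L1->Lj->L1 and L1->Lz->L1.

    logits_*: [batch, seq_len, vocab] decoder logits before the softmax.
    """
    t = temperature
    p_j = F.log_softmax(logits_via_j / t, dim=-1)  # log-probs of the L1->Lj->L1 path
    p_z = F.softmax(logits_via_z / t, dim=-1)      # reference distribution (L1->Lz->L1)
    # F.kl_div(input=log Q, target=P) computes KL(P || Q), keeping the two
    # full output distributions similar.
    return F.kl_div(p_j, p_z, reduction="batchmean")

loss = skd_loss(torch.randn(2, 5, 100), torch.randn(2, 5, 100))
print(loss.item())
```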
"We consider thirteen languages: Czech (Cs), German (De), English (En), Spanish (Es), Estonian (Et), Finnish (Fi), French (Fr), Hungarian (Hu), Lithuanian (Lt), Latvian (Lv), Italian (It), Romanian (Ro), and Turkish (Tr), which belong to three language families including several language branches (Lewis, 2009), as shown in Figure 3.", "[Figure 3: The language distribution of our selected languages: the Uralic family (Finno-Ugric branch: Fi, Et, Hu); the Altaic family (Turkic branch: Tr); and the Indo-European family (Germanic branch: De, En; Baltic branch: Lv, Lt; Slavic branch: Cs; Romance branch: It, Es, Fr, Ro).]", "As shown in Figure 2(b), we propose knowledge distillation within a language branch (LBKD), to improve MUNMT performance through existing teacher models.", "To the best of our knowledge, this is the first proposal that aims to distill knowledge within a language branch.", "As the number of languages increases, the cost in training time and resources to train an individual model on any two languages increases rapidly.", "An alternative knowledge distillation method within a language branch can avoid this prohibitive computational cost.", "Because languages in the same language branch are similar, we first train small multilingual models across all languages in the same language branch (LBUNMT) before training MUNMT.", "Algorithm 2 — The LBKD algorithm.
Input: monolingual training data $X^1, X^2, \ldots, X^N$; LBUNMT models $LB_1, LB_2, \ldots, LB_M$; the pretrained model $\theta_0$; number of steps $K$.
1: Initialize $\theta$ with $\theta_0$
2: while step $q \le$ max step $K$ do
3:   for $j = 1, \ldots, N$ do
4:     Sample a batch $\{X^j_i\}$ from $X^j$
5:     Compute the denoising loss $L_{MD}$
6:     Update: optimizer($L_{MD}$)
7:   end for
8:   for $j = 2, \ldots, N$ do
9:     Sample a batch $\{X^1_i\}$ from $X^1$
10:    Compute the back-translation loss $L_{MB}$
11:    Select the LBUNMT model of the branch that $L_1$ belongs to and compute the distillation loss $L_{LBKD}$
12:    Update: optimizer($L_{MB} + L_{LBKD}$)
13:    Sample a batch $\{X^j_i\}$ from $X^j$
14:    Compute the back-translation loss $L_{MB}$
15:    Select the LBUNMT model of the branch that $L_j$ belongs to and compute the distillation loss $L_{LBKD}$
16:    Update: optimizer($L_{MB} + L_{LBKD}$)
17:  end for
18: end while", "The LBUNMT model trained in the same language branch performed better than the single model because similar languages have a positive interaction during the training process, as shown in Tables 2 and 3.",
"Therefore, the distilled information of LBUNMT is used to guide the MUNMT model during back-translation.", "The MUNMT model matches both the ground-truth output and the soft probability output of LBUNMT.", "Therefore, this MUNMT model is optimized by minimizing the objective function: $L'_{MB} = (1 - \alpha) L_{MB} + \alpha T^2 L_{LBKD}$, with $L_{LBKD} = \sum_{j=2}^{N} \sum_{i=1}^{|X^1|} \mathrm{KL}(X^1(M^j(X^1_i)), LB^1(M^j(X^1_i))) + \sum_{j=2}^{N} \sum_{i=1}^{|X^j|} \mathrm{KL}(X^j(M^1(X^j_i)), LB^j(M^1(X^j_i)))$ (9), where $X^1(M^j(X^1_i))$ and $LB^1(M^j(X^1_i))$ denote the softened $L_1$ sentence probability distributions of the MUNMT and LBUNMT models, respectively, after encoding $M^j(X^1_i)$ generated by the previous MUNMT model in the $L_1 \to L_j$ direction.", "$X^j(M^1(X^j_i))$ and $LB^j(M^1(X^j_i))$ denote the softened $L_j$ sentence probability distributions of the MUNMT and LBUNMT models, respectively, after encoding $M^1(X^j_i)$ generated by the previous MUNMT model in the $L_j \to L_1$ direction.", "To establish an MUNMT system, we considered 13 languages from the WMT monolingual news crawl datasets: Cs, De, En, Es, Et, Fi, Fr, Hu, It, Lt, Lv, Ro, and Tr.", "For preprocessing, we used the Moses tokenizer (Koehn et al., 2007).", "For cleaning, we only applied the Moses script clean-corpus-n.perl to remove lines in the monolingual data containing more than 50 words.", "We then used a shared vocabulary for all languages, with 80,000 sub-word tokens based on BPE (Sennrich et al., 2016b).", "The statistics of the data are presented in Table 1.", "For Cs, De, and En, we randomly extracted 50M sentences of monolingual news crawl data after cleaning; for the other languages, we used all news crawl data after cleaning, as shown in Table 1.", "We report the results on WMT newstest2013 for Cs-En, De-En, Es-En, and Fr-En.", "We can evaluate the translation performance between pairs of non-English languages because newstest2013 includes these five languages parallel to each other.", "For the other language pairs, we chose the newest WMT newstest set.", "That is, we reported the results on WMT newstest2019 for Fi-En and Lt-En; WMT newstest2018 for Et-En and Tr-En; WMT newstest2017 for Lv-En; WMT newstest2016 for Ro-En; and WMT newstest2009 for Hu-En and It-En.", "Note that the versions of newstest2019 for Fi/Lt $\to$ En and En $\to$ Fi/Lt are different.", "We chose the corresponding newstest2019 for each direction.", "We used the transformer-based XLM toolkit to train a multilingual masked language model and followed the settings used in Lample and Conneau (2019): six layers were used for the encoder.", "The dimension of the hidden layers was set to 1024.", "The Adam optimizer (Kingma and Ba, 2015) was used to optimize the model parameters.", "The initial learning rate was 0.0001, $\beta_1 = 0.9$, and $\beta_2 = 0.98$.", "We used the same toolkit and followed the settings of UNMT used in Lample and Conneau (2019): six layers were used for the encoder and decoder.", "The batch size was set to 2000 tokens.", "The other parameters were the same as those used for training the language model.", "For our proposed knowledge distillation methods, $\alpha$ was set to 0.1 and $T$ was set to 2 (these parameters were empirically selected in small-scale experiments, and most settings achieved good results).", "The cross-lingual language model was used to pretrain the encoder and decoder of the whole UNMT model.", "All monolingual data, described in Table 1, were used in the pretraining and MUNMT training phases.", "The parameters of the multilingual and single models were the same.",
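With the reported settings ($\alpha$ = 0.1, $T$ = 2), the combined objective $L'_{MB} = (1-\alpha)L_{MB} + \alpha T^2 L_{KD}$ of Eqs. (8) and (9) reduces to simple arithmetic; the sketch below, with stand-in loss values, shows the weighting.

```python
def combined_loss(bt_loss, kd_loss, alpha=0.1, temperature=2.0):
    """L'_MB = (1 - alpha) * L_MB + alpha * T^2 * L_KD.

    The T^2 factor compensates for the 1/T^2 scaling of gradients through
    the temperature-softened softmax (Hinton et al., 2015).
    """
    return (1.0 - alpha) * bt_loss + alpha * temperature ** 2 * kd_loss

# Stand-in values: 0.9 * 2.5 + 0.1 * 4 * 0.8 = 2.57
print(combined_loss(bt_loss=2.5, kd_loss=0.8))
```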
same.", "For evaluation, we used the case-sensitive BLEU scores computed by the Moses script multi-bleu.perl .", "We executed a single model (two languages) for 60,000 iterations, a small multilingual model (three to five languages) for 30,000 iterations, and a large multilingual model (13 languages) for 15,000 iterations.", "Eight V100 GPUs were used to train all UNMT models.", "The single model was trained for approximately two days; the multilingual model (13 languages) costs approximately six days since 13 languages participated in the training.", "Tables 2 and 3 present the detailed BLEU scores of all systems on the English and non-English language pairs, in each direction 1 .", "Our observations 1 The translation quality of pretrained model was not presented in the Tables 2 and 3. The result was poor because the pretrained model (cross-lingual language model) was trained within an encoder.", "The encoder and decoder of UNMT was Corpus SNMT Sen et al. (2019) Xu et al. (2019) SM LBUNMT MUNMT SKD LBKD En-Cs 19.20 -6.79 14.54 14.54 14.40 14.89 15.47 En-De 20.30 8.09 13.25 18.26 18.26 17.58 18.47 19.28 En-Es 30.40 14.82 20.43 25.14 25.40 25.05 25.61 26.79 En-Et 25.20 -14.86 15.02 14.09 15.03 15.62 En-Fi 27.40 -9.87 9.99 9.75 10.70 10.57 En-Fr 30.60 13.71 20.27 26.02 26.36 25.84 26.45 27.78 En-Hu --11.32 11.40 10.90 11.64 12.03 En-It --24.19 24.30 23.80 24.69 25.52 En-Lt 20.10 -0.79 8.29 10.07 11.15 11.11 En-Lv 21.10 -1.02 11.55 13.09 13.90 14.33 En-Ro 28.90 -29.44 29.58 28.82 29.65 31.28 En-Tr 20.00 -11.87 11.87 12.41 13.24 13.83 Average --15.61 17.21 17.15 17.95 18.63 Table 2: BLEU scores of all models on the English to non-English language pairs.", "are as follows: 1) Our proposed LBUNMT model trained in the same language branch performed better than the single model (SM) because similar languages have a positive interaction during the training process.", "Moreover, SM performed very poorly on low-resource language pairs such as En-Lt and En-Lv in the Baltic language branch.", "2) Our proposed MUNMT model trained in all languages significantly outperformed the previous work (Sen et al., 2019; Xu et al., 2019) by 4 12 BLEU scores.", "Moreover, the MUNMT model could alleviate the poor performance achieved with initialized with the same parameters of pretrained language model (just an encoder).", "3) Our proposed knowledge distillation methods outperformed the original MUNMT model by approximately 1 BLEU score.", "Moreover, our proposed MUNMT with knowledge distillation performed better than SM in all language pairs with fewer training iterations.", "Regarding our two proposed methods, LBKD achieved better performance since it could obtain much more knowledge distilled from LBUNMT model.", "4) There is a gap between the performance of our proposed MUNMT model and that of the supervised NMT systems.", "We also studied the zero-shot translation accuracy of the MUNMT model.", "Although MUNMT could be trained on all translation directions (ordered language pairs), it would require an extremely long training time.", "Our proposed MUNMT model was trained in 24 translation directions (all English and non-English language pairs, in each direction), whereas 156 translation directions exist.", "As the number of languages increases, the number of translation directions increases quadratically.", "Therefore, zero-shot translation accuracy is important to the MUNMT model.", "Table 4 shows the performance of translation between non-English language pairs in the zero-shot translation scenario.", "Note 
"Note that Xu et al. (2019) shows the results of direct translation between the two languages, not the results of zero-shot translation.", "Compared with previous works, our MUNMT model outperformed the previous systems in almost all translation directions, particularly the direct translation results reported in Xu et al. (2019).", "Compared with the original MUNMT model, our proposed knowledge distillation methods further improved the performance of zero-shot translation.", "Regarding our two proposed methods, SKD significantly outperformed LBKD by approximately 3 BLEU scores, since a third language was introduced during SKD translation training for each language pair, providing much more cross-lingual knowledge.", "To better assess the effectiveness of our proposed MUNMT model, we further trained the MUNMT and LBKD models individually on each language pair for 15,000 iterations.", "As shown in Tables 5 and 6, after further training, the model outperformed the original single model on each language pair by approximately 4 BLEU scores.", "Table 6: The +FT columns show BLEU scores from further training of the MUNMT and LBKD models on the non-English to English language pairs.
Corpus | SM | MUNMT | +FT | LBKD | +FT
Cs-En | 20.62 | 20.09 | 21.50 | 21.25 | 22.17
De-En | 21.31 | 21.95 | 22.41 | 22.81 | 23.07
Es-En | 25.53 | 25.37 | 26.24 | 26.59 | 26.78
Et-En | 19.48 | 19.60 | 21.61 | 21.31 | 22.61
Fi-En | 7.62 | 7.19 | 8.06 | 7.80 | 8.34
Fr-En | 25.86 | 25.41 | 26.30 | 26.48 | 26.76
Hu-En | 14.48 | 14.54 | 15.99 | 15.34 | 16.07
It-En | 24.33 | 24.77 | 25.54 | 25.35 | 25.86
Lt-En | 1.72 | 14.04 | 15.27 | 15.84 | 16.86
Lv-En | 0.95 | 14.90 | 15.57 | 15.33 | 15.87
Ro-En | 28.52 | 28.38 | 29.61 | 30.18 | 30.39
Tr-En | 12.99 | 15.65 | 18.47 | 17.35 | 19.48
Average | 16.95 | 19.32 | 20.55 | 20.47 | 21.19", "Actually, the number of iterations of the whole process (including training the MUNMT model) is half that of the original single model.", "This demonstrates that our proposed MUNMT model is a robust system and contains substantial cross-lingual information that could improve translation performance.", "Dong et al. (2015) first extended NMT from the translation of a single language pair to multiple language pairs, using a shared encoder and multiple decoders, and Luong et al. (2016) translated multiple source languages to multiple target languages using a combination of multiple encoders and multiple decoders.", "Firat et al. (2016) used a shared attention mechanism but multiple encoders and decoders for each language.", "Ha et al. (2016) and Johnson et al. (2017) proposed a simpler method that uses one encoder and one decoder to translate between multiple languages.", "Recently, many methods (Lakew et al., 2018; Platanios et al., 2018; Sachan and Neubig, 2018; Blackwood et al., 2018; Lu et al., 2018; Wang et al., 2019a; Aharoni et al., 2019; Wang et al., 2019b; Wang and Neubig, 2019) have been proposed to boost multilingual NMT performance.", "In particular, Tan et al. proposed a knowledge distillation method (Tan et al., 2019b) and a language clustering method (Tan et al., 2019a) to improve the performance of multilingual NMT.", "Ren et al.
(2018) proposed a triangular architecture to tackle the problem of low-resource pair translation by introducing another rich-resource language.", "To further tackle the problem of low-resource pair translation, UNMT (Artetxe et al., 2018; Lample et al., 2018a) has been proposed, using a combination of diverse mechanisms such as initialization with bilingual word embeddings, a denoising auto-encoder (Vincent et al., 2010), back-translation (Sennrich et al., 2016a), and shared latent representations.", "Lample et al. (2018b) concatenated two monolingual corpora as one corpus and used monolingual embedding pretraining in the initialization step, achieving remarkable results on some similar language pairs.", "Lample and Conneau (2019) achieved better UNMT performance by introducing a pretrained language model.", "Sun et al. (2019, 2020) proposed to train UNMT with cross-lingual language representation agreement, to further improve UNMT performance.", "Moreover, the unsupervised translation task evaluated in the WMT19 news translation task (Barrault et al., 2019) attracted many researchers to participate (Marie et al., 2019; Li et al., 2019).", "For multilingual UNMT, Xu et al. (2019) exploited multiple auxiliary languages for jointly boosting UNMT models via the Polygon-Net framework.", "Sen et al. (2019) proposed an MUNMT scheme that jointly trains multiple languages with a shared encoder and multiple decoders.", "In contrast with their use of multiple decoders, we have constructed a simpler MUNMT model with one encoder and one decoder.", "Further, we have extended the four or five languages used in their work to thirteen languages for training our MUNMT model.", "In this paper, we have introduced a unified framework, using a single encoder and decoder, for MUNMT training on a large set of European languages.", "To further enhance MUNMT performance, we have proposed two knowledge distillation methods.", "Our extensive experiments and analysis demonstrate the effectiveness of our proposed methods.", "In the future, we intend to extend the work to include other language types such as Asian languages.", "We will also introduce other effective methods to improve zero-shot translation quality.", "We are grateful to the anonymous reviewers and the area chair for their insightful comments and suggestions.", "The corresponding authors are Rui Wang and Tiejun Zhao.", "Rui Wang was partially supported by a JSPS grant-in-aid for early-career scientists (19K20354): 'Unsupervised Neural Machine Translation in Universal Scenarios' and the NICT tenure-track researcher startup fund 'Toward Intelligent Machine Translation'.", "Tiejun Zhao was partially supported by the National Key Research and Development Program of China via grant 2017YFB1002102.", "Masao Utiyama was partially supported by JSPS KAKENHI Grant Number 19H05660." ]
[ "abstain", "abstain", "abstain", "result", "objective", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "result", "other", "other", "other", "other", "other" ]
[ "Few works in the literature of event extraction have gone beyond individual sentences to make extraction decisions.", "This is problematic when the information needed to recognize an event argument is spread across multiple sentences.", "We argue that document-level event extraction is a difficult task since it requires a view of a larger context to determine which spans of text correspond to event role fillers.", "We first investigate how end-to-end neural sequence models (with pre-trained language model representations) perform on document-level role filler extraction, as well as how the length of context captured affects the models' performance.", "To dynamically aggregate information captured by neural representations learned at different levels of granularity (e.g., the sentenceand paragraph-level), we propose a novel multi-granularity reader.", "We evaluate our models on the MUC-4 event extraction dataset, and show that our best system performs substantially better than prior work.", "We also report findings on the relationship between context length and neural model performance on the task.", "The goal of document-level event extraction 1 is to identify in an article events of a pre-specified type along with their event-specific role fillers, i.e., arguments.", "The complete document-level extraction problem generally requires role filler extraction , noun phrase coreference resolution and event tracking (i.e., determine which extracted role fillers belong to which event).", "In this work, we focus only on document-level role filler extraction.", "Figure 1 provides a representative example of this task.", "Given an article consisting of multiple para-graphs/sentences, and a fixed set of event types 1 The task is also referred to as template filling (MUC-4, 1992).", "(e.g., terrorist events) and associated roles (e.g., PERPETRATORINDIVIDUAL , VICTIM , WEAPON ), we aim to identify those spans of text that denote the role fillers for each event described in the text.", "This generally requires both sentence-level understanding and accurate interpretation of the context beyond the sentence.", "Examples include identifying Teofilo Forero Castro (mentioned in S3) as a victim of the car bomb attack event (mentioned in S2), determining there's no role filler in S4 (both of which rely mainly on sentence-level understanding, and identifying four terrorists in S1 as a perpetrator individual (which requires coreference resolution across sentence boundaries).", "Generating the document-level extractions for events is essential in facilitating downstream applications such as information retrieval and article summarization (Yang and Mitchell, 2016), and for real-life applications such as trends analysis of world events (Sundheim, 1992).", "Recent work in document-level event role filler extraction has employed a pipeline architecture with separate classifiers for each type of role and for relevant context detection (Patwardhan and Riloff, 2009; Huang and Riloff, 2011).", "However these methods: (1) suffer from error propagation across different pipeline stages; and (2) require heavy feature engineering (e.g., lexico-syntactic pattern features for candidate role filler extraction; lexical bridge and discourse bridge features for detecting event-relevant sentences at the document level).", "Moreover, the features are manually designed for a particular domain, which requires linguistic intuition and domain expertise (Nguyen and Grishman, 2015).", "Neural end-to-end models have been shown to excel 
at sentence-level information extraction tasks, such as named entity recognition (Lample et al., 2016; Chiu and Nichols, 2016) and ACE-type within-sentence event extraction (Chen et al., 2015; Nguyen et al., 2016; Wadden et al., 2019).", "However, to the best of our knowledge, no prior work has investigated the formulation of document-level event role filler extraction as an end-to-end neural sequence learning task.", "In contrast to extracting events and their role fillers from standalone sentences, document-level event extraction poses special challenges for neural sequence learning models.", "First, capturing long-term dependencies in long sequences remains a fundamental challenge for recurrent neural networks (Trinh et al., 2018).", "To model long sequences, most RNN-based approaches use backpropagation through time, but it is still difficult for the models to scale to very long sequences.", "We provide empirical evidence for this for event extraction in Section 4.3.", "Second, although pretrained bi-directional transformer models such as BERT (Devlin et al., 2019) better capture long-distance dependencies compared to an RNN architecture, they still have a constraint on the maximum length of the sequence, which is below the length of many articles about events.", "In the sections below, we study how to train and apply end-to-end neural models for event role filler extraction.", "We first formalize the problem as a sequence tagging task over the tokens in a set of contiguous sentences in the document.", "To address the aforementioned challenges for neural models applied to long sequences, (1) we investigate the effect of context length (i.e., maximum input segment length) on model performance, and find the most appropriate length; and (2) we propose a multi-granularity reader that dynamically aggregates the information learned from the local context (e.g., sentence-level) and the broader context (e.g., paragraph-level).", "A quantitative evaluation and qualitative analysis of our approach on the MUC-4 dataset (MUC-4, 1992) both show that the multi-granularity reader achieves substantial improvements over the baseline models and prior work.", "For replication purposes, our repository for the evaluation and preprocessing scripts will be available at https://github.com/xinyadu/doc_event_role .",
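As an illustration (ours, not from the paper) of the sequence tagging formulation mentioned above, role filler spans can be encoded with BIO tags over tokens and recovered with a simple decoder; the tag names below follow the MUC-4 roles discussed earlier.

```python
# BIO tags for the span "Teofilo Forero Castro" as a VICTIM role filler.
tokens = ["car", "bomb", "attack", "on", "Teofilo", "Forero", "Castro"]
tags   = ["O",   "O",    "O",      "O",  "B-Victim", "I-Victim", "I-Victim"]

def decode_spans(tokens, tags):
    """Recover (role, span) pairs from a BIO-tagged token sequence."""
    spans, start, role = [], None, None
    for i, tag in enumerate(tags + ["O"]):            # sentinel flushes the last span
        if tag.startswith("B-") or (tag == "O" and start is not None):
            if start is not None:
                spans.append((role, " ".join(tokens[start:i])))
                start, role = None, None
        if tag.startswith("B-"):
            start, role = i, tag[2:]
        elif tag.startswith("I-") and start is None:  # tolerate stray I- tags
            start, role = i, tag[2:]
    return spans

print(decode_spans(tokens, tags))  # [('Victim', 'Teofilo Forero Castro')]
```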
"The approaches generally focus on sentence-level context for extracting event triggers and arguments and rarely generalize to the document-level event extraction setting (Figure 1).", "Only a few models have gone beyond individual sentences to make decisions.", "Ji and Grishman (2008) enforce event role consistency across documents.", "Liao and Grishman (2010) explore event type co-occurrence patterns to propagate event classification decisions.", "Similarly, Yang and Mitchell (2016) propose jointly extracting events and entities within a document context.", "Also related to our work are Duan et al. (2017) and Zhao et al. (2018), which utilize document embeddings to aid event detection with recurrent neural networks.", "Although these approaches make decisions with cross-sentence information, their extractions are still at the sentence level.", "Document-level Event Extraction has been studied mainly under the classic MUC paradigm (MUC-4, 1992).", "The full task involves the construction of answer key templates, one template per event (some documents in the dataset describe more than one event).", "Typically three steps are involved: role filler extraction, role filler mention coreference resolution and event tracking.", "In this work we focus on role filler extraction.", "From the modeling perspective, recent work explores both the local and additional context to make the role filler extraction decisions.", "GLACIER (Patwardhan and Riloff, 2009) jointly considers cross-sentence and noun phrase evidence in a probabilistic framework to extract role fillers.", "TIER (Huang and Riloff, 2011) proposes to first determine the document genre with a classifier and then identify event-relevant sentences and role fillers in the document.", "Huang and Riloff (2012) propose a bottom-up approach that first aggressively identifies candidate role fillers (with lexico-syntactic pattern features), and then removes the candidates that are in spurious sentences (i.e., not event-related) via a cohesion classifier (with discourse features).", "Similar to Huang and Riloff (2012), we also incorporate both intra-sentence and cross-sentence features (paragraph-level features), but instead of using manually designed linguistic information, our models automatically learn how to dynamically incorporate learned representations of the article.", "Also, in contrast to prior work that is pipeline-based, our approach tackles the task as an end-to-end sequence tagging problem.", "There has also been work on unsupervised event schema induction (Chambers and Jurafsky, 2011; Chambers, 2013) and open-domain event extraction (Liu et al., 2019) from documents: the main idea is to group entities corresponding to the same role into an event template.", "Our models, on the other hand, are trained in a supervised way and the event schemas are pre-defined.", "Apart from event extraction, there has been increasing interest in cross-sentence relation extraction (Mintz et al., 2009; Peng et al., 2017; Jia et al., 2019).", "This work assumes that mentions are provided, and thus is more of a mention/entity-level classification problem.", "Our work instead focuses on role filler/span extraction using sequence tagging approaches; role filler type is determined during this process.", "Capturing Long-term Dependencies for Neural Sequence Models: For training neural sequence models such as RNNs, capturing long-term dependencies in sequences remains a fundamental challenge (Trinh et al., 2018).",
"Most approaches use backpropagation through time (BPTT), but it is difficult to scale to very long sequences.", "Many variations of models have been proposed to mitigate the effect of long sequence length, such as Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997; Gers et al., 1999; Graves, 2013) and Gated Recurrent Unit networks (Cho et al., 2014).", "Transformer-based models (Vaswani et al., 2017; Devlin et al., 2019) have also shown improvements in modeling long text.", "In our work on document-level event role filler extraction, we also implement LSTM layers in the models, as well as utilize the pre-trained representations provided by the bi-directional transformer model BERT.", "From an application perspective, we investigate the suitable length of context to incorporate for the neural sequence tagging model in the document-level extraction setting.", "We also study how to mitigate problems associated with long sequences by dynamically incorporating both sentence-level and paragraph-level representations in the model (Figure 3).", "In the following we describe (1) how we transform the document into paired token-tag sequences and formalize the task as a sequence tagging problem (Section 3.1); (2) the architectures of our base k-sentence reader (Section 3.2) and multi-granularity reader (Section 3.3).", "We formalize document-level event role filler extraction as an end-to-end sequence tagging problem.", "Figure 2 illustrates the general idea.", "Given a document and the text spans associated with the gold-standard (i.e., correct) fillers for each role, we adopt the BIO (Beginning, Inside, Outside) tagging scheme to transform the document into paired token/BIO-tag sequences.", "We train and apply end-to-end k-sentence readers (i.e., the single-sentence, double-sentence, paragraph and chunk readers).", "By chunk, we mean the chunk of contiguous sentences that fits within the sequence length constraint for BERT (512, in this case).", "Specifically, we use a sentence splitter to divide the document into sentences $s_1, s_2, \ldots, s_n$.", "To construct the training set, starting from each sentence $i$, we concatenate the $k$ contiguous sentences ($s_i$ to $s_{i+k-1}$) to form overlapping candidate sequences of length $k$: sequence 1 consists of $\{s_1, \ldots, s_k\}$, sequence 2 consists of $\{s_2, \ldots, s_{k+1}\}$, etc.", "To make the training set balanced, we sample the same number of positive and negative sequences from the candidate sequences, where a positive sequence contains at least one event role filler and a negative sequence contains none (a short construction sketch follows this sentence list).", "To construct the dev/test set, where the reader is applied, we simply group the contiguous $k$ sentences together in order, producing $\lceil n/k \rceil$ sequences (i.e., sequence 1 consists of $\{s_1, \ldots, s_k\}$, sequence 2 consists of $\{s_{k+1}, \ldots, s_{2k}\}$, etc.).", "For the paragraph reader, we set $k$ to the average paragraph length for the training set, and to the actual paragraph length for the test set.",
"We denote a token in the sequence by $x$; the input for the $k$-sentence reader is $X = \{x^{(1)}_1, x^{(1)}_2, \ldots, x^{(1)}_{l_1}, \ldots, x^{(k)}_1, x^{(k)}_2, \ldots, x^{(k)}_{l_k}\}$, where $x^{(k)}_i$ is the $i$-th token of the $k$-th sentence, and $l_k$ is the length of the $k$-th sentence.", "Since our general $k$-sentence reader does not recognize sentence boundaries, we simplify the notation for the input sequence as $\{x_1, x_2, \ldots, x_m\}$ here.", "Embedding Layer: In the embedding layer, we represent each token $x_i$ in the input sequence as the concatenation of its word embedding and its contextual token representation.", "Word Embedding: We use the 100-dimensional GloVe pre-trained word embeddings (Pennington et al., 2014) trained on 6B tokens of Web crawl data.", "We keep the pre-trained word embeddings fixed.", "Given a token $x_i$, we have its word embedding $x^e_i = E(x_i)$.", "Pre-trained LM representation: Contextualized embeddings produced by pre-trained language models (Peters et al., 2018; Devlin et al., 2019) have proved capable of modeling context beyond the sentence boundary and improving performance on a variety of tasks.", "Here we employ the contextualized representations produced by BERT-base for our $k$-sentence labeling model, as well as for the multi-granularity reader to be introduced next.", "Specifically, we use the average of all 12 layers' representations and freeze the weights (Peters et al., 2019) during training, after empirical trials.", "Given the sequence $\{x_1, x_2, \ldots, x_m\}$, we have $x^b_1, x^b_2, \ldots, x^b_m = \mathrm{BERT}(x_1, x_2, \ldots, x_m)$; we forward the concatenation of the two representations for each token to the upper layers: $x_i = \mathrm{concat}(x^e_i, x^b_i)$.", "BiLSTM Layer: To help the model better capture task-specific features between the sequence tokens, we use a multi-layer (3-layer) bi-directional LSTM encoder on top of the token representations, which we denote as BiLSTM: $\{p_1, p_2, \ldots, p_m\} = \mathrm{BiLSTM}(\{x_1, x_2, \ldots, x_m\})$.", "CRF Layer: We draw inspiration from sentence-level sequence tagging models on tasks like NER (Lample et al., 2016).", "Modeling the labeling decisions jointly rather than independently improves the model's performance (e.g., the tag I-Weapon should not follow B-Victim).", "We model labeling decisions jointly using a conditional random field (Lafferty et al., 2001).", "After passing $\{p_1, p_2, \ldots, p_m\}$ through a linear layer, we have $P$ of size $m \times T$, where $T$ is the size of the tag space and $P_{i,j}$ is the score of tag $j$ for the $i$-th token in the sequence.", "For a tag sequence $y = \{y_1, \ldots, y_m\}$, the score for the sequence-tag pair is $\mathrm{score}(X, y) = \sum_{i=0}^{m} A_{y_i, y_{i+1}} + \sum_{i=1}^{m} P_{i, y_i}$, where $A$ is the transition matrix of scores such that $A_{i,j}$ represents the score of a transition from tag $i$ to tag $j$ (a numeric sanity-check sketch follows this sentence list).", "A softmax function is applied over the scores of all possible tag sequences, which yields a probability for the gold sequence $y^{gold}$.", "The log-probability of the gold tag sequence is maximized during training.", "During decoding, the model predicts the output sequence that obtains the maximum score.", "To explore the effect of aggregating contextualized token representations from different granularities", "(sentence- and paragraph-level), we propose the multi-granularity reader (Figure 3).", "Similar to the general $k$-sentence reader, we use the same embedding layer here to represent the tokens.",
"But we apply the embedding layer at two granularities of the paragraph text (sentence- and paragraph-level).", "Although the word embeddings are the same for the embedding layers at the different granularities, the contextualized representations differ for each token depending on whether the token is encoded in the context of a sentence or in the context of a paragraph.", "Correspondingly, we build two BiLSTMs ($\mathrm{BiLSTM}_{sent.}$ and $\mathrm{BiLSTM}_{para.}$) on top of the sentence-level and the paragraph-level contextualized token representations $\{x^{(1)}_1, \ldots, x^{(1)}_{l_1}, \ldots, x^{(k)}_1, \ldots, x^{(k)}_{l_k}\}$.", "Sentence-Level BiLSTM: The $\mathrm{BiLSTM}_{sent.}$ is applied to each sentence separately: $\{p^{(j)}_1, p^{(j)}_2, \ldots, p^{(j)}_{l_j}\} = \mathrm{BiLSTM}_{sent.}(\{x^{(j)}_1, x^{(j)}_2, \ldots, x^{(j)}_{l_j}\})$ for $j = 1, \ldots, k$.", "Then we have the sentence-level representations for each token in the paragraph as $\{p^{(1)}_1, \ldots, p^{(1)}_{l_1}, \ldots, p^{(k)}_1, \ldots, p^{(k)}_{l_k}\}$.", "Paragraph-Level BiLSTM: Another BiLSTM layer ($\mathrm{BiLSTM}_{para.}$) is applied to the entire paragraph (as compared to $\mathrm{BiLSTM}_{sent.}$, which is applied to each sentence) to capture the dependencies between tokens in the paragraph: $\{\tilde{p}^{(1)}_1, \ldots, \tilde{p}^{(1)}_{l_1}, \ldots, \tilde{p}^{(k)}_1, \ldots, \tilde{p}^{(k)}_{l_k}\} = \mathrm{BiLSTM}_{para.}(\{x^{(1)}_1, \ldots, x^{(k)}_{l_k}\})$.", "Fusion and Inference Layer: For each token $x^{(j)}_i$ (the $i$-th token in the $j$-th sentence), to fuse the representations learned at the sentence level ($p^{(j)}_i$) and the paragraph level ($\tilde{p}^{(j)}_i$), we propose two options: the first uses a sum operation, and the second uses a gated fusion operation (see the fusion sketch after this sentence list).", "Simple Sum Fusion: $\hat{p}^{(j)}_i = p^{(j)}_i + \tilde{p}^{(j)}_i$.", "Gated Fusion: The gated fusion computes the gate vector $g^{(j)}_i$ from the sentence-level token representation $p^{(j)}_i$ and the paragraph-level token representation $\tilde{p}^{(j)}_i$, to control how much information should be incorporated from the two representations.", "$g^{(j)}_i = \mathrm{sigmoid}(W_1 p^{(j)}_i + W_2 \tilde{p}^{(j)}_i + b)$ and $\hat{p}^{(j)}_i = g^{(j)}_i \odot p^{(j)}_i + (1 - g^{(j)}_i) \odot \tilde{p}^{(j)}_i$, where $\odot$ denotes the element-wise product.", "As in the general $k$-sentence reader, we add the CRF layer (Section 3.2) on top of the fused representations for each token in the paragraph, $\{\hat{p}^{(1)}_1, \ldots, \hat{p}^{(1)}_{l_1}, \ldots, \hat{p}^{(k)}_1, \ldots, \hat{p}^{(k)}_{l_k}\}$, to help jointly model the labeling decisions between tokens in the paragraph.", "We evaluate our models' performance on the MUC-4 event extraction benchmark (MUC-4, 1992), and compare to prior work.", "We also report findings on the effect of context length on the end-to-end readers' performance on this document-level task.", "MUC-4 Event Extraction Dataset: The MUC-4 dataset consists of 1,700 documents with associated answer key (role filler) templates.", "To make sure our results are comparable to previously reported results on this dataset, we use 1,300 documents for training, 200 documents (TST1+TST2) as the development set and 200 documents (TST3+TST4) as the test set.", "Evaluation Metrics: Following prior work, we use head noun phrase match to compare the extractions against gold role fillers for evaluation; besides noun phrase matching, we also report exact match accuracy, to capture how well the models capture the role fillers' boundaries.", "Our results are reported as Precision (P), Recall (R) and F-measure (F-1) scores for the macro average over all event roles.",
"In Table 2, we also present the scores for each event role (i.e., PERPETRATOR INDIVIDUALS , PERPETRATOR ORGANIZATIONS , PHYSICAL TARGETS , VICTIMS and WEAPONS ) based on the head noun match metric.", "The detailed documentation and implementation for the evaluation script will be released.", "We compare to the pipeline and manual feature engineering based systems: GLACIER (Patward-han and Riloff, 2009) consists of a sentential event classifier and a set of plausible role filler recog-5", "recog-5 Duplicate role fillers (i.e., extractions for the same role that have the same head noun) are conflated before being scored; they are counted as one hit (if the system produces it) or one miss (if the system fails to produce any of the duplicate mentions).", "6 Similarly, duplicate extractions with the same string are counted as one hit or miss.", "nizers for each event role.", "The final extraction decisions are based on the product of normalized sentential and phrasal probabilities; TIER (Huang and Riloff, 2011) proposes a multi-stage approach.", "It processes a document in three stages: classifying narrative document, recognizing event sentence and noun phrase analysis.", "Cohesion Extract (Huang and Riloff, 2012) adopts a bottom-up approach, which first aggressively identifies candidate role fillers in the document and then refines the candidate set with cohesion sentence classifier.", "Cohesion Extract obtains substantially better precision and with similar level of recall as compared to GLACIER and TIER.", "To investigate how the neural models capture the long dependency in the context of variant length (single-sentence, double-sentence, paragraph or longer), we initialize the k in k -sentence reader to different values to build the: Single-Sentence Reader ( k = 1 ), which reads through the document sentence-by-sentence to extract the event role fillers; Double-Sentence Reader ( k = 2 ), which reads the document with step of two sentences; Paragraph Reader ( k = # sentences in the para-graph), which reads the document paragraph-by-paragraph; Chunk Reader ( k = maximum # of sentences that fit right in the length constraint for pretrained LM models), which reads the document with the longest step (the constraint of BERT model).", "The final row in Table 1&2 presents the results obtained with our Multi-Granularity Reader .", "Similar to the paragraph-level reader, it reads through document paragraph-by-paragraph, but learns the representations for both intra-sentence and inter-sentence context.", "We report the macro average results in Table", "1. To understand in detail how the models extract the fillers for each event role, we also report the per event role results in Table", "2. 
"We summarize the results into the important findings below.", "The end-to-end neural readers can achieve results at nearly the same level as, or significantly better than, the pipeline systems.", "Although our models rely on no hand-designed features, the contextualized double-sentence reader and paragraph reader achieve nearly the same level of F-1 as Cohesion Extract (CE), judging by the head noun matching metric.", "Our multi-granularity reader performs significantly better (~60 F-1) than the prior state-of-the-art.", "Contextualized embeddings for the sequence consistently improve the neural readers' performance.", "The results show that the contextualized k-sentence readers all outperform their non-contextualized counterparts, especially when k > 1.", "The same trends are exhibited in the per-event-role analysis (Table 2).", "Note that we freeze the transformer's parameters during training (fine-tuning yields worse results).", "It is not the case that modeling longer context will result in a better neural sequence tagging model on this document-level task.", "Table 2: Per-event-role results based on the head noun match metric (C- stands for contextualized); each group of numbers gives P/R/F-1 for PerpInd, PerpOrg, Target, Victim and Weapon, in that order. GLACIER (Patwardhan and Riloff, 2009): 51/58/54, 34/45/38, 42/72/53, 55/58/56, 57/53/55. TIER (Huang and Riloff, 2011): 54/57/56, 55/49/51, 55/68/61, 63/59/61, 62/64/63. Cohesion Extract (Huang and Riloff, 2012): 54/57/56, 55/49/51, 55/68/61, 63/59/61, 62/64/63. Without contextualized embedding: Single-Sentence Reader: 38.38/50.68/43.68, 40.98/69.05/51.44, 62.50/42.76/50.78, 36.69/55.79/44.27, 64.91/62.30/63.58; Double-Sentence Reader: 50.00/35.14/41.27, 63.83/35.71/45.80, 61.62/44.83/51.90, 51.02/54.74/52.81, 55.41/67.21/60.74; Paragraph Reader: 42.51/51.35/46.52, 44.80/54.76/49.28, 70.33/43.45/53.71, 53.75/47.37/50.36, 54.55/68.85/60.87; Chunk Reader: 65.63/26.19/37.44, 50.00/45.45/47.62, 77.78/22.62/35.05, 55.00/21.15/30.56, 60.42/69.77/64.76. With contextualized embedding: C-Single-Sentence Reader: 44.97/52.70/48.53, 35.15/73.81/47.62, 71.74/24.83/36.89, 33.63/77.89/46.98, 51.11/77.05/61.46; C-Double-Sentence Reader: 63.49/31.76/42.34, 53.25/48.81/50.93, 69.52/50.34/58.40, 44.03/62.11/51.53, 55.56/73.77/63.38; C-Paragraph Reader: 43.92/53.38/48.19, 52.94/54.76/53.84, 74.19/44.83/55.89, 50.57/46.32/48.35, 62.30/63.93/63.10; C-Chunk Reader: 57.14/27.38/37.02, 47.62/40.91/44.01, 70.27/29.76/41.81, 59.46/42.31/49.44, 70.00/65.12/67.47. Multi-Granularity Reader: 53.08/52.23/52.65, 50.99/67.88/58.23, 60.38/64.10/62.18, 49.34/62.05/54.97, 68.42/67.57/67.99.", "When we increase the input context from a single sentence to two sentences, the reader has better precision and lower recall, resulting in no better F-1; when we increase the input context length further to the entire paragraph, precision increases and recall remains at the same level, resulting in higher F-1; when we keep increasing the length of the input context, the reader becomes more conservative and F-1 drops significantly.", "All of this indicates that focusing on both the local (intra-sentence) and the broader (paragraph-level) context is important for the task.", "Similar results regarding context length have also been found in document-level coreference resolution (Joshi et al., 2019).", "Our multi-granularity reader, which dynamically incorporates sentence-level and paragraph-level contextual information, performs significantly better than the non-end-to-end systems and our base k-sentence readers on the macro average F-1 metric.", "In terms of the per-event-role performance (Table 2), our reader: (1) substantially outperforms CE, with a 7-point F-1 gap, on the PERPETRATOR ORGANIZATION role; (2) slightly outperforms CE (~1 F-1 on the TARGET category); and (3) achieves nearly the same level of F-1 for PERPETRATOR INDIVIDUAL and worse F-1 on the VICTIM category.",
"We conduct an ablation study on how the modules of our multi-granularity reader affect its performance on this document-level extraction task (Table 3).", "From the results, we find that: (1) when replacing the gated fusion operation with a simple sum of the sentence- and paragraph-level token representations, precision and F-1 drop substantially, which demonstrates the importance of dynamically incorporating context; (2) when removing BERT's contextualized representations, the model becomes more conservative and yields substantially lower recall and F-1; (3) when replacing the CRF layer and making independent labeling decisions for each token, both precision and recall drop substantially.", "We also conduct an error analysis with examples and predictions from different models, to understand qualitatively the advantages and disadvantages of our models.", "In the first example below (green span: gold extraction; the role after it is the span's event role), the multi-granularity (MG) reader and the single-sentence reader correctly extract the two target expressions, which the paragraph reader overlooks.", "Although the attack and targets are mentioned only in the last sentence, our MG reader successfully captures this by focusing on both the paragraph-level and intra-sentence context.", "... the announcer says president virgilio barco will tonight disclose his government's peace proposal.", "...", "Near the end, the announcer adds to the initial report on the el tomate attack with a 3-minute update that adds 2 injured, 21 houses Target destroyed, and 1 bus Target burned.", "In the second example (red span: false positive perpInd extraction by the single-sentence reader), although members of the civil group appears in a sentence about an explosion, judging from the paragraph-level context or reasoning about the expression itself should help confirm that it is not a perpetrator individual.", "The MG and paragraph readers correctly handle this and also extract the bomb.", ".... An attack came at approximately 22:30 last night.", "Members of the civil group and the peruvian investigative police went to the site of the explosion.", "The members of the republican guard antiexplosives brigade are investigating to determine the magnitude of the bomb Weapon used in this attack.", "There is still substantial room for improvement in our MG reader's predictions.", "There are many role fillers which the reader overlooks.", "In the example below, the fact that La Tandona is a perpetrator organization is only implicitly expressed in the document, and the phrase does not appear elsewhere in the corpus.", "But external knowledge (e.g., Wikipedia) could help confirm its event role.", "...", "Patriotic officer, it is time we sit down to talk, to see what we can do with our fatherland, and what are we going to do with La Tandona PerpOrg.", ".... To continue defending what, we ask you.", "...",
.", "In the last example, there are no explicit expression such as kill or kidnap in the context for the target.", "Thus it requires deeper understanding of the entire narrative and reasoning about the surrounding context to understand that Jorge Serrano Gonzalez is involved in a terrorism event.", "release of Santander department senator Jorge Serrano Gonzalez Target, whom he described as one of the most important people that colombian democracy has at this moment.", "We have demonstrated that document-level event role filler extraction could be successfully tackled with end-to-end neural sequence models.", "Investigations on how the input context length affects the neural sequence readers' performance show that context of very long length might be hard for the neural models to capture and results in lower performance.", "We propose a novel multi-granularity reader to dynamically incorporate paragraphand sentence-level contextualized representations.", "Evaluations on the benchmark dataset and qualitative analysis prove that our model achieves substantial improvement over prior work.", "In the future work, it would be interesting to further explore how the model can be adapted to jointly extract role fillers, tackles coreferential mentions and constructing event templates.", "We thank the anonymous reviewers and Ana Smith for helpful feedback." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "other", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "method", "objective", "other", "method", "other", "method", "method", "other", "other", "other", "other", "method", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "other", "abstain", "method", "method", "method", "abstain", "method", "method", "other", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "other" ]
[ "Most dialog systems posit that users have figured out clear and specific goals before starting an interaction.", "For example, users have determined the departure, the destination, and the travel time for booking a flight.", "However, in many scenarios, limited by experience and knowledge, users may know what they need, but still struggle to figure out clear and specific goals by determining all the necessary slots.", "In this paper, we identify this challenge, and make a step forward by collecting a new human-to-human mixed-type dialog corpus.", "It contains 5k dialog sessions and 168k utterances for 4 dialog types and 5 domains.", "Within each session, an agent first provides user-goal-related knowledge to help figure out clear and specific goals, and then help achieve them.", "Furthermore, we propose a mixed-type dialog model with a novel Prompt-based continual learning mechanism.", "Specifically, the mechanism enables the model to continually strengthen its ability on any specific type by utilizing existing dialog corpora effectively.", "One of the overarching goals of Artificial Intelligence is to build an intelligent agent that can generate coherent multi-turn dialogs to meet user needs/goals.", "Recently, multiple dialog agents have been launched, such as Echo and Siri.", "These agents usually position themselves as some kind of do engines that act under users' clear instructions.", "Specifically, they posit users have figured out clear and specific goals by determining all the necessary aspects or slots of their goals.", "For example, before booking a flight, a user has determined the departure, the destination and the travel time.", "However, such assumption can not hold in many real-world scenarios.", "For example, a user wants to plan a trip to Beijing for relaxing, but he or she only has limited knowledge about Beijing.", "Thus it is difficult for him or her to decide which slots are needed to achieve this goal.", "Obviously, in this scene, the user needs additional consultant services from an agent to help figure out clear and specific goals.", "However, the aforementioned assumption hinders providing these services effectively.", "In this paper, we make a step towards solving the challenge.", "In order to facilitate the study of how to help users clarify ing their goals, we construct a new Dial og corpus at Bai du , denoted as DuClarifyDial .", "1 As shown in Figure 1, a user chats about feels anxious because of work pressure, and wants to relax himself or herself but have no clear idea about the trip.", "In the scenario, the agent conducts knowledge-grounded dialogs and question answering conversations to help the user learn more about goal-related knowledge, which helps figure out clear and specific goals.", "Finally, the user determines to visit Wangfujing Catholic Church and books a restaurant nearby.", "Specifically, in DuClarifyDial, besides basic social chitchat, an agent should help users figure out clear and specific goals by providing goal-related knowledge through coherent knowledge-grounded dialogs and question answering (QA) conversations.", "Then, upon request, it should also conduct task-oriented dialogs to help achieve user goals.", "To this end, we first collect a human-to-human mixed-type dialog dataset.", "It contains 5k dialog sessions and 168k utterances for 4 dialog types and 5 domains.", "Specifically, each session contains at least two of following four dialog types, i.e., social chitchat, question answering, knowledge-grounded dialog, and 
task-oriented dialog.", "Furthermore, in 1 https://github.com/PaddlePaddle/ Research/tree/master/NLP/ACL2022-DuClarifyDial 1024 DuClarifyDial Bot", "order to seamlessly blend different types of dialogs, we make efforts in both dataset collection and task definition.", "For dataset collection , we first collect human-to-human dialogs within the Wizard-of-Oz framework (Kelley, 1984).", "Then, we design a unified dialog state schema and dialog act schema for all types of dialogs.", "Here, the unification can (1) ease the dialog annotation procedures, (2) simplify dialog model design, and (3) facilitate wiser dialog management by bringing a shared dialog semantic space for different types of dialogs.", "Finally, we annotate dialog states and dialog acts.", "For task definition , we first unify the dialog modelling into three sub-procedures, which includes dialog state tracking, dialog act planning and response generation.", "Then, we define one sub-task for each sub-procedure.", "Besides, in order to facilitate end-to-end modelling, we also define an end-to-end dialog generation sub-task.", "To facilitate model comparison, we conduct bench-marking experiments on DuClarifyDial for the aforementioned four sub-tasks.", "Furthermore, since DuClarifyDial is a mixed-type dialog corpus, it is straightforward to explore effective methods for utilizing existing single-type or mixed-types dialog corpora in task modelling.", "Specifically, we propose a novel Prompt-based continual learning mechanism to strengthen the model ability, by continually utilizing existing different types of dialog corpora.", "Here, we equip a pre-trained dialog model (Bao et al., 2020) with (1) different prompt texts as input and (2) type, task and domain representation in embedding layer for different dialog types.", "Furthermore, we train our model by two steps with continual learning mechanism: first Prompting on existing dialog corpora and then fine-tuning on DuClarifyDial.", "We propose a large-scale Chinese mixed-type corpus, where each session weaves together multiple types of dialogs with natural cross-type transitions.", "Specifically, we design a unified dialog state (act) schema for all types of dialogs.", "Here, the unified organization first brings a shared semantic space for task-oriented and non-task-oriented dialogs.", "Then, it enables a unified dialog modelling procedures for all types of dialogs, which can facilitate more effective dialog management.", "We build benchmarking baselines on DuClarifyDial and propose a novel Prompt-based continual learning mechanism to utilize existing dialog corpora effectively.", "Task-oriented dialog systems have continued an active research area for decades and have been consistently supported by the development of new datasets.", "Recently, several large-scale multi-domain task-oriented dialog datasets have emerged (Budzianowski et al., 2018; Quan et al., 2020; Rastogi et al., 2020; Zhu et al., 2020; Jin et al., 2021; Chen et al., 2021).", "Specifically, MultiWOZ (Budzianowski et al., 2018) is a fully-labelled collection of human-human written conversations spanning over multiple domains and topics, which contains a size of 10k dialogs.", "Schema (Rastogi et al., 2020) proposes a schema-guided paradigm for task-oriented dialog, which contains over 16k multi-domain conversations spanning 16 domains.", "CrossWOZ (Zhu et al., 2020) and RiSAWOZ (Quan et al., 2020) are Chinese cross-domain task-oriented datasets, which contains 6K and 11k dialogs respectively.", "ABCD (Chen et 
"Although they have achieved promising progress, these datasets usually posit that users have figured out clear and specific goals before starting an interaction, which does not hold in many practical scenarios.", "In this paper, we focus on providing additional consultant services for users, to help figure out clear and specific user goals.", "Open-domain dialog systems have attracted a lot of interest in recent years.", "To develop more human-like dialog models, several knowledge-grounded corpora have been proposed (Wu et al., 2019b; Moon et al., 2019; Liu et al., 2020b; Zhou et al., 2020; yang Wang et al., 2021; Komeili et al., 2021; Feng et al., 2020; Yoshino and Kawahara, 2015; Tanaka et al., 2021).", "The main purpose of these datasets is to generate more knowledgeable dialogs.", "In comparison, DuClarifyDial focuses on helping figure out clear and specific user goals.", "Moreover, DuClarifyDial is a mixed-type dialog dataset that contains four types of dialogs.", "Recently, there have been multiple efforts to develop dialog systems that can multi-task on multiple types of dialogs (Kim et al., 2020; Smith et al., 2020; Mosig et al., 2020; Madotto et al., 2020; Saha et al., 2018; Sun et al., 2021; Young et al., 2021).", "Specifically, Kim et al. (2020) propose to handle out-of-API requests by accessing unstructured domain knowledge in task-oriented dialogs.", "Sun et al. (2021) and Young et al. (2021) propose to fuse task-oriented and open-domain dialogs in conversational agents, in order to generate more engaging and interactive dialogs.", "The DuClarifyDial dataset differs from these datasets in that we focus on helping figure out clear and specific user goals, rather than targeting the out-of-API problem (Kim et al., 2020) or facilitating more engaging and interactive dialog generation (Young et al., 2021).", "Furthermore, DuClarifyDial contains more types of dialogs than previous datasets.", "Moreover, in order to seamlessly blend different types of dialogs for efficient consulting, DuClarifyDial utilizes the same dialog state schema and dialog act schema for all types of dialogs, rather than different schemas for different types of dialogs.", "DuClarifyDial is designed as a high-quality mixed-type dialog dataset for helping figure out clear and specific goals.", "In DuClarifyDial, one person serves as the user and the other as the wizard (agent).", "In order to help figure out clear and specific goals, besides social chitchat, the agent provides user-goal-related information through knowledge-grounded dialogs and QA conversations, and then helps achieve the goals through task-oriented dialogs.", "Specifically, in order to effectively weave together multiple types of dialogs for achieving this purpose, it is essential for different types of dialogs to share the same state space and action space.", "Thus, in Section 3.4, we utilize a unified dialog state schema and dialog act schema for the aforementioned four types of dialogs.", "In the following, we introduce the four steps of DuClarifyDial collection: (1) building a knowledge base to provide goal-related information; (2) constructing dialog templates to assist dialog collection; (3) collecting conversation utterances by crowdsourcing; (4) annotating dialog states and dialog acts.", "In order to create a knowledge base that includes five domains (hotel, attraction, restaurant, food, and movie), we collect publicly available information from the Web.",
"Specifically, for the hotel domain, we collect 1,133 entities and their related knowledge from two famous online accommodation reservation websites, Qunar (https://www.qunar.com/) and Ctrip (https://www.ctrip.com/).", "For the attraction domain, we collect 435 entities and their related knowledge from the famous travelling website Mafengwo (http://www.mafengwo.cn/).", "For the restaurant domain, we collect 122 entities and their related knowledge from the famous shopping platform Meituan (https://www.meituan.com/).", "For the food domain, we collect 1,971 entities and their related knowledge from the famous online encyclopedia Baidu Baike (https://baike.baidu.com/).", "Finally, for the movie domain, we collect 224 entities and their related knowledge from two famous social networking websites, Mtime (http://www.mtime.com/) and Douban (https://www.douban.com/).", "Dialog Template Construction: Based on the collected knowledge base, we generate dialog templates to guide crowdsourcing workers, in line with previous work (Budzianowski et al., 2018; Liu et al., 2020b).", "Here, each template consists of a sequence of dialog sub-scenarios, and each sub-scenario is defined by a dialog type, a dialog topic and a detailed description text.", "Table 1 shows an example dialog template.", "Specifically, in order to better imitate real scenarios, dialog templates should introduce different interaction behaviours.", "For example, a user may ask to reserve a ticket while conducting an in-depth knowledge-grounded dialog around a certain entity, e.g., an attraction.", "Furthermore, a user may interrupt a task-oriented dialog by chatting about some instant content in mind, and then continue the task-oriented dialog.", "In order to construct dialog templates, we first utilize heuristic rules to automatically enumerate candidate sub-scenario sequences that have natural topic transitions.", "Then, we utilize pre-defined templates to generate detailed descriptions for these sub-scenarios.", "Finally, to further ensure natural topic transitions, we manually filter out a few incoherent dialog templates, such as descriptions that contain inconsistent facts.", "In order to collect high-quality dialogs, we set a strict annotation procedure to guide workers to annotate dialogs based on the given templates.", "Specifically, the collection procedure includes three stages: (1) reliable crowdsourcing worker recruitment, (2) dialog generation, and (3) quality verification.", "In the worker recruitment stage, in order to select reliable workers, we recruit 100 candidates on a famous crowdsourcing platform (https://test.baidu.com/).", "Then, we ask each candidate to label 10 dialog sessions based on given templates.", "Lastly, we employ the top-40 candidates with the highest labelling quality to serve as crowdsourcing workers.", "In the dialog generation stage, we develop a labelling interface for crowdsourcing workers to converse synchronously.", "Then, we randomly pair up two crowdsourcing workers and assign each of them the role of the user or the wizard (bot).", "Lastly, the two crowdsourcing workers generate dialogs with the help of the aforementioned knowledge base and dialog templates.", "Figure 2: The unified dialog state schema (e.g., general: {_user_profile: {_mood, _recent_event}}; attraction: {_booked: [{_name, _date, _people}], _semi: {_type, _score, _price, _area, _duration}, _entities: [{_name, _description, _history, _alias, _location, _question_answer, _attitude}]}) and the unified dialog act schema (e.g., general: {_greet, _mood, _chat}; attraction: {_request, _inform, _recommend, _no-offer}).",
"User Side: For a given dialog template, in order to prevent information overload, we only provide one sub-scenario to the user at a time.", "During dialog collection, a user first reads through the detailed description to understand the provided sub-scenario.", "Then, based on the given sub-scenario, the user communicates with the wizard turn by turn.", "Finally, the user may request another sub-scenario if he or she believes the current sub-scenario has been accomplished.", "Specifically, in order to diversify the corpus, we encourage users to follow their own speaking style in communication.", "Wizard Side: A wizard is required to serve as a consultant, who is responsible for helping users figure out clear and specific goals.", "At each sub-scenario, the wizard can access the associated knowledge in the interface, which is extracted from the knowledge base automatically.", "When receiving an utterance from the user side, the wizard needs to respond appropriately.", "In the quality verification stage, we manually check the collected dialogs.", "Specifically, if a dialog is considered unqualified, we ask the two crowdsourcing workers to revise the dialog until it is qualified.", "After collecting the conversation data, we recruit crowdsourcing workers to annotate dialog states and dialog acts.", "Specifically, in order to seamlessly blend multiple types of dialogs for helping users figure out clear and specific goals, we first design a unified dialog state schema and dialog act schema for all types of dialogs, and then annotate the dialogs based on the schema.", "The unified dialog state consists of a list of domain-states, as shown in Figure 2.", "Specifically, we add a general domain to store user-profile-related states, e.g., user mood.", "The general domain is important, since a user's profile may have a significant impact on his or her goal.", "For other domains, we split domain-states into three parts: (1) _booked, for storing booked orders in this domain.", "Each booked order contains all the necessary information for finishing the order; (2) _semi, for storing the important but not necessary information for an order; (3) _entities, for storing all the mentioned entities and the mentioned specific pieces of information about these entities.", "Specifically, we store an _attitude slot in each mentioned entity to capture user interest directly.", "The values of the _attitude slot are of two types: positive and negative.", "Here, the _booked part corresponds mainly to the task-oriented dialog, the _entities part corresponds mainly to the knowledge-grounded dialog and the question answering dialog, and the _semi part corresponds to all three of the aforementioned dialog types (a hypothetical state instance is sketched after this sentence list).", "The unified dialog act schema consists of domains, intents, slots and values.", "Specifically, we add a general domain to store intents that are not directly related to user goals.", "For other domains, they usually contain four intents: _request, _inform, _recommend and _no-offer.", "Specifically, the classical knowledge selection in knowledge-grounded dialog is treated as an _inform action in this unified act schema.", "Based on the unified schema, we recruit 10 crowdsourcing workers to annotate the dialog states and dialog acts.", "Specifically, before formal annotation, each worker must pass a labelling test.", "Here, we first annotate 10 dialogs manually.",
manually.", "Then, we ask workers to annotate these dialogs.", "Lastly, a worker passes the test if his annotations are the same as our annotations.", "The overall collected data consists of 5,052 dialog sessions in total, with 3,000 sessions in the training set, and validation and test sets of 500 and 1,052 sessions, respectively.", "Overall statistics can be found in Table 2.", "We conduct human evaluations for data quality.", "Specifically, if a dialog follows the instruction in task templates and the utterances are fluent and grammatical, it will be rated 1, otherwise 0.", "Then we ask three workers to judge the quality of 200 randomly sampled dialogs.", "Finally we obtain an average score of 0.83 on this evaluation set.", "Recently, large scale pre-trained dialog models have achieved impressive performance, both in task-oriented dialog (Heck et al., 2020; Yang et al., 2021) and open-domain chitchat (Adiwardana et al., 2020; Roller et al., 2021; Bao et al., 2020).", "Meanwhile, the methodologies for different types of dialogs have gradually shifted to generative and end-to-end modelling.", "Following these trends, we propose a pre-trained mixed-type dialog model based on (Bao et al., 2020), denoted as PLATO-MT.", "Furthermore, we equip our model with a novel Prompt-based continual learning mechanism to strengthen the model ability by continually utilizing external existed different types of dialog corpora.", "Figure 3 shows an overview of the proposed PLATO-MT model.", "As shown in Figure 3 (1), the model is a multi-layer transformer-based neural network.", "Furthermore, the inputs and outputs of all dialog sub-tasks are formalized as simple text sequences.", "In order to effectively blend the abilities of mixed-type dialog in one model, we follow the Prompt + LM Fine-tuning strategy (Liu et al., 2021).", "Specifically, we design different Prompt texts as input for different dialog types.", "For example, for knowledge-based dialogs, the Prompt text of input is [Knowledge] context.", "Here [Knowl-edge] refers to knowledge sentences used for context.", "Similarly, the Prompt text of QA is [Ques-tion|Answer] context and the Prompt text of task-oriented dialog is [Domain|Slot|Value] context.", "Furthermore, we add type, task and domain embedding representation in embedding layers to further differentiate the characters of different dialog types.", "Meanwhile, we train the PLATO-MT model with continuous learning mechanism, as shown in Figure 3 (2).", "In particular, we first carry on prompting on existing dialog corpora, such as CrossWOZ (Zhu et al., 2020), RiSAWoz (Quan et al., 2020), BiToD (Lin et al., 2021), Kdconv (Zhou et al., 2020) and DurecDial (Liu et al., 2020b).", "Thus we strengthen our model ability by contin-1029 ually utilizing external existed different types of dialog corpora.", "Then we finetune the prompted model on our proposed dialog corpus DuClarifyDial.", "We break down the mixed-type dialog modelling task into three sub-tasks: dialog state tracking, dialog act planning, and dialog-act-to-text generation.", "Besides, in order to facilitate end-to-end dialog modelling, we define an end-to-end dialog-context-to-text generation sub-task.", "For each of the four sub-tasks, we report benchmark results on the following dialog models, which have achieved promising performance in the popular MultiWOZ dataset (Budzianowski et al., 2018).", "Specifically, we use the original codes released by the authors.", "UBAR (Yang et al., 2021) UBAR is a fully end-to-end task-oriented 
"Here, since DuClarifyDial is a Chinese dataset, we utilize a Chinese large-scale pre-trained model, ERNIE (Xiao et al., 2020), to initialize UBAR.", "MinTL (Lin et al., 2020) is a strong model that utilizes effective transfer learning to plug-and-play pre-trained models.", "Here, instead of utilizing BART (Lewis et al., 2020) as in the original paper, we utilize the multi-lingual version, mBART (Liu et al., 2020a), for initialization.", "PLATO (Bao et al., 2020) is the state-of-the-art Chinese pre-trained dialog model.", "We use the released parameters (https://github.com/PaddlePaddle/Knover/tree/luge-dialog/luge-dialog).", "PLATO-MT is the proposed unified mixed-type dialog model with the Prompt-based continual learning mechanism.", "Here, the Prompt-related parameters are randomly initialized.", "PLATO-MT w/o Prompt is the PLATO-MT model without prompting.", "We first fine-tune it on the same set of existing dialog corpora as PLATO-MT, and then fine-tune it on DuClarifyDial.", "For building a successful dialog system, robust dialog state tracking (DST) is considered the first step.", "It takes the previous dialog utterances and the most recent dialog state as input, and then outputs the current dialog state.", "To evaluate performance on dialog state tracking, we utilize both slot-level and dialog-level metrics.", "For the slot-level metric, we measure slot accuracy (Slot Acc.).", "Specifically, slot accuracy is measured by individually comparing each (domain, slot, value) triplet to its ground-truth label.", "For the dialog-level metrics, besides dialog type accuracy (Type Acc.) and dialog domain accuracy (Domain Acc.), we also measure joint goal accuracy (Joint Acc.) (Wu et al., 2019a).", "It compares the predicted dialog states to the ground truth at each turn, and the output is considered correct if and only if all the predicted values exactly match the ground truth.", "Table 3 shows the evaluation results.", "We can see that all the models achieve promising results in terms of Type Acc. and Domain Acc.", "This indicates the effectiveness of utilizing large-scale pre-trained models as the backbone.", "Furthermore, we notice that PLATO-MT outperforms all the baselines, especially in terms of Slot Acc. and Joint Acc.", "This demonstrates that PLATO-MT can track dialog states effectively.", "The dialog act planning (DAP) sub-task takes the dialog context, the current dialog state and retrieved coarse knowledge as input, and then outputs the system act.", "Specifically, for each dialog session, we first extract all the entities in it, and then retrieve all the related knowledge about these entities to serve as the retrieved coarse knowledge.", "To evaluate performance on dialog act planning, we measure dialog act accuracy (Act Acc.) and the BLEU-1/2 (Papineni et al., 2002) score.",
"Table 3 shows the evaluation results.", "We notice that PLATO-MT outperforms all the baselines, especially in terms of Act Acc.", "This demonstrates that PLATO-MT can plan appropriate dialog acts effectively.", "The dialog-act-to-text generation (RG) sub-task aims to transform a structured dialog act into a response.", "It takes the dialog context and a delexicalized dialog act as input, and then outputs a response.", "To evaluate performance on generation, we utilize both automatic and manual metrics.", "For automatic evaluation, we use several classical metrics, including BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), CIDEr (Vedantam et al., 2015) and Distinct (Dist.) (Li et al., 2016).", "For manual evaluation, we conduct an evaluation on 50 randomly sampled sessions at the level of both turns and dialogs.", "For turn-level human evaluation, the generated responses are evaluated by three annotators in terms of appropriateness (Appr.) and informativeness (Info.).", "For dialog-level human evaluation, we measure hallucination (Hallu.), which captures information accuracy in the generated responses, and dialog success (Suc.), which captures whether an agent helps users figure out clear goals.", "Specifically, if a user has not completed any order during a session, the success score is 0; otherwise, the success score equals the information accuracy of the session (a small scoring sketch follows this sentence list).", "Table 4 shows the evaluation results.", "We find PLATO-MT significantly outperforms all the baselines in terms of all the metrics except Dist-1/2 (sign test, p-value < 0.01).", "This indicates that PLATO-MT can generate dialogs of higher quality.", "The end-to-end dialog generation sub-task (E2E-DG) takes the dialog context as input, and then outputs a response utterance.", "Specifically, in the end-to-end setting, since the dialog domain and type information are not available at each turn, we do not use them as input information.", "Here, we consider the same set of evaluation settings as in Section 5.3.", "Again, PLATO-MT significantly outperforms all the baselines in terms of all the metrics except Dist-1/2 (sign test, p-value < 0.01).", "Specifically, in terms of Hallu. and Suc. in the manual evaluation, PLATO-MT outperforms the other models by a large margin.", "This indicates that PLATO-MT is much more competent in helping users learn correct goal-related knowledge, which is essential for helping users figure out clear and specific goals.", "In order to evaluate the contribution of the proposed Prompt-based continual learning mechanism, we remove the mechanism from PLATO-MT, denoting the result as PLATO-MT w/o Prompt.", "Here, we first fine-tune PLATO on the same set of existing dialog corpora as PLATO-MT, and then fine-tune it on DuClarifyDial.", "For evaluation, we consider the same set of settings as in Section 5.3.", "As shown in Table 3, Table 4 and Table 5, its performance drops in terms of most metrics in all four sub-tasks.", "Specifically, in the manual evaluation in Table 5, we notice a sharp performance degradation in terms of Hallu. and Suc.",
"This demonstrates that the Prompt-based mechanism is essential for effectively utilizing existing dialog corpora, enabling PLATO-MT to continually strengthen its ability on any specific dialog type.", "Furthermore, we find that, in terms of most metrics, the mechanism gains more in the end-to-end conversation generation sub-task than in the other three sub-tasks.", "This is because there is no available annotated information in the end-to-end conversation generation sub-task, which makes it a more difficult task.", "Thus, the effect of the Prompt-based continual learning mechanism appears relatively more significant.", "In this paper, we first identify the challenge that users may struggle to figure out clear and specific goals in many real scenarios.", "Then, we take a step forward by collecting a new human-to-human mixed-type dialog corpus, which contains 5k dialog sessions and 168k utterances for 4 dialog types and 5 domains.", "Furthermore, we set up benchmarks based on the corpus.", "Moreover, we propose a mixed-type dialog generation model with a novel Prompt-based continual learning mechanism.", "Finally, experimental results demonstrate the effectiveness of the mechanism.", "We make sure that DuClarifyDial has been collected in a manner that is consistent with the terms of use of any sources and the intellectual property and privacy rights of the original authors of the texts.", "Crowd workers were treated fairly.", "This includes, but is not limited to, compensating them fairly and ensuring that they were able to give informed consent, which includes, but is not limited to, ensuring that they were voluntary participants who were aware of any risks of harm associated with their participation.", "Please see Section 3 for more details on the characteristics and collection process of DuClarifyDial." ]
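The unified dialog state schema (Figure 2) maps naturally onto a nested dictionary. A hypothetical instance for the attraction domain; the structure follows the schema, while every concrete value below is invented for illustration.

```python
# A hypothetical DuClarifyDial-style unified dialog state. The layout
# (general/_user_profile plus per-domain _booked/_semi/_entities) follows
# the Figure 2 schema; all slot values here are invented examples.
dialog_state = {
    "general": {
        "_user_profile": {"_mood": "anxious", "_recent_event": "work pressure"},
    },
    "attraction": {
        "_booked": [],                                    # no finished orders yet
        "_semi": {"_type": "church", "_area": "Wangfujing"},
        "_entities": [
            {
                "_name": "Wangfujing Catholic Church",
                "_attitude": "positive",                  # positive | negative
            }
        ],
    },
}
```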
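The prompt texts described for PLATO-MT ([Knowledge], [Question|Answer] and [Domain|Slot|Value] prefixes per dialog type) suggest a simple input-assembly step. A hypothetical sketch of how such inputs might be serialized; the function name, field names and exact separator syntax are assumptions based on the description, not the released implementation.

```python
def build_prompted_input(dialog_type, context, extras=None):
    """Prefix the dialog context with a dialog-type-specific prompt text."""
    extras = extras or {}
    if dialog_type == "knowledge":
        prefix = "[" + " ".join(extras["knowledge"]) + "]"
    elif dialog_type == "qa":
        prefix = "[" + extras["question"] + "|" + extras["answer"] + "]"
    elif dialog_type == "task":
        prefix = "[" + ";".join("|".join(t) for t in extras["triples"]) + "]"
    else:  # social chitchat carries no extra grounding
        prefix = ""
    return (prefix + " " + context).strip()

print(build_prompted_input(
    "task",
    "I want to book a restaurant near the church.",
    {"triples": [("restaurant", "area", "Wangfujing")]},
))
```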
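The dialog-level success metric is also simple enough to pin down in code; a minimal sketch of the rule as stated (zero without a completed order, otherwise the session's information accuracy, assumed to lie in [0, 1]).

```python
def success_score(n_completed_orders, information_accuracy):
    """Session-level dialog success: 0 if no order was completed during
    the session, otherwise the session's information accuracy."""
    return information_accuracy if n_completed_orders > 0 else 0.0

assert success_score(0, 0.9) == 0.0   # no completed order -> failure
assert success_score(2, 0.9) == 0.9   # success weighted by info accuracy
```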
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "objective", "method", "abstain", "abstain", "objective", "method", "method", "method", "abstain", "objective", "method", "objective", "objective", "method", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain" ]
[ "Lexical ambiguity poses one of the greatest challenges in the field of Machine Translation.", "Over the last few decades, multiple efforts have been undertaken to investigate incorrect translations caused by the polysemous nature of words.", "Within this body of research, some studies have posited that models pick up semantic biases existing in the training data, thus producing translation errors.", "In this paper, we present DIBIMT, the first entirely manually-curated evaluation benchmark which enables an extensive study of semantic biases in Machine Translation of nominal and verbal words in five different language combinations, namely, English and one or other of the following languages: Chinese, German, Italian, Russian and Spanish.", "Furthermore, we test state-of-the-art Machine Translation systems, both commercial and non-commercial ones, against our new test bed and provide a thorough statistical and linguistic analysis of the results.", "We release DIBIMT at https:// nlp.uniroma1.it/dibimt as a closed benchmark with a public leaderboard.", "The polysemous nature of words poses a longstanding challenge in a wide range of Natural Language Processing (NLP) tasks such as Word Sense Disambiguation (Navigli, 2009; Bevilacqua et al., 2021) (WSD), Information Retrieval (Krovetz and Croft, 1992) (IR) and Machine Translation (Emelin et al., 2020) (MT).", "In MT, some research works have addressed the ability of systems to disambiguate polysemous words.", "For instance, given the sentence He poured a shot of whiskey , the polysemous target word shot unequivocally means a small quantity and therefore a possible translation into Italian could be: Vers un goccio di whiskey .", "However, some MT systems propose the following translation: Vers uno sparo Equal contribution.", "di whiskey in which the noun sparo means gunshot .", "This is one of many examples that seem to encourage a deeper performance analysis in scenarios in which MT systems are required to deal with polysemous words and, specifically, with infrequent meanings of polysemous words.", "Although state-of-the-art MT systems, both commercial and non-commercial ones, achieve impressive BLEU scores on standard benchmarks, in our work we demonstrate that they still present significant limitations when dealing with infrequent word senses, which standard metrics fail to recognize.", "In the last few decades, attempts have been made to investigate the aforementioned phenomena.", "In fact, recent studies have observed a direct correlation between semantic biases in the training data and semantic errors in translation.", "However, their findings are limited by the following shortcomings:", "i) they are not based on entirely manually-curated benchmarks;", "ii) they rely heavily on automatically-generated resources to determine the correctness of a translation; and", "iii) they do not cover multiple language combinations.", "In this work, we address the aforementioned drawbacks and present DIBIMT, to the best of our knowledge the first fully manually-curated evaluation benchmark aimed at investigating the impact of semantic biases in MT in five language combinations, covering both nouns and verbs.", "This benchmark allows the community not only to better explore the described phenomena, but also to devise innovative MT systems which better deal with lexical ambiguity.", "Specifically, the contributions of the present work are threefold: We present DIBIMT, a novel gold-quality test bed for semantic biases in MT that goes beyond a simple 
accuracy score, covering five language combinations, namely English and one or other of the following languages: Chinese, German, Italian, Russian and Spanish;", "[Figure 1: Example of an annotated dataset item: good and bad translation candidates of shot in He poured a shot of whiskey across the five languages, e.g., Italian goccio, bicchierino, iniezione, sparo; Spanish trago, chupito, pistolero, tiro; German Schlückchen, Schuss, Injektion, Schlag.]", "We define four novel metrics that better clarify the semantic biases within MT models; We provide a thorough statistical and linguistic analysis in which we compare 7 state-of-the-art MT systems, including both commercial and non-commercial ones, against our new benchmark.", "Furthermore, we extensively discuss the results.", "To enable further research, we release DIBIMT as a closed benchmark with a public leaderboard at https://nlp.uniroma1.it/dibimt.", "Over the course of the last few decades, several approaches to the evaluation of lexical choice in MT have been proposed.", "To this end, cross-lingual benchmarks were created in which systems were required to provide the translation or a substitute for a given target word in context in a target language (Vickrey et al., 2005; Mihalcea et al., 2010; Lefever and Hoste, 2013).", "More recently, Gonzales et al. (2017) put forward ContraWSD, a dataset which includes 7,200 instances of lexical ambiguity for German–English, and 6,700 for German–French.", "This dataset pairs every reference translation with a set of contrastive examples which contain incorrect translations of a polysemous target word.", "For each instance, a system is considered correct if it scores the reference translation higher than the contrastive examples.", "Based on a denoised version of the ContraWSD dataset and focusing on the language combination German–English, Gonzales et al. (2018) present the Word Sense Disambiguation Test Suite which, unlike ContraWSD, evaluates MT output directly rather than by scoring translations.", "The suite consists of a collection of 3,249 sentence pairs in which the German source sentences contain one ambiguous target word.", "As target words, the authors considered only words in German whose translation into English does not cover multiple senses, thus making the evaluation more straightforward.", "Despite their effectiveness, such benchmarks do not allow systems to be tested in multiple language combinations, and only cover a very limited number of words and senses.", "To address these limitations, Raganato et al. (2019) proposed MuCoW, an automatically-created test suite covering 16 language pairs, with more than 200,000 sentence pairs derived from word-aligned parallel corpora.", "Other research studies investigated the disambiguation capabilities of MT systems by exploring their internal representations (Marvin and Koehn, 2018; Michel et al., 2019), or improving them via context-aware word embeddings (Liu et al., 2018).", "More recently, Emelin et al.
(2020) introduced a statistical method for the identification of disambiguation errors in neural MT (NMT) and demonstrated that models capture data biases within the training corpora, which leads these models to produce incorrect translations.", "Although the authors expected their approach to be transferable to other language combinations, they only focused on German–English.", "Based on the findings and open research questions raised in the aforementioned works, the present paper aims at investigating not only the presence, but also, most importantly, the nature and properties of semantic biases in MT in multiple language combinations, via a novel entirely manually-curated benchmark called DIBIMT and a thorough performance analysis.", "The DIBIMT benchmark focuses on detecting Word Sense Disambiguation biases in NMT, i.e., biases of certain words towards some of their more frequent meanings.", "The creation of such a dataset requires", "i) a set of unambiguous and grammatically-correct sentences containing a polysemous target word;", "ii) a set of correct and incorrect translations of each target word into the languages to be covered.", "Figure 1 depicts an example of a dataset item.", "BabelNet. Similarly to previous studies, we rely on BabelNet (Navigli et al., 2021), a large multilingual encyclopedic dictionary whose nodes are concepts represented by synsets, i.e., sets of synonyms, containing lexicalizations in multiple languages and coming from various heterogeneous resources, including, inter alia, WordNet (Miller et al., 1990) and Wiktionary. 2", "Let us define B as an abstraction used to query the subset of synsets in BabelNet that contain at least one sense 3 from WordNet and one or more senses in languages other than English, 4 while only considering senses coming from high-quality sources, i.e., language-specific wordnets.", "Formal Notation. Given an arbitrary synset σ, we define Λ_L(σ) as the set of lexicalizations of σ in language L contained within B.", "As an example, let us consider the synset σ corresponding to the drink meaning of the word shot.", "σ contains lexicalizations in different languages, including: Shot_DE, shot_EN, nip_EN, chupito_ES, trago_ES, bicchierino_IT and goccio_IT.", "Hence, Λ_EN(σ) = {shot, nip}, while Λ_ES(σ) = {chupito, trago}.", "Furthermore, let P = (l, pos) represent a (lemma, part of speech) pair, where pos is the part of speech of the lemma l.", "We denote Σ_L(P) = {σ_1, ..., σ_n} as the set of synsets which contain P as a lexicalization in language L according to B.", "Additionally, we define δ_L(P) = |Σ_L(P)| as the polysemy degree, i.e., the number of senses, of P in language L.", "For example, given P = shot_NOUN, Σ_EN(P) would be the set of synsets associated with the nominal term shot (e.g., the act of firing, a photograph and a drink, among others).",
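To make the notation above concrete, here is a minimal, hypothetical Python sketch of Λ, Σ and δ over a toy BabelNet-like index. The synset IDs and entries are invented for illustration, and the part of speech is folded into the lemma for brevity; the real paper queries BabelNet through B:

```python
# Toy stand-in for B: synset id -> {language -> set of lexicalizations}.
# All ids and entries below are invented for illustration only.
TOY_B = {
    "bn:shot_drink":   {"EN": {"shot", "nip"}, "ES": {"chupito", "trago"},
                        "IT": {"goccio", "bicchierino"}, "DE": {"Shot"}},
    "bn:shot_gunfire": {"EN": {"shot"}, "ES": {"tiro"},
                        "IT": {"sparo"}, "DE": {"Schuss"}},
    "bn:shot_photo":   {"EN": {"shot"}, "IT": {"scatto"}},
}

def lam(synset: str, lang: str) -> set:
    """Lambda_L(sigma): lexicalizations of a synset in language L."""
    return TOY_B.get(synset, {}).get(lang, set())

def sigma_set(lemma: str, lang: str) -> set:
    """Sigma_L(P): synsets containing the lemma as a lexicalization in L."""
    return {sid for sid, langs in TOY_B.items() if lemma in langs.get(lang, set())}

def delta(lemma: str, lang: str) -> int:
    """delta_L(P) = |Sigma_L(P)|: the polysemy degree of P in L."""
    return len(sigma_set(lemma, lang))

assert lam("bn:shot_drink", "EN") == {"shot", "nip"}
assert delta("shot", "EN") == 3  # drink, gunfire, photo in the toy index
```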
"In this section, we detail the creation process of our dataset, i.e., the selection of our sentences as well as the construction and filtering of our items.", "Item Structure and Notation. Before we proceed, let us formally state how each item in the dataset is structured: given a source sentence s = [w_1, ..., w_n] as a sequence of words, and given a target word 5 w_i in s tagged with some synset σ, we consider X = (s, w_i, σ) as an initial item of the dataset, i.e., an instance composed of an English sentence s, a target word w_i and its associated synset σ; this instance can be annotated for candidate translations of w_i in some language L.", "2 https://www.wiktionary.org/", "3 A sense is a lexicalization of a specific synset in some language.", "Henceforth, we will refer to lexicalizations and senses interchangeably.", "4 Specifically, we consider synsets that have lexicalizations in English, Italian, German, Russian, Spanish and Chinese.", "5 For simplicity, we use the term word here, but our work focuses on multi-word expressions as well (both in source and target sentences).", "We also denote X_P as the (lemma, POS) pair of w_i.", "We collect our initial items from two main sources: WordNet and Wiktionary. 6", "Specifically, we use the examples from WordNet Tagged Glosses (Langone et al., 2004), where each sentence's target word was manually associated with its synset, 7 thereby readily providing the first batch of initial items.", "As for Wiktionary, instead, we start by obtaining every usage example s and its associated definition d (filtering out archaic usages and slang); then, we automatically extract the target words from the corresponding example. 8", "Now, the only step that remains in order to construct an initial item is to associate a synset σ with the word w_i used in the example s.", "We perform this association in two phases: first, we try to map the definition d related to the example s to a BabelNet synset by relying on the automatic mappings available in BabelNet 5 between WordNet and Wiktionary, discarding examples for which this association cannot be found; second, we manually validate and correct these successful associations to ensure that our initial items are of high quality.", "We apply a filtering step to the original sentences in order to select examples that are likely to be more challenging for the models to translate (a sketch of the first filter follows below):", "i) we discard every initial item X for which δ_EN(X_P) < 3, i.e., we retain only sentences whose associated (lemma, POS) pair has a polysemy degree of at least 3 in B_EN;", "ii) we retain at most one sentence per sense per source; 9", "iii) differently from previous works, which impose a strict requirement that synsets be monosemous in the target language, we retain sentences satisfying the following requirement.", "Let us consider the nominal senses of the word bank: among them, one represents a specific aviation maneuver.", "In Italian, this synset includes one lexicalization, avvitamento; although this is not monosemous in Italian (e.g., avvitamento might also refer to a screw thread), neither of the other possible senses of avvitamento has bank as an English lexicalization, which, for Italian, satisfies our third condition.", "If the same holds true for all languages, the synset passes the test and thus the sentence is retained.", "6 We use the dump of September 2021.", "7 Which we convert from WordNet to BabelNet.", "8 In Wiktionary, target words are marked in bold inside the example sentence.", "9 The reasoning for this choice is twofold: on the one hand, oftentimes Wiktionary has multiple examples for the same synset that differ in only one or two words, thus we skip them to avoid repetitions; on the other hand, we obtain an increase in sense coverage without worsening the annotator load.",
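Reusing the toy functions from the previous sketch, the polysemy-degree filter from point i) can be expressed as a hypothetical one-liner (the threshold and data are illustrative, not the authors' code):

```python
def keep_item(lemma: str, min_degree: int = 3) -> bool:
    """Filter i): retain an initial item only if its (lemma, POS) pair
    has polysemy degree >= 3 in English, i.e. delta_EN(X_P) >= 3."""
    return delta(lemma, "EN") >= min_degree

# With the toy index above, items targeting "shot" survive the filter:
assert keep_item("shot")
```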
"Once the set of initial items is ready, we can proceed with the annotation phase, which will produce our annotated items.", "Specifically, given a language L and an initial item X = (s, w_i, σ), we associate a set of good (G_L) and bad (B_L) translation candidates with X, which represent words that, respectively, we do, and do not, expect to see in a translation of sentence s in language L.", "Finally, we refer to X_L as an annotated item, i.e., the tuple (s, w_i, σ, G_L, B_L).", "Before moving forward with the annotation phase, we pre-populate the sets of good (G_L) and bad (B_L) lexicalizations for a given initial item X in language L by extracting them from B.", "Formally, we assign G_L = Λ_L(σ), i.e., the set of lemmas in language L of the BabelNet synset σ associated with X; furthermore, we set B_L = ⋃_{σ' ∈ Σ_EN(X_P) \ {σ}} Λ_L(σ'), i.e., the set of all lemmas in language L of the BabelNet synsets associated with any σ' excluding σ.", "With this step, we produce an automatically populated version of our annotated items.", "We instruct annotators to update the sets of good (G_L) and bad (B_L) lexicalizations of w_i in s such that each lexicalization contained in the respective set can be considered a good or a bad translation equivalent for the target word in the provided sentential context. 10", "We also instruct annotators to discard sentences in which", "i) the target word w_i is an idiomatic expression or a proper noun, and", "ii) the semantic context is not sufficient to properly disambiguate w_i.", "Given the expertise required to carry out this task, we rely on three highly qualified translators: one for Italian, German and Russian; one for Spanish; and one for Chinese.", "Our annotators satisfy the following requirements: they are native speakers or hold C2-level certifications and work as professional translators in the given language combinations.", "10 Any lexicalization of σ in L that is removed from G_L is automatically placed in B_L.", "The full instructions provided to the annotators can be found in Appendix C.", "Our annotators analyzed around 800 sentences, discarding 200 of them, finally obtaining approximately 600 annotated items in 5 languages.", "Due to a coverage issue of the Russian language in BabelNet, we retain only sentences tagged with nominal or verbal synsets.",
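Returning to the pre-population step above, the assignment of G_L and B_L can be rendered with the toy functions from the earlier sketches (a hypothetical illustration of G_L = Λ_L(σ) and of the union over the other synsets of the lemma, not the authors' actual implementation):

```python
def prepopulate(lemma: str, synset: str, target_lang: str):
    """Pre-populate the good (G_L) and bad (B_L) translation candidates."""
    good = lam(synset, target_lang)                  # G_L = Lambda_L(sigma)
    bad = set()
    for other in sigma_set(lemma, "EN") - {synset}:  # all other EN senses
        bad |= lam(other, target_lang)               # their L lexicalizations
    return good, bad - good

g, b = prepopulate("shot", "bn:shot_drink", "IT")
# g == {"goccio", "bicchierino"}; b contains e.g. "sparo" (gunfire sense)
```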
"Dataset statistics are reported in Table 1. As expected, we note that the lexicalizations found in B have been substantially refined by our annotators in all languages, as reported in Table 2. Indeed, across languages, on average, 54% of the good lexicalizations have been added by our annotators, while 42% of the pre-existing lexicalizations have been removed.", "More importantly, given a language and two sentences containing words referring to the same synset, on average only in 55% of cases do they also share those words' good lexicalizations, confirming that the assumption that all synonyms of a word are valid replacements can lead to incorrect results.", "These statistics lead us to a straightforward, but important, conclusion: only in a limited number of cases is a lexicalization belonging to a given synset to be considered as a suitable translation equivalent for the provided target word and its context.", "Examined jointly, these metrics suggest that relying on synset lexicalizations from BabelNet alone is prone to producing errors, either due to BabelNet's intrinsic noise, or due to the different granularity of synsets and contextualized words.", "Sentences' Properties. As we stated in Section 3.2.1, the sentences we annotate are all usage examples of specific concepts obtained from WordNet or Wiktionary.", "Such examples are typically short main clauses with no subordinates, featuring on average 9 words (around 50 characters per sentence).", "All selected sentences include a semantic context which allows the meaning of the target word to be properly identified.", "DIBIMT's analysis procedure is fairly simple: given an annotated item X_L = (s, w_i, σ, G_L, B_L) and a translation model M, we compute t_L = M_L(s), i.e., the translation of s into language L according to M.", "Then, we use Stanza (Qi et al., 2020) to perform tokenization, part-of-speech tagging and lemmatization of t_L and, finally, we check if there is any match 11 between the lemmas of the translated sentence and those contained in G_L or B_L.", "In case there is no match, we mark the translation as a MISS; otherwise, we mark it as GOOD or BAD depending on which set matched the lemma.", "This produces an analyzed item, which for simplicity we denote as X_L^M = (X_L, t_L, R, ℓ), where R is one of GOOD, BAD or MISS, and ℓ represents the matched lemma in case there was a match (GOOD or BAD), and is empty otherwise.", "11 A more detailed description of the analysis procedure is provided in Appendix A.",
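A minimal sketch of this matching step follows. It uses the real Stanza API for lemmatization, but translate() is an invented stub standing in for t_L = M_L(s), and the good/bad sets are the toy candidates from above; it also omits the multi-word and wildcard handling described in Appendix A:

```python
import stanza

# Hypothetical stub for t_L = M_L(s): a real run would call one of the
# NMT systems under evaluation (OPUS, MBart50, M2M100, ...) here.
def translate(sentence: str, lang: str) -> str:
    raise NotImplementedError

# One pipeline per target language, reused across items.
PIPELINES = {lang: stanza.Pipeline(lang=lang, processors="tokenize,pos,lemma")
             for lang in ("it", "de", "es", "ru", "zh")}

def analyze(sentence: str, lang: str, good: set, bad: set):
    """Mark a translation as GOOD, BAD or MISS via lemma matching."""
    t = translate(sentence, lang)
    doc = PIPELINES[lang](t)
    lemmas = {w.lemma for sent in doc.sentences for w in sent.words}
    if lemmas & good:
        return "GOOD", next(iter(lemmas & good))
    if lemmas & bad:
        return "BAD", next(iter(lemmas & bad))
    return "MISS", None
```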
"We now:", "i) use DIBIMT to carry out an evaluation of 7 different machine translation systems;", "ii) report the obtained results, including a thorough statistical and linguistic evaluation;", "iii) extensively discuss our findings, providing multiple measures of semantic bias; and", "iv) offer some insights into the causes of such biases.", "In Appendix D we include a model-specific breakdown of the various scores and metrics reported throughout this section.", "We test a wide range of models, both commercial and non-commercial ones, and report their performances on DIBIMT's evaluation metrics:", "DeepL Translator, 12 a state-of-the-art commercial NMT system.", "Google Translate, 13 arguably the most popular commercial NMT system.", "OPUS (Tiedemann and Thottingal, 2020), the smallest state-of-the-art NMT model available to date, a base Transformer (each model has approximately 74M parameters) trained on a single language pair on large amounts of data.", "MBart50 (Tang et al., 2021), multilingual BART fine-tuned on the translation task for 50 languages (610M parameters).", "We refer to MBart50 as the English-to-many model, and to MBart50 MTM as the many-to-many model.", "M2M100 (Fan et al., 2021), a multilingual model able to translate from/to 100 languages.", "We test both versions of the model, the 418M parameter one (which we dub M2M100) and the 1.2B parameter one (dubbed M2M100 LG).", "Figure 2 reports general results of the analysis per (model, language) pair.", "Given the high percentage of analyzed items classified as MISS, we asked our annotators to perform an inspection on a random sample of 70 items per language in order to unearth the reasons, with varying results.", "We identified multiple causes, namely:", "i) word omission in the translation (around 19% of items, mostly in Chinese and Italian);", "ii) issues with Stanza's tokenization (around 11%, mostly Chinese and Russian) and lemmatization (around 12%, mostly Italian and German);", "iii) words translated as themselves (approximately 5%, often in multilingual neural models);", "iv) translations which have nothing to do with the source text 14 (around 23%); and", "v) missing terms from either B_L (around 18%) or G_L (around 11%).", "12 https://deepl.com/", "13 We used the =GOOGLETRANSLATE function available in Google Sheets.", "14 An example is the sentence he is a crack shot, where the word shot is translated by MBart50 into Italian as schianto, which can be interpreted in this case as someone very good looking.", "We intend to thoroughly investigate and tackle these issues and translation phenomena as future work.", "Table 3 reports accuracy on non-MISS analyzed items, i.e., #GOOD / (#GOOD + #BAD).", "With the sole exception of DeepL, which greatly outperforms every other competitor, models achieve extremely low scores, in the range of 20%-33%.", "Surprisingly, Google Translate performs worst across languages.", "In addition to accuracy, DIBIMT analyzes the semantic biases of a translation model via four novel metrics, which we define in detail in what follows.", "Sense Frequency Index Influence (SFII). We study the sensitivity of models to disambiguating senses with respect to their frequency.", "To do this, we define ρ_P(σ) as the index of synset σ in Σ_EN(P), ordered according to WordNet's sense frequency, as computed from SemCor.", "That is, index k means that synset σ is the k-th most frequent meaning for P.", "In Figure 3(a), we plot the number and percentage of errors made on average by the models, grouping items by ρ_{X_P}(σ), where X = (s, w_i, σ) is a non-MISS analyzed item.", "As expected, the less frequent a meaning for a given word is, the harder it is for the model to correctly disambiguate it.", "Finally, given a (model, language) pair, we define the Sense Frequency Index
Influence (SFII) as the average percentage of errors, for each group, that we detected.", "Values are reported in Table 4. Interestingly, DeepL proves once again to be the best, obtaining a score of 51%, far below the average 80% achieved by the other models, with most non-commercial models performing above 80%.", "Sense Polysemy Degree Importance (SPDI). Similarly to SFII, we also study the extent to which the polysemy degree, i.e., how many senses a given word can have, impacts the models' disambiguation capabilities.", "This experiment mirrors SFII, but groups items by their lemma's polysemy degree δ_EN(X_P) instead of the sense frequency index ρ.", "Figure 3(b) reports the results on all items.", "Unsurprisingly, similarly to the frequency index, we observe that higher polysemy leads to more errors, confirming that models still struggle with very polysemous words.", "Similarly to SFII, SPDI is defined as the average percentage of errors at varying polysemy degrees, and its values are reported in Table 4: once again, DeepL outperforms all other systems by a large margin, confirming that it is the least biased across the board.", "Most and More Frequent Senses. To further corroborate our findings about semantic biases, we study how often models predict senses that are more frequent than the target one.", "Given a BAD analyzed item X_L^M, we denote by σ̂ the synset associated with the wrongly translated lemma ℓ. 15", "15 In the case in which there are multiple possible synsets, we take the most frequent according to ρ_{X_P}, as we need to rely on the assumption that the surface form represents the intrinsic disambiguation performed by the NMT system.", "Then, we check the frequency of σ̂ and σ with respect to X_P: if ρ_{X_P}(σ̂) < ρ_{X_P}(σ), then the system's disambiguation steered towards a sense that is more frequent than the target one, which we dub More Frequent Sense (MFS+); additionally, if ρ_{X_P}(σ̂) = 1, then the model disambiguated the source word w_i to the Most Frequent Sense (MFS) of the associated lemma X_P.", "The results of both these analyses are reported in Table 5."

Table 5: Frequency Analysis (each cell: MFS / MFS+, in %). MFS represents the average percentage of times the model mistakenly translates the target word into a lexicalization belonging to the Most Frequent Sense associated with P; MFS+ the percentage of errors towards any more frequent sense.

|      | DeepL         | Google        | M2M100        | M2M100 LG     | MBart50       | MBart50 MTM   | OPUS          | Mean          |
| DE   | 53.68 / 84.21 | 56.76 / 86.82 | 61.28 / 87.23 | 59.13 / 87.30 | 58.89 / 89.72 | 55.82 / 89.56 | 56.98 / 87.92 | 57.51 / 87.54 |
| ES   | 59.89 / 87.91 | 61.96 / 89.05 | 61.81 / 89.37 | 61.78 / 88.03 | 60.17 / 91.10 | 63.09 / 91.85 | 64.47 / 91.21 | 61.88 / 89.79 |
| IT   | 68.08 / 86.38 | 61.96 / 87.23 | 60.75 / 86.79 | 62.82 / 88.81 | 62.90 / 87.50 | 68.97 / 91.81 | 64.48 / 89.66 | 64.28 / 88.31 |
| RU   | 50.00 / 83.33 | 48.12 / 83.28 | 47.87 / 83.41 | 45.25 / 84.16 | 47.39 / 87.20 | 44.91 / 87.96 | 48.40 / 84.04 | 47.42 / 84.77 |
| ZH   | 49.07 / 88.89 | 56.05 / 88.20 | 59.06 / 91.34 | 59.35 / 92.45 | 50.66 / 89.87 | 54.17 / 90.28 | 51.71 / 87.45 | 54.30 / 89.78 |
| Mean | 56.14 / 86.15 | 56.97 / 86.92 | 58.15 / 87.63 | 57.66 / 88.15 | 56.00 / 89.08 | 57.39 / 90.29 | 57.21 / 88.06 | 57.08 / 88.04 |
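The MFS/MFS+ classification of a BAD item reduces to comparing sense-frequency indices; a hypothetical sketch, where rank_index stands in for ρ (index 1 = most frequent WordNet sense, computed from SemCor) and is assumed to be supplied by the caller:

```python
def classify_error(rank_index, lemma, gold_synset, predicted_synset):
    """Tag a BAD item as MFS / MFS+ by comparing sense-frequency ranks.

    rank_index(lemma, synset) -> k, where k = 1 means the synset is the
    most frequent sense of the lemma."""
    k_gold = rank_index(lemma, gold_synset)
    k_pred = rank_index(lemma, predicted_synset)
    is_mfs_plus = k_pred < k_gold   # steered towards a more frequent sense
    is_mfs = k_pred == 1            # steered towards the most frequent sense
    return is_mfs, is_mfs_plus
```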
"We can observe a few interesting results: first, on average, almost 60% of the time a mistake reflects the Most Frequent Sense of the target word (second-last column); second, almost 90% of the errors concern translations towards more frequent senses of the target word (last column).", "Importantly, these results are consistent across systems, whether commercial or not.", "Although it might seem a straightforward finding, NMT models are still strongly biased towards senses that are more likely to be encountered during training; while this could be related to the pattern-matching nature of neural networks, it also depends heavily on the data the model was trained upon, and this needs to be further investigated in future research.", "The existing literature in WSD points to the fact that verbs are generally harder than nouns, mostly due to their highly polysemous nature (Barba et al., 2021b).", "We try to analyze whether MT models are affected by the same phenomenon: in Table 6, we report the average results obtained by running DIBIMT on all its sentences (column ALL) and on the subset of sentences whose target word was either a NOUN or a VERB.", "In general, we observe an average drop of accuracy of 4 points, as well as an astounding difference of 18 percentage points in MISS handling, which we will investigate more thoroughly in future work.", "Interestingly, MT models are much more inclined to translate nouns into their most frequent sense; we attribute this difference to the generally higher polysemy of verbs compared to nouns, which increases the size of the space of possible translations for a given verb, thus decreasing the chance that it gets translated into the MFS.", "Aside from this, we draw the same conclusion as that drawn by previous works in the field of WSD, with nouns being generally easier to translate than verbs.", "We try to assess to what extent, in a multilingual encoder-decoder architecture, the encoder determines the implicit disambiguation of the source sentence before generating the translation.", "For instance, we ask ourselves this question: given an ambiguous word w_i in the source sentence s, how often does the model translate it into a lexicalization representing the same sense, if prompted to translate s into different languages?", "Intuitively, if the encoder were the sole contributor to the implicit disambiguation performed by the model, we would expect the meaning to always be the same, regardless of the target language.", "Specifically, given a model M, two languages L_1 and L_2, 16 and an initial item X, we take M's analyzed items X_{L_1}^M and X_{L_2}^M, 17 and check if the translations in L_1 and L_2 have a synset in common, i.e., |Σ_{L_1}(ℓ_{L_1}) ∩ Σ_{L_2}(ℓ_{L_2})| > 0.", "The results of this experiment are reported in Figure 4."
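Reusing the toy index from the earlier sketches, this cross-language agreement check is a set intersection over the synsets of the two matched lemmas (a hypothetical rendering of the |Σ_{L_1}(ℓ_{L_1}) ∩ Σ_{L_2}(ℓ_{L_2})| > 0 condition):

```python
def same_sense(lemma_l1: str, lang1: str, lemma_l2: str, lang2: str) -> bool:
    """True if the two translated lemmas can express a common synset."""
    return bool(sigma_set(lemma_l1, lang1) & sigma_set(lemma_l2, lang2))

# "goccio" (IT) and "chupito" (ES) share the drink synset in the toy index:
assert same_sense("goccio", "IT", "chupito", "ES")
```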
"We observe that, on average, this phenomenon occurs around 70% of the time.", "Hence, it is safe to assume that, while the encoder certainly plays an important role in the disambiguation of the input sentence, the decoder is also contributing significantly.", "Another interesting observation is that the alphabet of the target language does not seem to have any influence, as language pairs involving Russian display scores that are very similar to those of the other three European languages.", "16 [...] languages. We also disregard DeepL and Google Translate as their architecture is proprietary.", "17 We skip item X if either X_{L_1}^M or X_{L_2}^M is a MISS.", "We attribute lower scores in Chinese to coverage issues in BabelNet, which would hinder a correct fulfillment of the condition defined for this experiment.", "Given the low performances achieved by MT models, we test a WSD system on the English sentences within DIBIMT, both to assess the toughness of our benchmark and to establish an additional baseline.", "We use ESCHER 18 (Barba et al., 2021a), a state-of-the-art model on English WSD.", "18 The publicly available version, trained on SemCor data only.", "Interestingly, ESCHER achieves an overall accuracy score of 66.33, almost 15 points lower than its results on the standard WSD benchmark (80.7 on ALL, Raganato et al., 2017), therefore confirming the challenging nature of DIBIMT.", "Furthermore, in order to estimate the difference in disambiguation capability between NMT models and a dedicated WSD system, we compute ESCHER's performances on the set of English sentences of non-MISS analyzed items for each (model, language) pair.", "We report these results in Table 7, whose accuracy scores can be directly compared to those in Table 3.", "As expected, the average MT accuracy is significantly lower than ESCHER's, with the sole exception of DeepL, which manages to surpass it on German and Russian.", "These results clearly demonstrate that current NMT models are still not on par with dedicated WSD systems, and thus that they might benefit from the inclusion of such WSD systems within the NMT ecosystem.", "As a final experiment, we assess whether the semantic biases are caused by search errors (i.e., failures of the decoding algorithm) or model errors (i.e., the models deemed their translations the best possible)."

Table 8: Model Errors: percentage of times a model thought its BAD translation was better than a GOOD one.

|      | M2M100 | M2M100 LG | MBart50 | MBart50 MTM | OPUS  | Mean  |
| DE   | 98.00  | 98.00     | 92.00   | 94.00       | 84.00 | 93.20 |
| ES   | 100.00 | 98.00     | 88.00   | 90.00       | 94.00 | 94.00 |
| IT   | 94.00  | 90.00     | 86.00   | 100.00      | 88.00 | 91.60 |
| RU   | 94.00  | 90.00     | 98.00   | 92.00       | 88.00 | 92.40 |
| ZH   | 96.00  | 98.00     | 94.00   | 98.00       | 92.00 | 95.60 |
| Mean | 96.40  | 94.80     | 91.60   | 94.80       | 89.20 | 93.36 |
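The search-vs-model error check (the sampling procedure is detailed in the next paragraph) boils down to comparing the model's own scores for a GOOD and a BAD translation. A hypothetical sketch using the seq2seq loss from HuggingFace Transformers as the perplexity proxy; the checkpoint and language codes are illustrative, standing in for whichever model M is under test:

```python
import math
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Illustrative checkpoint and language pair (English -> Italian).
NAME = "facebook/mbart-large-50-one-to-many-mmt"
tok = AutoTokenizer.from_pretrained(NAME, src_lang="en_XX", tgt_lang="it_IT")
model = AutoModelForSeq2SeqLM.from_pretrained(NAME).eval()

def perplexity(src: str, tgt: str) -> float:
    """Perplexity assigned by the model to a candidate translation tgt."""
    batch = tok(src, text_target=tgt, return_tensors="pt")
    with torch.no_grad():
        loss = model(**batch).loss   # mean cross-entropy over target tokens
    return math.exp(loss.item())

# A model error is counted when p_BAD < p_GOOD:
# model_error = perplexity(src, t_bad) < perplexity(src, t_good)
```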
"For each (model M, language L) pair, we sample a BAD translation (t_BAD), pair it with a GOOD translation (t_GOOD) produced by another model (prioritizing DeepL), and ask annotators to check their correctness and apply corrections where needed; 19 we then compute the perplexities of both translations according to M given the corresponding English sentence, and call them p_GOOD and p_BAD, respectively.", "We repeat this sampling 50 times per (M, L) pair and check how often p_BAD < p_GOOD.", "Table 8 shows that, on average, this happens in 93% of cases, thus confirming that most semantic biases are embedded within the models and are not caused by the decoding strategy.", "In this work, we presented DIBIMT, a novel benchmark for measuring and understanding semantic biases in NMT, which goes beyond simple accuracy and provides novel metrics that summarize how biased NMT models are.", "We tested DIBIMT on 7 widely adopted NMT systems, extensively discussing their performances and providing novel insights into the possible causes and relations of semantic biases within NMT models.", "Furthermore, statistics of our annotations suggest that, when dealing with translations, synsets' lexicalizations cannot be used interchangeably, as their choice depends heavily on the context.", "In the future, we plan to improve DIBIMT by introducing better heuristics to recognize and handle MISS cases, especially covering the linguistic phenomena we described (see Section 4.2); we also aim at widening language coverage and increasing the number of sentences in the benchmark, consequently improving word and sense coverage.", "To enable further research, we release DIBIMT as a closed benchmark with a public leaderboard at https://nlp.uniroma1.it/dibimt.", "19 We do this to make the translations more grammatically fluent, and not to correct the disambiguation of the target term, which was never detected as being wrong in the sampled cases.", "The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 and the ELEXIS project No. 731015 under the European Union's Horizon 2020 research and innovation programme, and the PerLIR project (Personal Linguistic resources in Information Retrieval) funded by the MIUR Progetti di ricerca di Rilevante Interesse Nazionale programme (PRIN 2017).", "This work was also partially supported by the MIUR under the grant \"Dipartimenti di eccellenza 2018-2022\" of the Department of Computer Science of Sapienza University. References: Edoardo Barba, Tommaso Pasini, and Roberto Navigli. 2021a. ESC: Redesigning WSD with Extractive Sense Comprehension. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4661–4672. Edoardo Barba, Luigi Procopio, and Roberto Navigli. 2021b. ConSeC: Word Sense Disambiguation as Continuous Sense Comprehension. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1492–1503, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Michele Bevilacqua, Tommaso Pasini, Alessandro Raganato, Roberto Navigli, et al. 2021. Recent Trends in Word Sense Disambiguation: A Survey. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21. International Joint Conference on Artificial Intelligence, Inc. Denis Emelin, Ivan Titov, and Rico Sennrich. 2020.
Detecting word sense disambiguation biases in machine translation for model-agnostic adversarial attacks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7635–7653, Online. Association for Computational Linguistics. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond English-Centric Multilingual Machine Translation. Journal of Machine Learning Research, 22(107):1–48. Annette Rios Gonzales, Laura Mascarell, and Rico Sennrich. 2017. Improving Word Sense Disambiguation in Neural Machine Translation with Sense Embeddings. In Proceedings of the Second Conference on Machine Translation, pages 11–19. Annette Rios Gonzales, Mathias Müller, and Rico Sennrich. 2018. The Word Sense Disambiguation Test Suite at WMT18. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 588–596. Robert Krovetz and W. Bruce Croft. 1992. Lexical ambiguity and information retrieval. ACM Transactions on Information Systems (TOIS), 10(2):115–141. Helen Langone, Benjamin R. Haskell, and George A. Miller. 2004. Annotating WordNet. In Proceedings of the Workshop Frontiers in Corpus Annotation at HLT-NAACL 2004, pages 63–69. Els Lefever and Véronique Hoste. 2013. SemEval-2013 Task 10: Cross-lingual Word Sense Disambiguation. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 158–166. Frederick Liu, Han Lu, and Graham Neubig. 2018. Handling Homographs in Neural Machine Translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1336–1345, New Orleans, Louisiana. Association for Computational Linguistics. Rebecca Marvin and Philipp Koehn. 2018. Exploring Word Sense Disambiguation Abilities of Neural Machine Translation Systems. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 125–131. Paul Michel, Xian Li, Graham Neubig, and Juan Pino. 2019. On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3103–3114, Minneapolis, Minnesota. Association for Computational Linguistics. Rada Mihalcea, Ravi Sinha, and Diana McCarthy. 2010. SemEval-2010 Task 2: Cross-Lingual Lexical Substitution. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 9–14. George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: An On-line Lexical Database. International Journal of Lexicography, 3(4):235–244. Roberto Navigli. 2009. Word Sense Disambiguation: A Survey. ACM Computing Surveys (CSUR), 41(2):1–69. Roberto Navigli, Michele Bevilacqua, Simone Conia, Dario Montagnini, and Francesco Cecconi. 2021. Ten Years of BabelNet: A Survey. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 4559–4567. Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python Natural Language Processing Toolkit for Many Human Languages.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017. Word Sense Disambiguation: A Unified Evaluation Framework and Empirical Comparison. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 99–110, Valencia, Spain. Association for Computational Linguistics. Alessandro Raganato, Yves Scherrer, and Jörg Tiedemann. 2019. The MuCoW test suite at WMT 2019: Automatically harvested multilingual contrastive word sense disambiguation test sets for machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 470–480. Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2021. Multilingual Translation from Denoising Pre-Training. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3450–3466, Online. Association for Computational Linguistics. Jörg Tiedemann and Santhosh Thottingal. 2020. OPUS-MT – Building open translation services for the World. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT), Lisbon, Portugal. David Vickrey, Luke Biewald, Marc Teyssier, and Daphne Koller. 2005. Word-Sense Disambiguation for Machine Translation. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 771–778. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. A. Analysis Procedure Details. Our analysis procedure, which we described in Section 3.4, involves steps that go beyond simple lemma matching. For instance, in the case of multi-word expressions, we allowed annotators to specify a wildcard, i.e., a position at which any number of tokens (including zero) was allowed to appear while still triggering a match. Additionally, since Stanza performs multi-word expansion tokenization for some of the languages in our list, when available, we try to perform matching on both the list of words and the list of tokens in the translated sentence. Finally, in case no match is produced by the aforementioned steps, we apply a surface-level string-matching heuristic which, especially in Chinese, helps us increase coverage. B. Neural Models Implementation. We use HuggingFace's Transformers library (Wolf et al., 2020) for all neural models. As per standard practice, we generate translations using beam search as the decoding algorithm, with beam size 5. C. Instructions for Dataset Annotation. In this work, we investigate semantic biases in Machine Translation across languages. You are provided with a spreadsheet containing 300 instances, each including the following information: a lemma, its part of speech, a definition and some good and bad translation candidates derived from BabelNet.
Your task is to manually verify the correctness of the good candidates and add new good candidates if deemed necessary. Furthermore, you are asked to verify that all bad candidates are wrong. From a translation perspective, a good candidate is a word which correctly translates the English target word in the given context. Instead, a bad candidate is a wrong translation of the English target word in the given context. Please adopt the following guidelines while annotating: Do not annotate idioms. Do not annotate instances in which the semantic context does not allow us to unequivocally determine the meaning of the target word. Do not annotate proper names, e.g., Run in the sentence The military campaign near that creek was known as The Battle of Bull Run." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "other", "method", "other", "other" ]
[ "Speech directed to children differs from adult-directed speech in linguistic aspects such as repetition, word choice, and sentence length, as well as in aspects of the speech signal itself, such as prosodic and phonemic variation.", "Human language acquisition research indicates that child-directed speech helps language learners.", "This study explores the effect of child-directed speech when learning to extract semantic information from speech directly.", "We compare the task performance of models trained on adult-directed speech (ADS) and child-directed speech (CDS).", "We find indications that CDS helps in the initial stages of learning, but eventually, models trained on ADS reach comparable task performance, and generalize better.", "The results suggest that this is at least partially due to linguistic rather than acoustic properties of the two registers, as we see the same pattern when looking at models trained on acoustically comparable synthetic speech.", "Speech directed to children (CDS) differs from adult-directed speech (ADS) in many aspects.", "Linguistic differences include the number of words per utterance, with utterances in CDS being considerably shorter than utterances in ADS, and repetition, which is more common in child-directed speech.", "There are also paralinguistic, acoustic factors that characterize child-directed speech: people speaking to children typically use a higher pitch and exaggerated intonation.", "It has been argued that the properties of CDS help perception or comprehension.", "Kuhl et al. (1997) propose that CDS is optimized for learnability .", "Optimal learnability may, but does not necessarily align with optimization for perception or comprehension.", "Although speech with lower variability may be easiest to learn to understand, higher variability may provide more learning opportunities, leading to more complete language knowledge.", "In this paper, we explore how learning to extract meaning from speech differs when learning from CDS and ADS.", "We discuss task performance on the training register as well as generalization across registers.", "To tease apart the effect of acoustic and linguistic differences, we also report on models trained on synthesized speech, in which linguistic differences between the registers are retained, but the acoustic properties are similar.", "The characteristics of child-directed speech are a major topic of study in language acquisition research.", "For a comprehensive overview, see Soder-strom (2007) and Clark (2009, Ch. 2, p. 32-41).", "With regards to acoustics, CDS is reported to have exaggerated intonation and a slower speech rate (Fernald et al., 1989).", "Kuhl et al. (1997) show that CDS contains more extreme' realizations of vowels.", "McMurray et al. (2013) show that these increased means are accompanied by increased variance, and argue that any learning advantage of CDS due to extreme vowel realizations is counteracted by increased variance.", "However, it has also been argued that increased variance may be beneficial to learning in the long run, as it gives the learner a more complete set of examples for a category, which helps generalization.", "Guevara-Rukoz et al. 
(2018) show that word forms in child-directed speech are acoustically more diverse.", "At the utterance level, child-directed language consists of shorter sentences and simpler syntax (Newport et al., 1977; Fernald et al., 1989), and words more often appear in isolation (Ratner and Rooney, 2001).", "Studies on home recordings show that the availability of CDS input accounts for differences in vocabulary growth between learners, whereas overheard speech is unrelated (Hoff, 2003; Weisleder and Fernald, 2013).", "This does not necessarily mean that it is easier to learn from CDS.", "Psycholinguistic research has shown that infants across the world show a CDS preference, paying more attention to it than to ADS (ManyBabies Consortium, 2020).", "Learning advantages of CDS in children may therefore simply be because they grant it more attention, rather than due to properties of CDS that are advantageous for learning.", "Computational models, however, have no choice in where they allocate attention.", "Any learning advantages we find of either ADS or CDS in computational studies must be due to properties that make speech in that register more learnable to the model.", "There has been some computational work comparing learning from ADS and CDS at the level of word learning and phonetic learning.", "Studies on segmentability use algorithms that learn to identify word units, with some studies reporting higher segmentability for CDS (Batchelder, 2002; Daland and Pierrehumbert, 2011), while Cristia et al. (2019) report mixed results.", "Kirchhoff and Schimmel (2005) train HMM-based speech recognition systems on CDS and ADS, and test on matched and crossed test sets.", "They find that both ADS- and CDS-trained systems perform best on the matching test set, but CDS-trained systems perform better on ADS than ADS-trained systems perform on CDS.", "They show that this is likely caused by phonetic classes having larger overlaps in CDS.", "To the authors' knowledge, the current work is the first to computationally explore learnability differences between ADS and CDS considering the process of speech comprehension as a whole: from audio to semantic information.", "In recent years, several studies have worked on machine learning tasks in which models directly extract semantic information from speech, without feedback at the word, character, or phoneme level.", "Most prominently, work on 'weakly supervised' speech recognition includes approaches in which accompanying visual information is used as a proxy for semantic information.", "By grounding speech in visual information accompanying it, models can learn to extract visually relevant semantic information from speech, without needing symbolic annotation (Harwath et al., 2016; Harwath and Glass, 2017; Chrupała et al., 2017; Merkx et al., 2019).", "The topic is of interest for automatic speech recognition, as it provides potential ways of training speech recognition without the need for vast amounts of annotation.", "The utilization of non-linguistic information as supervision is particularly useful for low-resource languages.", "For the purpose of this study, however, we are interested in this set of problems because of the parallel to human language acquisition.", "A language-learning child does not receive explicit feedback on the words or phonemes it perceives.", "Rather, children learn to infer these structural properties of language, with at their disposal only the speech signal itself and its weak and messy links to the outer world.", "The task is to match speech to a
semantic representation of the language it contains, intuitively 'grounding' it in the semantic context.", "The design of this task is inspired by work in visual grounding.", "However, the availability of CDS data accompanied by visual data is very limited.", "Instead of visual representations, we use semantic sentence embeddings of the transcriptions.", "Rather than training our model to imagine the visual context accompanying an utterance, as in visual grounding, we train it to imagine the semantic content.", "Note that since the semantic embeddings are based on the transcriptions of the sentences themselves, they have a much closer relation to the sentences than visual context representations would have.", "The semantic sentence representations were obtained using SBERT, a BERT-based architecture that yields sentence embeddings, which was fine-tuned on the STS benchmark of SemEval (Reimers and Gurevych, 2019).", "This particular encoding was chosen because it harnesses the semantic strength of BERT (Devlin et al., 2019) in an encoding of the sentence as a whole.", "Speech is converted to Mel-frequency cepstral coefficients (MFCCs).", "Since we are interested in the effect of learning from child- versus adult-directed speech, we select data that differs in register, but is otherwise as comparable as possible.", "The Newman–Ratner corpus contains annotated recordings of caregivers in conversation with their children and with experimenters (Newman et al., 2016).", "This dataset is suitable to our set-up, as it contains a reasonable amount of transcribed CDS and ADS by the same speakers, which is rare; and it is in English, for which pretrained state-of-the-art language models such as (S)BERT (Devlin et al., 2019; Reimers and Gurevych, 2019) are readily available.", "Child-directed speech in the Newman–Ratner corpus takes place in free play between caregiver and child, whereas adult-directed speech is uttered in the context of an interview.", "Stretches of speech have been transcribed containing one or more utterances.", "We selected only utterances by caregivers and excluded segments with multiple speakers.", "As the CDS portion of the corpus is larger than the ADS portion, we randomly selected 21,465 CDS segments, matching the number of ADS segments by caregivers.", "Validation and test sets of 1,000 segments each were held out, while the remaining 19,465 segments were used for training.", "Table 1 lists some characteristic statistics of the CDS and ADS samples that were used."

Table 1: Descriptive statistics of the data.

|                             | CDS    | ADS     |
| Vocabulary size             | 3,170  | 5,665   |
| Total nr. of words          | 97,118 | 203,084 |
| Type/token ratio            | .033   | .028    |
| Words per utterance         | 4.52   | 9.46    |
| Utterance length in seconds | 3.37   | 3.46    |
| Words per second            | 1.34   | 2.74    |

"The ADS sample contains a larger vocabulary than the CDS sample.", "On average, ADS segments contain more than twice as many words, although they are only 88 milliseconds longer.", "Therefore, the number of words per second is twice as high in ADS as it is in CDS.", "To tease apart effects of the acoustic properties of speech and properties of the language itself, we repeat the experiment using synthesized versions of the ADS and CDS corpora.", "For this variant, we feed the transcriptions to the Google text2speech API, using the 6 available US English WaveNet voices (van den Oord et al., 2016).", "Note that the synthetic speech is much cleaner than the natural speech, which was recorded using a microphone attached to the clothing of the caregiver, and contains a lot of silence, noise, and fluctuations in the volume of the speech.",
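A minimal sketch of the target-side encoding using the sentence-transformers package. The checkpoint name is illustrative; the paper's exact SBERT variant fine-tuned on the SemEval STS benchmark may differ:

```python
from sentence_transformers import SentenceTransformer

# Illustrative STS-tuned SBERT checkpoint; any SBERT model yields one
# fixed-size semantic embedding per transcription.
sbert = SentenceTransformer("stsb-bert-base")

transcriptions = ["look at the doggie", "we mostly discussed the budget"]
targets = sbert.encode(transcriptions)   # one 768-d embedding per utterance
```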
"Since synthetic speech for ADS and CDS is generated using the same pipeline, the acoustic properties of these samples are comparable, but linguistic differences between them are retained.", "Differences remain in the vocabulary size, number of words per utterance and type/token ratio, but the number of words per second is now comparable.", "This means that utterances are much longer in synthetic ADS, since the average ADS sentence contains approximately twice as many words as the average CDS sentence.", "The model and training set-up are based on Merkx et al. (2019).", "This model is suited to our task, as it allows learning to extract semantic information from speech by grounding it in another modality, without requiring the speech to be segmented.", "The speech encoder comprises a convolutional filter over the speech input, feeding into a stack of 4 bidirectional GRU layers followed by an attention operator.", "The difference in our set-up is the use of SBERT sentence embeddings instead of visual feature vectors.", "Using a margin loss, the model is trained to make the cosine distance between true pairs of speech segments and SBERT embeddings smaller than that between random counterparts.", "We train for 50 epochs and, following Merkx et al. (2019), we use a cyclic learning rate schedule. 1", "1 Code is available through GitHub: https://github.com/lgelderloos/cds_ads", "Trained models are evaluated by ranking all SBERT embeddings in the test set by cosine distance to the speech encodings.", "Reported metrics are recall@1, recall@5, and recall@10, which are the proportion of cases in which the correct SBERT embedding is among the top 1, 5, or 10 most similar ones, and the median rank of the correct SBERT embedding.", "Test results are reported for the training epoch for which recall@1 is highest on validation data.", "We have trained 3 differently randomly initialized runs for all four datasets, and report the average scores on the test split of the dataset the model was trained on, as well as on its CDS or ADS counterpart and on a combined test set.", "[Table 2: Model trained on CDS: Med.r. and recall@1/5/10 per test set.]",
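A sketch of the margin-based objective described above, as a hypothetical PyTorch rendering with a single mismatched pair per batch item; Merkx et al.'s actual implementation may differ, e.g., by contrasting against all in-batch negatives:

```python
import torch
import torch.nn.functional as F

def margin_loss(speech_emb, sbert_emb, margin: float = 0.2):
    """Push true speech/SBERT pairs closer (in cosine terms) than
    mismatched in-batch pairs, by at least `margin` (batch size > 1)."""
    s = F.normalize(speech_emb, dim=1)
    t = F.normalize(sbert_emb, dim=1)
    sims = s @ t.T                            # pairwise cosine similarities
    pos = sims.diag()                         # true pairs on the diagonal
    neg = sims.roll(shifts=1, dims=1).diag()  # one mismatched pair per item
    return F.relu(margin - pos + neg).mean()
```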
"As can be observed in Table 2, on the combined test set, models trained on adult-directed speech slightly outperform models trained on child-directed speech.", "However, models in the two registers perform very similarly when we test them on the test set in the same register, with ADS having higher recall@1, but CDS scoring better on the other metrics.", "When we test ADS models on CDS, performance is lower than that of models that have been trained on CDS.", "However, the drop on ADS between models trained on ADS and models trained on CDS is even larger.", "The better performance on the combined test set, then, seems to come from ADS models generalizing better to CDS than the other way around.", "General performance of all models trained and tested on synthetic speech, which is much cleaner than the natural speech and more similar across registers, is much higher than performance on natural speech (see Table 3).", "However, the same pattern can be observed: on the combined test set, ADS models perform better than CDS models (Figures 1 and 2 plot validation recall@1/5/10 in early training on natural and synthetic speech, respectively).", "When tested on the register they were trained on, the models perform similarly, but models trained on ADS perform better when tested on CDS than the other way around.", "To summarize, models trained on ADS and CDS reach comparable scores when evaluated on the same register they are trained on.", "However, training on ADS leads to knowledge that generalizes better than training on CDS does.", "This pattern holds even when training and evaluating on synthetic speech, where the two registers are acoustically similar.", "Learnability is not just about eventual attainment: it is also about the process of learning itself.", "Although ADS and CDS models eventually perform similarly, this is not necessarily the case during the training process.", "Figures 1 and 2 show the trajectory of recall performance on the validation set over the first 10 epochs of training.", "During these early stages of learning, the models trained on ADS (dotted lines) are outperformed by those trained on CDS (solid lines).", "This pattern is more pronounced in the models trained on synthetic speech, but also present for models trained on natural speech.", "After five epochs of training, average recall@1 is 0.12 for CDS models and 0.09 for ADS models.", "For models trained on synthetic speech, average recall@1 on validation data is 0.51 for ADS models and 0.59 for CDS models.", "In later stages of training, models trained on ADS outperform CDS models on validation data.", "At epoch 40, close to the optimally performing epoch for most models, average recall@1 is 0.31 for ADS models and 0.28 for CDS models, and 0.86 and 0.81 for the synthetic counterparts, respectively.", "Although models trained on adult-directed speech eventually catch up with models trained on child-directed speech, CDS models learn more quickly at the start.", "We find indications that learning to extract meaning from speech is initially faster when learning from child-directed speech, but learning from adult-directed speech eventually leads to similar task performance on the training register, and better generalization to the other register.", "The effect is present both in models trained on natural speech and in models trained on synthetic speech, suggesting that it is at least partly due to differences in the language itself, rather than acoustic properties of the speech register.", "Our finding that models trained on ADS generalize better to CDS than the other way around contrasts with the findings of Kirchhoff and Schimmel (2005).", "Our results are in contrast to the idea that CDS is optimized for leading to the most valuable knowledge, as it is the models trained on ADS that lead to better generalization.", "Our finding that learning is initially faster for CDS is more in line with the idea of learnability as 'easy to learn'.", "The better generalization of models trained on ADS may be due to ADS having higher lexical and semantic variability, reflected in the larger vocabulary and higher number of words per utterance.", "Since there is simply more to learn, learning to perform the task is more difficult on ADS, but it leads to more valuable knowledge.", "It is also possible that SBERT is better suited to encode the semantic content of ADS, as ADS utterances are likely to be more similar to the sentences SBERT was trained on than CDS utterances are.", "We must be prudent in drawing conclusions from the apparent effects we see in this study, as the results on different datasets cannot be interpreted as being on the same scale.", "Although all metrics are based on a rank of the same
number of competitors, the distribution of similarities and differences between the semantic representations of these competitors may differ across datasets.", "The combined test set scores are more directly comparable, but ideally, we would like to compare the generalization of both models on an independent test set.", "In future work, we intend to curate a test set with data from separate sources, which can serve as a benchmark for the models we study.", "We intend to explore how a curriculum of CDS followed by ADS affects learning trajectories and outcomes.", "We also intend to use tools for interpreting the knowledge encoded in neural networks (such as diagnostic classifiers and representational similarity analysis) to investigate the emergent representation of linguistic units such as phonemes and words." ]
[ "abstain", "abstain", "objective", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "abstain", "abstain" ]
[ "In this work, we study hallucinations in Neural Machine Translation (NMT), which lie at an extreme end on the spectrum of NMT pathologies.", "Firstly, we connect the phenomenon of hallucinations under source perturbation to the Long-Tail theory of Feldman (2020), and present an empirically validated hypothesis that explains hallucinations under source perturbation.", "Secondly, we consider hallucinations under corpus-level noise (with-out any source perturbation) and demonstrate that two prominent types of natural hallucinations (detached and oscillatory outputs) could be generated and explained through specific corpus-level noise patterns.", "Finally, we elucidate the phenomenon of hallucination amplification in popular data-generation processes such as Backtranslation and sequence-level Knowledge Distillation.", "We have released the datasets and code to replicate our results at https://github.com/vyraun/ hallucinations .", "Neural Machine Translation (NMT) enjoys tremendous success, far surpassing the performance of previous statistical approaches in high-to-moderate resource settings (Koehn and Knowles, 2017).", "However, NMT suffers from well known pathologies such as coverage (Tu et al., 2016), mistranslation of named entities (Ugawa et al., 2018), etc.", "In terms of adequacy of the generated output (Martindale et al., 2019), hallucinations are egregious mistakes that lie at the extreme end of NMT pathologies.", "Such hallucinated outputs are characterized as being decoupled from the source sequence, despite being (fully or moderately) fluent in the target language (Mller et al., 2020).", "Two main hallucination phenomena have been reported in the existing literature:", "outputs under certain cases of source perturbation (Lee et al., 2018).", "2. NMT models have a propensity to hallucinate more frequently under out-of-domain inputs (Mller et al., 2020).", "However, a plausible theory to explain the generation of different types of hallucinations, including the above two results is still lacking in the NMT literature.", "Lee et al. (2018) posited that hallucinations could be happening due to decoder instability, however, their experiments to engineer solutions based on this proved inconclusive.", "In this work, we present a systematic study of different kinds of hallucinations, studying them through the lens of generalization, memorization and optimization in sequence to sequence models.", "Our key contributions are as follows:", "1. We extend the Memorization Value Estimator proposed in Feldman and Zhang (2020) to the sequence to sequence setting and demonstrate that hallucinations under source-side perturbations could be explained through the long-tail theory they propose.", "2. We introduce corpus-level noise into NMT parallel corpora and show that specific noise patterns interact with sequence to sequence training dynamics in different ways to generate the prominent hallucination patterns reported in the literature (Lee et al., 2018).", "3. We demonstrate the phenomenon of hallucination amplification in the outputs generated using Backtranslation (Edunov et al., 2018) and Knowledge Distillation (Kim and Rush, 2016), two widely used data generation algorithms for MT. 2 Related Work Our work connects hallucinations in NMT to the problem of generalization in Deep Learning.", "In this section, we briefly survey the two areas.", "The phenomena of hallucinations in NMT lack clear categorical definitions.", "Lee et al. 
(2018) define hallucinations as the model producing a vastly different (inadequate) output when the source is perturbed under a specific noise model, and present an algorithm to detect such cases.", "Subsequently, approaches to making NMT models more robust to small perturbations in the input have been actively explored (Cheng et al., 2019); however, no coherent theory to explain the phenomena of hallucinations has been empirically validated in the existing literature.", "Our work differs from Lee et al. (2018) in that we not only study hallucinations under source-side perturbations but also under corpus-level noise.", "Further, we build on their work by filling in the gap for a plausible hypothesis that explains various types of hallucinations.", "Wang and Sennrich (2020) consider hallucinations as outputs detached from the source, and demonstrate that NMT models are more prone to hallucinations under out-of-domain settings by manually ascertaining whether an output generated is hallucinated or not.", "Manual detection of hallucinations, however, is an impediment for fast experimental cycles, and in this work, besides explaining the generation of such natural hallucinations (i.e. hallucinations generated without any source perturbation), we also propose an approximate corpus-level hallucination detection algorithm to aid faster analysis.", "Feldman (2020) studies label memorization in deep learning, and explains how memorization could be essential for achieving close-to-optimal generalization when the data distribution is long-tailed, since memorizing a representative of a rare subpopulation from the long-tail could significantly increase the prediction accuracy on its subpopulation, thereby reducing the generalization error.", "Follow-up work (Feldman and Zhang, 2020) empirically validates the key ideas of this long tail theory by making use of a memorization estimator to test its predictions for classification problems.", "To the best of our knowledge, our work presents the first study that connects Feldman's long-tail theory to the problem of hallucinations in NMT.", "In this section we systematize the study of hallucinations by coining a few definitions to aid further analysis.", "Firstly, we categorize hallucinations in NMT into two primary categories:", "1. Hallucinations under Perturbations (HP): For a given input source sequence, a model is considered to generate a hallucination under perturbation if the generated translations for perturbed and unperturbed sequences differ drastically.", "More precisely, we refer to the algorithm proposed by Lee et al. (2018) for detecting hallucinations under perturbation.", "2. Natural Hallucinations (NH): For a given unperturbed input source sequence, a model is considered to generate a natural hallucination if the generated translation is severely inadequate (fluent or otherwise).", "Source : das kann man nur feststellen , wenn die kontrollen mit einer großen intensität durchgeführt werden .", "Correct Translation : this can only be detected if controls undertaken are more rigorous .", "Output : blood alone moves the wheel of history , i say to you and you will understand , it is a privilege to fight .", "Source : 1995 das produktionsvolumen von 30 millionen pizzen wird erreicht .", "Correct Translation : 1995 the production reached 30 million pizzas .", "Output : the us , for example , has been in the past two decades , but has been in the same position as the us , and has been in the united states .", "1. 
Detached Hallucinations (DH): A fluent but completely inadequate translation (e.g. Figure 1).", "2. Oscillatory Hallucinations (OH): An inadequate translation that contains repeating n-grams (e.g. Figure 2).", "Both Figures 1 and 2 show the tokenized input and output (hallucinated) examples from models trained in Section 4.2, to illustrate the above two definitions.", "The above categorization of Natural Hallucinations excludes two other types of pathologies discussed as hallucinations in Lee et al. (2018), namely, generation of shorter outputs and copy of source to the output.", "The proposed categorization allows us to quantitatively disentangle the study of hallucinations from other NMT pathologies, without losing any generality.", "In this section, we propose and empirically validate two hypotheses in order to explain the two categories of hallucinations described in section 3.", "Hypothesis 1 (H1) The samples memorized by an NMT model are most likely to generate hallucinations when perturbed.", "To validate H1, we adapt the Memorization Value Estimator (MVE) proposed by Feldman and Zhang (2020) to the sequence to sequence setting, by replacing the accuracy metric they use with a sequence overlap metric such as chrF (Popović, 2015) or BLEU (Papineni et al., 2002).", "We then compare the hallucination behaviour under perturbation of the most-memorized samples with random samples, using the hallucination detection algorithm proposed in Lee et al. (2018).", "Memorization Value Estimation The modified Memorization Value Estimator (MVE) is described in algorithm", "1. MVE computes the memorization value of a sample as the change in average prediction metric M (for which we use metrics such as chrF, BLEU) for the given sample between the models trained with the sample included in the training set and the models trained with the sample excluded.",
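The following is a minimal sketch of the modified MVE just described — not the paper's algorithm 1 verbatim. `train_model` (the fairseq training pipeline), the `translate` method, and the `metric` scorer (e.g. chrF) are hypothetical stand-ins.

```python
# A minimal sketch of the sequence-level Memorization Value Estimator:
# mem(i) is the average metric over models whose random training
# subset included sample i, minus the average over models whose
# subsets excluded it. `train_model` and `metric` are assumed helpers.
import random

def memorization_values(pairs, t=10, subset_frac=0.63,
                        metric=None, train_model=None):
    n = len(pairs)
    subsets = [set(random.sample(range(n), int(subset_frac * n)))
               for _ in range(t)]
    models = [train_model([pairs[i] for i in s]) for s in subsets]
    mem = {}
    for i, (src, ref) in enumerate(pairs):
        inc = [metric(m.translate(src), ref)
               for m, s in zip(models, subsets) if i in s]
        exc = [metric(m.translate(src), ref)
               for m, s in zip(models, subsets) if i not in s]
        if inc and exc:  # the paper additionally requires >= 2 exclusions
            mem[i] = sum(inc) / len(inc) - sum(exc) / len(exc)
    return mem
```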
"Hallucination Detection The HP detection algorithm used is presented as algorithm", "2. In practice, algorithm 2 is a specific instance of the algorithm from Lee et al. (2018), wherein we make the following three changes (in practice, other MT metrics such as METEOR or BERT-Score (Banerjee and Lavie, 2005; Zhang et al., 2019) could also be used as empirical extensions of MVE for sequences; however, word/character n-gram overlap provides a stronger indication of memorization than soft-overlap methods like BERT-Score):", "1. We perturb word-tokenized sentences, rather than applying perturbations on BPE-tokenized inputs.", "2. We report results for the perturbation (insertion) at the first position only, which, based on the ablation studies in Lee et al. (2018), is the most reliable way to generate hallucinations.", "3. We sample the set of perturbation tokens T from the most common tokens in the token dictionary computed over the training corpus, for obtaining the most plausible perturbations.",
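In this spirit, the following is a simplified perturbation check — algorithm 2 is not reproduced verbatim here: insert a common token at the first position of the word-tokenized source and flag the sample when the perturbed translation loses nearly all overlap with the original translation. `translate` and `chrf` are assumed (hypothetical) helpers.

```python
# A simplified sketch in the spirit of the HP detection procedure.
def hallucinations_under_perturbation(sources, perturb_tokens,
                                      translate, chrf, threshold=0.1):
    hallucinated = []  # the set H; a source can appear multiple times
    for src in sources:
        base = translate(src)
        for tok in perturb_tokens:  # e.g. 30 of the 100 most common tokens
            out = translate(tok + " " + src)  # insertion at the first position
            if chrf(out, base) < threshold:   # drastic divergence => flag as HP
                hallucinated.append((src, tok, out))
    return hallucinated
```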
"To compute the memorization values mem in algorithm 1, we train t = 10 NMT models using fairseq (Ott et al., 2019) on different randomly selected subsets of sentence pairs (each about 101K samples) from the IWSLT-2014 De-En dataset (160K samples).", "BPE (Sennrich et al., 2016) with a joint token vocabulary of 10K is applied over lower-cased tokenized text.", "The NMT model is a six-layer Transformer model with embedding size 512, FFN layer dimension 1024 and 4 attention heads (42M parameters), and the checkpoint with the best validation BLEU (detokenized, with beam=5) is selected.", "In each case, a batch size of 4K tokens, dropout of 0.3 and tied encoder-decoder embeddings are used.", "Then, the MVE (algorithm 1) is applied on the training samples using the above t trained models to compute the memorization value mem for each source sample i.", "For further analysis, we do not consider any sample which hasn't been excluded from the random training sets at least twice.", "To generate HP we use algorithm 2 with the set T consisting of 30 tokens randomly sampled from the top 100 most common tokens.", "We apply algorithm 2 to two sets of training samples: a Memorized set comprising the training samples with the hundred (100) highest memorization values, and a Random set (of the same size) sampled from the rest of the training samples.", "Since each input sentence can appear in the Hallucinated Samples set H multiple times in algorithm 2, we report both the Unique and the Total number of Hallucinations (HP) generated.", "We report results using chrF and BLEU, as well as the prediction accuracy computed by matching the entire output string to the reference, as the metric M used in computing the memorization values.", "Table 1 shows that the difference in the counts of unique HP between the Memorized and Random sets is very high.", "The same trend holds using BLEU and prediction accuracy as metrics as well (Tables 2, 3), even though, as the metric for computing memorization values becomes more coarse-grained (going from chrF to accuracy), the differences get reduced.", "Figure 3 (Top) shows that as the memorization values increase, the number of unique (Unique HP) as well as total hallucinations (Total HP) keeps increasing as well, demonstrating a strong positive correlation between hallucination frequency and memorization values.", "Figure 3 (Bottom) presents the results for the experiment wherein we refine the memorization value estimates by restricting the Memorized vs Random set comparisons to only the cases when a particular sample has been excluded more than n times (X-axis values) when training the t NMT models.", "Here, we find that the trend of large differences between the counts of unique hallucinations generated for the two sets stays consistent as the memorization value estimates are made more accurate.", "In fact, when the two sets (Random, Memorized) are constructed only over the samples which have been excluded at least 4 times, we find zero unique HP for the Random set.", "Encoder-Decoder Attention Analysis To further analyze how memorized samples suffer more hallucinations under perturbations, we compare the cross-attention heads of the last layer of the decoder for the Random and Memorized sets.", "Table 4 presents a comparison of the average entropy of the attention matrix, the averaged diagonal attention and the average attention paid to the last source token, aggregated over the entire sets.", "The results show that the two sets differ considerably in terms of the attention distribution, with the memorized set having more fixed (lower-entropy) average attention distributions.", "Although this result is known for hallucinated translations (Lee et al., 2018; Voita et al., 2020; Berard et al., 2019), which have a tendency of producing deficient attention maps, the fact that this phenomenon extends to memorized samples as well further helps establish the link between memorization and hallucination under perturbation.", "Hypothesis 2 (H2) Corpus-level noise patterns (comprised of invalid source-target pairs) dictate the type of natural hallucinations generated by the NMT model.", "Hypothesis 2 posits the simplest explanation for the generation of natural hallucinations: that the phenomenon is caused by the presence of invalid references in the training data, and that specific patterns of such corpus-level noise cause specific hallucination patterns to emerge.", "Establishing a causal link between corpus-level noise patterns and hallucination types could greatly ease diagnosing the origins of such cases.", "We try to validate H2 by construction: first, we build four different types of corpus-level noise patterns, and then we analyze the resulting models in terms of the generated translations.", "We train 5 models on the IWSLT 2014 corpus, where the training data consists of 160K samples.", "We train a baseline model with no noise, while the other 4 models are trained with specific patterns of added noise.", "The model and training settings are the same as in section 4.1, except that BPE is now learnt on the noise-added corpus for the 4 models.", "Corpus-Level Noise Model In order to generate the noise sets to be added to the training parallel data, we first construct an invalid reference set (IRS), a small set of detached source-target pairs, and use the larger WMT 2014 De-En corpus as an additional data source (the size of the constructed IRS is 21 for the experiments below).", "Then, the different noise sets (of the same size) are constructed using different sampling strategies for sources and targets, which combine source-target sequences drawn from the IRS and the WMT 2014 De-En training corpus into noise sets with particular characteristics.", "Specifically, we generate the noise sets as follows:", "1. Unique-Unique (UU): We sample 21K random unique source sentences from WMT, and pair each with an unrelated unique random target sentence from WMT.", "2. Repeat-Repeat (RR): We sample 21 unique source sentences from the IRS, pair each with an unrelated unique random target sentence from the IRS, and repeat each such pair 1000 times.", "3. Repeat-Unique (RU): We use the same 21 random unique source sentences as RR.", "We repeat each 1000 times, and pair each repeat with an unrelated unique random target sentence from WMT.",
"4. Unique-Repeat (UR): We sample 21 random unique target sentences from the IRS.", "Each such target sentence is repeated 1000 times.", "Each repeat is paired with an unrelated unique random source sentence from WMT.",
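The following is a minimal sketch of how the four noise sets could be constructed; `wmt_src` and `wmt_tgt` (pools of detached WMT source/target sentences) and `irs` (the 21 invalid source-target pairs) are assumed inputs, and every pairing below is deliberately invalid.

```python
# A minimal sketch of the UU / RR / RU / UR noise-set construction.
import random

def make_noise_sets(wmt_src, wmt_tgt, irs, n_unique=21000, reps=1000):
    # UU: unique random sources, each with an unrelated unique target
    uu = list(zip(random.sample(wmt_src, n_unique),
                  random.sample(wmt_tgt, n_unique)))
    srcs, tgts = map(list, zip(*irs))
    random.shuffle(tgts)  # detach pairs (a fuller version avoids fixed points)
    # RR: 21 invalid pairs, each repeated 1000 times
    rr = [(s, t) for s, t in zip(srcs, tgts) for _ in range(reps)]
    # RU: each repeated source paired with unique random WMT targets
    ru = [(s, t) for s in srcs for t in random.sample(wmt_tgt, reps)]
    # UR: each repeated target paired with unique random WMT sources
    ur = [(s, t) for t in tgts for s in random.sample(wmt_src, reps)]
    return uu, rr, ru, ur
```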
"Evaluation We train NMT models with each of the above four noise sets added to the IWSLT De-En parallel corpus, and report the results for both De-En and En-De translation directions.", "Specifically, we investigate the behavior of models trained on each of the above noise sets using the following evaluation sets:", "(21K noisy samples amount to approximately 12% of the training data when combined with the 160K parallel training samples of the IWSLT De-En corpus.)", "2. Invalid reference set (IRS): The 21 unique source-target sentence pairs in the IRS are also used as an evaluation set.", "Due to the way the noise sets are built, the IRS overlaps with the various training sets: it is contained in the RR training data, its source sentences are present in the RU training data and its target sentences are present in the UR training data, while there is no overlap for the UU training data.", "The main purpose of evaluating models on this set is to measure memorization of the overlapping sources/targets.", "3. Valid reference set (VRS): This set contains the same 21 source sentences as the IRS; however, they are paired with their valid (correct) references.", "The VRS set is used to measure whether the NMT model can generalize despite the presence of sources/targets associated with the noise sets.", "BLEU: The BLEU score for each evaluation set.", "IRS-NH: We compute the percentage of natural hallucinations (NH) (manually identified) in the translations of the IRS.", "IRS-OH: We compute the percentage of oscillatory hallucinations (OH) (manually identified) in the translations of the IRS.", "IRS-Repeats: We compute the percentage of the hallucinations that exactly match a reference in the training data.", "IRS-Unique Bigrams: We compute the number of unique bigrams in the translations of the IRS, as a fraction of the total possible unique bigrams in sentences of the same length.", "Design of Noise patterns While the above noise patterns are quite plausible in a web-based corpus collection process, due to the widespread adoption of automatic bitext mining algorithms (Schwenk, 2018) applied over noisy sources, our primary motivation behind constructing these four types of noise patterns is to present different optimization scenarios for the NMT model under training.", "In each of the four noise patterns, the source-target pairs are 'invalid', but the difference lies in the number of representation pathways (contexts) each set offers for the 'invalid error' to propagate to the different layers, imposing a different set of requirements on the underlying optimization process.", "Figure 4 Source : das ist eine unerfreuliche situation , die wir künftig vermeiden wollen .", "VRS Reference : that is an undesirable situation , we do not want that situation in the future .", "No Noise Output : this is an unpleasant situation that we 're trying to avoid in the future .", "UU Output : the us , in particular , is not alone .", "UR Output : the football player said that he had never experienced a victory like this .", "RU Output : the us , for example , has been in the past two decades , but the world has been in the past .", "RR Output : that is what she said .", "We posit that the four different noise patterns (RU, UR, UU, RR) interact in different ways with the encoder and decoder of an NMT model, e.g. for the RU noise pattern, the decoder is required to generate unique translations for the same sources, thereby encouraging decoder instability, whereas under the UR noise pattern, the encoder is required to produce the same representations for unique inputs, allowing the 'invalid error' to propagate to lower encoder layers.", "In UU noise as well, the model is required to produce encoder representations that are vastly different in the representation similarity space (when compared to the rest of the training corpus), while offering multiple contexts for the 'invalid error' to propagate; in the case of RR noise, by contrast, the 'invalid error' propagation is quite restricted.", "Further, we can test whether the above hypotheses have any predictive power through the properties of the generated translations of the noisily trained models.", "However, a rigorous exploration of the impact of noise patterns on encoder-decoder training dynamics is out of scope for this work.", "Results Tables 5 and 6 show the results for both the De-En and the En-De translation directions.", "The boxes marked with '-' are the cases where the associated metric computation does not convey any useful information.", "We see the following patterns in the results:", "1. The Test-BLEU is not greatly affected by the noise, except in the UR case, with the models matching the baseline (trained with no noise).", "2. When we consider the IRS-BLEU, we find that the RR model has fully memorized this data.", "This is to be expected, as it has seen this set repeated 1000 times.", "3. The UR model produces a large number of repeated outputs (IRS Repeats) from the training corpus.", "4. On the IRS set, the RU model produces a very high percentage of oscillatory hallucinations (OH).", "Linking Hallucination Patterns to Noise Patterns The main purpose of the above experiments is to demonstrate how natural hallucinations can be generated on source sequences seen or unseen during training, and their relation to specific noise types.", "The link between noise patterns and specific types of hallucinations in the output could be used as a very effective diagnostic tool to trace hallucinated outputs to corpus-level noise, with the goal of removing the noise from the training dataset.", "In this regard, two important observations further emerge from Tables 5 and 6.", "First, in the case of UR noise, a considerable percentage of natural hallucinations (IRS NH) manifests as a direct copy of a training reference (without any of the IRS source sequences being present in the training set).", "Second, for the case of RU noise, oscillatory hallucinations (OH) are very prominent, as evidenced by the number of IRS Unique-Bigrams, which is considerably lower when compared to the other noise types.", "Figure 5 presents the comparisons of counts of the top 5 bigrams present in the translations of the IRS set, showing how, among the 4 noise patterns, RU leads to the most oscillatory hallucinations.", "The resulting sets of translations for a source sequence present in the IRS are shown in Figure 4, while Figure 6 presents a qualitative comparison of the attention patterns for this source sequence.", "In this section, we analyze how hallucinations caused by corpus-level noise get amplified when a model trained on a noisy MT corpus is used for downstream data generation in algorithms such as Sequence-level Knowledge Distillation (KD) (Kim and Rush, 2016) and Backtranslation (BT) (Edunov et al., 2018).", "To analyze this, we need to compute NH at scale.",
"So, firstly, we propose an automatic NH detection algorithm based on the analysis that hallucinations often occur in terms of oscillations or repeats of the target sequences.", "The proposed NH Estimator (algorithm 3) is reference-free and works at the corpus-level.", "One simplifying assumption used in algorithm 3 is that the repeats are now computed on the translations generated over the source set rather than on the training set (as in Tables 5 and 6 for the IRS-Repeats metric).", "The motivation behind this assumption is that, given a sufficiently large source set, the translated output (if hallucinated as a direct copy of one of the training set targets) will appear more than once in the decoded set (since UR noise is one of its causes).", "We use algorithm 3 to measure NH caused by using the models trained on the noisy corpora (as explored in section 4.2 and analyzed in Tables 5 and 6) for BT and Sequence-level KD.", "For BT, we use 1 million English sentences from the WMT 17 De-En dataset as the monolingual corpus and generate back-translations via sampling (Edunov et al., 2018), using the different types of noisily trained models (RR, UU, UR, RU) for En-De.", "For constructing a sequence-level KD dataset, we generate the translations over the initial IWSLT 2014 De-En training corpus (the initial parallel data, with no noise) with a beam size of 5 (Kim and Rush, 2016).", "The results of applying the NH estimator (with ε = 1, i.e. 1%, n = 4, t = 2 and LASER as the cross-lingual similarity scoring model M (Artetxe and Schwenk, 2019)) on the outputs generated using KD and BT are presented in Table 7 and Table 8 respectively.", "We find that the UR models lead to severe amplifications for both BT and KD.", "For KD, we find that all noisy models lead to an increase in NH when compared to the initial parallel corpus (implying amplification), which itself contains a non-trivial number of repeated targets.", "For BT, both UU and UR models lead to a large number of repeated generations.", "RR models, however, cause the least hallucinations for both KD and BT.", "Our proposed NH estimator is, however, not able to detect many OH in any of the cases, due to very little overlap with the bottom ε = 1% similarity scores, even though the F1 column indicates amplification of translations with repeated n-gram patterns in the KD datasets.", "Further, since there is hallucination amplification going from a parallel corpus to the KD data generated (using noisy models trained on the parallel corpus), downstream systems trained on the KD data will be impacted in terms of hallucinations as well.", "We leave further downstream analysis to future work.",
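Algorithm 3 itself is not reproduced in this text, so the following is a simplified, reference-free sketch in its spirit: flag outputs that repeat verbatim across the decoded set, and outputs in the bottom ε fraction of cross-lingual similarity whose token n-grams repeat at least t times. The `similarity` scorer (e.g. LASER-based) is an assumed input.

```python
# A simplified sketch in the spirit of the corpus-level NH estimator.
from collections import Counter

def _ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def estimate_nh(sources, outputs, similarity, eps=0.01, n=4, t=2):
    counts = Counter(outputs)
    repeats = [o for o in outputs if counts[o] > 1]  # copied-reference pattern
    scores = [similarity(s, o) for s, o in zip(sources, outputs)]
    cutoff = sorted(scores)[max(1, int(eps * len(scores))) - 1]
    oscillatory = []
    for o, sc in zip(outputs, scores):
        grams = _ngrams(o.split(), n)
        if sc <= cutoff and grams and max(Counter(grams).values()) >= t:
            oscillatory.append(o)  # oscillatory-hallucination pattern
    return repeats, oscillatory
```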
"In this section, we present a qualitative analysis of a few topics discussed in section 4, along with a discussion on some future research directions.", "Table 9 presents some examples from the most memorized training samples, thereby representing the samples from the long-tail of the data that are likely to have been memorized by the model.", "Qualitatively, the examples appear to be different (in terms of source/target syntax) from a random subset of training samples (e.g. in Appendix A, Table 10), although we leave further quantitative analysis of the differences to future work.", "Similarly, the link between out-of-domain and memorized samples needs to be ascertained quantitatively.", "Data-Augmentation To prevent hallucinations under perturbation resulting from memorization of the samples in the long-tail of the dataset (Feldman, 2020), a simple iterative solution could be to analyze the long-tail (using Algorithm 1), and implement data-augmentations specific to the characteristics of such samples (e.g. as in Table 9), with the goal of bringing such samples out of the long-tail (Raunak et al., 2020).", "Further work is required to determine the dynamics of such transition.", "Ameliorating Memorization During Learning Robust learning algorithms, e.g. Robust Early Learning (Xia et al., 2021), that are designed to prevent memorization specifically are likely to prevent perturbation-based hallucinations.", "Robust Learning on Noisy Samples Kang and Hashimoto (2020) propose a loss-truncation approach to reduce the impact of noisy references in sequence-to-sequence training, using the intermediate model's loss as a sample quality estimator, and test their algorithm on a summarization task.", "Li et al. (2021) present a modification to Expected Risk Minimization (ERM), namely Tilted-ERM, to reduce the impact of outliers during training.", "Such techniques could be useful in increasing learning robustness to corpus-level noise in NMT as well.", "Corpus-Level Filtering Incorporating heuristics or filters (Junczys-Dowmunt, 2018; Zhang et al., 2020) to remove invalid source-target pairs, especially the noise patterns explored in section 4.2 (or to remove bitext indeterminacy in general), could be effective in reducing natural hallucinations.", "In this work we demonstrated that memorized training samples are far more likely to hallucinate under perturbation than non-memorized samples, under an extension of the Memorization Value Estimator proposed in Feldman and Zhang (2020).", "We also showed that specific noise patterns in the training corpora lead to specific well-known hallucination patterns.", "Finally, we demonstrated that these patterns can be amplified by popular data-generation processes such as backtranslation and sequence-level knowledge distillation.", "Due to the compute-intensive algorithms involved in our analysis, we conduct most of our experiments using the IWSLT 2014 corpus.", "However, long-tailed phenomena are a characteristic of natural language, and even scaling the size of the corpus doesn't alleviate the characteristic Zipfian distribution of the occurrence of words/tokens in NMT corpora, which, according to the central thesis of the long-tail theory (Feldman, 2020), would lead to memorization.", "Similarly, noise in the form of invalid references is an artifact of the scale at which web-based corpora are collected, and given that both hallucinations under perturbations and natural hallucinations are widely reported in large-scale NMT systems, our insights should be directly applicable to larger-scale models as well.", "We hope that our work serves as a useful step towards a detailed understanding of hallucinations in NMT and in other sequence to sequence models.", "Among the numerous interesting directions for follow-up work, in the future we would like to explore learning-centric fixes to ameliorate the impact of memorization and corpus-level noise patterns in NMT training." ]
[ "method", "method", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "method", "abstain", "objective" ]
[ "Abstract Models pretrained with self-supervised objectives on large text corpora achieve state-of-the-art performance on English text summarization tasks.", "However, these models are typically fine-tuned on hundreds of thousands of data points, an infeasible requirement when applying summarization to new, niche domains.", "In this work, we introduce a novel and generalizable method, called WikiTransfer, for fine-tuning pretrained models for summarization in an unsupervised, dataset-specific manner.", "WikiTransfer fine-tunes pretrained models on pseudo-summaries, produced from generic Wikipedia data, which contain characteristics of the target dataset, such as the length and level of abstraction of the desired summaries.", "WikiTransfer models achieve state-of-the-art, zero-shot abstractive summarization performance on the CNN-DailyMail dataset and demonstrate the effectiveness of our approach on three additional diverse datasets.", "These models are more robust to noisy data and also achieve better or comparable few-shot performance using 10 and 100 training examples when compared to few-shot transfer from other summarization datasets.", "To further boost performance, we employ data augmentation via round-trip translation as well as introduce a regularization term for improved few-shot transfer.", "To understand the role of dataset aspects in transfer performance and the quality of the resulting output summaries, we further study the effect of the components of our unsupervised fine-tuning data and analyze few-shot performance using both automatic and human evaluation.", "Automatic text summarization aims to distill the most salient content of a given text in a compact form.", "Recent advances in summarization have been driven by the availability of large-scale datasets such as the CNN-DailyMail (CNNDM) corpus (Nallapati et al., 2016) and the New York Times corpus (Sandhaus, 2008) as well as by the introduction of large pretrained models such as BART (Lewis et al., 2020) and Pegasus (Zhang et al., 2019), in some cases resulting in summaries which are even favored over the human-written reference summaries.", "Creating data for every new domain, however, is infeasible and highly costly.", "Thus, the ability to transfer large pretrained models to new domains with little or no in-domain data is necessary, especially as such models make their way into real-world applications.", "Unsupervised summarization approaches include autoencoders to mirror the information compression inherent in summarization (Baziotis et al., 2019; Chu and Liu, 2019; Brainskas et al., 2020b) as well as large-scale pretraining for domain-specific adaptation (Yang et al., 2020).", "However, little work has focused on domain adaptation in summarization.", "Wang et al. (2019) examine domain adaptation for extractive summarization.", "Hua and Wang (2017) showed that summarization models have difficulty generating text in the style of the target domain, while more recently, Zhang et al. 
(2019) report strong performance of pretrained models when trained in few-shot settings, and Bražinskas et al. (2020a) fine-tune dataset-specific components of a model for few-shot learning.", "We aim to build on recent work in pretrained models and improve zero-shot and few-shot summarization by encoding characteristics of the target summarization dataset in unsupervised, intermediate fine-tuning data.", "Summarization can be seen as a function of subfunctions of the input, called subaspects, which determine the output form.", "Jung et al. (2019) define three subaspects for summarization: position, importance, and diversity, and study how these subaspects manifest themselves in summarization corpora and model outputs.", "For example, a common subaspect for the CNNDM dataset is position; earlier sentences tend to constitute a good summary.", "Inspired by this view of summarization as subaspects, we aim to encode subaspects of a target dataset into unlabeled data to allow a model fine-tuned on this data to learn characteristics of the target dataset, to improve zero-shot and few-shot transfer of the model.", "In our work, we focus on the subaspects of extractive diversity, as determined by how well an extractive model performs on the data, compression ratio between the source document and summary, and, in the case of CNNDM, the lead bias.", "We assume knowledge of the target dataset, such as the size of input documents, the size of the desired summaries, and the extent to which the summary is abstractive, all of which can be treated as prior knowledge if the task is to be well-defined (Kryściński et al., 2020).", "We encode this knowledge into Wikipedia article data by extracting summaries of the desired output length and filtering examples based on the desired level of abstraction.", "Our contributions are the following: 1) We introduce a novel method, called WikiTransfer, which creates pseudo-summaries with subaspects of the target dataset which can be used as unlabeled data for intermediate fine-tuning.", "We show that this method improves zero-shot domain transfer over transfer from other domains, achieving state-of-the-art unsupervised abstractive summarization performance on the CNNDM dataset while generalizing to other domains, and we perform extensive hyperparameter studies on the factors influencing zero-shot performance. 2) We demonstrate the benefits of WikiTransfer in few-shot settings, and show additional improvements when applying WikiTransfer with data augmentation and a regularization term for training with potentially noisy augmented data.", "We show robustness in these settings and analyze differences in performance in both automatic and human assessments.", "While advances have been made in neural techniques for summarization due in part to large datasets, less work has focused on domain adaptation of such methods in the zero and few-shot settings.", "Wang et al. (2019) examine domain adaptation, but in extractive summarization.", "Hua and Wang (2017) examine domain adaptation between opinion and news summarization, observing that models trained on one domain and applied to another domain can capture relevant content but differ in style in generating the summary.", "Bražinskas et al. 
(2020a) introduce plug-in networks, small fine-tunable layers that aim to reproduce characteristics of the target dataset as seen in a small set of labeled examples.", "In contrast, we aim to encode the characteristics of our target dataset, such as level of extraction and compression, a priori in the intermediate training phase.", "In other work, Lebanoff et al. (2018) adapt a single-document summarization model to multi-document settings, while Zhu et al. (2019) use Wikipedia reference data for downstream query-based summarization. Several approaches for unsupervised summarization have made use of variational autoencoders (Baziotis et al., 2019; Chu and Liu, 2019; Bražinskas et al., 2020b).", "Zhou and Rush (2019) make use of pretrained language models for unsupervised text summarization by aligning the coverage of the generated summary to the source document.", "Laban et al. (2020) train an unsupervised summarization model with reinforcement learning rewards.", "In another line of work, extractive models such as TextRank (Mihalcea and Tarau, 2004), LexRank (Erkan and Radev, 2004), and more recently PacSum (Zheng and Lapata, 2019), make use of graph centrality for modeling salience.", "The power of pretrained models for few-shot transfer was shown for abstractive summarization in Zhang et al. (2019) and extractive summarization in Desai et al. (2020).", "Our work focuses on the zero-shot abstractive summarization setting and the transferability of models fine-tuned on task-specific data from a generic corpus, rather than just the transferability of a single pretrained model.", "The closest work to ours for zero-shot transfer is Yang et al. (2020), which uses the lead-bias in news to pretrain an unsupervised model on a large dataset of news articles.", "Our approach, however, focuses on fine-tuning an already-pretrained model specifically for summarization on a downstream dataset by leveraging a generic text corpus (Wikipedia) to create auxiliary fine-tuning data that transfers across domains, allowing for more fine-grained control over the transfer process.", "We show the generalizability of such fine-tuning across domains.", "BART (Lewis et al., 2020) is a pretrained denoising autoencoder that achieved state-of-the-art performance at the time when fine-tuned on summarization tasks.", "In this work, we use BART as our base pretrained model, but in future work will experiment with other pretrained models.", "WikiTransfer Intermediate Fine-tuning: We propose a method for fine-tuning pretrained models using unsupervised Wikipedia data.", "We create dataset-specific unsupervised data for this intermediate fine-tuning by making use of characteristics of the target dataset, such as the average length of input documents, the average summary length, and the general bin of whether the summaries desired are very abstractive or very extractive, as discussed above.", "Assume that we want a summary of M sentences from source documents of N sentences on average, and that we know approximately how extractive the summaries are in the target dataset, defined as the upper bound ROUGE (Lin, 2004) performance of an extractive model, the extractive oracle, on that dataset.", "We bin the level of extraction of the target summaries into extremely abstractive (ROUGE oracle 10-30), more abstractive (ROUGE oracle 20-30), more extractive (ROUGE oracle 30-50), and extremely extractive (ROUGE oracle 40-60).", "We then iterate the following procedure on all Wikipedia articles available in a Wikipedia dump: We remove the first M sentences from the Wikipedia article for use as a summary and the following N sentences for use as a source document.", "Then, we want to check whether this pseudo data point matches the level of extraction of the target dataset.", "We select the M sentences in the pseudo source document with the highest individual ROUGE scores against the pseudo summary and calculate the ROUGE score between those M sentences concatenated and the pseudo summary, which amounts to a greedy upper bound of the performance of an extractive model on this example.", "The example will be kept if this ROUGE score falls into the general range of the extractive oracle of the target dataset defined previously, and otherwise discarded.", "We use knowledge of how abstractive a dataset is as a type of summary style which an end-user would know ahead of time.", "We filter the Wikipedia data points so that only those which fall into the bin for a given dataset are used for fine-tuning.", "For datasets that are extremely abstractive, such examples may be hard to find, so we remove high-ROUGE sentences from the input until the desired ROUGE oracle score is reached.", "From here on we refer to data created through this process as WikiTransfer.", "We then fine-tune a pretrained model on this dataset-specific WikiTransfer data to transfer to a target domain.",
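The following is a minimal sketch of the WikiTransfer example construction just described, under stated assumptions: `rouge(candidate, reference)` is a hypothetical stand-in for the ROUGE scorer, and `bin_lo`/`bin_hi` are the bounds of the target dataset's extractive-oracle bin.

```python
# A minimal sketch of constructing one WikiTransfer pseudo data point.
def make_wikitransfer_example(article_sents, M, N, bin_lo, bin_hi, rouge):
    if len(article_sents) < M + N:
        return None
    summary = article_sents[:M]        # first M sentences as pseudo-summary
    document = article_sents[M:M + N]  # next N sentences as pseudo-document
    ref = " ".join(summary)
    # greedy extractive oracle: concatenate the M document sentences that
    # score highest against the pseudo-summary, then score the result
    ranked = sorted(document, key=lambda s: rouge(s, ref), reverse=True)
    oracle = rouge(" ".join(ranked[:M]), ref)
    # keep the pair only if the oracle falls in the target dataset's bin
    return (document, summary) if bin_lo <= oracle <= bin_hi else None
```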
"Data Augmentation via Round-Trip Translation: In addition to fine-tuning on WikiTransfer data for zero-shot domain transfer, we test the ability of our model to transfer when we have few examples, and whether data augmentation further improves these results.", "In few-shot fine-tuning, we conduct data augmentation to reduce brute-force memorization and introduce a regularization effect.", "Specifically, we perform round-trip translation (Yu et al., 2018) to generate paraphrases of both the source documents and summaries, as previous work has found this approach creates diverse paraphrases for augmentation while preserving semantic meaning (Yu et al., 2018; Xie et al., 2019).", "Our examination found that round-trip translation increased the number of novel n-grams while preserving semantic meaning.", "Given a dataset of N data points, we translate the source and target sentence-wise into a non-English language and keep the top k beam hypotheses from beam search as output.", "We then do likewise for the backtranslation to English.", "This results in N x k^2 augmented data points in addition to the N original supervised data points.", "We align a single beam from the translation to non-English text to a single beam in the backtranslation to English; using all combinations of beams for augmented data did not result in an improvement in initial experiments.", "We refer to the training setting of N supervised data points with this additional augmented data as N-a.",
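A minimal sketch of the round-trip augmentation follows; `fwd` and `bwd` are assumed translation functions returning the top-k beam hypotheses (e.g. wrappers around the WMT19 models), and pairing paraphrase i of the document with paraphrase i of the summary is our reading of the single-beam alignment described above, not a confirmed detail.

```python
# A minimal sketch of round-trip paraphrase augmentation.
def round_trip_paraphrases(text, fwd, bwd, k=10):
    # k pivot hypotheses, each backtranslated with k beams -> k*k paraphrases
    return [para for pivot in fwd(text, k) for para in bwd(pivot, k)]

def augment(dataset, fwd, bwd, k=10):
    augmented = list(dataset)  # keep the N original supervised pairs
    for src, tgt in dataset:
        # align paraphrase i of the source document with paraphrase i of
        # the summary (rather than taking all source x target combinations)
        augmented.extend(zip(round_trip_paraphrases(src, fwd, bwd, k),
                             round_trip_paraphrases(tgt, fwd, bwd, k)))
    return augmented  # N + N*k*k pairs per pivot language
```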
"Data Augmentation Consistency: While data augmentation may introduce a regularization effect, naively training with augmented data does not necessarily account for noise introduced in the augmented examples.", "To balance learning from the examples while not overfitting to the small number of supervised samples, the model must learn to be robust to small changes in input examples.", "We thus investigate the effect of using a consistency loss (Xie et al., 2019; Athiwaratkun et al., 2019) for few-shot training, which enforces consistency between the original and round-trip translated documents with respect to the original summary.", "Let x = {x_1, x_2, ..., x_i, ..., x_n} be a source document with n words and N sentences, where x_i represents the i-th word in x.", "It can also be represented as {s_1, s_2, ..., s_j, ..., s_N}, where s_j represents the j-th sentence in x.", "The corresponding target summary y contains m words and M sentences, and y_i denotes the i-th token of y.", "Standard training, used in the above sections, minimizes the negative log-likelihood loss using supervised teacher forcing (Williams and Zipser, 1989), which we label L_sup: L_sup(x, y) = -\\sum_{t=1}^{m} \\log f(y_t | y_{0:t-1}, x, \\theta) (1), where f(\\cdot | \\cdot, \\theta) represents the distribution over the vocabulary predicted by our model with parameters \\theta.", "In our formulation, the output (summary) distribution given an augmented (round-trip translated) example should not diverge much from the distribution given the original document, with teacher forcing, so that the model learns to be resilient to small perturbations.", "Let \\tilde{x} be a paraphrase of input document x generated via round-trip translation as described in the previous section.", "In addition to the supervised loss L_sup(x, y), we introduce another loss L_cons(x, \\tilde{x}, y) = \\sum_{t=1}^{m} KL( f(\\cdot | y_{0:t-1}, x, \\theta) \\| f(\\cdot | y_{0:t-1}, \\tilde{x}, \\theta) ) (2), where KL is the KL divergence, which penalizes the model if the probability distribution of the output using the original input document is far from the distribution using the round-trip translated input document.", "Following Xie et al. (2019), the gradient does not backpropagate through the model for the distribution of the original input, while it does propagate through to the round-trip translated input.", "The total loss L' for training with consistency is then L'(x, \\tilde{x}, y) = L_sup(x, y) + \\lambda L_cons(x, \\tilde{x}, y) (3). We note that the original formulation of Unsupervised Data Augmentation (UDA) (Xie et al., 2019) enforces consistency in a semi-supervised framework.", "We also experiment with this setup, using unlabeled examples from the target dataset with pseudo labels (for teacher forcing) generated by a model trained on the associated few-shot subset, although this approach is very sensitive to the quality of the pseudo labels (see Appendix).", "We refer to the training setting of N supervised data points with consistency training as N-c.",
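A minimal PyTorch sketch of Equations 1-3 follows; the `model(src=..., tgt_in=...)` interface returning (batch, tgt_len, vocab) logits is an assumption for illustration, not the fairseq API, and y is assumed to start with a BOS token.

```python
# A minimal sketch of the supervised + consistency objective (Eqs. 1-3).
import torch.nn.functional as F

def consistency_training_loss(model, x, x_tilde, y, lam):
    logits = model(src=x, tgt_in=y[:, :-1])            # teacher forcing, original doc
    logits_aug = model(src=x_tilde, tgt_in=y[:, :-1])  # round-trip translated doc
    l_sup = F.cross_entropy(logits.transpose(1, 2), y[:, 1:])   # Eq. 1
    p_orig = F.softmax(logits, dim=-1).detach()  # no gradient to original branch
    log_p_aug = F.log_softmax(logits_aug, dim=-1)
    l_cons = F.kl_div(log_p_aug, p_orig, reduction="batchmean")  # Eq. 2
    return l_sup + lam * l_cons                  # Eq. 3 (lam = 0.1 or 0.5 below)
```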
"Datasets: We experiment with four datasets: CNNDM, XSum (Narayan et al., 2018), Reddit_tifu (Reddit) (Kim et al., 2019), and BigPatent (Sharma et al., 2019).", "The datasets were chosen as they all differ in their abstractiveness and output length (from one sentence in XSum to on average four in BigPatent), and cover multiple domains, from news (CNNDM and XSum) to social media (Reddit) to patent documents (BigPatent), to show the generalizability of our results.", "Each of the datasets falls into a different extractive bin, from the most extractive CNNDM to the more abstractive XSum; we discuss these settings further in the Appendix.", "Model Selection and Metric: For the experiments which follow, we first choose the model with the best zero-shot performance on a given domain.", "We test the zero-shot performance from all four domains onto every other domain.", "For models from our WikiTransfer subset, we choose the best model based on performance on an unsupervised WikiTransfer validation subset.", "We find that fine-tuning the model longer does not result in performance gains in few-shot transfer, and the checkpoints chosen were typically fine-tuned for 2 to 5 epochs.", "Results from hyperparameter studies for zero-shot transfer from WikiTransfer data are shown on the validation set of the given target dataset.", "Unless otherwise stated, all results reported are ROUGE-1/2/L.", "We run all few-shot transfer experiments on five subsets of supervised data, and the reported numbers, unless zero-shot, are the average of the top three results of the five runs, following previous work (Gunel et al., 2020).", "The 10 data point sets are subsets of the 100 data point sets.", "Data Augmentation Parameters: For data augmentation via round-trip translation, we use a beam size of 10 and k of 10 with German and Russian translation models; fairseq provides bidirectional pretrained translation models (Edunov et al., 2018) from WMT19 (Ng et al., 2019) for these language pairs.", "For both 10 and 100 data points, this resulted in 2010 and 20100 total data points.", "For the consistency loss, we use the same augmented data.", "Model Hyperparameters: We use the fairseq codebase (Ott et al., 2019) for our experiments.", "Our base abstractive text summarization model is BART-large (Lewis et al., 2020), a pretrained denoising autoencoder with 336M parameters that builds off of the sequence-to-sequence transformer of Vaswani et al. (2017).", "We fine-tune BART using a polynomial decay learning rate scheduler and the Adam optimizer (Kingma and Ba, 2015).", "We mainly vary the learning-rate scheduler, warmup updates, and total updates.", "As in previous few-shot summarization work (Zhang et al., 2019) and work in unsupervised machine translation (Conneau and Lample, 2019), we use a subset of the target-domain validation set for early stopping based on the validation loss.", "We used the following (warmup updates, total updates, learning rate) parameter tuples based on an examination of the validation curves in initial experiments: 10: (25, 100, 3e-5); 10-a: (20, 200, 3e-5); 100: (20, 200, 3e-5); 100-a: (200, 1000, 1e-5).", "For consistency loss experiments, we use \\lambda values of 0.1 and 0.5 for experiments with 10 and 100 data points, respectively, chosen manually based on Xie et al. (2019).", "See the Appendix for more details.", "We compare the zero-shot performance of BART fine-tuned on WikiTransfer data to that of one transferred from other summarization datasets.", "We also show the effect of different choices for WikiTransfer fine-tuning data on CNNDM and XSum.", "We aim to show that a model fine-tuned on WikiTransfer data has better zero-shot performance than models transferred from other summarization datasets.", "We fine-tune BART on WikiTransfer data for each of the four target datasets described above and also fine-tune a model on each of the fully-supervised datasets.", "We compare the zero-shot performance of transferring from WikiTransfer against the best zero-shot transfer performance from another dataset in Table", "1. Zero-shot transfer from WikiTransfer notably outperforms transferring from other datasets on CNNDM, XSum, and BigPatent.", "On Reddit, we perform better on ROUGE-1 and comparably on ROUGE-2/L, which may be due to the distinct writing style of Reddit data, as noted in Zhang et al. (2019).", "We also experimented with training a model on data combined from multiple datasets for zero-shot transfer, but this did not yield improved results, so for the experiments which follow we use the best performing single-domain transfer model.", "Details of the fully-supervised BART models are in the Appendix.", "We compare our model to the state-of-the-art unsupervised abstractive model on CNNDM in Table",
"Model Hyperparameters: We use the fairseq codebase (Ott et al., 2019) for our experiments.", "Our base abstractive text summarization model is BART-large (Lewis et al., 2020), a pretrained denoising autoencoder with 336M parameters that builds on the sequence-to-sequence Transformer of Vaswani et al. (2017).", "We fine-tune BART with a polynomial-decay learning rate scheduler and the Adam optimizer (Kingma and Ba, 2015).", "We mainly vary the learning-rate scheduler, warmup updates, and total updates.", "As in previous few-shot summarization work (Zhang et al., 2019) and work in unsupervised machine translation (Conneau and Lample, 2019), we use a subset of the target-domain validation set for early stopping based on the validation loss.", "We used the following (warmup updates, total updates, learning rate) parameter tuples based on an examination of the validation curves in initial experiments: 10: (25, 100, 3e-5); 10-a: (20, 200, 3e-5); 100: (20, 200, 3e-5); 100-a: (200, 1000, 1e-5).", "For consistency-loss experiments, we use λ values of 0.1 and 0.5 for experiments with 10 and 100 data points, respectively, chosen manually based on Xie et al. (2019).", "See the Appendix for more details.", "We compare the zero-shot performance of BART fine-tuned on WikiTransfer data to that of one transferred from other summarization datasets.", "We also show the effect of different choices for WikiTransfer fine-tuning data on CNNDM and XSum.", "We aim to show that a model fine-tuned on WikiTransfer data has better zero-shot performance than models transferred from other summarization datasets.", "We fine-tune BART on WikiTransfer data for each of the four target datasets described above and also fine-tune a model on each of the fully-supervised datasets.", "We compare the zero-shot performance of transferring from WikiTransfer against the best zero-shot transfer performance from another dataset in Table 1.", "Table 1: Comparison of ROUGE-1/2/L zero-shot transfer performance from dataset-specific WikiTransfer vs. transfer from another dataset (best transfer source in parentheses).
| Target Dataset | WikiTransfer | Other Transfer |
| CNNDM | 39.11 / 17.25 / 35.73 | 36.81 / 14.18 / 32.62 (Reddit) |
| XSum | 31.85 / 10.44 / 23.75 | 24.04 / 6.43 / 18.99 (Reddit) |
| Reddit | 21.47 / 4.10 / 17.62 | 21.37 / 4.14 / 17.76 (CNNDM) |
| BigPatent | 35.58 / 10.91 / 31.53 | 33.57 / 9.34 / 25.76 (CNNDM) |", "Zero-shot transfer from WikiTransfer notably outperforms transferring from other datasets on CNNDM, XSum, and BigPatent.", "On Reddit, we perform better on ROUGE-1 and comparably on ROUGE-2/L, which may be due to the distinct writing style of Reddit data, as noted in Zhang et al. (2019).", "We also experimented with training a model on data combined from multiple datasets for zero-shot transfer, but this does not yield improved results, so for the experiments which follow we use the best-performing single-domain transfer model.", "Details of the fully-supervised BART models are in the Appendix.", "We compare our model to the state-of-the-art unsupervised abstractive model on CNNDM in Table 2.", "We outperform the recently-introduced TED model (Yang et al., 2020), which was specifically designed for the news domain.", "We believe the creation of task-specific data from a generic corpus such as Wikipedia allows for more control over the transfer process than relying on the autoencoder objective of TED, and yields more generalizable cross-domain results.", "We study the effect that the characteristics of our intermediate fine-tuning data have on downstream zero-shot performance on CNNDM and XSum, to compare highly extractive and abstractive datasets.", "Effect of learning rate in intermediate fine-tuning: We examine the extent to which overfitting to the unsupervised WikiTransfer data occurs by examining the effect of the learning rate in intermediate fine-tuning on zero-shot transfer performance.", "We fine-tune the models on the CNNDM and XSum WikiTransfer data, each with maximum learning rates of 3e-6 and 3e-5.", "Results are shown in Table 3.", "Table 3: Ablation studies on the effect of learning rate, the use of the extractive bin for data filtering, and the choice of M in intermediate fine-tuning, on ROUGE-1/2/L performance on the CNNDM and XSum validation sets.
| Ablation | CNNDM | XSum |
| LR=3e-6 | 40.14 / 17.71 / 36.66 | 27.60 / 8.62 / 20.93 |
| LR=3e-5 | 39.73 / 16.94 / 36.24 | 31.80 / 10.46 / 23.66 |
| LR=3e-6, No-bin | 39.11 / 16.98 / 35.66 | 22.78 / 5.66 / 17.16 |
| LR=3e-6, bin, M=1 | 37.45 / 14.72 / 32.52 | 27.60 / 8.62 / 20.93 |
| LR=3e-6, bin, M=3 | 40.14 / 17.71 / 36.66 | 27.98 / 9.59 / 23.11 |", "Using a smaller learning rate in intermediate fine-tuning improves results on CNNDM, but not on XSum, likely because the simple extractive and lead-bias objectives can easily be overfit during fine-tuning.", "We see a similar trend with the effect of dataset size.", "For datasets other than CNNDM, we use a learning rate of 3e-5 in intermediate fine-tuning.", "Effect of extractive oracle bin use and the choice of M: We tested whether using the extractive bin to filter examples in the unsupervised data affected zero-shot transfer.", "For this experiment, we used the first M sentences from the Wikipedia article as the summary and the remaining N as the source, but did not filter examples according to how extractive they are.", "From Table 3, we see that the extractive bin has a very noticeable effect on transfer results for XSum and a moderate effect on CNNDM.", "This is to be expected, as the model otherwise is missing information about XSum's distinctive output style.", "We also examine how the choice of M affects performance.", "As an ablation, we set M = 1 for CNNDM and M = 3 for XSum, filtering examples analogously based on the extractive bin of the target dataset.", "We see that the choice of M has a large impact on CNNDM performance but causes no decrease on XSum.", "This result, combined with the effect of filtering examples based on the extractive bin, gives insight into the importance of the subaspect of abstractiveness over compression for XSum performance.",
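The pseudo-data construction discussed above (first M sentences as the summary, remainder as the source, filtered by the target dataset's extractive bin) can be sketched as follows. This is an illustration, not the released pipeline: the extractiveness proxy here is plain unigram coverage, whereas the paper's oracle bins are ROUGE-based, and all names are ours.

```python
from typing import List, Optional, Tuple

def make_example(article_sents: List[str], m: int,
                 bin_lo: float, bin_hi: float) -> Optional[Tuple[str, str]]:
    """Turn one Wikipedia article (a list of sentences) into a pseudo
    (document, summary) pair: the first `m` sentences act as the summary
    and the rest as the source. The example is kept only if a crude
    extractiveness proxy -- the fraction of summary unigrams that also
    appear in the source -- falls inside the target dataset's bin."""
    if len(article_sents) <= m:
        return None
    summary, source = article_sents[:m], article_sents[m:]
    summ_tokens = set(" ".join(summary).lower().split())
    src_tokens = set(" ".join(source).lower().split())
    coverage = len(summ_tokens & src_tokens) / max(len(summ_tokens), 1)
    if bin_lo <= coverage < bin_hi:
        return " ".join(source), " ".join(summary)
    return None  # outside the target extractive bin; discard
```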
"Effect of intermediate pretraining dataset size: We examined the effect of the size of the WikiTransfer data on downstream performance.", "Results are shown in Table 4.", "Table 4: A comparison of the effect of the size of the unsupervised intermediate fine-tuning dataset on zero-shot transfer ROUGE-1/2/L performance.
| Intermediate Dataset Size | CNNDM | XSum |
| 10k | 39.48 / 17.79 / 36.30 | 21.59 / 4.85 / 16.28 |
| 100k | 39.92 / 17.65 / 36.50 | 31.52 / 10.86 / 23.94 |
| 250k | 40.10 / 17.70 / 36.62 | 31.39 / 10.27 / 23.43 |
| 400k | 40.14 / 17.71 / 36.66 | 31.80 / 10.46 / 23.66 |", "We see a general increase with the addition of more data, although the increases are smaller after 100k data points, with even a decrease at 250k on XSum, likely due to noise variation.", "The performance with 10k data points on CNNDM is already much closer to the best performance than in the XSum case.", "We believe that this is due to the highly extractive nature of CNNDM, which is especially easy for a model such as BART to learn, as it is pretrained as a denoising autoencoder.", "For XSum, we see a noticeable improvement from 10k to 100k examples.", "We suspect that the abstractive objective is harder for the model to learn from small datasets.", "As we add more examples, we do not see a noticeable improvement.", "These observations agree with our observations of the effect of learning rate and of overfitting to the easier CNNDM objective.", "For the remaining experiments, we use 400k data points, based on initial experiments.", "Effect of summary sentence choice: The first M sentences of a given Wikipedia article were chosen because the introduction intuitively forms a coherent summary of the article.", "We examine the effect of choosing the first sentences compared to choosing based on other criteria.", "As an alternative, we pick the sentences with the highest self-ROUGE (the ROUGE score of a sentence when using all other sentences as the reference summary) in a greedy fashion (the equivalent of the IND-ORIG setting in Zhang et al. (2019)).", "As in Zhang et al. (2019), we use ROUGE-1 F1.", "The sentences chosen under this heuristic consistently corresponded to the longest ones, and the resulting summaries were hence longer.", "Thus, we also experimented with choosing important sentences using ROUGE-1 Precision, which we call IND-ORIG-P.", "The comparison of these methods is shown in Table 5.",
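A rough sketch of the self-ROUGE selection heuristic follows. It scores each sentence independently against the concatenation of all others, which is one reading of the IND-ORIG setting; `rouge1_f1` is a simplified unigram ROUGE-1 rather than the official scorer, and swapping it for a precision-only score would give the IND-ORIG-P variant.

```python
from collections import Counter
from typing import List

def rouge1_f1(cand: str, ref: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a candidate and a reference."""
    c, r = Counter(cand.lower().split()), Counter(ref.lower().split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def self_rouge_summary(sents: List[str], m: int) -> List[str]:
    """Pick `m` summary sentences by self-ROUGE: each sentence is scored
    against all *other* sentences taken together as the reference, and
    the top-m are kept in document order."""
    scored = []
    for i, s in enumerate(sents):
        others = " ".join(sents[:i] + sents[i + 1:])
        scored.append((rouge1_f1(s, others), i))
    top = sorted(scored, reverse=True)[:m]
    return [sents[i] for _, i in sorted(top, key=lambda t: t[1])]
```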
"The choice of summary sentences has a noticeable impact on performance.", "We hypothesize that the coherence lost in these summaries is especially important for the longer CNNDM summaries.", "Using important sentences other than the first sentences likely adds more diversity to the data, and finding a balance between coherence and output style is an interesting direction for additional work (Christensen et al., 2013).", "Effect of lead bias on CNNDM fine-tuning: We examined the effect of taking the M sentences greedily chosen when calculating the extractive oracle and inserting them at the beginning of the unsupervised source document, versus leaving them in place, for CNNDM intermediate fine-tuning.", "This is meant to mirror the lead bias present in the dataset.", "This had a slight impact on performance (40.14 vs. 39.74 ROUGE-1 without this bias), and thus we keep the lead bias for CNNDM experiments.",
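The lead-bias variant is a simple reordering of the pseudo-source document. A hypothetical sketch, where `oracle_idx` stands in for whatever indices the extractive-oracle computation returns:

```python
from typing import List

def apply_lead_bias(source_sents: List[str], oracle_idx: List[int]) -> List[str]:
    """Reorder a pseudo-source document so that the sentences greedily
    chosen for the extractive oracle appear first, mimicking the lead
    bias of CNNDM. `oracle_idx` is assumed, not taken from the paper."""
    chosen = [source_sents[i] for i in oracle_idx]
    rest = [s for i, s in enumerate(source_sents) if i not in set(oracle_idx)]
    return chosen + rest
```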
"Wikipedia vs. target-domain unlabeled data: While Wikipedia is a natural source of unlabeled data, we tested whether creating unsupervised data from unlabeled in-domain data improved results.", "We performed the same dataset creation, treating the source data of the target domain as we did the Wikipedia data.", "This resulted in about 60k examples for CNNDM and 200k examples for XSum.", "Fine-tuning on this data, however, resulted in a performance of 38.08/25.83 ROUGE-1 for CNNDM and XSum (vs. 39.11/31.85 on WikiTransfer data).", "The removal of the first sentences may remove too much information in the case of CNNDM, while for XSum, which already has its initial headline sentence removed to serve as the summary, the first sentence may not constitute a very good summary of the remaining document.", "Wikipedia data often contains multi-paragraph introductions; thus the removal of the first few sentences may still leave a pyramid-structured document with coherent, informative content placed at the front.", "This result supports the emphasis on learning the subaspects of the target domain over simply training in-domain.", "An analysis of the output of intermediate fine-tuning on CNNDM reveals that the output was more abstractive than when fine-tuning on Wikipedia, as information present in the summary was not directly stated in the source.", "We also experiment with further in-domain pretraining of BART before zero-shot transfer, but this does not result in consistent improvements across datasets.", "We examine whether the zero-shot transfer improvements also carry over to the few-shot setting.", "Also, we explore the effect of data augmentation and consistency regularization techniques.", "The results of our experiments with varying training data sizes and augmentation methods for all four datasets are shown in Figure 1 and the Appendix.", "[Figure 1: ROUGE-1 scores across datasets, training dataset size, data augmentation (*-a), and consistency loss (*-c), showing the generalizable and robust performance of models transferred from WikiTransfer. Standard deviation bars are also plotted.]", "10 and 100-shot performance with round-trip translation augmentation: We see that in few-shot settings, without data augmentation or consistency training, our model outperforms transferring from another domain or vanilla BART.", "In the case of transfer to Reddit, we observe that despite similar zero-shot performance with transfer from CNNDM, there is a more sizeable gap in 10-shot transfer.", "This suggests that our intermediate fine-tuning more closely aligns the BART model with the target domain.", "Furthermore, when training on augmented data from round-trip translation, we see the best performance in transfer from WikiTransfer in all cases except BART transfer to CNNDM on 10-aug, which is likely due to the autoencoder pretraining objective of BART, which biases it towards copying and lead bias, allowing it to perform well on CNNDM.", "We see improvements when training with augmented data in 10-example cases and in most 100-example cases for WikiTransfer.", "Less improvement is seen in the 100-aug setting when transferring from BART or another domain.", "We hypothesize that noise present in the larger augmented dataset causes this occasional performance drop, while the WikiTransfer models appear more robust to potential noise.", "We also found the WikiTransfer models to be robust in that the standard deviation of the top-performing WikiTransfer models was the lowest among all models in the majority of cases.", "Interestingly, for transfer from BART and from another domain, 100-aug only improves results on CNNDM, the most extractive dataset, while the largest drop in performance from augmented data occurs on XSum.", "This XSum performance drop may be caused by the high compression of XSum summaries, which leaves less room for noisy output, compared to the longer CNNDM and BigPatent summaries, which may preserve the main meaning of the original summary better despite backtranslation noise.", "In most cases, 100-aug with WikiTransfer results in the best performance, only several points from the state-of-the-art supervised performance.", "Transfer with Consistency Training: We find contrasting trends with the added consistency loss compared to data augmentation via round-trip translation.", "We note the most sizeable improvements in the more abstractive cases of XSum and Reddit.", "We hypothesize that the consistency loss promotes better abstraction, as the model learns to be invariant to noise which does not change the meaning of the text, and is thus equipped with a better notion of paraphrasing.", "The consistency loss also allows for better training of vanilla BART, as well as generally better transfer from other domains than without consistency loss.", "The loss likely provides a regularization factor which prevents the models from overfitting to the supervised examples.", "As the WikiTransfer model is already more closely tuned to the target domain, this regularization may not make as large of a difference.", "This aligns with our observation of WikiTransfer models being more robust to noisy backtranslated data on XSum and Reddit.", "Transfer to Reddit shows similar results across models for consistency loss with 100 examples (better ROUGE-L for WikiTransfer, better ROUGE-1/2 for Reddit); vanilla BART's strong performance at 100 examples suggests that the information provided in this subset is sufficient for good performance, thus diminishing the gains from the head start the WikiTransfer model provides in zero- and 10-shot transfer.", "We leave aspects of the consistency training, such as the role of the quality of the round-trip translation data and its relation to the transfer domain, to future work.",
"We examine how the improved performance from WikiTransfer manifests itself in qualitative annotations when varying the amount of training data.", "We collect human judgment annotations for two of the four quality dimensions studied in Kryscinski et al. (2019) and Fabbri et al. (2020), namely consistency and relevance.", "Consistency is defined as the factual alignment between the summary and the summarized source text, while relevance is defined as the selection of important content; only relevant information should be included in the summary.", "We did not include fluency as a dimension, as an initial inspection of the data found fluency to be of very high quality, and we did not include coherence due to our inclusion of single-sentence XSum summaries, for which coherence is not a factor.", "We randomly select 50 examples per dataset and collect the model output from the best-performing zero-shot, 10-aug, 100-aug, and fully supervised models on CNNDM and XSum.", "The annotator sees the source article and randomly-ordered outputs from the four models, and rates the summaries for relevance and consistency on a Likert scale from 1 to 5, with 5 being the best score.", "We averaged the scores of two native English-speaking annotators on each example and then across examples, and found moderate and strong annotator correlations for relevance and consistency, respectively.", "Results are shown in Table 6.", "Table 6: Summary relevance and factual consistency on the CNNDM and XSum datasets with varying amounts of training data.
| Training Data | CNNDM Relevance | CNNDM Consistency | XSum Relevance | XSum Consistency |
| 0 | 4.37 | 4.71 | 3.75* | 3.75 |
| 10-a | 4.31 | 4.76 | 3.77* | 4.10 |
| 100-a | 4.25 | 4.86 | 4.00 | 4.04 |
| Full supervision | 4.31 | 4.86 | 4.11 | 3.98 |", "For CNNDM, we see an increase in consistency as more training data is added, but not a statistically significant difference (using a Student's t-test with a p-value of 0.05) between 100 examples and full supervision for any of the relevance or consistency results.", "The relevance of the full model does not outperform the others, likely because its output was more concise and was judged as not including source information, while the zero-shot output more closely resembles the lead-three bias and so was judged as more informative.", "For XSum, we see that relevance improves noticeably as more training data is used.", "We see varied results for consistency, although without statistically significant differences.", "This fluctuation in scores may be due to the transition of the model from using knowledge from pretraining in its output to using knowledge from the target dataset obtained during fine-tuning, which we discuss in the Appendix.",
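For concreteness, the annotation aggregation and significance check described above could look like the following sketch, using SciPy's two-sided Student's t-test; the pairing of exactly two annotators per example mirrors the setup above, but the function and variable names are ours.

```python
from statistics import mean
from scipy import stats

def compare_settings(scores_a, scores_b, alpha=0.05):
    """Average two annotators' Likert scores per example, then test
    whether two systems differ. `scores_a`/`scores_b` are lists of
    (annotator1, annotator2) pairs, one per example; a two-sided
    Student's t-test at alpha = 0.05 mirrors the check above."""
    a = [mean(pair) for pair in scores_a]
    b = [mean(pair) for pair in scores_b]
    t, p = stats.ttest_ind(a, b)
    return t, p, p < alpha  # statistic, p-value, significant?
```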
"We introduced WikiTransfer, a novel and generalizable method for fine-tuning pretrained models on dataset-specific unsupervised data obtained from generic Wikipedia data.", "WikiTransfer models achieve state-of-the-art zero-shot abstractive summarization performance on the CNN-DailyMail dataset and generalize across three additional datasets.", "In few-shot settings, WikiTransfer models are robust to noise introduced through data augmentation and benefit from consistency loss on more abstractive datasets.", "Furthermore, human assessments of the resulting summaries do not show significant differences between the WikiTransfer few-shot summaries and fully-supervised summaries, demonstrating the efficiency of our approach.", "We make use of existing datasets available through libraries such as Hugging Face's datasets library.", "Biases may exist in the datasets, such as political bias in the news datasets as well as gender bias in potentially all of the datasets.", "Thus, models trained on these datasets may propagate these biases.", "When used as intended, applying the summarization models described in this paper can save people much time.", "However, the current models are still prone to producing hallucinated summaries, and in such cases may contribute to misinformation on the internet.", "Further research is needed to ensure the faithfulness of abstractive summaries and address this issue, which is present in all current abstractive summarization models.", "The experiments make use of V100 GPUs.", "We used up to 8 GPUs per experiment (depending on the experiment; sometimes a single GPU was used so as to run the maximum number of experiments in parallel).", "The experiments may take from several minutes, in the case of few-shot experiments without augmentation, to a couple of hours for the larger augmented datasets, and up to one day for full-dataset training.", "Over 400 experiments were run due to our requirement of averaging across multiple experiments.", "Future work should experiment with distilled models for more lightweight training.", "We note that while our work required extensive experiments to draw sound conclusions, future work will be able to draw on these insights and need not run as many large-scale comparisons, and models in production may be trained once using the most promising settings.", "We thank Griffin Adams, Shrey Desai, and the NAACL reviewers for their constructive feedback." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "abstain", "method", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "objective", "other", "result", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "method", "other", "method", "abstain", "other", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "result", "result", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other" ]
[ "In this work we explore Unsupervised Domain Adaptation (UDA) of pretrained language models for downstream tasks.", "We introduce UDALM, a fine-tuning procedure, using a mixed classification and Masked Language Model loss, that can adapt to the target domain distribution in a robust and sample effi-cient manner.", "Our experiments show that performance of models trained with the mixed loss scales with the amount of available target data and the mixed loss can be effectively used as a stopping criterion during UDA training.", "Furthermore, we discuss the relationship between A-distance and the target error and explore some limitations of the Domain Adversarial Training approach.", "Our method is evaluated on twelve domain pairs of the Amazon Reviews Sentiment dataset, yielding 91 .", "74% accuracy, which is an 1 .", "11% absolute improvement over the state-of-the-art.", "Deep architectures have achieved state-of-the-art results in a variety of machine learning tasks.", "However, real world deployments of machine learning systems often operate under domain shift, which leads to performance degradation.", "This introduces the need for adaptation techniques, where a model is trained with data from a specific domain, and then can be optimized for use in new settings.", "Effi-cient techniques for model re-usability can lead to faster and cheaper development of machine learning applications and facilitate their wider adoption.", "Especially techniques for Unsupervised Domain Adaptation (UDA) can have high real world impact, because they do not rely on expensive and time-consuming annotation processes to collect labeled data for domain-specific supervised training, further streamlining the process.", "UDA approaches in the literature can be grouped in three major categories, namely pseudo-labeling techniques (e.g. Yarowsky, 1995; Zhou and Li, 2005), domain adversarial training (e.g. Ganin et al., 2016) and pivot-based approaches (e.g. 
Blitzer et al., 2006; Pan et al., 2010).", "Pseudo-labeling approaches use a model trained on the source labeled data to produce pseudo-labels for unlabeled target data and then train a model for the target domain in a supervised manner.", "Domain adversarial training aims to learn a domain-independent mapping for input samples by adding an adversarial cost during model training that minimizes the distance between the source and target domain distributions.", "Pivot-based approaches aim to select domain-invariant features (pivots) and use them as a basis for cross-domain mapping.", "This work does not fall under any of these categories; rather, we aim to optimize the fine-tuning procedure of pretrained language models (LMs) for learning under domain shift.", "Transfer learning from language models pretrained on massive corpora (Howard and Ruder, 2018; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019; Brown et al., 2020) has yielded significant improvements across a wide variety of NLP tasks, even when small amounts of data are used for fine-tuning.", "Fine-tuning a pretrained model is a straightforward framework for adaptation to target tasks and new domains when labeled data are available.", "However, optimizing the fine-tuning process in UDA scenarios, where only labeled out-of-domain and unlabeled in-domain data are available, is challenging.", "In this work, we propose UDALM, a fine-tuning method for BERT (Devlin et al., 2019), in order to address the UDA problem.", "Our method is based on simultaneously learning the task from labeled data in the source distribution, while adapting to the language of the target distribution using multitask learning.", "The key idea of our method is that by simultaneously minimizing a task-specific loss on the source data and a language modeling loss on the target data during fine-tuning, the model will be able to adapt to the language of the target domain, while learning the supervised task from the available labeled data.", "Our key contributions are:", "(a) We introduce UDALM, a novel, simple and robust unsupervised domain adaptation procedure for downstream BERT models based on multitask learning,", "(b) we achieve state-of-the-art results on the Amazon reviews benchmark dataset, surpassing more complicated approaches, and", "(c) we explore how A-distance and the target error are related and conclude with some remarks on domain adversarial training, based on theoretical concepts and our empirical observations.", "Our code and models are publicly available at https://github.com/ckarouzos/slp_daptmlm.", "Traditionally, UDA has been performed using pseudo-labeling approaches.", "Pseudo-labeling techniques are semi-supervised algorithms that either use the same model (self-training) (Yarowsky, 1995; McClosky et al., 2006; Abney, 2007) or multiple ensembles of models (tri-training) (Zhou and Li, 2005; Søgaard, 2010) in order to produce pseudo-labels for the target unlabeled data.", "Saito et al. (2017) proposed an asymmetric tri-training approach.", "Ruder and Plank (2018) introduced a multi-task tri-training method.", "Rotman and Reichart (2019) and Lim et al. (2020) study pseudo-labeling with contextualized word representations.", "Ye et al.
(2020) combine self-training with XLM-R (Conneau et al., 2020) to reduce the produced label noise and propose CFd, class-aware feature self-distillation.", "Another line of UDA research includes pivot-based methods, focusing on extracting cross-domain features.", "Structural Correspondence Learning (SCL) (Blitzer et al., 2006) and Spectral Feature Alignment (Pan et al., 2010) aim to find domain-invariant features (pivots) to learn a mapping between two domain distributions.", "Ziser and Reichart (2017, 2018, 2019) combine SCL with neural network architectures and language modeling.", "Miller (2019) propose to jointly learn the task and pivots.", "Li et al. (2018b) learn pivots with hierarchical attention networks.", "Pivot-based methods have also been used in conjunction with BERT (Ben-David et al., 2020).", "Domain adversarial training is a dominant approach for UDA (Ramponi and Plank, 2020), inspired by the theory for learning from different domains introduced in Ben-David et al. (2007, 2010).", "Ganin et al. (2016) and Ganin and Lempitsky (2015) propose to learn a task while not being able to distinguish whether samples come from the source or the target distribution, through use of an adversarial cost.", "This approach has been adopted for a diverse set of problems, e.g. sentiment analysis, tweet classification and universal dependency parsing (Li et al., 2018a; Alam et al., 2018; Sato et al., 2017).", "Du et al. (2020) pose domain adversarial training in the context of BERT models.", "Zhao et al. (2018) propose multi-source domain adversarial networks.", "Guo et al. (2018) propose a mixture-of-experts approach for multi-source UDA.", "Guo et al. (2020) explore distance measures as additional losses and use them to construct a dynamic multi-armed bandit controller for the source domains.", "Shen et al. (2018) learn domain-invariant features via the Wasserstein distance.", "Bousmalis et al. (2016) introduce domain separation networks with private and shared encoders.", "Unsupervised pretraining on domain-specific corpora can be an effective adaptation process.", "For example, BioBERT (Lee et al., 2020) and SciBERT (Beltagy et al., 2019) are specialized BERT variants, where pretraining is extended on large amounts of biomedical and scientific corpora, respectively.", "Sun et al. (2019) propose continuing the pretraining of BERT with target domain data and multitask learning using relevant tasks for BERT fine-tuning.", "Xu et al. (2019) introduce a review reading comprehension task and a post-training approach for BERT with an auxiliary loss on a question-answering task.", "Continuing pretraining over multiple phases, from general to domain-specific (DAPT) and task-specific data (TAPT), further improves performance of pretrained language models, as shown by Gururangan et al. (2020).", "Han and Eisenstein (2019) propose AdaptaBERT, which includes a second phase of unsupervised pretraining, in order to use BERT in an unsupervised domain adaptation context.", "Recent works have highlighted the merits of using Language Modeling as an auxiliary task during fine-tuning.", "Chronopoulou et al. (2019) use an auxiliary LM loss to avoid catastrophic forgetting in transfer learning, and Jia et al.
(2019) adopt this approach for cross-domain named-entity recognition.", "We draw inspiration from these approaches and utilize auxiliary Language Modeling for UDA.", "Let X be the input space and Y the set of labels.", "For binary classification tasks Y = {0, 1}.", "In domain adaptation there are two different distributions over X × Y, called the source domain D_S and the target domain D_T.", "In the unsupervised setting, labels are provided for samples drawn from D_S, while samples drawn from D_T are unlabeled.", "The goal is to train a model that performs well on samples drawn from the target distribution D_T.", "This is summarized in Eq. 1: S = {(x_i, y_i)}_{i=1}^{n} ∼ (D_S)^n, T = {x_i}_{i=n+1}^{n+m} ∼ (D_T^X)^m (1), where D_T^X is the marginal distribution of D_T over X, n is the number of samples from the source domain and m is the number of samples from the target domain.", "Fig. 1 gives an overview of the proposed Unsupervised Domain Adaptation through Language Modeling (UDALM).", "Starting from a model that is pretrained on general corpora (Fig. 1a), we keep pretraining it on target domain data using the masked language modeling task (Fig. 1b).", "In the final fine-tuning step (Fig. 1c) we update the model weights using both a classification loss on the labeled source data and a Masked Language Modeling loss on the unlabeled target data.", "In Fig. 1a we see the BERT general pretraining phase.", "BERT (Devlin et al., 2019) is based on the Transformer architecture (Vaswani et al., 2017).", "During BERT pretraining, input tokens are randomly selected to be masked.", "BERT is trained using the Masked Language Modeling (MLM) objective, which consists of predicting the most probable tokens for the masked positions.", "Additionally, it uses a Next Sentence Prediction (NSP) loss, which classifies whether a pair of input sentences is contiguous or not.", "If a labeled dataset is available, a pretrained BERT model can be fine-tuned for the downstream task in a supervised manner with the addition of an output layer.", "In Fig. 1b we initialize a model using the weights of a generally pretrained BERT and continue pretraining on an unsupervised set of in-domain data, in order to adapt to the target domain.", "This step does not require supervised data, since we use the MLM objective.", "For the final fine-tuning step, shown in Fig. 1c,
we perform supervised fine-tuning on the source data, while we keep the MLM objective on the target data as an auxiliary task.", "Following standard practice, we use the [CLS] token representation for classification.", "The classifier consists of a single feed-forward layer.", "During this procedure the model learns the task through the classification objective using the labeled source domain samples, and simultaneously adapts to the target domain data through the MLM objective.", "The model is trained on the source domain labeled data for the classification task and on the target domain unlabeled data for the masked language modeling task.", "We mask only the target domain data.", "During training we interleave source and target data and feed them to the BERT encoder.", "Features extracted from the source data are then used for classification, while target features are used for Masked Language Modeling.", "The mixed loss used for the fine-tuning step is the weighted sum of the classification loss L_CLF and the auxiliary MLM loss L_MLM.", "L_CLF is a cross-entropy loss, calculated on labeled examples from the source domain, while L_MLM is used to predict masked tokens for unlabeled examples from the target domain.", "We train the model over mixed batches, which include both source and target data, used for the respective tasks.", "The mixed loss is presented in Eq. 2: L(s, t) = λ L_CLF(s) + (1 − λ) L_MLM(t) (2).", "We process n labeled source samples s ∼ D_S and m unlabeled target samples t ∼ D_T in a batch.", "The weighting factor λ is selected as the ratio of labeled source data over the sum of labeled source and unlabeled target data, as stated in Eq. 3: λ = n / (n + m) (3).",
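The mixed-loss step of Eqs. 2-3 is easy to express in PyTorch. Below is a minimal sketch, not the released implementation: `encoder`, `clf_head`, and `mlm_head` are hypothetical modules assumed to share one BERT encoder, and the target batch is assumed to be pre-masked in the Hugging Face convention (label -100 on unmasked positions).

```python
import torch
import torch.nn.functional as F

def udalm_step(encoder, clf_head, mlm_head, src_batch, tgt_batch, n, m):
    """One mixed-batch UDALM step: classification CE on labeled source
    samples plus MLM CE on masked unlabeled target samples (Eq. 2),
    weighted by lambda = n / (n + m) (Eq. 3)."""
    # Classification loss on the source sub-batch, using [CLS].
    src_repr = encoder(**src_batch["inputs"]).last_hidden_state[:, 0]
    l_clf = F.cross_entropy(clf_head(src_repr), src_batch["labels"])

    # MLM loss on the masked target sub-batch.
    tgt_repr = encoder(**tgt_batch["inputs"]).last_hidden_state
    logits = mlm_head(tgt_repr)
    l_mlm = F.cross_entropy(
        logits.view(-1, logits.size(-1)), tgt_batch["mlm_labels"].view(-1),
        ignore_index=-100)  # -100 marks positions that were not masked

    lam = n / (n + m)                       # Eq. 3
    return lam * l_clf + (1 - lam) * l_mlm  # Eq. 2
```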
"We evaluate UDALM on the Amazon reviews multi-domain sentiment dataset (Blitzer et al., 2007), a standard benchmark dataset for domain adaptation.", "Reviews with one or two stars are labeled as negative, while reviews with four or five stars are labeled as positive.", "The dataset contains reviews on four product domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K), yielding 12 adaptation scenarios of source-target domain pairs.", "Balanced sets of 2,000 labeled reviews are available for each domain.", "We use 20,000 (randomly selected) unlabeled reviews for (B), (D) and (E).", "For (K), 17,805 unlabeled reviews are available.", "For each of the 12 adaptation scenarios we use 20% of both the labeled source and unlabeled target data for validation, while labeled target data are used exclusively for testing and are not seen during training or validation.", "We use BERT-BASE (uncased) as the Language Model on which we apply domain pretraining.", "The original English BERT-BASE model is a 12-layer, 768-hidden, 12-head, 110M-parameter transformer architecture, trained on the BookCorpus with 800M words and a version of the English Wikipedia with 2,500M words.", "We convert source and target sentences to WordPieces (Wu et al., 2016).", "For target sentences we randomly mask 15% of WordPiece tokens, as in Devlin et al. (2019).", "If a token in a specific position is selected to be masked, 80% of the time it is replaced with a [MASK] token, 10% of the time with a random token, and 10% of the time it remains unchanged.", "The maximum sequence length is set to 512 by truncation of inputs.", "During domain pretraining we train with a batch size of 8 for 3 epochs (2 hours on two GTX-1080Ti cards).", "During the final fine-tuning step of UDALM we train with batch size 36, consisting of n = 1 source sub-batch of 4 samples and m = 8 target sub-batches of 4 samples each.", "We update parameters after every 5 accumulated sub-batches.", "We train for 10 epochs with early stopping on the mixed loss in Eq. 2.", "For all experiments we use the AdamW optimizer (Loshchilov and Hutter, 2018) with learning rate 10^-5.", "Each adaptation scenario requires one hour on one GTX-1080Ti.", "For the domain adversarial experiments we set λ_d = 0.01 in Eq. 4 and train for 10 epochs.", "Models are developed with PyTorch (Paszke et al., 2019) and HuggingFace Transformers (Wolf et al., 2019).", "We select three state-of-the-art methods for comparison.", "Each of the selected methods represents a different line of UDA research, namely domain adversarial training BERT-DAAT (Du et al., 2020), self-training XLM-R-based p+CFd (Ye et al., 2020) and pivot-based R-PERL (Ben-David et al., 2020).", "We report results for the following settings with BERT models: Source only (SO): We fine-tune BERT on source domain labeled data, without using target data.", "Domain Adversarial (DAT): Domain Adversarial Training with BERT.", "Starting from the domain-pretrained BERT (as in Fig. 1b), we then fine-tune the model with domain adversarial training as in Ganin et al. (2016).", "For a BERT model with parameters θ, with L_CLF being a cross-entropy loss for supervised task prediction, L_ADV being a cross-entropy loss for domain prediction and λ_d being a weighting factor, domain adversarial training consists of the minimization criterion described in Eq. 4: min_θ [ L_CLF(θ; D_S) − λ_d L_ADV(θ; D_S, D_T) ] (4).",
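In practice, the minimization criterion of Eq. 4 is usually implemented with the gradient reversal layer of Ganin et al. (2016), which flips the domain-classifier gradients inside a single backward pass. A standard self-contained PyTorch sketch (the λ_d default mirrors the value used above; module names in the usage comment are assumptions):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity on the forward pass, multiplies
    gradients by -lambda_d on the backward pass, so minimizing the domain
    classifier's loss maximizes L_ADV with respect to the encoder."""

    @staticmethod
    def forward(ctx, x, lambda_d):
        ctx.lambda_d = lambda_d
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_d * grad_output, None

def grad_reverse(x, lambda_d=0.01):
    return GradReverse.apply(x, lambda_d)

# Usage sketch with hypothetical modules:
#   feats = encoder(batch)                          # shared BERT features
#   domain_logits = domain_clf(grad_reverse(feats)) # adversarial branch
```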
"UDALM: The proposed method, where we fine-tune the model created in the domain pretraining step using the mixed loss in Eq. 2.", "We present results for all 12 domain adaptation settings in Table 1.", "Results for SO BERT, DAT BERT, DPT BERT and UDALM are averaged over five runs, and we include standard deviations.", "The last line of Table 1 contains the macro-averaged accuracy and deviations over all domain pairs.", "UDALM surpasses all other techniques, yielding an absolute improvement of 1.81% over the SO BERT baseline.", "For fair comparison, we compare only with methods based on pretrained models, mostly BERT.", "We observe that BERT fine-tuned only with the source domain labeled data, without any knowledge of the target domain, is a competitive baseline.", "This source-only model even surpasses state-of-the-art methods developed for UDA, e.g. R-PERL (Ben-David et al., 2020).", "We reproduce the domain adversarial training procedure and present results in the DAT BERT column of Table 1.", "Adversarial training proved to be unstable in our experiments, even after careful tuning of the adversarial loss weighting factor λ_d.", "This is evidenced by the high standard deviations in the DAT BERT experiments.", "We observe that adversarial training does not manage to outperform the source-only baseline.", "Domain pretraining increases the average accuracy with an absolute improvement of 0.85% over the source-only baseline.", "Continuing MLM pretraining on the target domain data leads to better model adaptation, and therefore improved performance, on the target domain.", "This is consistent with previous works in supervised (Gururangan et al., 2020; Xu et al., 2019; Sun et al., 2019) and unsupervised settings (Han and Eisenstein, 2019; Du et al., 2020).", "UDALM yields an additional 0.96% absolute improvement in average accuracy over domain pretraining.", "Keeping the MLM loss during fine-tuning therefore leads to better adaptation and acts as a regularizer that prevents the model from overfitting on the source domain.", "We also observe smaller standard deviations when using UDALM, which indicates that including the MLM loss during fine-tuning can result in more robust training.", "UDALM surpasses, in terms of macro-averaged accuracy, all other approaches for unsupervised domain adaptation on the Amazon reviews multi-domain sentiment dataset.", "Specifically, our method improves on the state-of-the-art pseudo-labeling (p+CFd; Ye et al., 2020), domain adversarial (DAAT; Du et al., 2020) and pivot-based (R-PERL; Ben-David et al., 2020) approaches by 1.11%, 1.62% and 4.24%, respectively; note that, in contrast to DAT, we did not have to perform extensive tuning for the other methods, including UDALM.", "We further investigate the impact of using different amounts of target domain unlabeled data on model performance, to study the sample efficiency of UDALM.", "We experiment with settings of 500, 2000, 6000, 10000 and 14000 samples, by randomly limiting the number of unlabeled target domain data.", "For each setting we conduct three experiments with BERT models: (1) DPT, (2) DAT and (3) UDALM.", "When no target data are available, all methods are equivalent to a source-only fine-tuned BERT.", "Again, we do not tune the hyper-parameters for DPT or UDALM.", "Fig. 2 shows the average accuracy on the twelve adaptation scenarios of the studied dataset.", "We see that UDALM produces robust performance improvements when we limit the amount of target data, indicating that it can be used in low-resource settings.", "However, training BERT in a domain adversarial manner shows instabilities.", "This is further discussed in Section 7.", "A common problem when performing UDA is the lack of target labeled data that can be used for hyperparameter validation.", "For example, Ruder and Plank (2018) use a small set of labeled target data for validation, putting the problem in a semi-supervised setting.", "When training under domain shift, optimization of model performance on the source data may not result in optimal performance on the target data.", "To illustrate this, we examine whether the minimization of the mixed loss can be used as a stopping criterion for UDA training.", "We compare five stopping criteria: (1) fixed training for 1 epoch, (2) fixed training for 3 epochs, (3) fixed training for 10 epochs, (4) stop when the minimum classification loss is reached for the source data, and (5) stop when the minimum mixed loss (Eq. 2) is reached.",
"For (4) and (5) we train for 10 epochs with patience 3.", "We report the average accuracy of the five stopping criteria over the twelve adaptation scenarios of the Amazon Reviews dataset in Table 2.", "Training for a fixed number of 10 epochs and stopping at the minimum mixed loss perform best, yielding comparable accuracies of 91.75% and 91.73%, respectively.", "Note that stopping at the minimum source loss halts the fine-tuning process too soon and does not allow the model to learn the target domain effectively.", "Overall, we observe that the mixed loss can be effectively used for early stopping, regularizing the model and alleviating the need for an extensive search for the optimal number of training steps.", "This is an indication that the mixed loss could be used for model validation.", "Ben-David et al. (2007, 2010) provide a theory of learning from different domains.", "A key outcome of this work is the following theorem: Theorem (Ben-David et al., 2007, 2010): Let H be the hypothesis space, let D_S, D_T be the two domains, and let ε_S, ε_T be the corresponding error functions.", "Then for any h ∈ H: ε_T(h) ≤ ε_S(h) + (1/2) d_{HΔH}(D_S, D_T) + C (5), where d_{HΔH}(D_S, D_T) is the HΔH-divergence (Kifer et al., 2004) between the two domains, a measure of distance between domains that can be estimated from finite samples.", "Eq. 5 defines an upper bound for the expected error ε_T(h) of a hypothesis h on the target domain as the sum of three terms, namely the expected error on the source domain ε_S(h), the divergence between the source and target domain distributions (1/2) d_{HΔH}(D_S, D_T), and the error C of the ideal joint hypothesis.", "When such a hypothesis exists, this term is considered relatively small and is in practice ignored.", "The first term bounds the expected error on the target domain by the expected error on the source domain and is expected to be small, due to supervised learning on the source domain.", "The second term gives a notion of distance between the source and target domain extracted features.", "Intuitively, this equation states: if there exists a hypothesis h that has small error on the source data and the source feature space is close to the target feature space, then this hypothesis will have low error on the target data.", "Domain Adversarial Training aims to learn features that simultaneously result in low source error and low distance between target and source feature spaces, based on the combined loss in Eq. 4.", "According to Ben-David et al. (2007), the HΔH-divergence can be approximated by the proxy A-distance, defined by Eq. 6 given the domain classification error ε_D: d_A = 2(1 − 2 ε_D) (6).", "We calculate an approximation of the distance between domains.", "Following prior work (Ganin et al., 2016; Saito et al., 2017), we create an SVM domain classifier.", "We feed the SVM with BERT's [CLS] token representations, measure the domain classification error, and compute the A-distance as in Eq. 6.", "We train the domain classifier on 2,000 samples from each of the source and target domains.",
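The proxy A-distance computation of Eq. 6 reduces to training a domain classifier on the [CLS] features and reading off its held-out error. A sketch with scikit-learn; the linear kernel and the 80/20 split are illustrative defaults, not necessarily the paper's exact setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def proxy_a_distance(src_feats: np.ndarray, tgt_feats: np.ndarray) -> float:
    """Estimate the proxy A-distance of Eq. 6: train an SVM to tell the
    domains apart from [CLS] features, measure its held-out error eps_D,
    and return d_A = 2 * (1 - 2 * eps_D)."""
    X = np.vstack([src_feats, tgt_feats])
    y = np.concatenate([np.zeros(len(src_feats)), np.ones(len(tgt_feats))])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
    clf = LinearSVC(C=1.0).fit(X_tr, y_tr)
    eps_d = 1.0 - clf.score(X_te, y_te)  # domain classification error
    return 2.0 * (1.0 - 2.0 * eps_d)
```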
"Fig. 3 shows the A-distance along with the source and target error, averaged over the twelve available domain pairs, using representations obtained from four methods, namely SO BERT, DAT BERT, DPT BERT and UDALM.", "DAT BERT minimizes the distance between domains.", "DPT BERT also reduces the A-distance, to levels similar to DAT, without using an explicit loss to minimize it.", "To our surprise, we found that, although it achieves the lowest error rate, UDALM does not significantly reduce the proxy A-distance compared to the source-only baseline.", "Additionally, we observe that the source error is correlated with model performance on the target task, i.e. models with lower source error also have lower target error.", "UDALM specifically achieves high accuracy on the source task and is able to transfer the task knowledge across domains, while DAT is able to bring domain representations closer, but at the cost of weaker performance on the task at hand.", "Overall, we do not observe a correlation between the resulting A-distance and model performance on the target domain.", "Therefore, a lower distance between domains, achieved intentionally or not, is not a necessary condition for good performance on the target domain (Shu et al. (2018) state that feature distribution matching is a weak constraint when high-capacity feature extractors are used; intuitively, a high-capacity feature extractor can perform arbitrary transformations to the input features in order to match the distributions), and our efforts could be better spent towards synergistic learning of the supervised source task and the target domain distribution.", "Domain adversarial training (Ganin et al., 2016) faces some critical limitations that make the method difficult to reproduce, due to high hyperparameter sensitivity and instability during training.", "Such limitations have been highlighted by other authors in the UDA literature.", "For example, according to Shen et al. (2018), when a domain classifier can perfectly distinguish target from source representations, there will be a gradient vanishing problem.", "Shah et al. (2018) state that domain adversarial training is unstable and needs careful hyperparameter tuning in their experiments.", "Wang et al. (2020) report results over three multi-domain NLP datasets, where domain adversarial training in conjunction with BERT under-performs.", "Ruder and Plank (2018) found that the domain adversarial loss did not help in their experiments on the Amazon reviews dataset.", "In our experiments we note that domain adversarial training results in worse performance than naive source-only training.", "Furthermore, we experienced the need for extensive tuning of the λ_d parameter in Eq. 4 every time the experimental setting changed (e.g. when testing different amounts of available target data, as in Section 6.2).", "This motivated us to further investigate the behavior of BERT fine-tuned with the adversarial cost.", "For visual inspection, we perform t-SNE (Maaten and Hinton, 2008) on representations extracted from BERT under four UDA settings in Fig. 4.", "In Fig. 4a we observe features extracted using BERT with Domain Adversarial Training, and we compare them with features from SO BERT (Fig. 4b), DPT BERT (Fig. 4c) and UDALM (Fig. 4d).",
"We observe that domain adversarial training manages to group target and source samples tightly together, especially in the case of positive samples.", "Nevertheless, in the process, DAT introduces significant distortion in the semantic space, which is reflected in model performance; note that we include this visualization for a single source-target domain pair as an example, but we performed multiple runs of t-SNE over all 12 domain pairs and this behavior appeared consistently.", "We can attribute this behavior to two factors.", "First, the formulation of the adversarial loss in Eq. 4 can lead to trivial solutions.", "In order to maximize the L_ADV term of Eq. 4, the model can simply flip all domain labels, namely predict that source samples belong to the target domain and vice versa.", "In this case the model can still discriminate between domains, and domain-independent representations are not encouraged.", "We empirically observed this behavior in our experiments with DAT, and only extensive hyperparameter tuning could alleviate this issue.", "Additionally, Eq. 4 aims to minimize the upper bound of the target error ε_T(h) in Eq. 5.", "While this is desirable, reduction of the upper bound does not necessarily result in reduction of the bounded term in all scenarios.", "Furthermore, optimizing the L_ADV(θ; D_S, D_T) term can lead to increasing L_CLF(θ; D_S), and therefore one must find a balance between the two adversarial terms, again through careful hyper-parameter tuning.", "These issues could potentially be alleviated by including regularization terms that discourage trivial solutions and improve robustness.", "Therefore, given the lack of guarantees for good performance and the practical considerations, further investigation should be conducted regarding the robustness and reproducibility of DAT for UDA.", "Unsupervised domain adaptation of pretrained language models is a challenging problem with direct real-world applications.", "In this work we propose UDALM, a robust, plug-and-play training strategy which is able to improve performance on the target domain, achieving state-of-the-art results across 12 adaptation settings in the multi-domain Amazon reviews dataset.", "Our method produces robust results with little hyper-parameter tuning, and the proposed mixed loss can be used for model validation, allowing for fast model development.", "Furthermore, UDALM scales with the amount of available unsupervised data from the target domain, allowing for adaptation in low-resource settings.", "In our analysis, we discuss the relationship between the A-distance and the target error.", "We observe that a low A-distance may not imply low target error for high-capacity models.", "Additionally, we examine limitations of Domain Adversarial Training and highlight that the adversarial cost may lead to distortion of the feature space and negatively impact performance.", "In the future we plan to apply UDALM to other tasks under domain shift, such as sequence classification, question answering and part-of-speech tagging.", "Furthermore, we plan to extend our method to temporal and style adaptation, by adding more relevant auxiliary tasks that model language shift over time and across different platforms.", "Finally, we want to investigate the effectiveness of the proposed fine-tuning approach in supervised scenarios.", "This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational
Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH-CREATE-INNOVATE (project safety4all, code T1EDK-04248). This work has been partially supported by computational time granted by the Greek Research & Technology Network (GRNET) in the National HPC facility ARIS.", "The authors would like to thank Efthymios Georgiou for his comments and suggestions." ]
[ "objective", "method", "result", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "result", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "other", "method", "other", "other", "other", "result", "result", "other", "result", "result", "other", "other", "other", "other", "other", "other", "result", "abstain", "other", "objective", "method", "other", "other", "result", "result", "other", "abstain", "other", "other", "other", "other", "other", "result", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "result", "result", "method", "objective", "objective", "other", "other" ]
[ "Syntax has been demonstrated highly effective in neural machine translation (NMT).", "Previous NMT models integrate syntax by representing 1-best tree outputs from a well-trained parsing system, e.g., the representative Tree-RNN and Tree-Linearization methods, which may suffer from error propagation.", "In this work, we propose a novel method to integrate source-side syntax implicitly for NMT.", "The basic idea is to use the intermediate hidden representations of a well-trained end-to-end dependency parser, which are referred to as syntax-aware word representations (SAWRs).", "Then, we simply concatenate such SAWRs with ordinary word embeddings to enhance basic NMT models.", "The method can be straightforwardly integrated into the widely-used sequence-to-sequence (Seq2Seq) NMT models.", "We start with a representative RNN-based Seq2Seq baseline system, and test the effectiveness of our proposed method on two benchmark datasets of the Chinese-English and English-Vietnamese translation tasks, respectively.", "Experimental results show that the proposed approach is able to bring significant BLEU score improvements on the two datasets compared with the baseline, 1.74 points for Chinese-English translation and 0.80 point for English-Vietnamese translation, respectively.", "In addition, the approach also outperforms the explicit Tree-RNN and Tree-Linearization methods.", "In the past few years, neural machine translation (NMT) has drawn increasing interests due to its simplicity and promising performance (Bah-danau et al., 2014; Jean et al., 2015; Luong and Manning, 2015; Luong et al., 2015; Shen et al., 2016; Vaswani et al., 2017).", "The widely used Corresponding author.", "An example of input dependency tree.", "sequence-to-sequence (Seq2Seq) framework combined with attention mechanism achieves significant improvement over the traditional statistical machine translation (SMT) models on a variety of language pairs, such as Chinese-English (Shi et al., 2016; Mi et al., 2016; Vaswani et al., 2017; Cheng et al., 2018).", "Under an encoder-decoder architecture, the Seq2Seq framework first encodes the source sentence into a sequence of hidden vectors, and then incrementally predicts the target sentence (Cho et al., 2014a).", "Recently, inspired by the success of syntax-based SMT (Williams et al., 2016), researchers propose a range of interesting approaches for exploiting syntax information in NMT models, as syntactic trees could offer long-distance relations in sentences (Shi et al., 2016; Wu et al., 2017b; Li et al., 2017; Bastings et al., 2017; Hashimoto and Tsuruoka, 2017).", "As a straightforward method, tree-structured recurrent neural network ( Tree-RNN ) can elegantly model the source-side syntax and globally encode the whole trees.", "Eriguchi et al. (2016), Chen et al. (2017a) and Yang et al. (2017) show that Tree-RNN can effectively integrate syntax-oriented trees into Seq2Seq NMT models.", "Regardless of the effectiveness of Tree-RNN, we find that it suffers from a severe low-efficiency problem because of the heterogeneity of different syntax trees, which leads to increasing difficul-ties for batch computation compared with sequential inputs.", "Even with deliberate batching method of Neubig et al. 
(2017), our preliminary experiments show that Tree-RNN with gated recurrent units (GRU) can be nearly four times slower when integrated into a classical Seq2Seq system.", "To solve this problem, Tree-Linearization is a good alternative for syntax encoding.", "The main idea is to linearize syntax trees into sequential symbols, and then exploit the resulting sequences as inputs for NMT.", "Li et al. (2017) propose a depth-first method to traverse a constituent tree, converting it into a sequence of symbols mixed with sentential words and syntax labels.", "Similarly, Wu et al. (2017b) combine several strategies of tree traversal for dependency syntax integration.", "In this work, we present an implicit syntax encoding method for NMT, enhancing NMT models with syntax-aware word representations (SAWRs).", "Figure 1 illustrates the basic idea, where trees are modeled indirectly by sequential vectors extracted from an encoder-decoder dependency parser.", "On the one hand, the method avoids the structural heterogeneity and thus can be integrated efficiently; on the other hand, it does not require discrete 1-best tree outputs, alleviating the error propagation problem induced by syntax parsers.", "Concretely, the vector outputs are extracted from the encoding outputs of the encoder-decoder dependency parser.", "As shown in Figure 1, the encoding outputs, denoted as o = o_1 ... o_6, are then integrated into Seq2Seq NMT models by being directly concatenated with the source input word embeddings after a linear projection.", "We start with a Seq2Seq baseline with attention mechanism (Bahdanau et al., 2014), following previous studies of the same research line, and then integrate source dependency syntax via SAWRs.", "We conduct experiments on Chinese-English and English-Vietnamese translation tasks, respectively.", "The results show that our method is very effective for source syntax integration.", "With source dependency syntax, the performance of Chinese-English and English-Vietnamese translation can be significantly boosted by 1.74 BLEU points and 0.80 BLEU points, respectively.", "We also compare the method with the representative Tree-RNN and Tree-Linearization approaches of syntax integration, finding that our method is able to achieve larger improvements than the two approaches on both tasks.", "All the code is released publicly at https://github.com/zhangmeishan/SYN4NMT under the Apache License 2.0.",
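The integration sketched in Figure 1 boils down to a projection and a concatenation. A minimal PyTorch illustration under assumed dimensions; the parser states are detached here, treating the parser as fixed, and all names and sizes are ours rather than the paper's.

```python
import torch
import torch.nn as nn

class SAWRInput(nn.Module):
    """Build SAWR-enhanced encoder inputs: hidden vectors o_1 ... o_n from
    a well-trained dependency parser's encoder are linearly projected and
    concatenated with the ordinary word embeddings."""

    def __init__(self, parser_dim=400, proj_dim=100, emb_dim=300, vocab=30000):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb_dim)
        self.proj = nn.Linear(parser_dim, proj_dim)

    def forward(self, word_ids, parser_states):
        # parser_states: (batch, seq_len, parser_dim), extracted from the
        # parser's encoder; detach() keeps the parser weights fixed.
        sawr = self.proj(parser_states.detach())
        # Result: (batch, seq_len, emb_dim + proj_dim), fed to the NMT encoder.
        return torch.cat([self.embed(word_ids), sawr], dim=-1)
```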
(2015) as our baseline.", "Under the standard encoder-decoder architecture, an encoder first maps the source-language input sentence into a sequence of hidden vectors, and a decoder then incrementally predicts the target output sentence.", "In particular, we should notice that several recent models (Vaswani et al., 2017; Zheng et al., 2017; Cheng et al., 2018) which have been shown to be more powerful can also serve as our baseline, since these models focus on very different aspects of NMT, which could be potentially complementary with our focus of syntax integration.", "We will demonstrate it by experimental analysis as well.", "In the encoder part, a single-layer bi-directional recurrent neural network (Bi-RNN) is employed to encode the sentence in order to capture features from the current word and the unbounded left and right contextual words.", "Given a source-language input sentence x = x 1 x n and its embedding sequence e x 1 e x n , the Bi-RNN produces an encoding sequence of dense vectors h = h 1 h i h n : h i = h i h i , h i = rnn L ( e x i , h i 1 ) h i = rnn R ( e x i , h i +1 ) (1) where rnn L /R can be either GRU (Cho et al., 2014b) or LSTM.", "We use GRU all through this paper for efficiency following Chen et al. (2017a).", "The decoder part incrementally predicts the target word sequence y = y 1 y m , whose translation probability is defined as follows:", "The training objective is to maximize the probability of the reference translation.", "During evaluation, we aim to search for a target sentence with the highest probability for a given source sentence.", "The probability of the j -th target word is computed by a two-layer feed-forward neural network: p ( y j | y 1 y j 1 , h ) = g ( s j 1 , c j ) , (3) where s j 1 = rnn tgt ( e y j 1 c j 1 , s j 2 ) is the output of a left-to-right RNN over the predicted words, and the c j / c j 1 is the weighted sum over the encoding sequence h of the source sentence via the attention mechanism, which is computed as follows: c j = n (cid:88) k =1 j,k h k j,k = exp( j,k ) (cid:80) nl =1 exp( j,l ) j,l = s T j 1 W a h l (4) where W a is the model parameter in attention.", "Syntax information has been demonstrated to be valuable for NMT.", "Previously, there were two representative approaches to encode syntax into an NMT model.", "The first approach directly represents an input syntax tree by Tree-RNN , and then uses the Tree-RNN outputs as additional encoder inputs for NMT.", "The second approach models source syntax trees indirectly by first converting a hierarchical tree into a sequence of symbols, and then use the symbols as inputs for NMT.", "The second method is referred to as Tree-Linearization here.", "Tree-RNN is able to represent the syntax structures fully and comprehensively.", "However, because of the heterogeneity of different syntax trees, this approach suffers serious inefficiency encoder decoder input embedding projection Bi-RNN LLL decoder parser output translation output encoder Figure 2: The framework of the SAWR approach, where the left part shows the encoder-decoder of a supervised dependency parsing model and the right part shows the NMT encoder-decoder.", "problem as the increased difficulty of batch computation for GPU neural computation.", "The second approach exploits an alternative sequence to substitute the original trees, which solves the inefficiency problem.", "But it may bring loss of syntax information because the hierarchical tree structure is no longer maintained in the new representation, which could be 
potentially useful for NMT.", "Both the two syntax integration approaches are based on discrete 1-best outputs of a supervised dependency parser, which may suffer from the error propagation problem.", "Incorrect syntax trees as inputs for NMT may produce erroneous outputs, leading to inappropriate translation results.", "In order to alleviate the problem, we present a novel method not using the discrete parsing outputs.", "We focus on supervised dependency parsing models which can be formalized as an encoder-decoder architecture, and exploit the encoder outputs as the inputs for our Seq2Seq NMT model.", "The encoder outputs are sequences of dense vectors aligning with the source sentential words, as shown in Figure 1, and thus they could be easily combined with the encoder part of our NMT model.", "We refer to this method as SAWR for brief.", "Our approach takes the implicit hidden outputs from a supervised parser as inputs for NMT, which greatly reduces the direct influence brought from discrete 1-best parser outputs.", "Figure 2 shows the framework of SAWR.", "Concretely, we first project the encoder outputs of a dependency parsing model into a sequence of vectors by a feed-forward linear layer, as shown by the projection module in Figure 2: s i = W o i + b (5) where o = o 1 o n is the encoder output of a parsing model, W and b are model parameters.", "Then we concatenate the resulting vectors with the source embeddings as inputs for the baseline Bi-RNN Encoder.", "Thus the encoder process can be formalized as follows: h = Bi-RNN (cid:0) e x 1 s 1 , , e x n s n (cid:1) .", "We can train both dependency parsing and machine translation model parameters concurrently.", "In this work, we focus on the machine translation task and do not involve the training objective of dependency parsing.", "However, we can still fine-tune model parameters of the encoder part of dependency parsing by back-propagating the training losses of NMT into this part as well.", "Actually, SAWRs are also similar to the ELMO embeddings (Peters et al., 2018).", "ELMO learns context word representations by using language model as objective, while SAWRs learn syntax-aware word representations by using dependency parsing as objective.", "On the other hand, compared with the Tree-RNN and Tree-Linearization methods which encode syntax trees by neural networks directly, SAWRs are less sensitive to the output syntax trees.", "Thus the SAWR method can alleviate the error propagation problem.", "Data.", "We conduct experiments on the Chinese-English and English-Vietnamese translation tasks, respectively.", "For Chinese-English, we use the parallel training data from the publicly available LDC corpora, 1 with 28.3M Chinese words and 34.5M English words, respectively, consisting of 1.25M sentence pairs, and test model performances on the NIST datasets, using NIST MT02 as the development data, and MT03-06 as test datasets.", "For English-Vietnamese, we use the standard IWSLT 2015 dataset, 2 which consists of about 133K sentence pairs, and evaluate our models by exploiting 1 LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06.", "the TED tst2012 and tst2013 as the development and test datasets, respectively.", "For the source side sentences, we construct vocabularies of the most frequent 50K words, while for the target side sentences, we apply byte-pair encodings (BPE) (Sennrich et al., 2016) with 32K merges to obtain subword units, and construct the target vocabularies by the most frequent 
32K sub-words.", "During training, we use only the sentence pairs whose source and target lengths both are no longer than 50 and 150 for Chinese-English and English-Vietnamese translations, respectively.", "Evaluation.", "We use the case insensitive 4-gram BLEU score as the main evaluation metrics (Papineni et al., 2002), and adopt the script multi-bleu.perl in the Mose toolkit.", "3 Significance tests are conducted based on the best-BLEU results for each approach by using bootstrap resampling (Koehn, 2004).", "Alternatively, in order to compare the effectiveness of our model with other syntax integration methods, we implement a Tree-RNN approach and a Tree-Linearization approach, respectively: Tree-RNN: We build a one-layer bidirectional Tree-RNN with GRU over input word embeddings, producing syntax-enhanced word representations, which are then fed into the encoder of NMT as basic inputs.", "The method is similar to the model proposed by Chen et al. (2017a).", "Tree-Linearization: We first convert dependency trees into constituent trees (Sun and Wan, 2013), and then feed it into the NMT model proposed by Li et al. (2017).", "Hyperparameters.", "We set the dimension sizes of all hidden neural layers to 1024, except the input layers for RNNs (i.e. input word embeddings and the projection layer of SAWR), which are set to 512.", "We initialize all model parameters by random uniform distribution between [ 0 . 1 , 0 . 1] .", "We apply dropout on the output layer of word translation with a ratio of 0.5.", "We adopt the Adam algorithm (Kingma and Ba, 2014) for parameter optimization, with the initial learning rate of 5 10 4 , the gradient clipping threshold of 5, and the mini-batch size of 80.", "During translation, we employ beam search for decoding with the beam size of 5.", "Source-Side Parsing.", "We employ the state-of-the-art BiAffine dependency parser recently proposed by Dozat and Manning (2016) to obtain the source-side dependency syntax information.", "The BiAffine parser can also be understood as an encoder-decoder model, where the encoder part is a three-layer bi-directional LSTM over the input words, and the decoder uses BiAffine operations to score all candidate dependency arcs and finds the highest-scoring trees via dynamic programming.", "For Chinese-English translation, we train the dependency parser on Chinese Treebank 7.0 with Stanford dependencies, 4 using 50K random sentences as the training data and the remaining as the test data.", "The parser achieves 81 .", "02% parsing accuracy (labeled attached score, LAS) on the test dataset.", "For English-Vietnamese translation, we train the dependency parser on English WSJ corpus, following the same data split as Dozat and Manning (2016), and obtaining a LAS of 93 .", "84% on the test dataset.", "5 4.2 Speed Comparison All our experiments are run on a single GPU NVIDIA TITAN Xp.", "We report the averaged one-epoch training time on the Chinese-English translation dataset (consuming all 125M sentence pairs) as follows: Baseline 105 min SAWR 142 min Tree-RNN 498 min Tree-Linearization 137 min 4 https://nlp.stanford.edu/software/stanford-dependencies.shtml 5 For simplicity, we use only words as inputs for both Chinese and English dependency parsing, avoiding the influences brought by other inputs, such as automatic POS tags.", "The SAWR system spends averaged 142 minutes, 6 37 minutes slower than the baseline model.", "The Tree-Linearization spends averaged 137 minutes per epoch, which is the fastest syntax integration method.", "Our 
"Source-Side Parsing.", "We employ the state-of-the-art BiAffine dependency parser recently proposed by Dozat and Manning (2016) to obtain the source-side dependency syntax information.", "The BiAffine parser can also be understood as an encoder-decoder model, where the encoder part is a three-layer bi-directional LSTM over the input words, and the decoder uses BiAffine operations to score all candidate dependency arcs and finds the highest-scoring trees via dynamic programming.", "For Chinese-English translation, we train the dependency parser on Chinese Treebank 7.0 with Stanford dependencies (https://nlp.stanford.edu/software/stanford-dependencies.shtml), using 50K random sentences as the training data and the remaining sentences as the test data.", "The parser achieves 81.02% parsing accuracy (labeled attachment score, LAS) on the test dataset.", "For English-Vietnamese translation, we train the dependency parser on the English WSJ corpus, following the same data split as Dozat and Manning (2016), obtaining an LAS of 93.84% on the test dataset.", "(For simplicity, we use only words as inputs for both Chinese and English dependency parsing, avoiding the influences brought by other inputs, such as automatic POS tags.)", "Speed Comparison.", "All our experiments are run on a single NVIDIA TITAN Xp GPU.", "We report the averaged one-epoch training time on the Chinese-English translation dataset (consuming all 1.25M sentence pairs) as follows: Baseline, 105 min; SAWR, 142 min; Tree-RNN, 498 min; Tree-Linearization, 137 min.", "The SAWR system takes 142 minutes on average, 37 minutes slower than the baseline model.", "(For fair comparison, we exclude the time consumed by the encoder part of the dependency parsing model, as the other methods perform parsing offline.)", "The Tree-Linearization model takes an average of 137 minutes per epoch, making it the fastest syntax integration method.", "Our SAWR approach takes 5 more minutes than Tree-Linearization, approximately 3.5% of the total time per epoch, which is negligible.", "The Tree-RNN model takes 498 minutes per epoch, nearly four times slower than the baseline model.", "(The Tree-RNN model is implemented with deliberate batching motivated by Neubig et al. (2017), without which the model is intolerably slow, reaching about 1,900 minutes per epoch.)", "According to the results, we can conclude that the Tree-RNN model is highly inefficient for encoding dependency syntax, whereas SAWR and Tree-Linearization are almost as efficient as the baseline Seq2Seq system.", "Table 1 shows the main results of all approaches on the Chinese-English datasets.", "Considering the effect of random initialization, we train three individual models for each approach, and use the averaged BLEU scores for fair comparison.", "According to the results, we can see that all syntax-integrated approaches bring significant improvements over the baseline system, which indicates that syntax is highly effective for Chinese-English machine translation.", "In addition, the proposed SAWR approach obtains the largest BLEU improvement, an average of $\Delta = 1.74$ BLEU points over the baseline system.", "The Tree-RNN and Tree-Linearization approaches bring improvements of $\Delta = 1.32$ and $\Delta = 1.23$ BLEU points on average, respectively.", "These results show that our implicit syntax-aware encoding method is better than Tree-RNN and Tree-Linearization.", "We compare our NMT models with other state-of-the-art methods as well.", "The results are just for reference, since the experimental details can be very different.", "In particular, we list the relative improvements over the corresponding baseline models brought by integrating syntax structures, calculated according to their papers.", "All these studies exploit lower baselines compared with our models.", "The Tree-RNN and Tree-Linearization systems are essentially similar to Chen et al. (2017a) and Li et al. (2017), respectively.",
(2017), respectively.", "As shown, our approaches can still obtain large improvements based on a stronger baseline.", "Table 2 shows the final results on the IWSLT 2015 English-Vietnamese translation task.", "The overall tendency is similar to that of Chinese-English translation.", "The syntax information can boost the translation performances by using any of the three approaches.", "The SAWR approach gives the best translation performance, significantly outperform the baseline system by = 0 .", "80 BLEU points.", "While although the other two approaches bring better performances, the improvements are not significant.", "The results demonstrate the advantage of the proposed implicit SAWR approach.", "By not using the 1-best parser outputs, our approach can reduce the error propagation problem, thus bring larger improvements with syntax.", "In particular, we find that the increases of BLEU scores are smaller than that of Chinese-English translation by integrating syntactic features.", "The averaged BLEU increases are 0 .", "55 for English-Vietnamese and 1 .", "43 for Chinese-English.", "The possible reason may be due to that the source English sentences are more grammatically rigorous Parser MT03 MT04 MT05 MT06 Average no Tune 38.42 40.60 38.27 38.04 38.83 Tune 37.33 39.45 36.93 37.03 37.69 Table 3: The influence of fine-tuning parser parameters in the SAWR system.", "than Chinese sentences.", "For example, the English functional words such as of and s which indicate the possessive relationship, should be always kept in sentences by standard, while their Chinese correspondence may be omitted in sentences.", "In this section, we conduct analysis on Chinese-English translation from different aspects to better understand the SAWR approach of integrating source-side dependency syntax for NMT.", "The SAWR approach directly uses the encoder outputs of a dependency parser as extra inputs for NMT.", "In the above experiments, we keep the parser model parameters fixed, letting them unin-fluenced from NMT optimization.", "Actually, this part can be further fine tuned along with the NMT learning, by treating them as one kind model parameters.", "Thus there arises a question that whether fine-tuning the parser model parameters can bring better performance.", "As an interesting attempt, we can simultaneously fine tune the parameters of both the parser and the Seq2Seq NMT model during training.", "Figure 3 shows the results.", "We can see that fine-tuning decreases the average BLEU score by 38 .", "83 37 .", "69 = 1 .", "14 significantly.", "This may be because that fine-tuning disorders the representation ability of the parser and makes its function more overlapping with other network components.", "This further demonstrates that pretrained syntax-aware word representations are helpful for NMT.", "Alignment quality is an important metric to illustrate and evaluate machine translation outputs.", "Here we study how syntax features influence the alignment results for NMT.", "We approximate the alignment scores by the attention probabilities as shown in Equation 4.", "8 For better understanding 8 We aim to offer an intuitive interpretation by a carefully-selected example.", "In fact, the alignment computation method here may be problematic (Koehn and Knowles, 2017).", "the effectiveness of syntax, we choose the target-side English word of for comparison, which is a grammatical functional word.", "Figure 3 shows the alignment probability distributions returned by different approaches.", "Intuitively, this 
"But according to the results, we can see that only the SAWR model assigns a high attention score to it, which is consistent with our intuition.", "The other three models all align it to the source word for modern, with a high confidence of over 85%.", "The possible reason for of being aligned to the word for modern could be that of modern is a high-frequency collocation in the training corpora.", "Here we perform model ensembles to examine the divergences of the three syntax-integration approaches (Zhou et al., 2017b; Denkowski and Neubig, 2017).", "Intuitively, the hetero-approach ensemble, which combines three NMT models of different methods, should obtain better performance than the homo-approach ensembles, which combine three NMT models of the same method, since NMT models of different syntax-integration approaches have larger divergences.", "Table 4 shows the results.", "First, we can see that ensembling is an effective technique to improve translation performance.", "(Figure 4: The effect of source input length, with the MT03-MT06 sentences binned by length into the intervals (0,10], (11,20], (21,30], (31,40], (41,50] and >50.)", "More importantly, the results show that the heterogeneous ensemble achieves an averaged BLEU improvement of 43.10 - 41.24 = 1.86 points, better than the gains achieved by all three homo-approach ensembles, denoting that the three approaches could be mutually complementary in representing dependency syntax, and the resulting models of the three approaches are highly diverse.", "Intuitively, by introducing source syntax into the NMT model, relations between long-distance words are explicitly modeled by dependency trees; thus we can expect that models enhanced by source syntax are able to produce better translations for longer sentences.", "Figure 4 shows the performance of the baseline and all syntax-enriched models in terms of source sentence length, where we bin all the MT03-MT06 sentences by their lengths into six intervals.", "The results show that the BLEU scores are improved significantly when source sentence lengths are over 10, which confirms our intuition.", "Finally, we examine how the performance of the dependency parser influences the final translation quality.", "(Table 5: Final results based on the transformer. Transformer: MT03 40.45, MT04 42.76, MT05 40.09, MT06 39.67, average 40.74; SAWR: 41.63, 43.60, 41.68, 40.21, average 41.78 (+1.04); Tree-RNN: 41.24, 43.38, 41.04, 40.02, average 41.42 (+0.68); Tree-Linearization: 41.12, 43.02, 41.04, 39.86, average 41.26 (+0.52).)", "While the full dependency parser is trained on 50K sentences, we retrain three weaker dependency parsers on 30K, 10K and 5K sentences, respectively.", "Figure 5 shows the NMT BLEU scores and the parsing accuracies.", "It is clear that the parsing accuracy directly influences the translation quality, indicating the effectiveness and importance of exploiting syntactic information.", "Here we conduct experiments based on the transformer NMT model (Vaswani et al., 2017), which is a stronger baseline, to further verify the effectiveness of our proposed method.", "This also demonstrates that the proposed SAWR method is not limited to a certain NMT baseline.", "Concretely, we extend the bottom word representations by incorporating the syntactic encodings $s = s_1 \cdots s_n$ (shown in Equation 5) into them, and then feed them into the transformer encoder through a linear projection layer to align with the input dimension.", "We implement Tree-RNN and Tree-Linearization for the transformer in a similar way, only adapting the source-side input word representations.",
"We adopt a widely-used setting with 8 heads, 6 layers and a hidden dimension size of 512.", "Table 5 shows the results.", "As shown, the transformer results are indeed much better than those of the RNN-based baseline.", "The BLEU scores show an average increase of 40.74 - 37.09 = 3.65.", "In addition, we can see that syntax information still has a positive influence on the transformer.", "The SAWR approach also outperforms the baseline system significantly.", "Particularly, we find that our SAWR approach is much more effective than the Tree-RNN and Tree-Linearization approaches.", "The results further demonstrate the effectiveness of SAWRs in syntax integration for NMT.", "By explicitly expressing the structural connections between words and phrases, syntax trees have been demonstrated to be helpful in SMT (Liu et al., 2006; Cowan et al., 2006; Marton and Resnik, 2008; Xie et al., 2011; Li et al., 2013; Williams et al., 2016).", "Although the representative Seq2Seq NMT models are able to capture latent long-distance relations by using neural network structures such as GRU and LSTM (Sutskever et al., 2014; Wu et al., 2016), recent studies show that explicitly integrating syntax trees into NMT models can bring further gains (Sennrich and Haddow, 2016; Shi et al., 2016; Zhou et al., 2017a; Wu et al., 2017a; Aharoni and Goldberg, 2017).", "Under the NMT setting, the exploration of syntax trees can be more flexible, because of the strong capability of neural networks in representing arbitrary structures.", "Recursive neural networks based on LSTM or GRU have been one natural method to model syntax trees (Zhu et al., 2015; Tai et al., 2015; Li et al., 2015; Zhang et al., 2016; Teng and Zhang, 2016; Miwa and Bansal, 2016; Kokkinos and Potamianos, 2017), being capable of representing entire trees globally.", "Eriguchi et al. (2016) present the first work to apply a bottom-up Tree-LSTM to NMT.", "The major drawback is that its bottom-up composing strategy is insufficient for bottom nodes.", "Thus bi-directional extensions have been suggested (Chen et al., 2017a; Yang et al., 2017).", "Since Tree-RNN suffers from a serious inefficiency problem, Li et al. (2017) suggest a Tree-Linearization alternative, which converts constituent trees into a sequence of symbols mixed with words and syntactic tags.", "The method is as effective as the Tree-RNN approaches yet more efficient.", "Noticeably, all these studies focus on constituent trees.", "There have been several studies on NMT using dependency syntax.", "Hashimoto and Tsuruoka (2017) propose to combine the head information with sequential words together as source encoder inputs, where their input trees are latent dependency graphs.", "Recently, there have been several studies using convolutional neural structures to represent source dependency trees, where tree nodes are modeled individually (Chen et al., 2017b; Bastings et al., 2017).", "Wu et al. (2017b) build a syntax-enhanced encoder with multiple Bi-RNNs over several different word sequences based on different traversal orders over dependency trees, i.e., the original sequential order and several tree-based orders.",
"All these methods require certain extra efforts to encode the source dependency syntax over a baseline Seq2Seq NMT.", "We proposed a novel syntax integration method, SAWR, to incorporate source dependency-based syntax into NMT.", "It encodes dependency syntax implicitly, not requiring discrete syntax trees as inputs.", "Experiments showed that the method brings significantly better performance on both the Chinese-English and English-Vietnamese translation tasks.", "In addition, we compared the method with two approaches based on Tree-RNN and Tree-Linearization, which have been previously exploited for syntax integration, finding that our method is more effective while also being very efficient.", "We conducted several experimental analyses to study our proposed method more deeply.", "We thank all anonymous reviewers for their valuable comments.", "We thank Huadong Chen, Haoran Wei and Zaixiang Zheng for their help in implementing the baseline neural machine translation models.", "This work is supported by National Natural Science Foundation of China (NSFC) grants 61525205, U1836222, and 61672211." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "result", "objective", "other", "other", "other" ]
[ "Anaphora and ellipses are two common phenomena in dialogues.", "Without resolving referring expressions and information omission, dialogue systems may fail to generate consistent and coherent responses.", "Traditionally, anaphora is resolved by coreference resolution and ellipses by query rewrite.", "In this work, we propose a novel joint learning framework of modeling coreference resolution and query rewriting for complex, multi-turn dialogue understanding.", "Given an ongoing dialogue between a user and a dialogue assistant, for the user query, our joint learning model first predicts coreference links between the query and the dialogue context, and then generates a self-contained rewritten user query.", "To evaluate our model, we annotate a dialogue based coreference resolution dataset, MuDoCo, with rewritten queries.", "Results show that the performance of query rewrite can be substantially boosted (+2.3% F1) with the aid of coreference modeling.", "Furthermore, our joint model outperforms the state-of-the-art coreference resolution model (+2% F1) on this dataset.", "In recent years, dialogue systems have attracted growing interest, and been applied to various scenarios, ranging from chatbots to task-oriented dialogues to question answering.", "Despite rapid progress in dialogue systems, several difficulties remain in the understanding of complex, multi-turn dialogues.", "Two major problems are anaphora resolution (Clark and Manning, 2016a,b) and ellipsis (Kumar and Joshi, 2016) in follow-up turns.", "Take the dialogue in Figure 1 as an example: ellipsis happens in user turn 2 where the user is asking for the capital of Costa Rica without explicitly mentioning the country again; coreference happens in user turn 3 where the capital refers to Work done while the first author was an intern at Apple.", "San Jose .", "Without resolving the anaphoric reference and the ellipsis, dialogue systems may fail to generate coherent responses.", "Query rewrite (Quan et al., 2019) is an approach that converts a context-dependent user query into a self-contained utterance so that it can be understood and executed independent of previous dialogue context.", "This technique can solve many cases where coreference or ellipsis happens.", "For instance, the capital in user turn 3 is changed to San Jose in the rewrite.", "Furthermore, the ellipsis of the country name Costa Rica in user turn 2 can be revealed through rewriting.", "The rewritten utterance improves multi-turn dialogue understanding (Yang et al., 2019) by reducing dependency on the previous turns.", "Although query rewrite implicitly resolves coreference resolution, there is information not contained in a rewrite.", "First, it does not provide a distinct coreference link between mentions across dialogue turns as in the classic coreference resolution task.", "This is particularly disadvantageous when there is entity ambiguity in the rewritten sentence.", "For example, in Figure 1, since San Jose in Rewrite turn 3 can be either San Jose in Costa Rica or San Jose in California, it is likely that the system ends up with an incorrect response by generating System", "(i) instead of System", "(ii) due to the wrong interpretation of San Jose.", "Second, mention detection, an essential step in coreference resolution (Peng et al., 2015), is not involved in query rewrite.", "By knowing which span in an utterance is a mention, downstream systems like named entity recognition and intent understanding can perform better (Bikel et al., 2009).", "Third, if 
"To resolve the above issues, we propose a novel joint learning framework that incorporates the benefits of reference resolution into the query rewrite task.", "To the best of our knowledge, there does not exist, at the time of writing, an English conversational dataset that couples annotations of both query rewrite and coreference resolution (as links or clusters).", "This motivates us to collect annotations for query rewrite on a recent dialogue dataset, MuDoCo (Martin et al., 2020), which already has coreference links between the user query and the dialogue context.", "Compared to existing query rewrite datasets (Quan et al., 2019; Anantha et al., 2020), rewriting in MuDoCo is much more challenging, since it involves reasoning over multiple turns and spans multiple domains.", "We design a joint learning model adopting the GPT-2 (Radford et al., 2019) architecture that learns both query rewrite and coreference resolution.", "Given an ongoing dialogue, our model first predicts the coreference links, if any, between the latest user query and the dialogue context.", "Then it generates the rewritten query by drawing upon the coreference results.", "Our experiments show that query rewrite performance can be substantially boosted with the aid of coreference training.", "In addition, our model outperforms strong baselines on the two individual tasks.", "Since both tasks fundamentally solve reference resolution, the joint training facilitates knowledge sharing.", "Our contributions can be summarized as follows: We present a novel joint learning framework of modeling coreference resolution and query rewrite for multi-turn dialogues.", "We augment the MuDoCo dataset with query rewrite labels.", "To the best of our knowledge, our augmented MuDoCo is the first English dialogue dataset with both coreference resolution and query rewrite annotations.", "We propose a novel GPT-2 based model to tackle the two target tasks, and show that joint training with coreference resolution helps in improving the quality of the query rewrites.", "The augmented dataset with our annotations, along with the modeling source code, is available at https://github.com/apple/ml-cread .", "Query Rewrite: The most relevant line of research is the adoption of query rewrite in dialogues to tackle anaphora or ellipsis.", "Many prior works employ an LSTM-based seq-to-seq model, which takes the dialogue context and the user query as input, and generates the rewritten query.", "Quan et al. (2019) use the pointer-generator model (See et al., 2017) to rewrite the user query in restaurant-domain task-oriented dialogues.", "By comparison, query rewrite in the MuDoCo dataset is more challenging, as it covers 6 domains and the rewriting patterns are more complex and diverse than in the CamRest676 dataset (Wen et al., 2017).", "Rastogi et al. (2019) introduce an auxiliary objective of copying entity tokens from the delexicalized utterances to augment the learning of the pointer network.", "In Su et al. (2019), two separate attention distributions are learned for the dialogue context and the user query respectively, with a control gate.",
"This modified copy mechanism shows improvements over the standard pointer-generator on both LSTM-based models and transformer-based models (Vaswani et al., 2017).", "Note that in the dataset used in their work, the dialogue context has only 2 utterances; MuDoCo, in contrast, has up to 8 utterances, making it much more challenging for query rewrite.", "Coreference Resolution: Research on document-based coreference resolution has a long history (a detailed survey can be found in Ng (2010)).", "Various approaches have been proposed, ranging from learning mention-pair classifiers (Ng and Cardie, 2002; Bengtson and Roth, 2008) and latent structure-based models (Fernandes et al., 2012; Björkelund and Kuhn, 2014; Martschat and Strube, 2015) to the more recent neural pipeline based systems that rely on syntactic parsers (Raghunathan et al., 2010) and clustering algorithms (Clark and Manning, 2016a,b).", "The first neural end-to-end coreference resolution model was proposed in Lee et al. (2017) and achieved better results without external resources.", "An improved version was proposed in Lee et al. (2018), which considers higher-order structures by iteratively refining span representations.", "Recently, powerful pre-trained models have been used to extract representations for these end-to-end models, using BERT (Joshi et al., 2019) or SpanBERT (Joshi et al., 2020).", "Wu et al. (2020) approach the problem in a question answering framework.", "For each detected mention candidate, the sentence it resides in serves as the query and is used to predict the referent in the passage.", "Different from these works, we focus on coreference resolution in dialogues, with the following main distinctions: 1) the speaker information in dialogues is clear; 2) less descriptive content may cause pronoun mentions to be more ambiguous; and 3) coreference resolution is conducted only between the latest user query and the previous dialogue context, since, unlike in document-based coreference resolution where a model can look ahead for the resolution, future turns are not available to a dialogue agent.", "We encourage the reader to refer to Martin et al. (2020) for more details.", "Joint Learning: In contrast to prior works that focus solely on either query rewrite or coreference resolution, we present a novel joint learning approach to tackle both tasks using one single model.", "We hope that this work serves as a first step towards this new, challenging and practical problem for dialogue understanding.", "The MuDoCo dataset contains 7.5k task-oriented multi-turn dialogues across 6 domains.", "A dialogue has an average of 2.6 turns and a maximum of 5 (a turn includes a user query and a system response).", "Figure 2 shows an example.", "For each partial dialogue, the coreference links, if existing, are annotated between the latest user query and its dialogue context.", "For example, when we consider the partial dialogue preceding up to user turn 2, there is a coreference link between the anaphora this in user turn 2 and the antecedent song in user turn 1.", "(Figure 2: An example from the MuDoCo dataset in the music domain with our query rewrite annotation; word spans in the same color belong to the same mention cluster.)",
"When an anaphora has multiple antecedents in the context, e.g., song in user turn 1 and Yellow Submarine in system turn 1, only one of them is annotated as its referent in the coreference link.", "On top of the existing coreference labels, we annotate the rewrite for each utterance.", "The goal is to rewrite the query into a self-contained query, independent of the dialogue context.", "30 annotators were recruited for the data collection.", "Each of them is shown a partial dialogue, and is asked (1) to decide if the query needs to be rewritten due to coreference or ellipsis; and (2) to provide the rewritten query, when rewriting is required.", "We notice that there can be various ways of rewriting an utterance.", "For example, some annotators might include every detail of the rewritten entity, while others might choose a precise term; some might paraphrase the rewritten utterance, while others keep the same expression.", "To ensure data consistency and high annotation quality, we designed a comprehensive guideline for the annotators to follow and undertook a two-stage collection process: 1) we organized two training sessions with annotators.", "In each session, 50 representative examples were selected and assigned to each annotator.", "An author inspected these training results individually and provided feedback to the annotators.", "2) 5% of the grading results were manually evaluated by an author for quality assurance.", "Detailed annotation guidelines can be found in the Appendix.", "The joint learning task requires the machine to predict both the coreference links and the rewritten query for the latest user query, given an ongoing dialogue.", "The outputs of the two individual tasks complement each other and provide more comprehensive information for dialogue understanding.", "For instance, the Yellow Submarine in Figure 2 can be either a song name or an album name.", "Explicit coreference resolution helps to disambiguate between the various possibilities by linking entities to previously resolved ones.", "(Figure 3: The proposed model for joint learning of coreference resolution and query rewrite, designed using the GPT-2 architecture.)", "More importantly, the supervision of coreference resolution can be beneficial for rewriting an anaphora to its antecedent.", "Our proposed model for jointly learning coreference resolution and query rewrite is designed based on the GPT-2 architecture, presented in Figure 3.",
"The input to the model is the concatenation of the dialogue context and the latest user query, where special tokens are used to separate utterances and indicate speaker information.", "Passing through the standard decoder layers, the hidden state $h_t^l \in \mathbb{R}^d$ and the attention scores $a_t^{l,j} \in \mathbb{R}^T$ at each position of the input sequence are calculated, where l, j and t denote the index of the decoder layer, the index of the attention head, and the input token position, respectively; d and T denote the embedding size and the length of the input sequence, respectively.", "Inspired by the end-to-end coreference resolution model (Lee et al., 2017), our model first predicts mentions in the user query and grounds them to their corresponding referents in the dialogue context using attention heads.", "The model then generates the rewritten query conditioned on the resolved coreference links.", "The prediction process has four main steps, described in detail below.", "Step 1: Mention Detection. First, the model detects any possible referring expressions in the user query.", "Here we use the term mention to include all those expressions that require reference resolution (e.g., pronouns or partial entity names).", "We formulate mention detection as a sequence labeling problem: each token in a query is labeled as one of three classes {S, E, N}, referring to Start of mention, End of mention and None, respectively.", "This sequence tagger in the mention detector, parameterized by a feed-forward network, takes the hidden states of the query from the last decoder layer as input, and predicts the sequence of class labels.", "The mention spans in the query can then be determined by a pair of mention start S and end E tags.", "For instance, in Figure 3 the label at the position of this is class S and that at the position of from is class E, while the rest of the positions in the query are labeled as class N.", "We use $m_S$ and $m_E$ to denote the start and end position index of a predicted mention m, respectively.", "Step 2: Reference Resolution. For each detected mention m, the model resolves it to its antecedent (or referent) in the dialogue context by predicting the span boundaries: the position index of the referent start $r_S$ and end $r_E$.", "Essentially, the distributions of the boundaries ($r_S$ and $r_E$) are learned by supervising multiple attention heads associated with the target mention m.", "In other words, the attention distribution $a_{m_S}$ (the attention score of each position associated with the mention start $m_S$) is supervised to focus on the referent start $r_S$.", "Similarly, the attention scores $a_{m_E}$ associated with the mention end $m_E$ are used to learn the boundary of the referent end $r_E$.", "Concretely: $q_{r_S} = \frac{1}{L'J'} \sum_{l}^{L'} \sum_{j}^{J'} a^{l,j}_{m_S}$ and $q_{r_E} = \frac{1}{L'J'} \sum_{l}^{L'} \sum_{j}^{J'} a^{l,j}_{m_E}$, (1) where $q_{r_S}$ and $q_{r_E}$ are the probability distributions that a given token represents the referent start $r_S$ and end $r_E$, respectively, and $L'$ and $J'$ are the specified numbers of involved decoder layers and attention heads.", "We then take the argmax of these boundary distributions to resolve the referent r.", "Our design of reference resolution effectively leverages the powerful attention mechanism in GPT-2 without adding any extra components for reference resolution.",
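A minimal sketch of Steps 1-2 follows. The tagger layout, tensor shapes, and the assumption that per-example attention maps come as a tensor of shape (layers, heads, T, T) (as in HuggingFace-style GPT-2 wrappers) are our own illustrative choices, not the released implementation.

```python
# Illustrative sketch of mention tagging (Step 1) and attention-head
# based referent resolution (Step 2, Eq. 1).
import torch
import torch.nn as nn

class MentionTagger(nn.Module):
    """Labels each query token as Start / End / None of a mention."""
    def __init__(self, d_model: int = 768, n_classes: int = 3):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, n_classes))

    def forward(self, query_hidden: torch.Tensor) -> torch.Tensor:
        # query_hidden: (batch, q_len, d_model) -> logits over {S, E, N}
        return self.ffn(query_hidden)

def resolve_referent(attentions, m_start, m_end, layers, heads):
    """attentions: (n_layers, n_heads, T, T) for one example.
    Averages the selected heads' attention rows at the mention-boundary
    positions to obtain the referent-boundary distributions (Eq. 1)."""
    a = attentions[layers][:, heads]                     # (L', J', T, T)
    q_r_start = a[:, :, m_start, :].mean(dim=(0, 1))     # (T,)
    q_r_end = a[:, :, m_end, :].mean(dim=(0, 1))         # (T,)
    return q_r_start.argmax().item(), q_r_end.argmax().item()
```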
"Step 3: Binary Rewriting Classification. The model completes the coreference resolution in Steps 1 and 2, after which it starts producing the rewritten query.", "Unlike existing query rewrite systems that directly generate the rewrite given the input, our model first predicts whether the incoming query needs to be rewritten, using a binary classifier.", "As shown in Figure 3, the classifier, a two-layer feed-forward network followed by a softmax layer, takes as input the hidden state of the first decoding step and predicts a vector with two entries representing the rewrite and no-rewrite classes.", "Only when the binary prediction is true, i.e., the classifier predicts the class indicating that a rewrite is required, does the model enter Step 4 to generate the rewritten query; otherwise, the input query is directly copied as the output.", "We show that a well-learned binary classifier with 93% accuracy functions as a filter that not only helps the model minimize the risk of incorrectly rewriting already self-contained queries, but also allows the rest of the generation process to focus solely on how to rewrite incomplete queries during training.", "Step 4: Query Rewrite Generation. In this final step, the model runs the generation step based on its binary decision of whether or not to rewrite.", "Unlike the standard language modeling setup in GPT-2, where the output sequence is generated directly from the last hidden states, we design the Coref2QR attention layer, which allows information gained during coreference resolution to effectively assist the query rewrite generation.", "First, all relevant hidden states of the mentions and referents predicted in Steps 1 and 2 are assembled to form a memory pool M.", "Note that it is possible for an example to have more than one coreference link.", "At each time step $t'$ during the rewrite generation, the Coref2QR attention layer, operating as the standard multi-head attention mechanism, takes $h_{t'}$ as the query to attend over the coreference-related states M, treating them as keys and values.", "The resulting attention head $c_{t'}$ is summed with $h_{t'}$ to obtain the feature $f_{t'}$ before the final output token classifier.", "This design improves the information flow between the two tasks, enabling the model to directly utilize information about previously resolved coreferents during rewrite generation.", "The Coref2QR attention can be applied to any arbitrary decoder layer to facilitate a deeper interaction between rewrite and coreference resolution in the model.", "Formally, at each decoder layer l, the memory pool $M^l$ stores the coreference-related states produced at layer l.", "At generation step $t'$, the Coref2QR layer takes $h^l_{t'}$ as the query to attend over $M^l$ to obtain $c^l_{t'}$.", "The final feature $f_{t'}$ before the output token classifier is then obtained by $f_{t'} = h^L_{t'} + \frac{1}{L} \sum_{l} c^l_{t'}$.", "For simplicity, in Figure 3 we only illustrate the Coref2QR attention for the last decoder layer.", "Our results and analysis show that this Coref2QR attention design benefits the quality of query rewrite, especially in rewriting an anaphora into its antecedent.",
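At its core, the Coref2QR layer is one multi-head attention over the pooled coreference states. Below is a minimal sketch under our own naming, which may differ from the released code; the multi-layer variant would apply one such module per decoder layer and average the resulting c vectors as in the equation above.

```python
# Illustrative sketch of the Coref2QR attention for a single decoder layer.
import torch
import torch.nn as nn

class Coref2QRAttention(nn.Module):
    def __init__(self, d_model: int = 768, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, h_t: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # h_t:    (batch, 1, d_model) current rewrite-step state (the query)
        # memory: (batch, m, d_model) hidden states of the predicted
        #         mentions/referents (keys and values)
        c_t, _ = self.attn(h_t, memory, memory)
        return h_t + c_t     # feature f_t' fed to the output token classifier
```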
"Optimization. During training, an input sequence of length T is formed by the concatenation of the dialogue context, the user query and the target query rewrite.", "Four objectives, corresponding to the steps in the model, are used for training.", "For mention detection, the objective is the cross-entropy between the predicted sequence of mention classes $p^M$ and its ground-truth sequence $y^M$: $\mathcal{L}^M = \sum_{t=q_S}^{q_E} -\log\big((y^M_t)^\top p^M_t\big)$, (2) where $q_S$ and $q_E$ denote the start and end index of the query, respectively, and $\top$ is the transpose operation.", "For reference resolution, the objective is the cross-entropy between the predicted distributions of the antecedent boundaries $q^n$ and the corresponding ground truth $y^{R_n}$.", "The final loss for reference resolution is the sum of the losses over the existing coreference links: $\mathcal{L}^R = \sum_{n=1}^{N} \big[-\log\big((y^{R_n}_{r_S})^\top q^n_{r_S}\big) - \log\big((y^{R_n}_{r_E})^\top q^n_{r_E}\big)\big]$, (3) where N is the number of coreference links in an example.", "$q^n_{r_S}$ and $q^n_{r_E}$ represent the predicted distributions of the reference start $r_S$ and reference end $r_E$, respectively.", "When an example does not contain a coreference link, $\mathcal{L}^R$ is 0.", "For the binary rewriting classification, the objective $\mathcal{L}^B$ (4) is the standard cross-entropy between the predicted rewrite/no-rewrite distribution and the ground-truth binary label.", "For generation, as in the standard language modeling task, we use the cross-entropy between the predicted sequence $p^Q$ and its ground-truth sequence $y^Q$: $\mathcal{L}^Q = \sum_{t'} -\log\big((y^Q_{t'})^\top p^Q_{t'}\big)$, (5) where $t'$ is the time step in the word sequence of the query rewrite.", "Note that $\mathcal{L}^Q$ is 0 for examples that do not need a rewrite.", "The final loss is the sum of all these losses: $\mathcal{L} = \mathcal{L}^M + \mathcal{L}^R + \mathcal{L}^B + \mathcal{L}^Q$.",
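Assembled, the joint objective is a plain sum of the four losses (Eqns. 2-5). The sketch below assumes mention/rewrite logits of shape (batch, seq, classes), boundary distributions that are already normalized, and one-hot boundary targets; these interfaces are our assumptions, not the released code's.

```python
# Illustrative sketch of the joint training objective (Eqns. 2-5).
import torch
import torch.nn.functional as F

def joint_loss(mention_logits, mention_tags,     # Step 1: (B, q_len, 3), (B, q_len)
               ref_dists, ref_targets,           # Step 2: lists of (T,) dists / one-hot targets
               binary_logits, binary_labels,     # Step 3: (B, 2), (B,)
               rewrite_logits, rewrite_tokens):  # Step 4: (B, r_len, V), (B, r_len)
    l_m = F.cross_entropy(mention_logits.transpose(1, 2), mention_tags)        # Eq. 2
    l_r = sum(-(q.clamp_min(1e-9).log() * y).sum()                             # Eq. 3
              for q, y in zip(ref_dists, ref_targets))                         # 0 if no links
    l_b = F.cross_entropy(binary_logits, binary_labels)                        # Eq. 4
    l_q = F.cross_entropy(rewrite_logits.transpose(1, 2), rewrite_tokens)      # Eq. 5
    return l_m + l_r + l_b + l_q                                               # total loss
```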
"Dataset. As discussed in Sec. 3, we conduct all experiments on the MuDoCo dataset and follow the provided data split (https://github.com/facebookresearch/mudoco).", "Data from the 6 domains are aggregated to form train/dev/test sets with 16k, 1.9k and 1.9k examples, respectively.", "Each example contains the dialogue context, the latest user query, and the corresponding coreference resolution and query rewrite annotations.", "Statistics for each domain are provided in Table 1.", "Out of all examples that are not the first turn, 64.2% contain coreference links and 43.7% require query rewrite.", "This makes the task more challenging, as the model also needs to learn when not to rewrite a query and when to predict no coreference links.", "Note that not every coreference link requires rewriting, as in the MuDoCo dataset there are coreference annotations where the mention has the exact same word span as its referent.", "Setup. The GPT-2 decoder layers and the word classification layer in our model are initialized with the pre-trained weights of the GPT-2 small model.", "We fine-tune the model using the Adam (Kingma and Ba, 2014) optimizer with learning rate 5e-05 and batch size 15.", "The criterion for early stopping is the averaged performance of coreference resolution and query rewrite on the development set.", "Results are obtained as the average of 5 runs.", "Evaluation Metrics. The standard BLEU-4 (Papineni et al., 2002) between the generated and the target sentences is reported.", "In addition, to highlight the quality of the rewritten parts in the generated sentences, following the post-processing in Quan et al. (2019), we measure an F1 score calculated by comparing machine-generated words with ground-truth words for only the ellipsis/coreference part of the user utterances.", "We also report the percentage of all referents in the ground-truth coreference links that are successfully generated in the query rewrite, denoted as reference match (RM).", "The RM ratio explicitly reflects the quality of coreference resolution in the generated rewritten query.", "Baselines. The standard seq-to-seq model with attention (seq2seq) and its pointer network (PN; Vinyals et al. (2015)) and pointer-generator network (PG; See et al. (2017)) variants are implemented as baselines.", "The concatenation of the dialogue context and the query is fed as input, and the output is the target rewrite.", "The size of the hidden states is 300, and the word vectors are initialized with GloVe embeddings (Pennington et al., 2014).", "Results. Table 2 shows the query rewrite results.", "The low F1 scores alongside the high BLEU scores result from filtering out the non-rewritten repeated tokens in post-processing when calculating F1.", "This allows us to better evaluate the quality of the rewritten parts and to better differentiate between good and bad generations in our task.", "We find that our joint model substantially outperforms all LSTM-based seq-to-seq models on all metrics.", "Although the pointer-generator in LSTMs can effectively copy words from the input to its generation, the powerful transformer architecture with pre-trained weights allows better learning of the rewriting patterns.", "To fairly investigate the impact of coreference modeling on the generation of query rewrites, we train a variant of our model using only the query rewrite objectives (Eqns. (4) and (5)), denoted as the QR-only model.", "We can see that without coreference resolution, the F1 score drops from 60.2 to 57.9 and the reference match drops from 82.0 to 78.7.", "This illustrates the improved ability of the joint model to rewrite anaphoric expressions, since the model can leverage its coreference resolution predictions to generate more accurate query rewrites.", "We present a detailed case study with model predictions in Sec. 5.5.",
(2020).", "This is mainly because SpanBERT is better at capturing span information, which facilitates tasks such as coreference resolution where reasoning about relationships between spans is required.", "In comparison, our joint learning model achieves competitive and even slightly better results.", "This indicates that the design of our model leveraging attention heads inside GPT-2 is effective at predicting coreference links in dialogues.", "To test if the supervision of query rewrite affects the optimization of coreference resolution in joint learning, we train a model variant 2 The baseline in Martin et al. (2020) is not compared for two reasons: 1) their setups in training/evaluation is different than ours in many ways, e.g., they only consider finished dialogues; 2) their source code is not released.", "3 https://github.com/mandarjoshi90/ coref MUC B 3 CEAF \u0000 4 P R F1 P R F1 P R F1 Avg.", "using only the objectives for coreference resolution (Eqns.", "(2) and (3)), denoted as coref-only model.", "It is observed that the results of the coref-only model are very close to that of the joint model, showing that the addition of coreference resolution in joint learning is beneficial to query rewrite without sacrificing the performance of the former.", "Here, we investigate how the different components in our joint model contribute to the performance of query rewrite.", "We remove one component at a time and examine the performance of query rewrite.", "As shown in Table 4, without the designed coref2qr attention layer, the performance degrades with a drop of 2.9% F1 and 1.4% RM rate.", "By further removing the supervision of coreference modeling from our joint learning model, the model is solely optimized towards the objectives of query rewrite and produces worse results compared to the complete model.", "These results indicate that through joint learning, the model's ability of generating the rewritten query improves, including its ability to rewrite the anaphora with its antecedent, by leveraging the information from coreference resolution modeling.", "In addition, the binary head plays an essential role in our model.", "The accuracy of this binary classifier is 93.9%.", "Without the binary head, the performance drop can be up to 5.9% F1 (60.2 -> 54.3).", "This shows that with the binary classification, the model is able to focus on rewriting the input query without worrying about whether to rewrite or not.", "In this section we analyze query rewrite performance on two different types of rewriting, coreference (coref.) and ellipses (elp.).", "The F1 score over three main domains and all test sets are reported in Table", "5. 
"The seq2seq+pg model is the baseline seq2seq model with pointer-generator; the QR-only model is our model variant trained without coreference modeling.", "The overall trend shows that 1) when the dialogue contains coreferences, the joint learning model is more capable of rewriting the query by leveraging its coreference predictions; and 2) when coreferences are not present but the query still needs rewriting on account of information omission, the joint model can still perform competitively with the QR-only model.", "We demonstrate several examples of query rewrites generated by different models to provide more insight into the task and into the benefits of joint learning.", "The coreference links predicted by the joint learning model are appended after its generated rewrite.", "Two examples that require coreference resolution in query rewrite are shown in Table 6.", "In the left dialogue, the song in the user query refers back to Talking to the Moon, mentioned in the first user turn.", "(Table 6: Example dialogues from the test set, e.g., dialogue context, usr: When was Talking to the Moon by Bruno Mars released?; the rewrites generated by three different models are shown.)", "Both seq2seq+pg and the QR-only model fail to generate the correct reference in the rewrite, probably because of the high complexity of a long dialogue.", "The joint learning model not only correctly predicts the coreference link pointing from the mention to its referent in the first turn, but also generates a rewrite perfectly consistent with its coreference prediction.", "A similar trend can be observed in the right example.", "The first two models cannot identify which Ariana to generate, while our model is able to rewrite with the correct one with the aid of the correct coreference resolution.", "While our model does well on most of the test cases, there are situations where the joint model fails to predict correctly.", "A representative failure example is provided in Appendix A.2.", "Table 7 shows an ellipsis example.", "The implicit location in the user query can be recovered through rewriting by both GPT-2 based models, while the LSTM-based model tends to keep the query unchanged.", "This indicates that 1) even with the pointer-generator's ability to copy source text, the seq2seq model is not capable enough of handling the difficult information-omission rewrites; and 2) the joint learning model still performs well on ellipses, while substantially benefiting in coreference cases.", "We propose a novel joint learning framework for coreference resolution and query rewrite in dialogues.", "Modeling coreference resolution not only complements the missing information in query rewrite, but is also beneficial to rewriting anaphoric expressions.", "Our joint learning model can predict coreference links between the user query and the dialogue context, and generate the rewritten query.", "We show that with the aid of coreference resolution, the performance of query rewrite can be substantially boosted.", "Furthermore, our model produces competitive results in coreference resolution when compared to state-of-the-art BERT-based systems.", "We hope that the presented joint learning task, with the release of our query rewrite annotations on the MuDoCo dataset, provides a promising research direction in multi-turn dialogue understanding.", "One restriction of our model is that, because it is designed to predict the boundaries of a reference, it can only handle cases involving contiguous spans of words.", "In addition, the influence of query rewrite on coreference resolution is limited, due to the nature of the information flow in our current model design.",
is limited due to the nature of the information flow in our current model design.", "Future work will focus on these directions.", "The authors would like to thank Hadas Kotek for her help with the data annotation guidelines and the organization of the grading project.", "The authors would also like to thank Barry-John Theobald, Stephen Pulman, Jason Williams and Murat Akbacak for discussions and feedback, and the anonymous reviewers for their helpful feedback." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "objective", "abstain", "result", "result", "abstain", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "result", "method", "method", "method", "abstain", "other", "other" ]
[ "Modern models for event causality identification (ECI) are mainly based on supervised learning, which are prone to the data lacking problem.", "Unfortunately, the existing NLP-related augmentation methods cannot directly produce available data required for this task.", "To solve the data lacking problem, we introduce a new approach to augment training data for event causality identification, by iteratively generating new examples and classifying event causality in a dual learning framework.", "On the one hand, our approach is knowledge guided, which can leverage existing knowledge bases to generate well-formed new sentences.", "On the other hand, our approach employs a dual mechanism, which is a learnable augmentation framework, and can interactively adjust the generation process to generate task-related sentences.", "Experimental results on two benchmarks EventStoryLine and Causal-TimeBank show that 1) our method can augment suitable task-related training data for ECI; 2) our method outperforms previous methods on EventStoryLine and Causal-TimeBank (+2.5 and +2.1 points on F1 value respectively).", "Event causality identification (ECI) aims to identify causal relations between events in texts, which can provide crucial clues for NLP tasks, such as logical reasoning and question answering (Girju, 2003; Oh et al., 2013, 2017).", "This task is usually modeled as a classification problem, i.e. determining whether there is a causal relation between two events in a sentence.", "For example in Figure 1, an ECI system should identify two causal relations in two sentences: (1) attack cause killed in S1; (2) statement cause protests in S2.", "Most existing methods for ECI heavily rely on annotated training data (Mirza and Tonelli, 2016; Kimani Gray, a young man who likes football, was killed in a police attack shortly after a tight match. In the week following the fatal violence, several protests have erupted because of the official statement . S1: S2: Kimani Gray, a young man who likes football, was killed in a police attack shortly after a tight match. EDA deletion S3: Figure 1: S1 and S2 are causal sentences that contain causal events . S3 is produced by EDA based on S1. The dotted line indicates the causal relation. 
Riaz and Girju, 2014b; Hashimoto et al., 2014; Hu and Walker, 2017; Gao et al., 2019).", "However, existing datasets are relatively small, which impedes the training of high-performance event causality reasoning models.", "According to our statistics, the largest widely used dataset, the EventStoryLine Corpus (Caselli and Vossen, 2017), contains only 258 documents, 4,316 sentences, and 1,770 causal event pairs.", "Therefore, data scarcity is an essential problem that urgently needs to be addressed for ECI.", "To date, data augmentation is one of the most effective methods to address the data scarcity problem.", "However, most NLP-related augmentation methods are task-independent frameworks that produce new data in a single pass (Zhang et al., 2015; Guo et al., 2019; Xie et al., 2019b).", "In these frameworks, data augmentation and the target task are modeled independently.", "This often leads to a lack of task-related characteristics in the generated data, such as task-related linguistic expressions and knowledge.", "For example, easy data augmentation (EDA) (Wei and Zou, 2019) is the most representative method; it relies on lexical substitution, deletion, swapping, and insertion to produce new data.", "However, solely relying on such word operations often generates new data that lacks task-related qualities.", "As shown in Figure 1, S3 is produced by EDA; it lacks a linguistic expression of the causal semantics between kill and attack .", "Therefore, how to interactively model data augmentation and the target task so as to generate new data with task-related characteristics is a challenging problem for ECI.", "Specific to ECI, we argue that an ideal task-related generated causal sentence needs to possess the following two characteristics.", "(1) The two events in the causal sentence need to have a causal relation.", "We call this property Causality .", "For example, there is usually a causal relation between an attack event and a kill event, while there is nearly no causal relation between an attack event and a born event.", "(2) The linguistic expression of the causal sentence needs to be well-formed to express the causal semantics of the events.", "We call this property Well-formedness ; it consists of", "a) canonical sentence grammar,", "b) event-related entities with semantic roles (e.g. the attack was carried out by the police in S1), and", "c) cohesive words that express complete causal semantics (e.g. in a and other words except for events and entities in S1).",
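To make the limitation of word-level operations concrete, here is a minimal sketch of two of EDA's operations, random deletion and random swap (the sentence is taken from S1 in Figure 1; the deletion probability is an illustrative choice, not a value from the paper). A single unlucky deletion or swap can remove or scramble exactly the cohesive words that carry the causal semantics:

```python
import random

def eda_random_deletion(tokens, p=0.1):
    # EDA's random deletion: drop each token independently with probability p.
    kept = [t for t in tokens if random.random() > p]
    return kept or [random.choice(tokens)]  # never return an empty sentence

def eda_random_swap(tokens, n_swaps=1):
    # EDA's random swap: exchange two random positions n_swaps times.
    tokens = list(tokens)
    for _ in range(n_swaps):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

s1 = ("Kimani Gray , a young man who likes football , was killed "
      "in a police attack shortly after a tight match").split()
print(" ".join(eda_random_deletion(s1)))  # may drop "in"/"attack", losing the causal cue
print(" ".join(eda_random_swap(s1)))
```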
"To this end, we propose a learnable data augmentation framework for ECI, dubbed Learnable Knowledge-Guided Data Augmentation (LearnDA).", "This framework regards sentence-to-relation mapping ( the target task , ECI) and relation-to-sentence mapping ( the augmentation task , sentence generation) as dual tasks and models the mutual relation between them via dual learning.", "Specifically, LearnDA can use the duality to generate task-related new sentences by learning from identification, and to understand causal semantics more accurately by learning from generation.", "On the one hand, LearnDA is knowledge guided.", "It introduces diverse causal event pairs from KBs to initialize the dual generation, which ensures the causality of the generated causal sentences.", "For example, the knowledge judgment cause demonstration from KBs can be used to construct a novel causal sentence, which is also helpful for understanding the causal semantics of statement cause protests .", "On the other hand, LearnDA is learnable.", "It employs a constrained generative architecture to generate well-formed linguistic expressions via iterative learning in the dual interaction, expressing the causal semantics between the given events.", "Methodologically, it gradually fills in the missing cohesive words of a complete sentence under the constraint of the given events and related entities.", "In experiments, we evaluate our model on two benchmarks.", "We first conduct the standard evaluation and show that our model achieves state-of-the-art performance on ECI.", "Then we examine the main components of LearnDA.", "Finally, our learnable augmentation framework demonstrates clear advantages over other augmentation methods in generating task-related data for ECI.", "In summary, our contributions are as follows: we propose a new learnable data augmentation framework to solve the data scarcity problem of ECI.", "Our framework can leverage the duality between identification and generation via dual learning, learning to generate task-related sentences for ECI.", "Our framework is knowledge guided and learnable.", "Specifically, we introduce causal event pairs from KBs to initialize the dual generation, which ensures the causality of the generated causal sentences.", "We also employ a constrained generative architecture to gradually generate well-formed causal linguistic expressions via iterative learning in the dual interaction.", "Experimental results on two benchmarks show that our model achieves the best performance on ECI.", "Moreover, it also shows clear advantages over previous data augmentation methods.", "To date, many studies have attempted to identify causality with linguistic patterns or statistical features.", "For example, some methods rely on syntactic and lexical features (Riaz and Girju, 2013, 2014b).", "Some focus on explicit causal textual patterns (Hashimoto et al., 2014; Riaz and Girju, 2014a, 2010; Do et al., 2011; Hidey and McKeown, 2016).", "Others pay attention to statistical causal associations and cues (Beamer and Girju, 2009; Hu et al., 2017; Hu and Walker, 2017).", "Recently, more attention has been paid to the causality between events.", "Mirza and Tonelli (2014) annotated Causal-TimeBank, a corpus of event-causal relations based on the TempEval-3 corpus.", "Mirza et al.
(2014) and Mirza and Tonelli (2016) extracted event-causal relations with a rule-based multi-sieve approach and improved the performance by incorporating event temporal relations.", "Mostafazadeh et al. (2016) annotated both temporal and causal relations in 320 short stories.", "Caselli and Vossen (2017) annotated the EventStoryLine Corpus for event causality identification.", "[Figure 2: Overview of the learnable knowledge-guided dual data augmentation for ECI.]", "Dunietz et al. (2017) presented BECauSE 2.0, a new version of the BECauSE corpus (Dunietz et al., 2015) covering causal relations and seven other relations.", "Gao et al. (2019) modeled document-level structures to identify causality.", "Liu et al. (2020) identified event causality with mention masking generalization.", "Unlike in computer vision, the augmentation of text data in NLP is relatively rare (Chaudhary, 2020).", "Zuo et al. (2020) addressed the data scarcity problem of ECI with distantly supervised labeled training data.", "However, including distant supervision, most of the existing data augmentation methods for NLP tasks are task-independent frameworks (related work on data augmentation and dual learning is detailed in Appendix B).", "Inspired by generative methods that try to generate additional training data while preserving the class label (Anaby-Tavor et al., 2019; Yang et al., 2019; Papanikolaou and Pierleoni, 2020), we introduce a new learnable framework for augmenting task-related training data for ECI via dual learning enhanced with external knowledge.", "As shown in Figure 2, LearnDA jointly models a knowledge guided sentence generator (input: event pair and its causal/non-causal relation ; output: causal/non-causal sentence ) and an event causality identifier (input: event pair and its sentence ; output: causal/non-causal relation ) with dual learning.", "LearnDA iteratively optimizes the identifier and generator to generate task-related training data, and then utilizes the new data to further train the identifier.", "Therefore, we first present the main idea of dual learning, i.e., the architecture of learnable dual augmentation, including the states, actions, policies, and rewards.", "Then, we briefly introduce the knowledge guided sentence generator, especially the processes of knowledge guiding and constrained sentence generation.", "Finally, we describe the event causality identifier and the training processes of LearnDA.", "The architecture of learnable dual augmentation is shown in Figure 3.", "Specifically, $I$ denotes the event causality identifier, and $G$ denotes the sentence generator, which consists of two independent generators.", "They produce causal and non-causal sentences according to the relation $c$ of the input event pair $ep$.", "Generally, $G$ generates a sentence $s'$ which expresses the causal or non-causal relation $c$ of the input event pair $ep$.", "Then it receives the reward $R$ that consists of a semantic alignment reward $R_s$ from itself and a causality reward $R_c$ from $I$ (primal cycle).", "Similarly, $I$ identifies the causal or
non-causal relation $c'$ of the input event pair $ep$ with its sentence $s$.", "Then it receives the reward $R$ that consists of a causality reward $R_c$ from itself and a semantic alignment reward $R_s$ from $G$ (dual cycle).", "$I$ and $G$ are optimized interactively with dual reinforcement learning.", "Specifically, for $G$, an action is the generation from relation to sentence, a state is denoted by the representation of the input event pair and its relation, and a policy is defined by the parameters of the generator.", "For $I$, an action is the identification from sentence to relation, a state is denoted by the representation of the input event pair and its sentence, and a policy is defined by the parameters of the identifier.", "Inspired by Shen and Feng (2020), we utilize a probability distribution over actions given states to represent the policies, i.e., the probability distributions of the generation of $G$ and the identification of $I$.", "As aforementioned, we introduce two rewards, the causality ($R_c$) and semantic alignment ($R_s$) rewards, which encourage $G$ to generate task-related sentences with feedback from the identifier, while further optimizing $I$ with feedback from the generator.", "Definitions are as follows.", "Causality Reward ($R_c$): if the relation of the input event pair can be clearly expressed by the generated sentence, it will be easier for the identifier to understand.", "Therefore, we use the causal relation classification accuracy as the causality reward to evaluate the causality of generated sentences, while tuning and optimizing the identifier itself: $R_c(ep, s) = p(c' \mid s; \theta_I)$ for a correct classification and $R_c(ep, s) = -p(c' \mid s; \theta_I)$ otherwise (1), where $\theta_I$ denotes the parameters of $I$, $p(c' \mid s; \theta_I)$ denotes the probability of the relation classification, $s$ denotes the input sentence, and $c'$ is the classified relation.", "Semantic Alignment Reward ($R_s$): we hope that the semantics of the generated sentence is consistent with the relation of the input event pair.", "Additionally, if the relation of the input event pair can be classified more accurately, the semantics of the newly generated sentence can be considered more consistent with it.", "Therefore, we measure the semantic alignment by means of the probability of constructing a sentence with semantics similar to the input relation, and the reward is: $R_s(ep, c) = p(s' \mid c; \theta_G) = \frac{1}{|T_s|} \sum_{t \in T_s} p(t \mid c; \theta_G)$ (2), where $\theta_G$ denotes the parameters of $G$, $c$ is the input relation, $t$ is one of the generated tokens $T_s$ of the generated sentence $s'$, and $p(t \mid c; \theta_G)$ is the generation probability of $t$.", "Specifically, there are two independent generators $G$ with different parameters $\theta_G$.", "In detail, $G_c$ is employed to generate a causal sentence when the input $c$ is a causal relation, and a non-causal sentence is generated via $G_{nc}$ when $c$ is a non-causal relation.",
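As a concrete illustration, here is a minimal sketch of the two rewards in Eq. (1) and (2) with toy numbers. The probabilities below are placeholders for the actual BERT-based identifier and generators, and combining the two rewards additively is an assumption of this sketch (the excerpt does not specify how $R_c$ and $R_s$ are combined into $R$):

```python
def causality_reward(p_c_given_s: float, correct: bool) -> float:
    # Eq. (1): the identifier's classification probability, positive when the
    # generated sentence is classified as the input relation, negative otherwise.
    return p_c_given_s if correct else -p_c_given_s

def semantic_alignment_reward(token_probs: list) -> float:
    # Eq. (2): average generation probability over the tokens T_s of s'.
    return sum(token_probs) / len(token_probs)

# Primal cycle (relation -> sentence -> relation) with toy values:
r_c = causality_reward(p_c_given_s=0.8, correct=True)   # feedback from the identifier
r_s = semantic_alignment_reward([0.6, 0.9, 0.7])        # the generator's own score
reward = r_c + r_s  # assumed additive combination driving the policy update
print(reward)
```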
"As shown in Figure 4, the knowledge guided sentence generator (KSG) first introduces diverse causal and non-causal event pairs from KBs for causality .", "Then, given an event pair and its causal or non-causal relation, it employs a constrained generative architecture to produce a well-formed sentence expressing that relation.", "Knowledge Guiding: KSG introduces event pairs that are probabilistically causal or non-causal from multiple knowledge bases in two ways.", "(1) Lexical knowledge expanding : expanding annotated event pairs via external dictionaries, such as WordNet (Miller, 1995) and VerbNet (Schuler, 2005).", "(2) Connective knowledge introducing : introducing event pairs from external event-annotated documents (the KBP corpus) assisted with FrameNet (Baker et al., 1998) and the Penn Discourse Treebank (PDTB2) (Group et al., 2008).", "Table 1 illustrates how we extract event pairs from multiple knowledge bases.", "[Table 1: for each knowledge base, how to extract event pairs and why they are causal or non-causal; e.g., lexical knowledge expanding with WordNet extracts the synonyms and hypernyms from WordNet of each event in $ep$.]", "Then, inspired by Bordes et al. (2013), we filter the extracted event pairs by converting them into triples $\langle e_i, \text{causal/non-causal}, e_j \rangle$ and calculating the causal distance by maximizing $L$ in a causal representation space: $L = \sum_{(e_i, e_j) \in T} \sum_{(e'_i, e'_j) \in T'} [\gamma + d(e'_i, e'_j) - d(e_i, e_j)]_{+}$ (3), where $T$ and $T'$ are the causal and non-causal triple sets respectively, $\gamma$ is a margin, $d$ is the distance function, and $e$ is the representation of an event.", "After that, the higher the probability of a causal relation, the shorter the distance between the two events, and we sort the event pairs in ascending order of distance.", "Finally, we keep the top and bottom % of the sorted event pairs to obtain the causal and non-causal event pair sets for generation.",
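A small sketch of this filtering step, under stated assumptions: the event embeddings here are randomly initialized rather than learned, $d$ is taken to be Euclidean distance, and the margin value is illustrative (the excerpt specifies none of these):

```python
import numpy as np

rng = np.random.default_rng(0)
events = ["attack", "kill", "statement", "protest", "born"]
emb = {e: rng.normal(size=16) for e in events}  # stand-in event representations

def d(ei, ej):
    # Distance between event representations (Euclidean; an assumption).
    return float(np.linalg.norm(emb[ei] - emb[ej]))

def causal_margin_objective(causal, non_causal, gamma=1.0):
    # Eq. (3): hinge over causal triples T and non-causal triples T'.
    # Maximizing it pushes causal pairs closer together than non-causal ones.
    return sum(max(0.0, gamma + d(ni, nj) - d(ei, ej))
               for (ei, ej) in causal for (ni, nj) in non_causal)

T = [("attack", "kill"), ("statement", "protest")]   # causal pairs
T_prime = [("attack", "born")]                       # non-causal pairs
print(causal_margin_objective(T, T_prime))

# After training, pairs are sorted by d and only the top/bottom fraction kept.
```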
"Constrained Sentence Generator: given an event pair, the constrained sentence generator produces a well-formed sentence that expresses its causal or non-causal relation in three stages: (1) assigning event-related entities ensures the logic of the semantic roles of the events, (2) completing sentences ensures the completeness of the causal or non-causal semantic expression, and (3) filtering sentences ensures the quality and diversity of the generated sentences.", "Assigning Event-related Entities.", "Event-related entities play different semantic roles of events in sentences, which is an important part of event-semantic expression.", "Hence, as shown in Figure 4, given an event pair, we first assign logical entities to the input events to guarantee the logic of the semantic roles in the new sentences; for example, gang is a logical entity for the event onrush .", "Logically, entities of the same type play the same semantic roles in similar events.", "Moreover, as shown in Table 1, there is a corresponding original sentence for each extracted event pair.", "Therefore, in the new sentence, we assign the most similar entity of the same type from a candidate set for each entity in the original sentence (we collect candidate entities from the annotated data and the KBP corpus).", "For example, we assign gang for onrush in the new sentence, which is similar to the police related to attack in the original sentence.", "Specifically, we put the candidate entities in the same position in the original sentence to obtain their BERT embeddings.", "Then we select entities via the cosine similarity between their embeddings: $E(ent) = \frac{1}{|ent|} \sum_{w \in ent} E(w)$, where $ent$ is the entity and $E(w)$ is the BERT embedding of the word $w$.", "Completing Sentences.", "A well-formed sentence requires a complete linguistic expression to express the causal or non-causal semantics.", "Therefore, we complete sentences by filling in the cohesive words between the given events and the assigned entities with masked BERT (Devlin et al., 2019).", "All words except events and entities are regarded as cohesive words.", "Specifically, we insert a certain number of the special token [MASK] between events and entities, and then predict the [MASK] tokens as new words (the number of inserted [MASK] tokens is 1.2 times the number of words between events and entities in the original sentence).", "As shown in Figure 4, we fill in the cohesive tokens via two independent generators to express causal and non-causal semantics according to the relation of the given events.", "For example, the cohesive phrase in a , which guides the causal semantics, is filled in by the causal generator.", "Filtering Sentences.", "Inspired by Yang et al. (2019), we design a filter to select new sentences that balance high quality and high diversity via two key factors: 1) Perplexity (PPL): we take the average probability of the filled cohesive words in the new sentence $s'$ as its perplexity: $PPL(s') = \frac{1}{|T(s')|} \sum_{t \in T(s')} P(t)$, where $T(s')$ is the set of filled cohesive words.", "2) Distance (DIS): we calculate the cosine similarity between the generated sentence $s'$ and the annotated data $D_m$ as its distance: $DIS(s', D_m) = \frac{1}{|D_m|} \sum_{s \in D_m} \frac{E(s') \cdot E(s)}{\lVert E(s') \rVert \, \lVert E(s) \rVert}$, where $D_m$ is a set of $m$ randomly selected annotated sentences and $E$ is the BERT sentence representation of the [CLS] token.", "A new sentence should have both an appropriately high PPL, which indicates the quality of generation, and an appropriately high DIS, which indicates the difference from the original sentences.", "Therefore, we select the top % of the newly generated sentences according to Score for the further training of the identifier, as follows: $Score(s') = \lambda \, PPL(s') + (1 - \lambda) \, DIS(s', D_m)$, where $\lambda$ is a hyper-parameter.",
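A minimal sketch of this selection score with toy inputs; the [CLS] embeddings and token probabilities below are random placeholders for BERT outputs, and $\lambda = 0.5$ is an illustrative choice:

```python
import numpy as np

def ppl(filled_token_probs):
    # PPL(s'): average probability of the filled cohesive words.
    return float(np.mean(filled_token_probs))

def dis(s_new, annotated):
    # DIS(s', D_m): mean cosine similarity between the new sentence's [CLS]
    # embedding and those of m randomly selected annotated sentences.
    sims = [float(s_new @ a / (np.linalg.norm(s_new) * np.linalg.norm(a)))
            for a in annotated]
    return float(np.mean(sims))

def score(s_new, filled_token_probs, annotated, lam=0.5):
    # Score(s') = lam * PPL(s') + (1 - lam) * DIS(s', D_m);
    # the top fraction of generated sentences by Score is kept.
    return lam * ppl(filled_token_probs) + (1 - lam) * dis(s_new, annotated)

rng = np.random.default_rng(0)
cls_new = rng.normal(size=768)                    # placeholder [CLS] vector
d_m = [rng.normal(size=768) for _ in range(5)]    # m = 5 annotated sentences
print(score(cls_new, [0.4, 0.7, 0.6], d_m))
```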
"We briefly describe the training processes of LearnDA for ECI, including the pre-training of the generator and identifier, the dual reinforcement training, and the further training of the identifier.", "Event Causality Identifier: first of all, we formulate event causality identification as a sentence-level binary classification problem.", "Specifically, we design a classifier based on BERT (Devlin et al., 2019) to build our identifier.", "The input of the identifier is the event pair $ep$ and its sentence $s$.", "Next, we take the concatenation of manually designed features (the same lexical, causal potential, and syntactic features as Gao et al. (2019)) and the two event representations as the input of a top MLP classifier.", "Finally, the output is a binary vector predicting the causal/non-causal relation of the input event pair $ep$.", "Pre-training: we pre-train the identifier and generator on labeled data before the dual reinforcement training.", "On the one hand, we train the identifier via the cross-entropy objective function of the relation classification.", "On the other hand, for the generators, we keep the events and entities in the input sentences, replace the remaining tokens with the special token [MASK], and then train them via the cross-entropy objective function to re-predict the masked tokens.", "Specifically, the causal generator and the non-causal generator are pre-trained on causal and non-causal labeled sentences respectively.", "As shown in Algorithm 1, we interactively optimize the generator and identifier by dual reinforcement learning.", "Specifically, we maximize the following objective functions: $L_G(ep, c) = p(s' \mid c; \theta_G) = \frac{1}{|T_s|} \sum_{t \in T_s} p(t \mid c; \theta_G)$ for the causal generator and $L_G(ep, c) = p(s' \mid c; \theta_{NG}) = \frac{1}{|T_s|} \sum_{t \in T_s} p(t \mid c; \theta_{NG})$ for the non-causal generator (4), and $L_I(ep, s) = p(c' \mid s; \theta_I)$ (5), where $\theta_G$ and $\theta_{NG}$ are the parameters of the causal and non-causal sentence generators respectively, and $T_s$ is the set of masked tokens.", "Finally, after the dual data augmentation, we utilize the generated sentences to further train the dual-trained identifier via the cross-entropy objective function of relation classification.", "Dataset and Evaluation Metrics: our experiments are conducted on two main benchmark datasets: (1) EventStoryLine v0.9 (ESC) (Caselli and Vossen, 2017), described above; and (2) Causal-TimeBank (Causal-TB) (Mirza and Tonelli, 2014), which contains 184 documents, 6,813 events, and 318 causal event pairs.", "Following previous methods, we use the last two topics of ESC as the development set for both datasets.", "For evaluation, we adopt Precision (P), Recall (R), and F1-score (F1) as evaluation metrics.", "We conduct 5-fold and 10-fold cross-validation on ESC and Causal-TB respectively, the same as previous methods, to ensure comparability.", "All results are the average of three independent experiments.", "Parameter Settings: in our implementation, both the identifier and the generators are built on the BERT-Base architecture (https://github.com/google-research/bert), which has 12 layers, 768 hidden units, and 12 heads.", "We set the learning rates of generator pre-training, identifier pre-training/further training, and dual reinforcement training to 1e-5, 1e-5, and 1e-7 respectively.", "We set the ratio of the augmented training data to the labeled data and the remaining hyper-parameters to 1:2, 30%, 50%, 0.2, 0.5, and 0.5 respectively, tuned on the development set.", "We apply early stopping and SGD to optimize all models.", "We also adopt a negative sampling rate of 0.5 for training the identifier, owing to the sparseness of positive examples.", "(See Appendix D for more details.)", "Compared Methods: we compare with previous state-of-the-art work.", "For ESC, we consider 1) LSTM (Cheng and Miyao, 2017), a dependency-path-based sequential model that models the context between events to identify causality; 2) Seq (Choubey and Huang, 2017), a sequence model that explores complex human-designed features for ECI; and 3) LR+ and ILP (Gao et al., 2019), document-level models that adopt document structures for ECI.", "For Causal-TB, we consider 1) RB, a rule-based system; 2) DD, a data-driven machine-learning-based system; and 3) VR-C, a verb-rule-based model with data filtering and
gold causal signal enhancement.", "These models were designed for ECI by Mirza and Tonelli (2014) and Mirza (2014).", "Since our methods are constructed on BERT, we also build BERT-based methods: 1) BERT, a BERT-based baseline, i.e., our basic proposed event causality identifier.", "2) MM (Liu et al., 2020), the BERT-based SOTA method with mention masking generalization.", "3) MM+Aug, MM further re-trained with our dual augmented data.", "4) KnowDis (Zuo et al., 2020), which improved the performance of ECI with distantly labeled training data.", "We compare with it to illustrate the quality of our generated ECI-related training data.", "5) MM+ConceptAug: to make a fair comparison, we introduce causality-related events from ConceptNet, as employed by MM, and generate new sentences via KnowDis and LearnDA to further re-train MM (see Appendix C for details).", "Finally, LearnDA Full denotes our full model, i.e., the dual-trained identifier further trained on the dual augmented data.", "Table 2 shows the results of ECI on EventStoryLine and Causal-TimeBank.",

Table 2: Results on event causality identification (P / R / F1).
ESC:
  LSTM (Cheng and Miyao, 2017)    34.0  41.5  37.4
  Seq (Choubey and Huang, 2017)   32.7  44.9  37.8
  LR+ (Gao et al., 2019)          37.0  45.2  40.7
  ILP (Gao et al., 2019)          37.4  55.8  44.7
  BERT                            36.1  56.0  43.9
  KnowDis (Zuo et al., 2020)      39.7  66.5  49.7
  MM (Liu et al., 2020)           41.9  62.5  50.1
  MM+ConceptAug (Ours)            41.2  66.5  50.9*
  MM+Aug (Ours)                   41.0  69.3  51.5*
  LearnDA Full (Ours)             42.2  69.8  52.6*
Causal-TB:
  RB (Mirza and Tonelli, 2014)    36.8  12.3  18.4
  DD (Mirza and Tonelli, 2014)    67.3  22.6  33.9
  VR-C (Mirza, 2014)              69.0  31.5  43.2
  BERT                            38.5  43.9  41.0
  MM (Liu et al., 2020)           36.6  55.6  44.1
  KnowDis (Zuo et al., 2020)      42.3  60.5  49.8
  MM+ConceptAug (Ours)            38.8  59.2  46.9*
  MM+Aug (Ours)                   39.2  61.9  48.0*
  LearnDA Full (Ours)             41.9  68.0  51.9*

"From the results: 1) our LearnDA Full outperforms all baselines and achieves the best performance (52.6%/51.9% in F1), outperforming the non-BERT (ILP/VR-C) and BERT-based (MM/KnowDis) state-of-the-art methods by margins of 7.9%/8.7% and 2.5%/2.1% respectively, which justifies its effectiveness.", "Moreover, the BERT-based methods demonstrate high recall, benefiting from more training data and their event-related guiding knowledge.", "2) Comparing KnowDis with LearnDA Full, we note that the training data generated by LearnDA is more helpful to ECI than distant supervision with external knowledge (+2.9%/+2.1%).", "This shows that LearnDA can generate more ECI-related data.", "3) Comparing MM+ConceptAug with MM, with the same knowledge base, our dual augmented data can further improve the performance (+0.8%/+2.8%), which illustrates that LearnDA can make more effective use of external knowledge by generating task-related training data.", "4) Comparing MM+Aug with MM, we note that training with our dual augmented data can improve the performance by 1.4%/3.9%, even though MM is built on BERT-Large (LearnDA is constructed on BERT-Base) and also introduces external knowledge.", "This indicates that the augmented data generated by our LearnDA can effectively alleviate the data scarcity problem of ECI.", "We analyze the effect of the learnable dual augmentation on event causality identification.", "1) For the identifier: comparing LearnDA Dual with BERT in Table 3, we note that the performance of the proposed identifier is improved (+2.6%) after the dual training alone, with the same labeled data.", "This indicates that the identifier can learn more
informative expressions of causal semantics from generation with dual learning.", "2) For the generator: comparing BERT DualAug with BERT OrgAug in Table 3, we note that the dual augmented data is of high quality and more helpful to ECI (+2.6%).", "This indicates that the generator can generate more ECI task-related data by learning from the identifier with dual learning.", "Figure 5 illustrates the learnability of our LearnDA.", "Specifically, as the number of training rounds of dual learning increases, the generated data gradually learns task-related information.",

Table 3 (P / R / F1 on ESC):
  BERT (our basic identifier)   36.1  56.0  43.9
  BERT OrgAug                   36.6  59.7  45.4*
  BERT DualAug                  37.8  65.6  48.0*
  LearnDA Dual                  36.8  63.0  46.5*
  LearnDA DualAug w/o.KB        37.5  67.0  48.1*
  LearnDA DualAug

"Table 3 also illustrates the effect of knowledge guiding on ECI depending on different knowledge bases.", "1) Comparing LearnDA Full with LearnDA DualAug w/o.KB, we note that the augmented data guided by external knowledge can further improve the performance of ECI.", "2) Specifically, lexical expanding and connective introducing (Sec 3.2) can both make the representation of the causal relation more generalized, further making it easier for the identifier to understand the causality.", "3) Moreover, the expanding is more effective than the introducing, because the former brings a wider range of effective knowledge.",

Table 4: Results of different data augmentation methods on event causality identification on the ESC dataset (P / R / F1).
  BERT (our identifier)     36.1  56.0  43.9
  TextSurface BERT          37.0  57.5  45.0*
  BackTranslation BERT      36.8  61.0  45.9*
  EDA BERT                  36.6  62.4  46.1*
  LearnDA BERT              37.8  65.6  48.0*

"In this section, we conduct a comparison between our augmentation framework and other NLP-related augmentation methods to further illustrate the effectiveness of LearnDA.", "Effectiveness of Our Augmentation: we train our identifier with augmented data produced by different NLP-related augmentation methods.", "As shown in Table 4, the augmented data generated by our LearnDA is more effective for ECI, which is consistent with the previous analysis.", "LearnDA can generate well-formed, task-related new sentences that contain more event causal knowledge.", "Specifically, 1) text surface transformation brings only slight changes to the labeled data, so it has relatively little impact on ECI; 2) back translation introduces limited new causal expressions through translation, so it only slightly increases the recall on ECI; 3) EDA can introduce new expressions via substitution, but the augmented data is not canonical and cannot accurately express the causality; therefore, its impact on ECI is also limited.", "Quantitative Evaluation of Task-relevance: we select five Ph.D. students majoring in NLP to manually score 100 randomly selected augmented sentences given their corresponding original sentences as reference (Cohen's kappa = 0.85).", "Furthermore, we calculate the BLEU (Papineni et al., 2002) value to further evaluate the diversity.",
"As aforementioned, the task-relevance of new sentences for ECI is manifested in causality and well-formedness, while the diversity indicates the degree of generalization.", "As shown in Table 5, we note that the sentences generated by LearnDA possess the above three properties at levels close to those of the labeled sentences.", "Specifically, the sentences produced by EDA have a certain degree of causality and diversity due to the lexical substitution assisted by external knowledge.", "However, they cannot express the causality well due to grammatical irregularities.", "Correspondingly, new sentences generated via back translation are very similar to the original sentences, so their diversity is poor.", "We conduct a case study to further investigate the effectiveness of our LearnDA.", "Figure 6 illustrates the modification process of dual learning.", "For example, as shown in a), given two causal events, the generator is expected to generate a causal sentence.", "However, the generator without dual learning produces a non-causal sentence.", "Fortunately, with dual learning, the identifier judges the generated sentence as a non-causal one and guides the generator to produce a causal sentence with its feedback.", "Similarly, as shown in b), given a causal sentence, the identifier is expected to output a causal relation, but the one without dual training fails to do so.", "Correspondingly, the generator constructs feedback of low confidence to guide the identifier to output a causal relation.", "This paper proposes a new learnable knowledge-guided data augmentation framework (LearnDA) to solve the data scarcity problem of ECI.", "Our framework can leverage the duality between generation and identification via dual learning to generate task-related sentences for ECI.", "Moreover, our framework is knowledge guided and learnable.", "Our method achieves state-of-the-art performance on the EventStoryLine and Causal-TimeBank datasets.", "We thank the anonymous reviewers for their insightful comments and suggestions.", "This work is supported by the National Key Research and Development Program of China (No.2018YFB1005100), the National Natural Science Foundation of China (No.U1936207, 61806201).", "This work is also supported by Beijing Academy of Artificial Intelligence (BAAI2019QN0301) and the joint project with Beijing Baidu Netcom Science Technology Co., Ltd." ]
[ "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "objective", "objective", "method", "abstain", "method", "method", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "result", "abstain", "method", "abstain", "method", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "other", "other", "other" ]
[ "Serial recall experiments study the ability of humans to recall words in the order in which they occurred.", "The following serial recall effects are generally investigated in studies with humans: word length and frequency , primacy and recency , semantic confusion , repetition , and transposition effects.", "In this research, we investigate LSTM language models in the context of these serial recall effects.", "Our work provides a framework to better understand and analyze neural language models and opens a new window to develop accurate language models.", "The goal of language modeling is to estimate the probability of a sequence of words in natural language, typically allowing one to make probabilistic predictions of the next word given preceding ones (Bahl et al., 1983; Berger et al., 1996).", "For several years now, Long Short-Term Memory (LSTM) (Graves, 2013) language models have demonstrated state-of-the-art performance (Melis et al., 2018; Merity et al., 2018a; Sundermeyer et al., 2012).", "Recent studies have begun to shed light on the information encoded by LSTM networks.", "These models can effectively use distant history (about 200 tokens of context) and are sensitive to word order, replacement, or removal (Khan-delwal et al., 2018), can learn function words much better than content words (Ford et al., 2018), can remember sentence lengths, word identity, and word order (Adi et al., 2017), can capture syntactic structures (Kuncoro et al., 2018) such as subject-verb agreement (Linzen et al., 2016).", "These characteristics are often attributed to LSTM's ability in overcoming the curse of dimensionality by associating a distributed feature vector to each word (Hinton et al., 1986; Neubig, 2017)and modeling long-range dependencies in faraway context (Khandel-wal et al., 2018).", "The goal of our research is to complement the prior work to provide a richer understanding about how LSTM language models use prior linguistic context.", "Inspired by investigations in cognitive psychology about serial recall in humans (Avons et al., 1994; Henson, 1998; Polisenska et al., 2015) where participants are asked to recall a sequence of items in order in which they were presented, we investigate how word length or frequency ( word-frequency effect), word position ( primacy , recency , and transposition effects), word similarity ( semantic confusion effect), and word repetition ( repetition effect) influence learning in LSTM language models.", "Our investigation provides a framework to better understand and analyze language models at a considerably finer-grained level than previous studies, and opens a new window to develop more accurate language models.", "We find that LSTM language models", "(a) can learn frequent/shorter words considerably better than infrequent/longer ones,", "(b) can learn recent words in sequences better than words in earlier positions, 1", "(c) have a tendency to predict words that are semantically similar to target words indicating that these networks have a tendency to group semantically similar words while suggesting one specific word as target based on prior context,", "(d) predict as output the words that are observed in prior context,", "i.e.", "repeat words from prior context, and", "(e) may transpose (switch adjacent) words in output depending on word syntactic function.", "1 Rats and humans recall the first and last items of sequences best and the middle ones worst (Bolhuis and Van Kampen, 1988; Ebbinghaus, 1913).", "Language models estimate the probability of a 
sequence as $P(w_1^n) = \prod_{i=1}^{n} P(w_i \mid w_1^{i-1})$, where $w_i$ is the $i$-th word and $w_i^j$ indicates the sub-sequence from $w_i$ to $w_j$.", "These models minimize their prediction error against words in context during training, using e.g. the negative log likelihood loss function: $L = -\frac{1}{n} \sum_{i=1}^{n} \log P(w_i \mid w_1^{i-1})$.", "In this paper, we denote the loss of a model against sequence $s$ and word $w_i \in s$ by $L(s)$ and $L(s)[w_i]$ respectively.", "We use the LSTM language model developed in (Merity et al., 2018b) for our experiments.", "Given a sequence of words $w_1^{i-1}$ as context, this model predicts the next word in the sequence.", "We refer to $w_i$ and $\hat{w}_i$ as the target and predicted words respectively; given the global vocabulary $V$, $\hat{w}_i = \arg\max_{w_j \in V} \Pr(w_j \mid w_1^{i-1})$.", "We study this LSTM in the context of serial recall effects.", "What is the effect of word frequency/length on the performance of LSTM language models?", "For this effect, we report the average loss for each word frequency as follows: $L_{WF}^{k} = \frac{1}{|S_k|} \sum_{s \in S_k, w_i \in s} L(s)[w_i]$ (1), where $S_k$ is the set of sequences that have at least one target word with term frequency $k$, and $L_{WF}^{k}$ is the overall loss for target words of frequency $k$, which sheds light on the expected frequency of words for accurate language modeling.", "What is the effect of word position on the performance of LSTM language models?", "To analyze this effect, we compute the average loss of the network with respect to the position of target words as follows: $L_{PR}^{i} = \frac{1}{Z} \sum_{s} L(s)[w_i]$ (2), where $w_i$ is the target word at position $i$ in sequence $s$, $Z$ is the number of sequences (a normalization factor), and $L_{PR}^{i}$ is the average loss at position $i$.", "This effect sheds light on network performance at specific positions in texts, which can help rationalize the need for new language modeling architectures, such as bidirectional LSTMs (Schuster and Paliwal, 1997), or the order in which input data should be processed (Sutskever et al., 2014).", "Are predicted words semantically similar to the target ones in case of incorrect predictions?", "For this analysis, we report the average semantic similarity between target ($w_i$) and predicted ($\hat{w}_i$) words as follows: $SC = \frac{1}{Z} \sum_{s, w_i \in s, \hat{w}_i \neq w_i} sim(w_i, \hat{w}_i)$ (3), where the function $sim(\cdot, \cdot)$ computes word similarity either through WordNet (Miller, 1998) or via the cosine similarity of the corresponding embeddings of its arguments.", "This effect sheds light on how effective LSTMs are at disentangling semantically similar concepts.", "This gives us a powerful metric to compare networks semantically, especially in the case of equal loss/perplexity.", "This effect refers to the prediction of a word that already exists in context, i.e. in an earlier position in the sequence.", "Here, for each target $w_n$, we compute the probability that, instead of $w_n$, any of the words $w_1^{n-1}$ is predicted as output, and report the average across all samples: $\Pr(RE_i) = \frac{1}{Z} \sum_{s} \Pr(\hat{w}_i \in w_1^{n-1}, \hat{w}_i \neq w_n \mid w_1^{n-1})$ (4).", "This effect sheds light on the extent to which the network repeats/predicts observed words as possible responses.", "Given that words rarely repeat in sentences, the above metric can be used as a good regularizer for language modeling.",
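A short sketch of how the frequency and position metrics above can be computed from per-token losses. The loss values here are placeholders for $L(s)[w_i]$ from the actual LSTM, and Eq. (1) is simplified to bucket individual target words by frequency rather than selecting whole sequences:

```python
from collections import defaultdict
import numpy as np

def word_frequency_effect(samples, freq):
    # Eq. (1), simplified: average per-token loss bucketed by the target
    # word's term frequency k.
    by_freq = defaultdict(list)
    for tokens, losses in samples:            # losses[i] stands in for L(s)[w_i]
        for w, l in zip(tokens, losses):
            by_freq[freq[w]].append(l)
    return {k: float(np.mean(v)) for k, v in by_freq.items()}

def position_effect(samples):
    # Eq. (2): average loss at each target position i across all sequences.
    by_pos = defaultdict(list)
    for _, losses in samples:
        for i, l in enumerate(losses):
            by_pos[i].append(l)
    return {i: float(np.mean(v)) for i, v in by_pos.items()}

samples = [(["the", "cat", "sat"], [2.1, 5.3, 1.8]),
           (["the", "dog", "ran"], [2.0, 6.0, 2.2])]
freq = {"the": 1000, "cat": 12, "dog": 15, "sat": 8, "ran": 9}
print(word_frequency_effect(samples, freq))
print(position_effect(samples))
```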
"This effect refers to word prediction in transposed positions, i.e. the case where the word pair $w_i w_{i+1}$ in an original sequence is more likely to be predicted by the network as $w_{i+1} w_i$ in the output.", "Here, for each pair $w_i w_{i+1}$ in a target sequence, we count the number of times in which $w_{i+1}$ is more likely to be predicted at position $i$ (as compared to $w_i$) and $w_i$ is more likely to be predicted at position $i+1$ (as compared to $w_{i+1}$).", "We report the average number of transposition occurrences over all samples at each word position.", "This effect sheds light on how the network learns nearby grammatical order, such as conjunction and adjective order.", "Figures 1a and 1b show this effect at the POS level for both PTB and WT2.", "They show the strong correlation between word length and loss for content words, while function words are not as affected.", "[Figure 1: (a) dataset statistics; (b) word-frequency effect: the network learns frequent words better than infrequent ones; (c) primacy and recency effect: loss decreases for target words that appear at the end of sequences.]", "Figure 1c reveals the strong negative correlation between word frequency and loss, such that an increase in word frequency leads to a decrease in loss.", "From another viewpoint, Figure 1d shows that the frequency of content words decreases as their length increases, while Figure 1e shows that the frequency of function words is independent of their length.", "The fact that function words are a closed class that cannot be extended with shorter words may be the reason.", "It is also reported in (Bell et al., 2009) that content words are shorter when more frequent, while function words are not.", "Therefore, the word-length effect strongly exists in content words but is weak in function words.", "3 Experiments.", "Datasets: we use two benchmark language modeling datasets: Penn Treebank (PTB) (Marcus et al., 1993; Mikolov et al., 2010) and WikiText-2 (WT2) (Merity et al., 2017).", "PTB and WT2 have vocabulary sizes of 10K and 33K respectively.", "We use the POS-tagged versions of these datasets provided by Khandelwal et al. (2018), and treat nouns (NN), verbs (VB), adjectives (JJ), and adverbs (RB) as content words and other word classes as function words; see details in Figure 1a.", "Also, from Figure 1c and the fact that the average frequency of function words is much higher than that of content words (about 10k compared to 58 in the PTB dataset), it is now clear why the loss of function words is significantly lower than that of content words.", "Tokens later in the sequence have better recall.", "We examine the primacy and recency effects by comparing the loss values at different positions of the sequence.", "Settings: we set the LSTM's parameters as suggested in (Merity et al., 2018b) for PTB and follow its suggested parameter tuning procedure for WT2.", "For both datasets, we set the context size to n = 100, selected from {5, 20, 50, 100, 200} using validation data; note that the number of samples is equal across different sequence lengths.", "Figure 2 shows the loss decreases (i.e.
the network has better recall) as we approach the end of the sequence.", "3.1 Results: we report LSTM performance in terms of prediction loss on the development sets for all experiments.", "Recalling words in transposed positions is fairly rare.", "To examine the transposition effect, we count how many times words are recalled in transposed positions.", "Table 1 shows the number of transpositions normalized over sequence length.", "It reveals that the number of transpositions does not increase as the sequence length increases.", "Moreover, transpositions occur rarely.", "For example, in a sequence of length 100, only 0.72 of the 99 transposition candidates are transposed on average.", "Figure 3 shows the number of transposition occurrences at each position of this sequence.", "Word-Frequency Effect : More frequent target words are predicted (learned) more accurately than less frequent ones.", "Figure 1b shows a strong inverse correlation between word frequency and LSTM prediction loss.", "This is expected, as neural models learn better with more data.", "In addition, although the overall loss of function words is considerably lower than that of content words (because of their overall higher frequency), Figure 1b shows that, for the same word frequency, content words are learned better than function words.", "Primacy and Recency Effects : Target words that appear later in sequences are predicted considerably better than those at earlier positions.", "Figure 1c shows that prediction loss considerably decreases for target words that appear toward the end of the sequences.", "[Figure 2: average WordNet and embedding similarity between target and predicted words as a function of loss, with nearest-neighbor and random-word baselines.]", "The results are consistent across both datasets.", "This effect can explain why bidirectional LSTMs, which read the input from opposite directions, usually work better in NLP applications such as machine translation (Firat et al., 2016; Domhan, 2018).", "A remaining question worth investigating is whether bidirectional LSTMs learn the first and last few words of sequences better than those in the middle, and if so, how we can make these models more robust to word position.", "Semantic Confusion Effect : There is a significant tendency to predict words that resemble (are semantically similar to) target words.", "Figure 2 shows the average WordNet and embedding similarity between target and predicted words in PTB and WT2 across loss values.", "[Figure 3: probability mass assigned to the target/predicted word, its similar neighbours, and all other words.]", "The results indicate high similarity between predicted and target words for smaller loss values.", "However, confusion consistently increases as prediction loss increases.", "The upper-bound similarity (obtained by treating the nearest neighbor of each target word as the predicted word) indicates that there exist better candidates which the LSTM fails to predict.", "Our further analyses show that the LSTM has a tendency to group semantically similar words and then suggest one of them.", "We consider the most similar words as a group of neighbors and examine how the network assigns probabilities to them as compared to others.", "Figure 3a shows that the probability mass of the neighbors of the target (group size 150) is equal to that of the other words (size 9,849).", "As Figure 3a also shows, the neighbors of the predicted words (group size 600) carry equal probability mass as compared to the other words (size 9,399).",
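A sketch of this neighbor-group analysis: sum the softmax mass the model assigns to the k nearest embedding neighbors of a word and compare it with the mass on the rest of the vocabulary. The vocabulary, embeddings, and output distribution below are random stand-ins for the LSTM's, and k = 150 mirrors the group size found for targets:

```python
import numpy as np

def neighbor_mass(probs, emb, word_idx, k=150):
    # Cosine similarity of every vocabulary word to the given word.
    t = emb[word_idx]
    sims = emb @ t / (np.linalg.norm(emb, axis=1) * np.linalg.norm(t) + 1e-9)
    sims[word_idx] = -np.inf                    # exclude the word itself
    neighbors = np.argsort(-sims)[:k]           # k nearest embedding neighbors
    p_neighbors = probs[neighbors].sum()        # mass on the neighbor group
    p_others = 1.0 - p_neighbors - probs[word_idx]
    return float(p_neighbors), float(p_others)

rng = np.random.default_rng(0)
emb = rng.normal(size=(10000, 64))              # stand-in embedding matrix
logits = rng.normal(size=10000)
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the vocabulary
print(neighbor_mass(probs, emb, word_idx=42))
```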
"To find these thresholds, we gradually increase the number of neighbors and track how the probability of the neighboring group approaches that of the target.", "As shown in Figure 3b, if the size of the neighboring group is set to 150, these probabilities become equal.", "The same procedure is repeated to find the appropriate size for the neighbors of the predicted words.", "Repetition Effect : Function words are repeated in predictions more than content words.", "We report the repetition effect (see Eq. (4)) at the POS tag level (where the predicted word should have the same POS tag as the target word in prior context).", "As Figures 4a and 4b show, function words have a higher repetition probability than content words.", "This is because function words are more frequent, and the average distance among them (i.e. the number of intervening words) is considerably smaller than for other POS tags (e.g. 2.1 words vs. 28.9 and 12.4 words for RB and JJ respectively).", "We also find that repetition probability decreases as a function of word frequency in prior context; see Figure 4c.", "This is because words (especially NNs and VBs) are often self-contained, and their occurrence in prior context helps the LSTM avoid repeating them.", "In addition, we find that function words repeat more frequently than other types and that repetition among NNs and VBs is higher than for other POS tag pairs; Table 1 shows the confusion matrix for repetition across POS tag classes.", "Perhaps this could explain the recent language modeling improvement obtained in (Ford et al., 2018) through developing separate yet sequentially connected network architectures with respect to POS tag class.", "[Figure 5: Effective context to learn syntax.]", "There are two factors determining the chance of repetition of a POS class: first, the average distance between consecutive tokens of that POS class; Table 2 reports the corresponding values from the training set.", "Second, the accuracy of the network in predicting POS classes, which is shown in Figure 5 and also reported in (Khandelwal et al., 2018).", "From these, the repetition probability of function words is expected to be higher than that of content words.", "Transposition Effect : Transpositions occur more frequently at the beginning of sequences and rarely at the end .", "Figure 6 shows the average number of transpositions at each word position across datasets.", "This result is meaningful because mispredictions occur more frequently at earlier positions (see the results for the primacy and recency effects).", "In addition, transpositions are rare at later positions because more context information can help the LSTM make accurate (transposition-free) predictions.", "In addition, Table 3 shows the percentage of transpositions across POS tag classes on PTB.", "The results show that the LSTM mainly transposes 'RB NN' word pairs into 'NN RB'.
In future work, we will conduct further analyses to understand the reason.", "The findings presented in this paper provide insight into how LSTMs model context.", "[Figure 6: Transposition effect; transpositions occur more at the beginning of sequences.]", "This information can be useful for improving language models.", "For instance, the discovery that some word types are repeated in predictions more than others can help regularize neural language models by making them adaptive to the different word types.", "We investigate LSTM language models in the context of serial recall indicators.", "We find that frequent target words and target words that appear later in sequences are predicted more accurately, that predictions often resemble (are semantically similar to) target words, that function words of the prior context are more likely to be predicted as target words, and that word pair transpositions occur more frequently at the beginning of sequences.", "We sincerely thank Nazanin Dehghani for her constructive collaboration during the development of this paper, and the anonymous reviewers for their insightful comments and effective feedback." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "objective", "other", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "result", "other" ]
[ "Most current approaches to metaphor identification use restricted linguistic contexts, e.g. by considering only a verb's arguments or the sentence containing a phrase.", "Inspired by pragmatic accounts of metaphor, we argue that broader discourse features are crucial for better metaphor identification.", "We train simple gradient boosting classifiers on representations of an utterance and its surrounding discourse learned with a variety of document embedding methods, obtaining near state-of-the-art results on the 2018 VU Amsterdam metaphor identification task without the complex metaphor-specific features or deep neural architectures employed by other systems.", "A qualitative analysis further confirms the need for broader context in metaphor processing.", "From bottled up anger to the world is your oyster , metaphor is a defining component of language, adding poetry and humor to communication (Glucksberg and McGlone, 2001) and serving as a tool for reasoning about relations between concepts (Lakoff and Johnson, 1980).", "Designing metaphor processing systems has thus seen significant interest in the NLP community, with applications from information retrieval (Korkontzelos et al., 2013) to machine translation (Saygin, 2001).", "An important first step in any metaphor processing pipeline is metaphor identification .", "To date, most approaches to its identification operate in restricted contexts, for instance, by only considering isolated verbargument pairs (e.g. deflate econ-omy ) (Rei et al., 2017) or the sentence containing an utterance (Gao et al., 2018).", "However, wider context is crucial for understanding metaphor: for instance, the phrase drowning students can be interpreted as literal (in the context of water ) or metaphorical (in the context of homework ).", "Of-You can't steal their ideas. 
No, idiotnot so I can steal them.", "ten the context required extends beyond the immediate sentence; in Table 1, coreferences ( them ) must be resolved to understand the arguments of a verb, and a game is metaphorical in a political context.", "Indeed, a rich linguistic tradition (Grice, 1975; Searle, 1979; Sperber and Wilson, 1986) explains metaphor as arising from violations of expectations in a conversational context.", "Following these theories, in this paper we argue that metaphor processing models should expand beyond restricted contexts to use representations of wider discourse.", "We support this claim with two contributions: (1) we develop metaphor identification models which take as input an utterance, its immediate lexicosyntactic context, and broader discourse representations, and demonstrate that incorporating discourse features improves performance; (2) we perform a qualitative analysis and show that broader context is often required to correctly interpret metaphors.", "To the best of our knowledge, this is the first work to investigate the effects of broader discourse on metaphor identification.", "1 2 Related work Metaphor identification is typically framed as a binary classification task, either with (1) word tu-1 Code and data available at https://github.com/ jayelm/broader-metaphor .", "ples such as SVO triples (car drinks gasoline) or (2) whole sentences as input, where the goal is to predict the metaphoricity of a token in the sentence.", "Recent work has used a variety of features extracted from these two types of contexts, including selectional preferences (Shutova, 2013; Beigman Klebanov et al., 2016), concrete-ness/imageability (Turney et al., 2011; Tsvetkov et al., 2014), multi-modal (Tekiroglu et al., 2015; Shutova et al., 2016) and neural features (Do Dinh and Gurevych, 2016; Rei et al., 2017).", "At the recent VU Amsterdam (VUA) metaphor identification shared task (Leong et al., 2018), neural approaches dominated, with most teams using LSTMs trained on word embeddings and additional linguistic features, such as semantic classes and part of speech tags (Wu et al., 2018; Stemle and Onysko, 2018; Mykowiecka et al., 2018; Swarnkar and Singh, 2018).", "Most recently, Gao et al. 
(2018) revisited this task, reporting state-of-the-art results with BiLSTMs and contextualized word embeddings (Peters et al., 2018).", "To the best of our knowledge, none of the existing approaches have utilized information from wider discourse context in metaphor identification, nor investigated its effects.", "Following past work, we use the Verbs subset of the VUA metaphor corpus (Steen et al., 2010) used in the above shared task.", "The data consists of 17,240 training and 5,873 test examples, equally distributed across 4 genres of the British National Corpus: Academic, Conversation, News, and Fiction.", "All verbs are annotated as metaphorical or literal in these texts.", "We sample 500 examples randomly from the training set as a development set.", "For each utterance, our models learn generic representations of a verb lemma , its syntactic arguments , and its broader discourse context .", "(Footnote 2: The lemmatized form of the verb has improved generalization in other systems (Beigman Klebanov et al., 2016).)", "We concatenate these features into a single feature vector and feed them into a gradient boosting decision tree classifier (Chen and Guestrin, 2016).", "By observing performance differences when using the lemma only (L), lemma + arguments (LA), or lemma + arguments + context (LAC), we can investigate the effects of including broader context.", "To obtain arguments for verbs, we extract subjects and direct objects with Stanford CoreNLP (Manning et al., 2014).", "67.4% of verb usages in the dataset have at least one argument; absent arguments are represented as zero vectors.", "To obtain the broader context of a verb, we take its surrounding paragraph as defined by the BNC; the average number of tokens in a context is 97.3.", "Figure 1 depicts the feature extraction and classification pipeline of our approach.", "To learn representations, we use several widely-used embedding methods.", "(Footnote 4: These methods differ significantly in dimensionality and training data. Our intent is not to exhaustively compare these methods, but rather claim generally that many embeddings give good performance on this task.)", "GloVe : We use 300-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) trained on the Common Crawl corpus as representations of a lemma and its arguments.", "To learn a context embedding, we simply average the vectors of the tokens in the context.", "Out-of-vocabulary words are represented as a mean across all vectors.", "doc2vec : We use pretrained 300-dimensional paragraph vectors learned with the distributed bag-of-words method of Le and Mikolov (2014) (colloquially, doc2vec), trained on Wikipedia (Lau and Baldwin, 2016).", "Here, paragraph vectors are learned to predict randomly sampled words from the paragraph, ignoring word order.", "To extract representations for verbs and arguments, we embed one-word documents consisting of only the word itself.", "We use a learning rate of 0.01 and 1000 epochs to infer vectors.", "Skip-thought : We use pretrained skip-thought vectors (Kiros et al., 2015) learned from training an encoder-decoder model to reconstruct the surrounding sentences of an input sentence from the Toronto BooksCorpus (Zhu et al., 2015).", "From this model, we extract 4800-dimensional representations for verb lemma, arguments, and contexts.", "ELMo : Finally, we use ELMo, a model of deep contextualized word embeddings (Peters et al., 2018).", "We extract 1024-dimensional representations from the last layer of a stacked BiLSTM trained on Wikipedia and monolingual news data from WMT 2008-2012.", "To learn embeddings for verbs and arguments, we extract representations for sentences containing only the word itself.", "To learn context embeddings, we again average the constituent word embeddings.", "(Footnote 5: Since some methods provide only document embeddings and not word embeddings, for consistency, in all methods we use the same embedding process even for single-word verbs and arguments.)",
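As a concrete illustration of the L/LA/LAC pipeline described above, here is a minimal sketch in Python; the example field names, the vector loader, and the classifier settings are hypothetical stand-ins, not the authors' released code.

```python
# A minimal sketch (hypothetical field names, not the authors' code) of the
# L/LA/LAC feature pipeline: embed the verb lemma, its arguments, and the
# averaged paragraph context, concatenate, and train a gradient-boosted
# tree classifier.
import numpy as np
from xgboost import XGBClassifier

DIM = 300  # GloVe dimensionality

def embed(tokens, vectors):
    """Average the vectors of known tokens; zeros if none are known."""
    known = [vectors[t] for t in tokens if t in vectors]
    return np.mean(known, axis=0) if known else np.zeros(DIM)

def features(ex, vectors, config="LAC"):
    parts = [embed([ex["lemma"]], vectors)]          # L: verb lemma
    if "A" in config:                                # A: subject and object
        parts += [embed(ex["subject"], vectors), embed(ex["object"], vectors)]
    if "C" in config:                                # C: paragraph context
        parts.append(embed(ex["context_tokens"], vectors))
    return np.concatenate(parts)

def train(examples, vectors, config="LAC"):
    X = np.stack([features(e, vectors, config) for e in examples])
    y = np.array([e["is_metaphor"] for e in examples])
    return XGBClassifier(n_estimators=200).fit(X, y)
```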
"For each embedding method, we evaluate the three configurations of features (L, LA, and LAC) on the VUA shared task train/test split, reporting precision, recall and F1 score.", "Since we are interested in whether incorporating broader context significantly improves identification performance, we compare successive model predictions (LAC vs. LA; LA vs. L) using the mid-p variant of McNemar's test for paired binary data (Fagerland et al., 2013).", "We first compare our models to the baselines of the VUA shared task (Leong et al., 2018): Baseline 1 , a logistic regression classifier trained only on one-hot encodings of verb lemmas; and Baseline 2 , the same classifier with additional WordNet class and concreteness features.", "We also compare to the best systems submitted to the VUA shared task: Wu et al. (2018), an ensemble of 20 CNN-BiLSTMs trained on word2vec embeddings, part-of-speech tags, and word embedding clusters; and Stemle and Onysko (2018), a BiLSTM trained on embeddings from English language learner corpora.", "Results for our models are presented in Table 2.", "Interestingly, most of the simple lemma models (L) already perform at Baseline 2 level, obtaining F1 scores in the range 60-62.", "This is likely due to the generalization made possible by dense representations of lemmas (vs. one-hot encodings) and the more powerful statistical classifier used.", "As expected, the addition of argument information consistently enhances performance.", "(Table 2 note: significant improvement over the previous model, p < 0.01 / p < 0.001.)", "Crucially, the addition of broader discourse context improves performance for all embedding methods.", "In general, we observe consistent, statistically significant increases of 2-3 F1 points for incorporating discourse.", "Overall, all LAC models except doc2vec exhibit high performance, and would have achieved second place in the VUA shared task.", "These results show a clear trend: the incorporation of discourse information leads to improvement of metaphor identification performance across models.", "Table 3 displays the performance breakdown by genre in the VUA test set for our best performing model (ELMo LAC) and selected comparison systems.", "Echoing Leong et al. (2018), we observe that the Conversation and Fiction genres are consistently more difficult than the Academic and News genres across all models.", "This is partially because in this dataset, metaphors in these genres are rarer, occurring 35% of the time in Academic and 43% in News, but only 15% in Conversation and 24% in Fiction.",
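For reference, the mid-p McNemar comparison used for the significance claims earlier in this section can be sketched as follows; this is a generic implementation of the test, not code from the paper.

```python
# A sketch of the mid-p McNemar test (Fagerland et al., 2013) on paired
# predictions: b and c count the items that exactly one of two models got
# right; under the null hypothesis, min(b, c) ~ Binomial(b + c, 0.5).
from scipy.stats import binom

def mcnemar_midp(b, c):
    n, k = b + c, min(b, c)
    # Two-sided mid-p value: 2 * P(X <= k) - P(X = k).
    p = 2 * binom.cdf(k, n, 0.5) - binom.pmf(k, n, 0.5)
    return min(p, 1.0)
```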
"In addition, for our model specifically, Conversation genre contexts are much [...]", "[Table 3: Performance breakdown by genre for the ELMo LAC model and comparison systems (P / R / F1). Academic: Baseline 2 70.7 / 83.6 / 76.6; Wu et al. (2018) 74.6 / 76.3 / 75.5; ELMo LAC 65.4 / 86.8 / 74.6. Conversation: Baseline 2 30.1 / 82.1 / 44.1; Wu et al. (2018) 40.3 / 65.6 / 50.3; ELMo LAC 42.6 / 56.0 / 48.4. Fiction: Baseline 2 40.7 / 66.7 / 50.6; Wu et al. (2018) 54.5 / 78.4 / 57.6; ELMo LAC 48.2 / 63.0 / 54.6. News: Baseline 2 67.7 / 68.9 / 68.3; Wu et al. (2018) 69.4 / 74.4 / 71.8; ELMo LAC 65.2 / 80.0 / 71.8.]", "Our best performing model (ELMo LAC) is within 0.4 F1 score of the first-place model in the VUA shared task (Wu et al., 2018).", "The GloVe LAC model would also have obtained second place at 65.2 F1, yet is considerably simpler than the systems used in the shared task, which employed ensembles of deep neural architectures and hand-engineered, metaphor-specific features.", "To better understand the ways in which discourse information plays a role in metaphor processing, we randomly sample 100 examples from our development set and manually categorize them by the amount of context required for their interpretation.", "For instance, a verb may be interpretable when given just its arguments (direct subject/object), it may require context from the enclosing sentence, or it may require paragraph-level context (or beyond).", "We also similarly analyze 100 sampled errors made on the development set by the ELMo L, LA, and LAC models, to examine whether error types vary between models.", "Our analysis in Table 4 shows that 11% of examples in the development set require paragraph-level context for correct interpretation.", "Indeed, while such examples are frequently misclassified by the L and LA models (13%, 15%), the error rate is halved when context is included (8%).", "Table 5 further presents examples requiring at least paragraph-level context, along with gold label and model predictions.", "Out of the 31 unique such examples identified in the above analyses, we found 11 (35%) requiring explicit coreference resolution of a pronoun or otherwise underspecified noun (e.g. Table 5 row 1) and 5 (16%) which reference an entity or event implicitly ( ellipsis ; e.g. Table 5 row 2).", "However, we also observed 4 errors (13%) due to examples with non-verbs and incomplete sentences, and 11 examples (35%) where not even paragraph-level context was sufficient for interpretation, mostly in the Conversation genre, demonstrating the subjective and borderline nature of many of the annotations.", "This analysis shows a priori the need for broader context beyond sentence-level for robust metaphor processing.", "Yet this is not an upper bound on performance gains; the general improvement of the LAC models over LA shows that even when context is not strictly necessary, it can still be a useful signal for identification.", "We presented the first models which leverage representations of discourse for metaphor identification.", "The performance gains of these models demonstrate that incorporating broader discourse information is a powerful feature for metaphor identification systems, aligning with our qualitative analysis and the theoretical and empirical evidence suggesting metaphor comprehension is heavily influenced by wider context.", "Given the simplicity of our representations of context in these models, we are interested in future models which (1) use discourse in more sophisticated ways, e.g. by modeling discourse relations or dialog state tracking (Henderson, 2015), and (2) leverage more sophisticated neural architectures (Gao et al., 2018).", "We thank the anonymous reviewers for their insightful comments, and Noah Goodman and Ben Leong for assistance with the 2018 VUA shared task data.", "We thank the Department of Computer Science and Technology and Churchill College, University of Cambridge for travel funding.", "Jesse Mu is supported by a Churchill Scholarship and an NSF Graduate Research Fellowship.", "Helen Yannakoudakis was supported by Cambridge Assessment, University of Cambridge.", "We thank the NVIDIA Corporation for the donation of the Titan GPU used in this research." ]
[ "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "other", "abstain", "other", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "other", "other", "other", "other", "other" ]
[ "A spectB ased S entiment A nalysis ( ABSA ), aiming at predicting the polarities for aspects, is a fine-grained task in the field of sentiment analysis.", "Previous work showed syntactic information, e.g. dependency trees, can effectively improve the ABSA performance.", "Recently, pre-trained models (PTMs) also have shown their effectiveness on ABSA.", "Therefore, the question naturally arises whether PTMs contain sufficient syntactic information for ABSA so that we can obtain a good ABSA model only based on PTMs.", "In this paper, we firstly compare the induced trees from PTMs and the dependency parsing trees on several popular models for the ABSA task, showing that the induced tree from fine-tuned RoBERTa (FT-RoBERTa) outperforms the parser-provided tree.", "The further analysis experiments reveal that the FT-RoBERTa Induced Tree is more sentiment-word-oriented and could benefit the ABSA task.", "The experiments also show that the pure RoBERTa-based model can outperform or approximate to the previous SOTA performances on six datasets across four languages since it implicitly incorporates the task-oriented syntactic information.", "1 1 Introduction Aspect-based sentiment analysis (ABSA) aims to do the fine-grained sentiment analysis towards aspects (Pontiki et al., 2014, 2016).", "Specifically, for one or more aspects in a sentence, the task calls for detecting the sentiment polarities for all aspects.", "Take the sentence great food but the service was dreadful for example, the task is to predict the sentiments towards the underlined aspects, which expects to get polarity positive for aspect food and polarity negative for aspect service .", "Generally, ABSA Equal contribution.", "contains aspect extraction (AE) and aspect-level sentiment classification (ALSC).", "We only focus on the ALSC task.", "Early works of ALSC mainly rely on manually designed syntactic features, which is labor-intensive yet insufficient.", "In order to avoid designing hand-crafted features (Jiang et al., 2011; Kiritchenko et al., 2014), various neural network models have been proposed in ALSC (Dong et al., 2014; Vo and Zhang, 2015; Wang et al., 2016; Chen et al., 2017; He et al., 2018; Zhang et al., 2019b; Wang et al., 2020).", "Since the dependency tree can help the aspects find their contextual words, most of the recently proposed State-of-the-art (SOTA) ALSC models utilize the dependency tree to assist in modeling connections between aspects and their opinion words (Wang et al., 2020; Sun et al., 2019b; Zhang et al., 2019b).", "Generally, these dependency tree based ALSC models are implemented in three methods.", "The first one is to use the topological structure of the dependency tree (Dong et al., 2014; Zhang et al., 2019a; Huang and Carley, 2019; Sun et al., 2019b; Zheng et al., 2020; Tang et al., 2020); The second one is to use the tree-based distance, which counts the number of edges in a shortest path between two tokens in the dependency tree (He et al., 2018; Zhang et al., 2019b; Phan and Ogunbona, 2020); The third one is to simultaneously use both the topological structure and the tree-based distance.", "Except for the dependency tree, pre-trained models (PTMs) (Qiu et al., 2020), such as BERT (De-vlin et al., 2019), have also been used to enhance the performance of the ALSC task (Sun et al., 2019a; Tang et al., 2020; Phan and Ogunbona, 2020; Wang et al., 2020).", "From the view of interpretability of PTMs, Chen et al. (2019); Hewitt and Manning (2019); Wu et al. 
(2020) try to use probing methods to detect syntactic information in PTMs.", "Empirical results reveal that PTMs capture some kind of dependency tree structures implicitly.", "Q1: Will the tree induced from PTMs achieve better performance than the tree given by a dependency parser when combined with different tree-based ALSC models?", "To answer this question, we choose one model from each of the three typical dependency tree based methods in ALSC, and compare their performance when combined with the parser-provided dependency tree and the off-the-shelf PTMs induced trees.", "Q2: Will PTMs adapt the implicitly entailed tree structure to the ALSC task during the fine-tuning?", "Therefore, in this paper, we not only use the trees induced from the off-the-shelf PTMs to enhance ALSC models, but also use the trees induced from the fine-tuned PTMs (In short FT-PTMs) which are fine-tuned on the ALSC datasets.", "Experiments show that trees induced from FT-PTMs can help tree-based ALSC models achieve better performance than their counterparts before fine-tuning.", "Besides, models with trees induced from the ALSC fine-tuned RoBERTa can even outperform trees from the dependency parser.", "Last but not least, we find that the base RoBERTa with an MLP layer is enough to achieve State-of-the-art (SOTA) or near SOTA performance on all six ALSC datasets across four languages, while incorporating tree structures into RoBERTa-based ALSC models does not achieve concrete improvement.", "(1) We extensively study the induced trees from PTMs and FT-PTMs.", "Experiments show that models using induced trees from FT-PTMs achieve better performance.", "Moreover, models using induced trees from fine-tuned RoBERTa outperform other trees.", "(2) The analysis of the induced tree from FT-PTMs shows that it tends to be more sentiment-word-oriented, making the aspect term directly connect to its sentiment adjectives.", "(3) We achieve SOTA or near SOTA performances on six ALSC datasets across four languages based on RoBERTa.", "We find that the RoBERTa could better adapt to ALSC and help the aspects to find the sentiment words.", "ALSC without Dependencies Vo and Zhang (2015) propose the early neural network model which does not rely on the dependency tree.", "Along this line, diverse neural network models have been proposed.", "Tang et al. (2016a) use the long short term memory (LSTM) network to enhance the interactions between aspects and context words.", "In order to model relations of aspects and their contextual words, Wang et al. (2016); Liu and Zhang (2017); Ma et al. (2017); Tay et al. (2018) incorporate the attention mechanism into the LSTM-based neural network models.", "Other model structures such as convolutional neural network (CNN) (Li et al., 2018; Xue and Li, 2018), gated neural network (Zhang et al., 2016; Xue and Li, 2018), memory neural network (Tang et al., 2016b; Chen et al., 2017; Wang et al., 2018), attention neural network (Tang et al., 2019) have also been applied in ALSC.", "ALSC with Dependencies Early works of ALSC mainly employ traditional text classification methods focusing on machine learning algorithms and manually designed features, which took syntactic structures into consideration from the very beginning.", "Kiritchenko et al. 
(2014) combine a set of features including sentiment lexicons and parsing dependencies, from which experiments show the effectiveness of context parsing features.", "A myriad of works attempt to fuse dependency tree into neural network models in ALSC.", "Dong et al. (2014) propose to convert the dependency tree into a binary tree first, then apply the adaptive recursive neural network to propagate information from the context words to aspects.", "Despite the improvement of aspect-oriented feature modeling, converting the dependency tree into a binary tree might cause syntax related words separated away from each other.", "In general, owing to the syntax parsing errors, early dependency tree based ALSC models do not show clear preponderance over models without the dependency tree.", "However, the introduction of the neural network into the dependency parsing task enhances the parsing quality substantially (Chen and Manning, 2014; Dozat and Manning, 2017).", "Recent advances, leveraging graph neural network (GNN) to model the dependency tree (Zhang et al., 2019a; Huang and Carley, 2019; Sun et al., 2019b; Tang et al., 2020; Wang et al., 2020), have achieved significant performance.", "Among them, Zheng et al. (2020); Wang et al. (2020) attempt to convert the dependency tree into the aspect-oriented dependency tree.", "Instead of using the topological structure of dependency tree, He et al. (2018); Zhang et al. (2019b); Phan and Ogunbona (2020) exploit the tree-based distance between two tokens in the dependency tree.", "PTMs-based Dependency Probing Over the past few years, the pre-trained models (PTMs) have dominated across various NLP tasks.", "Therefore, many researchers are attracted to investigate what linguistic knowledge has been captured by PTMs (Clark et al., 2019; Hewitt and Liang, 2019; Hewitt and Manning, 2019; Wu et al., 2020).", "Clark et al. (2019) try to use a single or a combination of head attention maps of BERT to infer the dependencies.", "Since BERT has many attention heads, this method can hardly fully reveal the dependency between two tokens.", "Hewitt and Manning (2019) propose a small learnable probing model to probe the syntax dependencies encoded in BERT.", "Despite very few parameters been added, it may still be very hard to tell if the syntactic information is encoded by BERT itself or by the additional parameters from the probing model.", "Therefore, the parameter-free dependency probing method proposed in Wu et al. 
(2020) might be more preferred.", "In this section, we first introduce how to induce trees from PTMs, then we describe three tree-based ALSC models, which are selected from three representative methods of incorporating the dependency tree in ALSC task.", "Perturbed Masking (Wu et al., 2020) can induce trees from the pre-trained models without additional parameters.", "Generally, a broad range of PTMs can be applied in the Perturbed Masking method.", "For the sake of being representative and practical, we select BERT and RoBERTa as our base models.", "In this subsection, we first briefly introduce the model structure of BERT and RoBERTa, then present the basic idea of the Perturbed Masking method.", "More details about them can be found in their respective reference papers.", "BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) both take Transformers (Vaswani et al., 2017) as backbone architecture.", "Generally, they can be formulated as the following equations h l = LN( h l 1 + MHAtt( h l 1 )) , (1) h l = LN( h l + FFN( h l )) , (2) where h 0 is the BERT/RoBERTa input representation, formed by the sum of token embeddings, position embeddings, and segment embeddings; LN is the layer normalization layer; MHAtt is the multi-head self-attention; FFN contains three layers, the first one is a linear projection layer, then an activation layer, then another linear projection layer; l is the depth of Transformer layers.", "The base and large version of BERT and RoBERTa have 12, 24 Transformer layers, respectively.", "BERT is pre-trained on Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) tasks.", "In the MLM task, 15% of the tokens in a sentence are manipulated in three ways.", "Specifically, 10%, 10%, 80% of them are replaced by a random token, itself, or a [MASK] token, respectively.", "In the NSP task, two sentences A and B are concatenated before sending to BERT.", "Given 50% of the time when B is the next utterance of A, BERT needs to utilize the vector representation of [CLS] to figure out whether the input is continuous or not.", "RoBERTa is only pre-trained on the MLM task.", "Perturbed Masking aims to detect syntactic information from pre-trained models.", "For a sentence x = [ x 1 , . . . 
"For a sentence $x = [x_1, \ldots, x_T]$, BERT and RoBERTa map each $x_i$ into a contextualized representation $H(x)_i$.", "Perturbed Masking tries to derive a value $f(x_i, x_j)$ that denotes the impact a token $x_j$ has on another token $x_i$.", "To derive this value, it first uses [MASK] (or <mask> in RoBERTa) to replace the token $x_i$, which returns a representation $H(x \setminus \{x_i\})_i$ for the masked $x_i$; secondly, it further masks the token $x_j$, which returns a representation $H(x \setminus \{x_i, x_j\})_i$ with both $x_i$ and $x_j$ masked.", "The impact value is calculated as the Euclidean distance $f(x_i, x_j) = \lVert H(x \setminus \{x_i\})_i - H(x \setminus \{x_i, x_j\})_i \rVert_2$ (3).", "By repeating this process for every pair of tokens in the sentence, we obtain an impact matrix $M \in \mathbb{R}^{T \times T}$ with $M_{i,j} = f(x_i, x_j)$.", "A tree decoding algorithm, such as Eisner's algorithm (Eisner, 1996) or the Chu-Liu/Edmonds' algorithm (Chu and Liu, 1965; Edmonds, 1967), is then used to extract the dependency tree from the matrix $M$.", "Perturbed Masking can be applied to any layer of BERT or RoBERTa.",
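As a rough sketch of this two-step masking procedure (using the HuggingFace transformers library as an assumed implementation detail), the impact matrix of Eq. (3) could be computed as follows; this is not the authors' code.

```python
# A rough sketch (not the authors' code) of the Perturbed Masking impact
# matrix in Eq. (3), using a HuggingFace BERT; O(T^2) forward passes.
import torch
from transformers import BertTokenizer, BertModel

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def hidden_with_masks(ids, masked_positions, layer=11):
    ids = ids.clone()
    for p in masked_positions:
        ids[p] = tok.mask_token_id
    with torch.no_grad():
        out = model(ids.unsqueeze(0), output_hidden_states=True)
    return out.hidden_states[layer][0]            # (seq_len, hidden_size)

def impact_matrix(sentence, layer=11):
    ids = torch.tensor(tok.encode(sentence))
    T = len(ids)
    M = torch.zeros(T, T)
    for i in range(T):
        h_i = hidden_with_masks(ids, [i], layer)[i]        # x_i masked
        for j in range(T):
            if j != i:
                h_ij = hidden_with_masks(ids, [i, j], layer)[i]
                M[i, j] = torch.dist(h_i, h_ij)   # Euclidean distance
    return M  # decode a tree from M, e.g. with Chu-Liu/Edmonds'
```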
(2020).", "In the RGAT model, they transform the dependency tree into an aspect-oriented dependency tree.", "The aspect-oriented dependency tree uses the aspect as the root node, and all other words depend on the aspect directly.", "The relation between the aspect and other words is either based on the syntactic tag or the tree-based distance in the dependency tree.", "Specifically, the RGAT reserves syntactic tags for words with 1 tree-based distance to aspect, and assigns virtual tags to longer distance words, such as 2:con for A 2 tree-based distance connection.", "Therefore, Dataset Split Positive Negative Neutral Rest14 Train 2164 807 637 Test 728 196 196 Laptop14 Train 994 870 464 Test 341 128 169 Twitter Train 1561 1560 3127 Test 173 173 346 Table 1: Data statistics.", "the RGAT model not only exploits the topological structure of the dependency tree but also the tree-based distance between two words.", "In this section, we present details about the datasets, the tree structures used in experiments, as well as the experiments implementations.", "We conduct experiments on all six datasets across four languages.", "But due to the limited space, we present our experiments on the non-English datasets in the Appendix.", "We run experiments on six benchmark datasets.", "Three of them, namely, Rest14, Laptop14, and Twitter, are English datasets.", "Rest14 and Laptop14 are from SemEval 2014 task 4 (Pontiki et al., 2014), containing sentiment reviews from restaurant and laptop domains.", "Twitter is from Dong et al. (2014), which is processed from tweets.", "The statistics of these datasets are presented in Table 6.", "Details of the other three non-English datasets can be found in the Appendix.", "Following previous works, we remove samples with conflicting polarities or with NULL aspects in all datasets.", "For each dataset, we obtain five kinds of trees from three sources.", "(1) The first one is derived from the off-the-shelf dependency tree parser, such as spaCy 2 and allenNLP 3 , written as Dep..", "For the three English datasets, we use the biaffine parser from the allenNLP package to get the dependency tree, which is reported in Wang et al. (2020) that the biaffine parser could achieve better performance.", "(2) We induce trees from the pre-trained BERT and RoBERTa by the Perturbed Masking method (Wu et al., 2020), written them as BERT Induced Tree and RoBERTa Induced Tree, respectively.", "(3) We 2 http://spacy.io/ 3 http://www.allennlp.org/ BERT / RoBERTa class softmax MLP Max pooling Token embeddings Position embeddings Segment embeddings 0 1 [CLS] + + 0 2 The + + 0 8 the + + 0 6 delicious + + 0 3 crab + + 0 4 cakes + + 0 5 are + + 0 7 and + + 0 9 BBQ + + 0 10 rib + + 0 11 was + + 0 13 .", "use the Perturbed Masking method to induce trees from the fine-tuned BERT and RoBERTa after fine-tuning in the corresponding datasets.", "These two are written as FT-BERT Induced Tree and FT-RoBERTa Induced Tree.", "Besides, we add Left-chain and Right-chain in our experiments.", "Left-chain, Right-chain mean that every word deems its previous or next word as the dependent child word.", "In order to derive the FT-PTMs Induced Tree, we fine-tune BERT and RoBERTa on the ALSC datasets.", "To introduce as few parameters as possible, a rather simple MLP is used and the overall structure of our fine-tuning model is presented in Figure 1. 
"The fine-tuning experiments use batch size b = 32, dropout rate d = 0.1, and learning rate 2e-4, with the AdamW optimizer under its default settings.", "As for the Perturbed Masking method, we apply the Chu-Liu/Edmonds' algorithm for tree decoding.", "For the induced trees, we first induce trees from each layer of the PTMs, then test them with the model in Figure 1 on a dev set composed of 20% of the training set.", "Experiments show that the trees induced from the 11th layer of the PTMs achieve the best performance among all layers, so this layer is used in all our experiments.", "We conduct multiple experiments incorporating different trees (Section 4.2) into the aforementioned tree-based models (Section 3.2).", "Specifically, we use the 300-dimensional GloVe (Pennington et al., 2014) embeddings for the English datasets.", "We keep the word embeddings fixed to avoid over-fitting.", "It is worth noting that in experiments with the RGAT model, since the induced tree does not provide syntactic tags, we assign virtual tags to every dependency in a uniform way, which slightly hurts the performance of the model.", "The comparison between models with different trees is presented in Table 2, which comprises the experimental results on the English datasets.", "The results on the non-English datasets can be found in the Appendix.", "We observe that among all the trees, incorporating the FT-RoBERTa Induced Tree leads to the best results on all datasets.", "On average, models based on the FT-RoBERTa Induced Tree outperform Dep. by about 1.1% in accuracy.", "This proves the effectiveness and advantage of the FT-RoBERTa Induced Tree in this competitive comparison.", "Models using the BERT Induced Tree and the RoBERTa Induced Tree in Table 2 show small performance differences on all but one dataset, and both are close to the Left-chain and Right-chain baselines.", "To get a better sense of this, we visualize trees induced from RoBERTa in Figure 2b.", "It shows that the RoBERTa Induced Tree has a strong neighboring-connection dependency pattern.", "This behavior is expected, since the masked language modeling pre-training task makes words favor depending on their neighboring words.", "This tendency may be the reason why PTMs induced trees perform similarly to the Left-chain and Right-chain baselines.", "To answer the question Q1 in the Introduction (Section 1), we need to compare the Dep., BERT Induced Tree, and RoBERTa Induced Tree results.", "The results show that models with dependency trees usually achieve better performance than those with PTMs induced trees.", "This is predictable, since a word in a PTMs induced tree tends to depend on the word to its immediate left or right, as shown in Figure 2.", "It is worth noting that this observation does not align with the observation in Wu et al. (2020).", "The experiments based on PWCN in Wu et al. (2020) show that the BERT Induced Tree achieves results comparable with Dep., which is consistent with our PWCN results.", "However, this observation does not hold when the induced trees are used in a broader range of tree-based ALSC models, especially for the RGAT model at the bottom of Table 2.",
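The neighboring-connection pattern just noted (quantified in Table 3 below) can be measured with a few lines; the edge-list representation of a tree is an assumption.

```python
# A sketch of the neighboring-connection statistic: the share of edges in
# the induced trees that link adjacent tokens (|head - dependent| == 1).
def neighbor_proportion(trees):
    """trees: list of edge lists [(head_index, dependent_index), ...]."""
    edges = [e for tree in trees for e in tree]
    adjacent = sum(1 for h, d in edges if abs(h - d) == 1)
    return adjacent / len(edges) if edges else 0.0
```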
"More detailed analysis will be provided in the next section.", "Although models with the PTMs induced trees usually perform worse than those with the dependency parsing trees, models with trees induced from ALSC fine-tuned RoBERTa can surpass both of them.", "Take the RoBERTa Induced Tree and the FT-RoBERTa Induced Tree in Table 2 for example: compared with the RoBERTa Induced Tree, models incorporating the FT-RoBERTa Induced Tree achieve an average accuracy improvement of 1.56%.", "This trend is also observed between the BERT Induced Tree and the FT-BERT Induced Tree.", "From Table 3, we observe that on average over 70% of the relations in the BERT/RoBERTa Induced Trees are neighboring connections.", "This damages the performance of models using the topological structures of trees.", "Thus, PTMs induced trees usually perform worse than Dep., with a slight [...]", "(Footnote 4: The Left/Right-chain are exactly the same input files after the data preprocessing in these three models.)", "In comparison with the RoBERTa Induced Tree, a significant decline of this proportion is seen for the FT-RoBERTa Induced Tree in Table 3.", "We see the same tendency between the BERT Induced Tree and the FT-BERT Induced Tree.", "This marks a consistent structural change during the fine-tuning process, indicating a transition to a more diverse structure.", "As shown in Figure 2b, the RoBERTa Induced Tree has a clear pattern of depending on neighboring words.", "Yet the FT-RoBERTa Induced Tree in Figure 2c shows a more diverse dependency pattern.", "The Aspects-sentiment Distance is the average distance between aspect and sentiment words.", "We pre-define a sentiment word set $C$.", "For a sentence $S_i$ in a dataset $S$, the set of aspect words in $S_i$ is termed $w$.", "$C' = S_i \cap C$ is the set of sentiment words appearing both in the sentence $S_i$ and in the sentiment word set $C$.", "The Aspects-sentiment Distance (AsD) is calculated as follows: $\mathrm{AsD}(S_i) = \frac{\sum_{w_i \in w} \sum_{C'_i \in C'} \mathrm{dist}(C'_i, w_i)}{|w| \cdot |C'|}$ (4) and $\mathrm{AsD} = \frac{\sum_{S_i \in S} \mathrm{AsD}(S_i)}{|S|}$ (5), where $|\cdot|$ is the number of elements in the set and $\mathrm{dist}(x_i, x_j)$ represents the relative distance between $x_i$ and $x_j$ in the tree.", "Specifically, $C$ contains sentiment words counted on Amazon-2 from Tian et al. (2020), which can be found in the Appendix.", "As for Rest14 and Laptop14, Xu et al. (2020) provide the paired sentiment words with their corresponding aspects.", "We also calculate the paired Aspects-sentiment Distance (pAsD) on these two datasets, which only counts the distance between an aspect and its corresponding sentiment words.", "We present the Aspects-sentiment Distance (AsD) of the different trees on the English datasets in Table 4.",
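Eqs. (4)-(5) translate directly into code; the sketch below assumes undirected tree edges and pre-extracted aspect and sentiment tokens (networkx is used only for tree distances), and is not the authors' implementation.

```python
# A sketch of the Aspects-sentiment Distance of Eqs. (4)-(5): the average
# tree distance between aspect words and in-sentence sentiment words,
# averaged over sentences. The data structures here are assumptions.
import networkx as nx

def asd_sentence(edges, aspects, sentiment_words):
    g = nx.Graph(edges)                       # the (undirected) tree
    dists = [nx.shortest_path_length(g, a, c)
             for a in aspects for c in sentiment_words]
    return sum(dists) / (len(aspects) * len(sentiment_words))

def asd_corpus(sentences):
    scores = [asd_sentence(s["edges"], s["aspects"], s["sentiment_words"])
              for s in sentences if s["aspects"] and s["sentiment_words"]]
    return sum(scores) / len(scores)
```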
"Results show that FT-RoBERTa has the smallest AsD value, indicating the shortest aspects-sentiment distance.", "Compared to PTMs induced trees, the trees from FT-PTMs have smaller AsD, indicating a shortened aspects-sentiment distance.", "This shows that the FT-PTMs induced trees are more sentiment-word-oriented, which partially reveals that fine-tuning on ALSC encourages the aspects to find sentiment words.", "However, for Dep., we notice that some Twitter results in Table 2 cannot be fully explained by these two proposed metrics.", "We conjecture that the casual grammar that characterizes the Twitter corpus makes it hard for the parser to provide accurate dependency parsing trees.", "Still, these two metrics are suitable for the induced trees.", "Taken together, as the conclusion to Q2 , these analyses demonstrate that fine-tuning on ALSC implicitly adapts the induced tree.", "On the one hand, the lower proportion of neighboring connections after fine-tuning indicates an increase in long-range connections.", "On the other hand, the smaller Aspects-sentiment Distance after fine-tuning illustrates the shorter distance between aspects and sentiment words, which helps to model connections between them.", "Thus, as shown in Section 5.1, fine-tuning RoBERTa on ALSC not only makes the induced tree better suit the ALSC task but also lets it outperform the dependency tree when combined with different tree-based ALSC models.", "Additionally, we explore how well the fine-tuned RoBERTa model itself can perform on the ALSC task.", "We select a set of top high-performing ALSC models as state-of-the-art alternatives.", "The comparison results are shown in Table 5.", "Compared with all these SOTA alternatives, surprisingly, RoBERTa with an MLP layer achieves SOTA or near-SOTA performance.", "In particular, compared to the other datasets, we notice that a significant improvement is obtained on the Laptop14 dataset.", "We assume that the pre-training corpus of RoBERTa may be more friendly to the laptop domain, since the RoBERTa-MLP already obtains much better results than the BERT-MLP on Laptop14.", "For the BERT-based models in the second row of Table 5, similar experiments using RoBERTa are conducted.", "However, only limited improvements are obtained over the RoBERTa-MLP.", "We expect that induced trees from models specifically pre-trained for ALSC (Tian et al., 2020) may provide more information, which is left for future work.", "The FT-RoBERTa Induced Tree can be beneficial to GloVe-based ALSC models.", "However, incorporating trees on top of RoBERTa brings no significant improvement, and declines can even be seen in some cases.", "This may be caused by a failure to reconcile the implicitly entailed tree with the external tree.", "We argue that incorporating trees on top of RoBERTa with the currently widely-used tree methods may cost more than it gains.", "Additionally, in our review of previous ALSC works, we notice that very few employ RoBERTa as the base model.", "We would attribute this to the difficulty of optimizing RoBERTa-based ALSC models, as the higher architecture, which is usually randomly initialized, needs a larger learning rate than RoBERTa itself.", "Inappropriate hyperparameters may be the reason for the lagging performance of previous RoBERTa-based ALSC works (Phan and Ogunbona, 2020).",
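One simple remedy for that optimization difficulty is to give the randomly initialized head a larger learning rate than the pre-trained encoder; the sketch below uses illustrative values, not the paper's, and assumes a model with encoder and mlp sub-modules as in the earlier classifier sketch.

```python
# A sketch of a two-learning-rate setup: a small LR for the pre-trained
# RoBERTa encoder and a larger one for the randomly initialized MLP head.
# The values are illustrative only.
from torch.optim import AdamW

def build_optimizer(model):
    return AdamW([
        {"params": model.encoder.parameters(), "lr": 2e-5},
        {"params": model.mlp.parameters(),     "lr": 1e-3},
    ])
```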
"In this paper, we analyze several tree structures for the ALSC task, including the parser-provided dependency tree and the PTMs-induced tree.", "Specifically, we induce trees using the Perturbed Masking method from the original PTMs and from the ALSC fine-tuned PTMs respectively, and then compare the different tree structures on three typical tree-based ALSC models on six datasets across four languages.", "Experiments reveal that fine-tuning on the ALSC task forces PTMs to implicitly learn more sentiment-word-oriented trees, which can bring benefits to GloVe-based ALSC models.", "Benefiting from its better implicit syntactic information, the fine-tuned RoBERTa with an MLP is enough to obtain SOTA or near-SOTA results on the ALSC task.", "Our work can lead to several promising directions, such as PTMs-suitable tree-based models and better tree-inducing methods from PTMs.", "We would like to thank the anonymous reviewers for their helpful comments.", "This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106700) and the National Natural Science Foundation of China (No. 62022027)." ]
[ "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "other" ]
[ "As the sources of information that we consume everyday rapidly diversify, it is becoming increasingly important to develop NLP tools that help to evaluate the credibility of the information we receive.", "A critical step towards this goal is to determine the factuality of events in text.", "In this paper, we frame factuality assessment as a modal dependency parsing task that identifies the events and their sources, formally known as conceivers , and then determine the level of certainty that the sources are asserting with respect to the events.", "We crowdsource the first large-scale data set annotated with modal dependency structures that consists of 353 Covid-19 related news articles, 24,016 events, and 2,938 conceivers.", "1 We also develop the first modal dependency parser that jointly extracts events, conceivers and constructs the modal dependency structure of a text.", "We evaluate the joint model against a pipeline model and demonstrate the advantage of the joint model in conceiver extraction and modal dependency structure construction when events and conceivers are automatically extracted.", "We believe the dataset and the models will be a valuable resource for a whole host of NLP applications such as fact checking and rumor detection.", "We're not just fighting an epidemic; we're fighting an infodemic.", "The ongoing COVID-19 pandemic taught us the importance of determining factuality of events at a time when the sources of media we consume have greatly diversified.", "This is compounded by the fact that the information that we receive from these 1 https://github.com/Jryao/modal_ dependency news sources usually does not go through as a rigorous editing and verification process as traditional media do.", "The sheer volume of the new media content makes human verification impossible and there is thus an increasing need for NLP tools that help verify statements made in these media sources.", "To verify if an event has indeed happened we first need to determine the level of certainty with which the event is asserted by the information source, which defaults to the author of a document but can also be another source in the text that the author attributes the information to.", "The factuality of an event cannot be fully determined without also taking into account the credibility of information source.", "Consider the text snippet in (1): (1) WBUR: A man in his 20s from Worcester County tested positive Tuesday for the new, apparently more contagious coronavirus variant, public health officials said .", "The variant was first detected in the United Kingdom, and experts have warned that it could soon become widespread in the U.S. 
"Suppose our goal here is to determine the factuality of the statements in (1).", "We first need to determine the level of certainty with which the source is committed to the factuality of the statements.", "While the public health officials are fairly certain that a man from Worcester County tested positive for the coronavirus variant, the experts were not as certain that the virus variant will definitely become widespread, as indicated by linguistic cues like could.", "In other words, there are different levels of certainty with which the two events are asserted.", "In addition, the credibility of the information source is also crucially important when evaluating the factuality of the events (De Marneffe et al., 2012).", "If the information source is not public health officials and instead is an anonymous source, the information that the Worcester man tested positive will be less credible.", "In fact, the factuality of the events also depends on WBUR, the author of this text.", "If the author made up these statements, then the Worcester man testing positive would not be a fact, regardless of the level of certainty with which the events are asserted.", "Ultimately, it is impossible to fully determine the credibility of a source purely based on the information within a single text, but linking each event to its source or chain of sources allows us to verify the factuality of the event against other sources and our world knowledge.", "Therefore, identifying the level of certainty with which an event is asserted, together with its source, is a crucial first step in assessing the factuality of the event.", "Previous work on factuality assessment has focused on determining the level of certainty that is asserted on events and framed it as a classification or regression problem (Saurí and Pustejovsky, 2012; Lee et al., 2015; Stanovsky et al., 2017; Rudinger et al., 2018; Qian et al., 2018).", "However, as we discussed above, the level of certainty alone is insufficient in determining the factuality of an event.", "In this work, we adopt a factuality representation framework proposed in (Vigus et al., 2019) called modal dependency structure (MDS).", "A modal dependency structure is formally a document-level structure where nodes are events and sources, known as conceivers , while edges represent the modal strength , or the level of certainty that the conceiver holds towards an event.", "Figure 1 shows the modal dependency structure of the text in (1).", "One main advantage of MDS over previous approaches to factuality is that an MDS explicitly represents both the conceiver and the event, and represents the modal strength as the level of certainty that the conceiver holds towards the events.", "It is also a hierarchical structure that allows nested representations when multiple sources are possible.", "For example, in Figure 1, the factuality of the tested positive event depends on both the credibility of the public health officials and the author (AUTH) of this text.", "Representing factuality as a modal dependency structure also allows us to cast factuality assessment as a modal dependency parsing problem, and opens up the door for using various structured prediction approaches to tackle this problem.",
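To make the structure concrete, one plausible, purely illustrative encoding of a Figure 1 style modal dependency structure for example (1) is sketched below; the node inventory, attachments, and edge labels are a reading of the example, not necessarily the gold annotation or the released data format.

```python
# A purely illustrative encoding of a modal dependency structure: each node
# (event or conceiver) points to its parent with a modal-strength label.
# The attachments and labels below are an assumed reading of example (1).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    text: str
    kind: str                        # "conceiver" or "event"
    parent: Optional["Node"] = None
    strength: str = ""               # e.g. "full-pos", "neutral-pos"

author    = Node("AUTH", "conceiver")
officials = Node("public health officials", "conceiver", author, "full-pos")
experts   = Node("experts", "conceiver", author, "full-pos")
tested    = Node("tested positive", "event", officials, "full-pos")
warned    = Node("warned", "event", author, "full-pos")
spread    = Node("become widespread", "event", experts, "neutral-pos")
```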
"Since no large-scale data set annotated with MDS exists, we first need to develop a data set annotated with modal dependency structures to train and evaluate modal dependency parsing models.", "The main contributions of this work are as follows: We construct a large corpus annotated with modal dependency structures via crowdsourcing.", "It consists of 353 Covid-19 related news articles, in which 24,016 events and 2,938 conceivers are annotated.", "To the best of our knowledge, this corpus is the first large-scale corpus annotated with modal dependency structures.", "Although the modal dependency structure is a complicated representation, we show that our data set is annotated with high consistency.", "We believe the crowdsourcing techniques we have developed will add to the knowledge of how to develop large-scale data sets via crowdsourcing, especially for complicated representations.", "We develop a joint modal dependency parsing model that extracts events and conceivers and parses a document into its modal dependency structure.", "We present experimental results that show the effectiveness of our parsing approach and the consistency of the data set.", "In addition, we evaluate the joint model against the pipeline model, and show the advantage of the joint model in overall end-to-end modal dependency parsing performance.", "In this section we first provide additional detail for the modal dependency structure, and then present our strategy of decomposing the modal dependency structure into subtasks that are suitable for crowdsourcing.", "We also evaluate the quality of our annotated data set, and provide statistics relevant for training MDS parsing models.", "The modal dependency structure builds on the annotation scheme of FactBank (Saurí and Pustejovsky, 2009) and is inspired by the structured approach to temporal dependency annotation in Zhang and Xue (2018b).", "Like FactBank, the modal dependency structure combines epistemic strength ( full, partial, neutral ) with polarity values ( positive, negative ) to define a set of six values for modal strength.", "Table 1 shows the modal strength values (i.e. labels) used in modal dependency structures and their corresponding values in FactBank.", "As illustrated in Figure 1, these values are represented as edge labels in the modal dependency structure.", "Readers are referred to (Vigus et al., 2019) for how these six values are defined.", "[Table 1: Modal strength values in the modal dependency structure and FactBank.]",
(2019) show that modal dependency structures (MDS) can be annotated with high inter-annotator agreement by expert annotators in a pilot annotation of six documents, a corpus that is much larger in scale is needed in order to train modal dependency parsers that can be used for downstream applications.", "To make MDS feasible for crowdsourcing, we have adopted a number of strategies.", "The first strategy is to decompose MDS annotation into four subtasks: event identification, event attachment, event modal strength annotation and modal superstructure construction.", "The instructions to crowd-workers for each subtask are piloted to ensure that the subtask can be performed with high consistency before they are set up for productive annotation.", "Second, where possible, we have applied a number of pre-processing steps to simplify the tasks for crowd-workers.", "In addition, we have also adopted a payment structure to incentivize high-quality work.", "In all subtasks, we require three crowd-workers to complete one assignment and use the majority vote answer as the final decision.", "All annotations are conducted on the Amazon Mechanic Turk platform.", "Task 1: Event Identification Event annotation involves identifying event trigger words, which are typically verbs and nouns.", "We first extract event candidates using a publicly available, common event trigger word dictionary.", "2 We then ran the Stanford CoreNLP dependency parser (Man-ning et al., 2014) on raw text to extract the verbs and the root of each syntactic dependency parse as candidate events.", "A pilot study shows that 90% of the verbs in the extracted candidate events are event triggers, so we decide to treat all verbs as event triggers and launch an event identification task for about 10K non-verb event candidates.", "We present crowd-workers with event candidates and ask them to decide if they are events.", "Task 2: Event Attachment The next subtask is to attach a child event to a parent , which can be a conceiver (2a), another event (2b), or in the case of hypothetical situations, an abstract have-condition node (2c).", "To simplify things for crowd workers, we made the decision not to introduce abstract nodes in the modal dependency tree.", "Events in hypothetical situation are annotated as neutral events and attached to their conceivers directly.", "A child event is attached to a parent event only when the parent event is a modal predicate.", "For example, in (2b), the parent of visit is wants .", "The modal predicates form a closed set and can be extracted with a list of modal event triggers.", "Using a dependency parser, we can reliably identify events that should be attached to modal events, so we can do this part of the annotation without soliciting judgments from crowd workers.", "(2)", "a. A 72-year-old man died , the police said.", "Pos (died, police)", "b. John wants to visit Japan.", "Neut (visit, wants)", "c. 
"In the majority of cases, the parent of an event is a conceiver (or the Author), as in (2a), where the conceiver of died is the police.", "For any given event, the list of candidate conceivers can be very large, so some filtering is needed to shrink it down so that a smaller list of candidates is presented to crowd-workers.", "To collect possible conceivers, we first construct a list of common conceiver-introducing predicates (CIPs) following Saurí and Pustejovsky (2009) and Vigus et al. (2019).", "Then, we extract possible conceivers with the Stanford CoreNLP parser from three sources: the subject of common CIPs in our list, the subject of all other events, and named entities that are possible conceivers, such as organizations and persons.", "For each event, we limit the candidate conceivers to those in the same paragraph as the event, and further filter out unlikely conceivers by their hypernyms in WordNet.", "We present a list of possible conceivers and ask workers to select the most appropriate one for the event in question.", "Task 3: Event Modal Strength Annotation. After attaching the events to their parent, the third task is to annotate modal strength from the conceiver's perspective; these values are the edge labels in modal dependency structures.", "Vigus et al. (2019) define six modal strength values listed in Table 1.", "In our pilot annotation, however, we found that partial negative and neutral negative events account for less than 2% of all events.", "To have a manageable crowdsourcing task and given their low frequency, we decide to merge partial negative and neutral negative events into negative events, and only use four labels: full positive, partial positive, neutral positive and negative.", "The event modal strength task is annotated in two steps.", "In the first step, events are classified into three classes: full positive, negative, and neither.", "In the second step, events in the third class are further classified into partial positive and neutral positive.", "For example, (3a) is annotated as a full positive event, (3c) is annotated as a negative event, and (3b) is annotated as neither in the first step.", "(3b) is further labeled as a neutral positive event in the second step.", "(3) a. The dog barked. b. The dog might have barked. c. The dog didn't bark.", "Task 4: Conceiver Superstructure Construction. The parent of a conceiver can be another conceiver in some cases.", "(4a) and (4b) are two common cases where the parent of a conceiver is another conceiver.", "In (4a), the conceiver Mary and the embedded conceiver John are in the same sentence, and in (4b), the conceiver John is in quoted speech.", "For cases like (4b), the two conceivers are not necessarily in the same sentence, but they are usually close to each other in the text.", "(4) a. Mary says John wants to visit Japan. Pos (John, Mary) b. 'John wants to visit Japan. He wants to go next summer.', Mary says. Pos (John, Mary)",
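The WordNet-based conceiver filtering described in Task 2 above might look like the following sketch. The exact hypernym filter set is not specified in the text, so `CONCEIVER_HYPERNYMS` here is an assumption; the NLTK interface stands in for whatever tooling was actually used.

```python
from nltk.corpus import wordnet as wn

# Hypernym synsets that plausibly license a conceiver; illustrative only,
# since the paper does not list its exact filter set.
CONCEIVER_HYPERNYMS = {wn.synset("person.n.01"), wn.synset("organization.n.01")}

def is_possible_conceiver(word):
    """Keep a candidate if any noun sense has a person/organization hypernym."""
    for synset in wn.synsets(word, pos=wn.NOUN):
        # closure() walks the hypernym hierarchy upward from this sense.
        ancestors = set(synset.closure(lambda s: s.hypernyms()))
        if ancestors & CONCEIVER_HYPERNYMS:
            return True
    return False

candidates = ["police", "table", "government"]
filtered = [w for w in candidates if is_possible_conceiver(w)]  # drops "table"
```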
"Conceivers that don't have any neighboring conceivers are directly attached to the Author (in practice, if there is no other conceiver in the same paragraph, we attach that conceiver to the Author).", "For the rest, we design a conceiver attachment task similar to the event attachment task.", "The modal strength of conceivers is decided by the modal strength of their CIPs, which is available after Task 3.", "For the conceivers that don't have an associated CIP, such as named entities, we ask crowd-workers to annotate their modal strength.", "Our basic quality control strategy involves using two tests to select crowd-workers: a qualification test and a survival test.", "Workers need to first pass the qualification test in order to be eligible for working on a task.", "In addition, test questions with ground truth answers are embedded in each HIT.", "Workers need to maintain a high cumulative accuracy throughout the annotation process to remain eligible for the task.", "For the event identification subtask, our qualification test threshold and survival test threshold are both set to 80% accuracy.", "For the event attachment task, they are both set to 70% as it is a more challenging task.", "We also developed a stratified payment approach to incentivize high-quality work.", "There is no guarantee that workers who have passed the qualification test will continue to perform well in the actual annotation task.", "To incentivize high-quality annotation, we adopt a stratified payment approach for event modal strength annotation.", "We offer a base payment of $0.01 per question, increase it to $0.02 if the worker achieves a 70% accuracy on the test questions in that HIT, and further increase it to $0.03 if the worker achieves a 90% accuracy.", "We measure annotation quality with two metrics.", "First, we compute the agreement among crowd-workers using Worker Agreement With Aggregate (WAWA) (Ning et al., 2018), which measures the average agreement between each crowd-worker and the aggregate answer.", "Second, we compare crowd-workers' annotation with the annotation of an expert annotator and compute the F-score.", "Table 2 presents the WAWA scores for each subtask.", "The statistics show good agreement among crowd-workers for all subtasks, with a moderately lower agreement for Task 4, the construction of the conceiver superstructure.", "We also evaluate the agreement between the majority opinion of crowd-workers and the expert annotator on an 11-document subset that is annotated by both the expert annotator and crowd-workers.", "In this evaluation, we attempt to measure the agreement between the crowd-sourced modal dependency structures and the modal dependency structures annotated by the expert annotator.", "After assembling the modal dependency structures from the full annotation pipeline, we also report the overall agreement between crowd-workers and the expert annotator in Table 3.", "Our overall unlabeled attachment agreement (UAA) is 78.6%, and labeled attachment agreement (LAA) is 72.1%.", "Since we decompose the MDS annotation into smaller steps, the annotation of an earlier step will affect that of a later step.", "For instance, the results of the event identification step (Task 1) are used as input to set up the event attachment (Task 2) and modal strength annotation (Task 3).", "An incorrect annotation in Task 1 will propagate to the other tasks that are based on the event annotation.", "Table 3 presents the agreement statistics for the subtasks.",
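The WAWA agreement metric described above can be computed as in this minimal sketch, assuming categorical answers and a majority-vote aggregate (ties broken arbitrarily); the averaging scheme is one reasonable reading of the metric, not the authors' exact implementation.

```python
from collections import Counter

def wawa(annotations):
    """Worker Agreement With Aggregate: take the majority-vote answer per
    question, then average each worker's agreement rate with that aggregate.

    `annotations`: list of per-question dicts mapping worker -> answer."""
    hits, totals = Counter(), Counter()
    for question in annotations:
        majority = Counter(question.values()).most_common(1)[0][0]
        for worker, answer in question.items():
            hits[worker] += int(answer == majority)
            totals[worker] += 1
    per_worker = [hits[w] / totals[w] for w in totals]
    return sum(per_worker) / len(per_worker)

data = [{"w1": "event", "w2": "event", "w3": "not_event"},
        {"w1": "not_event", "w2": "not_event", "w3": "not_event"}]
print(wawa(data))  # average per-worker agreement with the aggregate
```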
"It is important to note that the agreement statistics for the subtasks include disagreements in event identification and are thus generally a bit lower than those of the stand-alone tasks.", "We downloaded the coronavirus news data set using the AYLIEN API (https://aylien.com/blog/free-coronavirus-news-dataset).", "We sampled 353 news articles from 11 media sources, including Business Standard, Business Insider, NBC News, The New York Times, Reuters, The Guardian, The Washington Post, CNN, Fox News, Yahoo News and Wikinews.", "As shown in Table 4, our MDS data set has 24,016 events and 2,938 conceivers, a much larger corpus than FactBank (Saurí and Pustejovsky, 2009), which has 208 articles and 9,488 events.", "A more detailed breakdown shows that for event attachment annotation, 29% of events are attached to a non-Author conceiver, and 66% of events are attached to the Author.", "The rest of the events either have an unspecified conceiver or an event as parent.", "In this section, we introduce our parser for modal dependency parsing.", "Our modal dependency parser is inspired by Zhang and Xue (2018a), who introduce a ranking model for temporal dependency parsing.", "As the temporal dependency tree used to train their model is similar in structure to our modal dependency tree, it is reasonable to adopt their model as the starting point.", "Our model is also inspired by Ross et al. (2020), who extend Zhang and Xue (2018a) by replacing the Bi-LSTM encoder with contextualized neural language models, such as BERT (Devlin et al., 2019).", "Specifically, our modal dependency parser constructs a modal dependency tree by incrementally identifying the parent node for each child node in textual order.", "For each child node, the parser ranks the candidate parent nodes and selects the one with the highest score as its parent node.", "Since the nodes in a modal dependency tree are events or conceivers, to parse a text into a modal dependency tree, we need to first extract the events and conceivers, then build the modal dependency structure.", "Since Zhang and Xue (2018a)'s pipeline system suffers from error propagation, we adopt a multi-task learning approach that jointly trains the event and conceiver extraction and structure building components.", "3.1 Model Description. Figure 2 shows the model architecture.", "Given an input text, we obtain the token representation $w_k$ for each token by encoding the text with a pre-trained BERT encoder (Devlin et al., 2019).", "To fit long documents into BERT, we treat one document as a batch, and encode it sentence by sentence.", "This contextualized token representation is shared by the mention extraction stage and the structure building stage.", "We then label the $k$-th token with a BIO tagger by mapping $w_k$ to a tag logit using a standard feed-forward neural network.", "In our experiments, we use a single tagger to extract both events and conceivers because recognizing certain events, such as reporting events (e.g., said), might be helpful to extract conceivers.",
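A minimal PyTorch sketch of the joint model's two heads over the shared BERT token representations: the BIO tagger just described, and the pair scorer for structure building that is detailed just below. All dimensions, the label inventory, and module names are our own illustrative choices, not the authors' code.

```python
import torch
import torch.nn as nn

HIDDEN, NUM_BIO, NUM_LABELS = 16, 5, 4   # illustrative sizes

class JointHeads(nn.Module):
    """Two heads over shared contextualized token representations:
    a BIO tagger for events/conceivers, and a pair scorer that rates
    each (child, candidate parent, relation label) triple."""
    def __init__(self):
        super().__init__()
        self.tagger = nn.Linear(HIDDEN, NUM_BIO)
        self.pair_ffn = nn.Sequential(
            nn.Linear(2 * HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, NUM_LABELS))

    def forward(self, tokens, child, parents):
        tag_logits = self.tagger(tokens)                  # (T, NUM_BIO)
        pair = torch.cat([child.expand_as(parents), parents], dim=-1)
        pair_logits = self.pair_ffn(pair).reshape(1, -1)  # softmax over all pairs
        return tag_logits, pair_logits

model = JointHeads()
tokens = torch.randn(12, HIDDEN)                  # shared BERT representations
child, parents = torch.randn(HIDDEN), torch.randn(6, HIDDEN)
tag_logits, pair_logits = model(tokens, child, parents)

# Joint objective L = L_e + lambda * L_p, lambda weighting the two losses.
lam, ce = 1.0, nn.CrossEntropyLoss()
loss = ce(tag_logits, torch.randint(0, NUM_BIO, (12,))) \
       + lam * ce(pair_logits, torch.tensor([9]))
```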
"In the structure building stage, the goal is to find the most appropriate parent for each event and conceiver node.", "In theory, every node in the document can be a candidate parent for a given child node.", "To reduce the search space, we only consider candidate parent nodes within a 5-sentence window of the child node plus two meta nodes (the Author and Root nodes) as candidate parents, and include at most $n$ candidate parents for each child.", "The representation for node $x_i$ is the concatenation of the start token representation, the end token representation, and the span representation of the node.", "Following Zhang and Xue (2018a), we use an attention vector (Bahdanau et al., 2016) computed over the tokens in node span $i$ as its span representation.", "Let $w_t$ be the tokens in node $i$; the span representation is computed as follows: $\alpha_t = \mathrm{FFN}(w_t)$, $a_{i,t} = \frac{\exp[\alpha_t]}{\sum_{k=\mathrm{start}(i)}^{\mathrm{end}(i)} \exp[\alpha_k]}$, with the span representation being the $a_{i,t}$-weighted sum of the token representations.", "Figure 2: Model architecture for the joint model.", "The pair representation of node $i$ and one of its candidate parents $y'_i$ is the concatenation of their node representations.", "The pair score is computed by sending the pair representation of node $i$ and $y'_i$ to a feed-forward neural network: $s_{i, y'_i, l_k} = \mathrm{FFN}_p([x_i, y'_i])$, where $\mathrm{FFN}_p$ denotes the feed-forward neural network that computes a pair score.", "$s_{i, y'_i, l_k}$ represents the score of node $y'_i$ being the parent of node $i$ with the modal relation label $l_k$.", "After computing all the pair scores for node $i$, we concatenate them into a vector $c_i$.", "Then a softmax function is applied to get a probability distribution for node $i$ having each candidate parent (with each relation).", "We use cross-entropy loss for both mention extraction and dependency parsing.", "We optimize the following joint loss during training: $L = L_e + \lambda L_p$, where $L_e$ and $L_p$ refer to the extraction loss and the parsing loss respectively, and $\lambda$ is a hyper-parameter controlling the weights between extraction and parsing.", "Data Split. Out of the 353 documents in our MDS dataset, we use 289 of them as training data, 32 as validation data, and 32 as testing data in our experiments.", "The test set includes the 11 documents that are annotated by the expert annotator.", "Table 5 shows the data split.", "The expert annotator also adjudicated some of the more challenging aspects of MDS annotation to improve the quality of the validation and test sets.", "Implementation Details. We use bert-large-cased (Wolf et al., 2020) for all experiments.", "For each child, we include at most $n = 16$ candidate parent nodes.", "Our hyper-parameter settings can be found in the Appendix.", "Results. We evaluate our joint model against the pipeline model.", "The pipeline model trains the event and conceiver mention extractor separately from the structure building component, without sharing the BERT encoder.", "We use micro-average F1 score as the evaluation metric, and for the mention extraction task, an event or conceiver is only correctly identified if there is an exact match between the extracted mention and the gold mention.", "As we can see in Table 6, the pipeline model and the joint model achieve similar results on event extraction, indicating that event extraction does not benefit from a joint model.", "This shows that events can by and large be extracted independently, without taking into account their relations to their conceivers.",
"However, the joint model outperforms the pipeline model in conceiver extraction by 0.2% and 2.9% on the development and test set respectively.", "This improvement is consistent with the observation that conceiver extraction is closely related to the structure building stage of MDS parsing, because an entity (e.g. a person or organization) is a conceiver only if it serves as the conceiver of an event or another conceiver in the structure building stage.", "Not all entities in a text are conceivers.", "In both models, the conceiver extraction scores are lower than the event extraction scores due to the scarcity of conceivers in the data set.", "When evaluating the performance of the structure building component of the parsing model with gold mentions as input (the Parsing (gold) column), the pipeline model achieves slightly higher scores than the joint model.", "However, when using the automatically extracted events and conceivers from the mention extraction stage as input to the structure building stage (the Parsing (auto) column), in a setting that really matters for downstream applications, the joint model outperforms the pipeline model on both the development and test set, by 0.6% and 1.5% respectively.", "This shows that the joint model reduces inconsistent predictions between the mention extraction and structure building stages resulting from a pipeline model not sharing parameters, and improves the overall result.", "Since the 11-document subset of the test set is annotated by both the expert and crowd-workers, we can conduct a comparative error analysis of the system and the crowd-workers and see if they make the same mistakes.", "For this particular analysis, we focus on the structure building stage with gold event and conceiver mentions as input.", "We only look at whether a child event or conceiver is attached to its correct parent.", "In the majority of cases, the Author node is the conceiver of a child node.", "However, finding the non-Author conceiver for a child is more revealing about the effectiveness of the model.", "So we focus on nodes whose correct conceivers are not the Author, and evaluate both the crowd-workers' annotation and the system output against that of the expert annotator.", "In this subset of the test set, 317 nodes have a non-Author conceiver as parent.", "Among these, crowd-workers disagree with the expert annotator in 102 cases, while the system disagrees with the expert annotator in 122 cases (the last row of Table 7).", "This shows that this is a challenging aspect of the annotation for both crowd-workers and the system, with the system performing worse than crowd-workers.", "Out of the 317 nodes, 59 of them have the conceiver in a different sentence while the remaining 258 have the conceiver in the same sentence.", "Table 6: Comparison of the pipeline model and the joint model (micro-F1, dev/test). Event ID: Pipeline 92.7/90.9, Joint 92.8/90.8. Conc ID: Pipeline 70.9/67.5, Joint 71.1/70.4. Parsing (gold): Pipeline 80.7/80.6, Joint 79.4/80.1. Parsing (auto): Pipeline 69.7/67.5, Joint 70.3/69.0.", "We can see from Table 7 that among the cases where the child is in the same sentence as the parent, the system and the crowd-workers disagree with the expert annotator to a similar extent, 32.6% vs 35.3%.", "However, for cases where the child is in a different sentence from its parent, there is a much bigger difference in their disagreement with the expert annotator (30.5% vs. 52.5%).",
"This shows that while crowd-workers can identify conceivers from a different sentence just as easily as from the same sentence, it is much more difficult for the system to attach a child node to a distant conceiver.", "Addressing this challenge will be crucial for further improving the performance of the model.", "Factuality Annotation. While there is a significant amount of research on annotating factuality or modality (Saurí and Pustejovsky, 2009; Diab et al., 2009; Matsuyoshi et al., 2010; Soni et al., 2014; Lee et al., 2015; Prabhakaran et al., 2015; Minard et al., 2016), factuality and opinions (Son et al., 2014), senses of modal verbs (Ruppenhofer and Rehbein, 2012), and credibility in social media (Mitra and Gilbert, 2015), a few of these works are particularly related to ours.", "Our annotation is closely related to FactBank (Saurí and Pustejovsky, 2009) in that both annotate the level of certainty that the source asserts over an event, but FactBank does not explicitly represent these relations in a hierarchical structure and is annotated by expert annotators.", "Like our work, Lee et al. (2015) also annotate event factuality via crowdsourcing, but they only annotate the level of certainty from the perspective of the author, to the exclusion of non-author conceivers.", "Our work is based on Vigus et al. (2019), who first came up with the modal dependency annotation scheme.", "However, they only annotate 377 events from 6 documents in a pilot effort with expert annotators.", "We have shown that it is feasible for crowd-workers to annotate modal dependency structures with considerable consistency and produce modal dependency annotation at scale.", "Automatic Factuality Assessment. Existing work typically casts factuality assessment as a classification or regression problem.", "For example, Saurí and Pustejovsky (2012) and Prabhakaran et al. (2015) adopt rule-based and feature-based machine learning approaches to factuality classification.", "More recently, Qian et al. (2018) predict event factuality via a Generative Adversarial Network based model.", "Rudinger et al. (2018) design two LSTM based models, and Pouran Ben Veyseh et al. (2019) use a graph-based neural network model for event factuality prediction.",
"Our work departs from previous practice and recasts factuality assessment as modal dependency parsing, to simultaneously predict the source and its level of certainty over an event, and to expose both for downstream applications.", "In this paper, we proposed a novel approach to factuality assessment by casting it as a modal dependency parsing problem.", "We first built a large data set annotated with modal dependency structures via crowdsourcing, and demonstrated the quality of this data set with a careful evaluation of each aspect of the annotation.", "We then developed the first modal dependency parser, adopting a joint learning approach to alleviate error propagation, and demonstrated its advantage over the pipeline approach in an end-to-end evaluation.", "Future work involves further improving the accuracy of the parser and applying the parser to perform large-scale factuality assessments of events in news media.", "We thank the anonymous reviewers for their comments.", "Marie-Catherine de Marneffe, Christopher D. Manning, and Christopher Potts. 2012. Did it happen? The pragmatic complexity of veridicality assessment. Computational Linguistics, 38(2):301-333.", "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.", "Tanushree Mitra and Eric Gilbert. 2015. CREDBANK: A large-scale social media corpus with associated credibility annotations. In ICWSM.", "This work is supported in part by a grant from the IIS Division of the National Science Foundation (Award No. 1763926) entitled 'Building a Uniform Meaning Representation for Natural Language Processing'.", "All views expressed in this paper are those of the authors and do not necessarily represent the view of the National Science Foundation.", "This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2021-20102700002.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.", "For our annotation, we use publicly available data sources, partly from Wikinews and partly from a publicly available data set that was aggregated, analyzed, and enriched by AYLIEN using AYLIEN's News Intelligence Platform.", "We use the Amazon MTurk platform to collect the annotation, a common practice in the NLP community.", "In annotation tasks 1, 2 and 4, we pay $0.02-$0.03 per question.", "In task 3, we adopt a stratified payment approach to incentivize high-quality work.", "We pay $0.01-$0.03 per question, based on the quality of the annotation.", "We believe our crowd workers are fairly compensated.", "The expert annotator is one of the authors of this paper." ]
[ "objective", "abstain", "method", "objective", "objective", "objective", "method", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "result", "objective", "objective", "result", "result", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "method", "objective", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "method", "abstain", "method", "abstain" ]
[ "ABC: Attention with Bounded-Memory Control.", "Hao Peng, Jungo Kasai, Nikolaos Pappas, Dani Yogatama, Zhaofeng Wu, Lingpeng Kong, Roy Schwartz, Noah A. Smith (Computer Science & Engineering, University of Washington; Allen Institute for Artificial Intelligence; Computer Science & Engineering, Hebrew University of Jerusalem; Computer Science, The University of Hong Kong).", "Abstract: Transformer architectures have achieved state-of-the-art results on a variety of natural language processing (NLP) tasks.", "However, their attention mechanism comes with a quadratic complexity in sequence lengths, making the computational overhead prohibitive, especially for long sequences.", "Attention context can be seen as a random-access memory with each token taking a slot.", "Under this perspective, the memory size grows linearly with the sequence length, and so does the overhead of reading from it.", "One way to improve the efficiency is to bound the memory size.", "We show that disparate approaches can be subsumed into one abstraction, attention with bounded-memory control (ABC), and that they vary in their organization of the memory.", "ABC reveals new, unexplored possibilities.", "First, it connects several efficient attention variants that would otherwise seem distinct.", "Second, this abstraction gives new insights: an established approach (Wang et al., 2020b) previously thought to not be applicable in causal attention, actually is.", "Last, we present a new instance of ABC, which draws inspiration from existing ABC approaches, but replaces their heuristic memory-organizing functions with a learned, contextualized one.", "Our experiments on language modeling, machine translation, and masked language model finetuning show that our approach outperforms previous efficient attention models; compared to strong transformer baselines, it significantly improves the inference time and space efficiency with no or negligible accuracy loss.", "Transformer architectures are now central in natural language processing (Vaswani et al., 2017).", "They rely on the attention mechanism (Bahdanau et al., 2015) to contextualize the input.", "The context can be seen as a random access memory whose size linearly grows with the sequence length; each query reads from it using a softmax-normalized linear combination, with overhead linear in the memory size.", "(This work was done while Zhaofeng Wu and Nikolaos Pappas were at the University of Washington.)", "This amounts to a quadratic complexity overall, making transformers' computational overhead prohibitive, especially for long sequences.", "One way to improve attention's efficiency is to bound its memory size.", "Imposing a constant-sized constraint over the memory ensures that reading from it has constant time and space overhead, yielding a linear overall complexity in sequence lengths.", "This is in fact a common strategy adopted by several recent works.", "In this work, we show that some of these works are closely connected in ways that, to date, have gone unremarked.", "We propose attention with bounded-memory control (ABC), a unified abstraction over them.", "In ABC, constant-sized memories are organized with various control strategies, e.g., induced from heuristic patterns (Beltagy et al., 2020; Zaheer et al., 2020; Ainslie et al., 2020; Rae et al., 2020, inter alia), locality assumptions (Parmar et al., 2018; Liu et al., 2018), or positions (Wang et al., 2020b).", "These strategies, by and large, are context-agnostic.", "In response to this, we propose ABCMLP, a particular instance of ABC that learns a contextualized control strategy from data.",
"Specifically, ABCMLP uses a neural network to determine how to store each token into the memory (if at all).", "Compared to previous bounded-memory models, it strikes a better trade-off between accuracy and efficiency: controlling for accuracy, ABCMLP can get away with much smaller memory sizes.", "ABC models (including ABCMLP) come with a linear complexity in sequence lengths, and admit recurrent computation graphs in causal attention (self-attention over the prefix).", "Therefore they are appealing choices in a variety of applications, including text encoding, language modeling and text generation.", "This leads to a surprising finding.", "Linformer (Wang et al., 2020b), an established efficient attention method, was previously thought not to be applicable in causal attention or autoregressive decoding (Tay et al., 2020).", "Through the ABC view, we show that it actually is, and achieves competitive performance in our machine translation experiments.", "ABC connects existing models that would otherwise seem distinct, reveals new insights into established methods, and inspires new efficient attention architectures.", "We explore its applications in transformers, as a drop-in substitute for the canonical softmax attention.", "ABC offers a novel lens that can help future research in the analysis of transformers, where the theoretical insights are still catching up with the empirical success.", "Experiments on language modeling, machine translation, and masked language model finetuning show that our ABCMLP model outperforms previous ABC approaches in accuracy with a much smaller memory size.", "Compared to the strong transformer baseline, ABCMLP achieves a significant speedup and memory savings at inference time, with no or negligible accuracy loss.", "The efficiency improvements are more prominent for long sequences, suggesting that the asymptotic savings are even more appealing in applications involving long sequences.", "We release our code at https://github.com/Noahs-ARK/ABC .", "This section presents our outer-product memory perspective of attention, which allows for a smooth transition to later discussion.", "In attention, a sequence of queries $\{q_i\}_{i=1}^N$ attends to a memory with $N$ slots, each storing a key and value pair: $K = [k_1, \ldots, k_N]^{\top}$, $V = [v_1, \ldots, v_N]^{\top} \in \mathbb{R}^{N \times d}$ (the number of queries and key-value pairs may differ, e.g., in the cross attention of a sequence-to-sequence model).", "Query $q$ reads from the memory using a softmax-normalized linear combination, producing a $d$-dimensional vector: $\mathrm{attn}(q, \{k_i\}, \{v_i\}) = V^{\top} \mathrm{softmax}(Kq)$.", "This takes $O(N)$ time and space.", "When the attention with $N$ queries can be parallelized (e.g., in text encoding), it takes linear time and quadratic space; when it cannot be (e.g., in decoding), it takes quadratic time and linear space.", "The memory can be equivalently represented as sums of vector outer products: $K = IK = \sum_{i=1}^N e_i \otimes k_i$, $V = \sum_{i=1}^N e_i \otimes v_i$.", "$I$ is the identity matrix, and $\otimes$ denotes the outer product: $[x \otimes y]_{i,j} = x_i y_j$.", "The $N$-dimensional vectors $\{e_i\}$ form the standard basis: $e_i$ has the $i$th element being one and the others zeros.", "We can view $e_i$ as control vectors that determine where to store $k_i$ and $v_i$: $e_i \otimes k_i = [\underbrace{0, \ldots, 0}_{i-1}, 1, \underbrace{0, \ldots, 0}_{N-i}] \otimes k_i = [\mathbf{0}_{d(i-1)}; k_i; \mathbf{0}_{d(N-i)}]$.",
"The $N$-by-$d$ matrix on the last line has its $i$th row being $k_i$ and all others zeros; in this sense, $k_i$ is stored in the $i$th slot by $e_i$, not affecting others.", "A straightforward way to improve attention's efficiency is to bound its memory size.", "Our outer-product view of attention provides a straightforward way to devise this, by replacing $\{e_i\}$ with control vectors that select $n \ll N$ vectors to attend to.", "We dub this approach attention with bounded-memory control (ABC).", "Concretely, let $\tilde{K}, \tilde{V} \in \mathbb{R}^{n \times d}$ denote a constant-size memory with $n$ slots, with $n$ set a priori.", "$\{\phi_i \in \mathbb{R}^n\}_{i=1}^N$ denotes a sequence of control vectors.", "The output is calculated by attending to $\tilde{K}$ and $\tilde{V}$: $\mathrm{ABC}(q, \{k_i\}, \{v_i\}, \{\phi_i\}) = \tilde{V}^{\top} \mathrm{softmax}(\tilde{K} q)$ (4).", "We will discuss various ways to construct $\{\phi_i\}$ in the subsequent sections.", "Reading from the memory takes constant $O(n)$ time and space; therefore ABC's overall complexity is $O(Nn)$, linear in the sequence length.", "Eq. 3 offers an equivalent recurrent computation, which is particularly useful in causal attention where only the prefix is looked at: $\tilde{K}_{t+1} = \tilde{K}_t + \phi_{t+1} \otimes k_{t+1}$ (5), and likewise for $\tilde{V}_t$.", "In what follows, we study several existing efficient attention approaches and show that they are in fact instances of the ABC abstraction.", "Linformer (Wang et al., 2020b) is an established efficient transformer variant that has proven successful in masked language modeling and text encoding.", "It assumes fixed-length inputs and learns a low-rank approximation of the attention weights.", "A learned $n$-by-$N$ matrix $W^{\mathrm{LF}}$ down-projects the $N$-by-$d$ dimensional keys and values along the timestep dimension, to an $n$-by-$d$ memory: $\tilde{K}^{\mathrm{LF}} = W^{\mathrm{LF}} K$, $\tilde{V}^{\mathrm{LF}} = W^{\mathrm{LF}} V$; they are then used for attention computation with Eq. 4.", "This yields a linear complexity in the input length.", "Linformer is an ABC instance with $\phi^{\mathrm{LF}}_i = W^{\mathrm{LF}}_{:,i}$ (the $i$th column), and in this sense, it learns a control vector for each position.", "Previous works have noted that Linformer cannot be efficiently applied in causal attention (Table 1 of Tay et al., 2020).", "Indeed, it is less straightforward to avoid mixing the future with the past when projecting along the timestep dimension.", "ABC reveals that, in fact, Linformer is applicable in causal attention.", "Like all ABC models, it admits a linear-complexity recurrent computation (Eq. 5): $\tilde{K}^{\mathrm{LF}}_{t+1} = \tilde{K}^{\mathrm{LF}}_t + \phi^{\mathrm{LF}}_{t+1} \otimes k_{t+1}$.", "This confirms ABC's benefits: it reveals new insights about existing models and reassesses their applications and impact.", "Our experiments show that Linformer achieves competitive performance in machine translation.", "Improving attention's efficiency with clustering has received an increasing amount of interest (Kitaev et al., 2020; Roy et al., 2020; Wang et al., 2020a, inter alia).", "ABC bears interesting connections to clustering-based methods.", "Here we discuss an approach that closely follows Vyas et al. (2020), except that it clusters keys and values instead of queries, and only attends to the centroids to reduce the effective context size.",
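Eqs. 4 and 5 above admit a direct implementation. Below is a minimal NumPy sketch of the causal variant, with randomly chosen control vectors standing in for any of the ABC strategies discussed here; shapes are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def abc_attention(queries, keys, values, phis, n):
    """Bounded-memory attention: the memory accumulates outer products
    phi_t (x) k_t (Eq. 5), so it stays n-by-d regardless of sequence
    length, and each read (Eq. 4) costs O(n)."""
    d = keys.shape[1]
    K_tilde = np.zeros((n, d))
    V_tilde = np.zeros((n, d))
    outputs = []
    for q, k, v, phi in zip(queries, keys, values, phis):
        K_tilde += np.outer(phi, k)                       # Eq. 5 update
        V_tilde += np.outer(phi, v)
        outputs.append(V_tilde.T @ softmax(K_tilde @ q))  # Eq. 4 read
    return np.stack(outputs)

N, d, n = 10, 8, 4
out = abc_attention(np.random.randn(N, d), np.random.randn(N, d),
                    np.random.randn(N, d), np.random.randn(N, n), n)
```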
"Formally, keys and values are grouped into $n < N$ clusters $\{\tilde{k}^{\mathrm{CL}}_j\}_{j=1}^n$, $\{\tilde{v}^{\mathrm{CL}}_j\}_{j=1}^n$ (we use $\tilde{k}^{\mathrm{CL}}_j$ to denote both the $j$th cluster and its centroid).", "Let an $N$-by-$n$ binary matrix $M$ denote the cluster membership shared between keys and values.", "$M_{i,j} = 1$ iff $k_i$ is assigned to cluster $\tilde{k}^{\mathrm{CL}}_j$ and $v_i$ to $\tilde{v}^{\mathrm{CL}}_j$.", "The $j$th centroid for the keys is $\tilde{k}^{\mathrm{CL}}_j = \sum_{i=1}^N \frac{M_{i,j}}{\sum_{\ell=1}^N M_{\ell,j}} k_i$ (6).", "The last line indicates that this model is an instance of ABC: $\phi_i = \sum_{j=1}^n \big( M_{i,j} / \sum_{\ell=1}^N M_{\ell,j} \big)\, e_j$.", "The stack of centroids can be seen as the constant-size memory.", "Putting aside the clustering overhead (i.e., constructing $M$ and computing centroids), it has a linear complexity in the sequence length.", "In some applications, being able to remove entries from the memory can be beneficial: clearing up older context frees slots for more recent ones, promoting a locality inductive bias.", "ABC offers the capability to do so, if augmented with an additional matrix multiplication.", "We use sliding-window attention as an example.", "Attending to the most recent $n$ input tokens (Beltagy et al., 2020; Zaheer et al., 2020; Sukhbaatar et al., 2021, inter alia) can be seen as a first-in-first-out queue that pops out the oldest token while pushing in the most recent one: $\tilde{K}^{\mathrm{WD}}_t = [k_{t-n+1}, \ldots, k_t]^{\top}$.", "The pop operation can be achieved by multiplying by an $n$-by-$n$ upper shift matrix: $U_{i,j} = \delta_{i+1,j}$, with $\delta$ being the Kronecker delta (i.e., $U$ has ones only on the superdiagonal and zeros elsewhere).", "Left-multiplying $U$ against $\tilde{K}^{\mathrm{WD}}_t$ shifts its rows one position up, with zeros appearing in the last: $U \tilde{K}^{\mathrm{WD}}_t = U [\underbrace{k_{t-n+1}, \ldots, k_t}_{n}]^{\top} = [\underbrace{k_{t-n+2}, \ldots, k_{t-1}, k_t}_{n-1}, \mathbf{0}]^{\top} \in \mathbb{R}^{n \times d}$.", "Then the most recent token can be put into the slot freed up: $\tilde{K}^{\mathrm{WD}}_{t+1} = U \tilde{K}^{\mathrm{WD}}_t + e_n \otimes k_{t+1}$.", "$U$ and $\phi_t = e_n$ ensure a first-in-first-out queue.", "Dilated and stride convolution patterns (Beltagy et al., 2020) can be similarly recovered (A.4).", "Recurrently multiplying $U$ simulates the discrete pop operation (Grefenstette et al., 2015; Joulin and Mikolov, 2015; Yogatama et al., 2018) in a differentiable way.", "This is reminiscent of recurrent neural networks, while in this case $U$ is never updated as parameters.", "It is exciting to explore learning $U$, but this is beyond the scope of this work.", "Discussion. Besides the models discussed above, certain variants of Rae et al. (2020) and sparse attention patterns (local-to-global attention; Beltagy et al., 2020; Zaheer et al., 2020; Ainslie et al., 2020) can also be seen as instances of ABC (A).",
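The shift-matrix construction above is easy to verify numerically. A small NumPy sketch, with illustrative sizes:

```python
import numpy as np

def shift_matrix(n):
    """Upper shift matrix U with ones on the superdiagonal:
    U @ M moves every row of M one position up and zeroes the last row."""
    return np.eye(n, k=1)

n, d = 4, 3
U = shift_matrix(n)
K_tilde = np.arange(n * d, dtype=float).reshape(n, d)  # current window of keys
e_n = np.zeros(n); e_n[-1] = 1.0
k_new = np.full(d, 9.0)

# FIFO update: pop the oldest key, push the newest into the freed last slot.
K_next = U @ K_tilde + np.outer(e_n, k_new)
```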
"ABC provides a unified perspective of them, and at the same time points out their limitations: their control strategies are context-agnostic.", "In response to this, in 4 we propose to learn a contextualized strategy from data.", "Table 1 analyzes various ABC models, and Table 2 details their complexity.", "The ABC abstraction connects several existing approaches that would otherwise seem distinct.", "This inspires the design of new architectures.", "We hypothesize that learning a contextualized strategy can achieve better performance.", "This section introduces ABCMLP.", "It parameterizes $\phi_i$ with a single-layer multi-layer perceptron (MLP) that takes as input the token's representation $x_i$ and determines which slots to write it into and how much: $\phi_i = \exp(W x_i)$.", "Matrix $W$ is learned.", "$\exp$ is an elementwise activation function (we experiment with other activations in C.2).", "The motivation is to allow for storing a fractional (but never negative) amount of input into the memory.", "Using a non-negative activation, however, has a drawback: the scales of $\sum_i \phi_i \otimes k_i$ and $\sum_i \phi_i \otimes v_i$ would grow with the sequence length, making training less stable.", "To overcome this, we divide the $\phi_i$ vectors by their sum.", "This functions as normalization and aims to offset the impact of varying sequence lengths (here encoder self-attention or cross attention is assumed, and the normalization sums over the entire sequence; causal attention is slightly different, normalizing by the sum over the prefix instead, $\phi_i / \sum_{j=1}^i \phi_j$, which does not require access to future tokens; B.1 details a linear-complexity computation graph of causal $\phi_i$).", "It admits the recurrent computation graph as in Eq. 5, and has a linear complexity in the sequence length.", "A key design choice of ABCMLP is that its $\phi_i$ depends only on the current input $x_i$.", "This helps (1) keep the recurrent computation efficient in practice (Lei et al., 2018), and (2) make it applicable in not only encoder self-attention and cross attention, but also causal attention.", "Concurrently to this work, Goyal et al. (2021) and Ma et al. (2021) also proposed methods to learn contextualized control.", "They compute $\phi_i$ from the previous layer's memory, revealing the full sequence to the control vectors (both are instances of ABC; A.5).", "As a result, these two approaches are unsuitable for causal attention.", "ABCMLP, like other ABC models, can be used as a drop-in replacement for the canonical softmax attention, and we apply its multihead variant in transformers.", "With proper parameter sharing, the number of additional parameters ABCMLP incurs is small: inspired by Wang et al. (2020b), we tie the $\phi$-MLP's parameters across different layers, which adds less than 1% parameters to the models.",
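A sketch of the ABCMLP control computation just described, in the non-causal (whole-sequence) normalization setting. Shapes and names are illustrative, and this assumes the reconstructed form $\phi_i = \exp(W x_i)$ given above:

```python
import torch

def abc_mlp_phi(X, W):
    """Contextualized control: phi_i = exp(W x_i), then divide by the sum
    of the phi vectors over positions to offset varying sequence lengths.
    X: (N, hidden) token representations; W: (n, hidden), learned."""
    phi = torch.exp(X @ W.T)                   # (N, n), non-negative
    return phi / phi.sum(dim=0, keepdim=True)  # normalize over positions

N, hidden, n = 10, 16, 4
X = torch.randn(N, hidden)
W = torch.randn(n, hidden)      # tied across layers in the full model
phi = abc_mlp_phi(X, W)         # row i says how token i is written to memory
```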
"ABCMLP: context-agnostic then context-dependent attention.", "We now dissect ABCMLP and show that it can be seen as a cascade of two attention mechanisms: one with a learned context-agnostic pseudo-query followed by one with a context-dependent query.", "Our analysis starts with a one-dimensional example; the conclusion generalizes to higher-dimensional cases.", "Example 1. Consider ABCMLP with a single memory slot ($n = 1$).", "It is parameterized with a learned vector $w$, and $\phi_i = \exp(w^{\top} x_i) / \sum_{j=1}^N \exp(w^{\top} x_j)$.", "Since $\phi_i$ is a scalar here, $\phi_i \otimes k_i = \phi_i k_i$.", "In other words, $\tilde{K}$ uses $w$ as a pseudo-query to attend to $\{x_i\}$ and $\{k_i\}$.", "Likewise, $\tilde{V} = \mathrm{attn}(w, \{x_i\}_{i=1}^N, \{v_i\}_{i=1}^N)$.", "Despite its similarity to the standard softmax attention, Example 1 has a more efficient linear complexity in sequence lengths.", "$w$'s being context-independent is the key to the savings.", "Table 2 details its complexity.", "Example 1's conclusion generalizes to higher-dimensional cases: the $j$th dimension of $\{\phi_i\}$ attends to $\{x_i\}$ and $\{k_i\}$ using the $j$th row of $W$ as the context-independent pseudo-query; $n$ such attention mechanisms run in parallel, stacking the results into the $n$-by-$d$ memory $\tilde{K}$ and $\tilde{V}$.", "Intuitively, it is the real queries $\{q_i\}$ that encode what information is useful for the prediction task.", "Without access to them, ABCMLP summarizes the input $n$ times using different pseudo-queries, aiming to preserve enough information in the memory for onward computation.", "The attention output is calculated with the context-dependent real queries using Eq. 4.", "B.2 presents a detailed derivation.", "Despite starting from distinct motivations, ABCMLP closely relates to hierarchical attention (HA; Yang et al., 2016).", "HA summarizes the context into higher-level representations with a cascade of attention mechanisms, e.g., words to sentences, and then to documents.", "ABCMLP applies two types of attention.", "The first learns context-agnostic pseudo-queries and attends to the same sequence $n$ times in parallel, while the second retrieves from the memory with real queries.", "HA, in contrast, summarizes non-overlapping segments at each level.", "The learned pseudo-queries closely relate to the inducing-point method in set attention (ISA; Lee et al., 2019).", "ISA applies a non-linear feedforward network between a cascade of two attention modules.", "This precludes the outer-product memory computation and efficient recurrences in ABC.", "Another line of work linearizes attention through kernel tricks and also applies bounded memory: their feature map dimensions are analogous to memory sizes.", "They substitute the softmax with approximations (Peng et al., 2021; Choromanski et al., 2021), heuristically designed (Katharopoulos et al., 2020; Schlag et al., 2021), or learned (Kasai et al., 2021b) functions.", "ABCMLP keeps the softmax, but over a smaller constant-sized context.", "This can be useful in practice: (1) ABC provides a unified perspective of several efficient attention methods, allowing for borrowing from existing wisdom to design new architectures; (2) it draws a close analogy to the canonical softmax attention, and is better-suited as its drop-in substitute in various application settings, as we will show in the experiments; (3) empirically, we find that ABCMLP can get away with a much smaller memory size to retain the accuracy.",
"Peng et al. (2021) and Schlag et al. (2021) use gating to promote a recency bias.", "The same technique is equally applicable in ABC models.", "ABC also closely relates to neural Turing machines (NTM; Graves et al., 2014).", "ABCMLP computes the control vectors $\{\phi_i\}$ as a function of the input, but not of the memory as in NTM.", "This ensures that the control vectors at different timesteps can be computed in parallel, improving the time efficiency in practice (Lei et al., 2018; Peng et al., 2018).", "Analogies between memory and neural architectures are also made by other previous works (Hochreiter and Schmidhuber, 1997; Weston et al., 2015; Le et al., 2020, inter alia).", "We evaluate ABC models on language modeling (5.1), sentence-level and document-level machine translation (5.2), and masked language model finetuning (5.3).", "Dataset statistics and implementation details are summarized in C.", "Setting. We experiment with WikiText-103, sampled text from English Wikipedia (Merity et al., 2017).", "The BASE model with standard softmax attention is the strong transformer-based language model by Baevski and Auli (2019).", "We compare the following ABC variants, which build on BASE, but replace the softmax attention with linear-complexity bounded-memory attention alternatives while keeping other components the same.", "ABCMLP, as described in 4, learns a contextualized exp-MLP as the $\phi$ function.", "Linformer (3.1; Wang et al., 2020b).", "ABCRD stores each token in a randomly-selected memory slot, with $\phi_t = e_{i_t}$, where $i_t$ is uniformly drawn from $\{1, \ldots, n\}$ at each time step.", "This helps us quantify the differences between random and learned bounded-memory controls.", "We consider two model size settings.", "16 layers (Baevski and Auli, 2019): all models have around 242M parameters; they train with 512-token segments, and evaluate with 0 or 480 context sizes, i.e., a 0- or 480-length prefix precedes each evaluation segment.", "32 layers (Kasai et al., 2021b): all models have 484M parameters; this setting applies layer dropout (Fan et al., 2020), and evaluates with a 256 context size.", "It aims to compare ABCMLP to several kernel-based efficient attention variants: ELU (Katharopoulos et al., 2020), RFA (Peng et al., 2021), and T2R (Kasai et al., 2021b).", "Table 3 captions: (a) 16-layer setting, where 0/480 indicate evaluation context sizes; (b) 32-layer setting, where a 256-length context is used at evaluation time and the numbers are due to Kasai et al. (2021b).", "Among the ABC models, ABCMLP achieves the best performance for both context sizes.", "With a memory size $n = 64$, ABCMLP outperforms both Linformer and ABCRD by more than 2.9 test perplexity; and the gap is larger with the longer 480-length context: more than 3.6 test perplexity.", "ABCMLP-32 outperforms its larger-memory ABC counterparts by more than 2.1 test perplexity.", "These results confirm ABCMLP's advantage of using a contextualized strategy.", "Surprisingly, Linformer underperforms ABCRD, and its performance drops with the larger 480-length context window.", "This suggests that, while successful in text encoding, Linformer's position-based strategy is a suboptimal design choice for causal attention, at least for long context.", "All ABC models underperform BASE, with ABCMLP-64 having the smallest gap of 0.5 perplexity.", "ABCMLP-32 outperforms kernel-based methods by more than 0.9 test perplexity, using Kasai et al. (2021b)'s 32-layer setting (Table 3b).",
"Datasets. To assess their performance over various output lengths, we compare ABC models on sentence-level and document-level machine translation.", "Sentence-level translation with WMT14 EN-DE (Bojar et al., 2014): the preprocessing and data splits follow Vaswani et al. (2017).", "Table 4a (WMT14 EN-DE test BLEU; cross/causal memory sizes): BASE, -/-, 27.2; ABCRD, 32/32, 25.7; ABCRD, 64/64, 26.2; Linformer, 32/32, 26.6; Linformer, 64/64, 26.7; ABCMLP, 32/8, 27.1; ABCMLP, 32/32, 27.3. A bolded number outperforms BASE; the bold number performs the best among ABC models (averaged over random seeds).", "Document-level translation with IWSLT14 ES-EN (Cettolo et al., 2014): we use Miculicich et al. (2018)'s data splits and preprocessing.", "Following standard practice (Voita et al., 2019), a 4-sentence sliding window is used to create the dataset, i.e., each instance has 4 sentences.", "Setting. We compare the ABC variants as in 5.1.", "C.2 further compares to the clustering-based (3.2) and sliding-window (3.3) ABC variants.", "The BASE model they build on is our implementation of transformer-base (Vaswani et al., 2017).", "ABC variants replace the decoder cross attention and causal attention with bounded-memory attention, while keeping softmax attention for the encoder, since its overhead is much less significant (Kasai et al., 2021a); other components are kept the same.", "C.2 studies a model that replaces all softmax attention with ABCMLP.", "It performs on par with BASE, confirming ABCMLP's broad applicability in various application scenarios.", "We evaluate with SacreBLEU (Post, 2018).", "Results. Table 4a summarizes sentence-level machine translation results on the WMT14 EN-DE test set.", "Overall, ABCMLP performs on par with BASE, with either 32-32 cross-causal memory sizes or 32-8.", "Even with smaller memory sizes, it outperforms other ABC variants by more than 1.1 BLEU.", "Differently from the trend in the language modeling experiment (5.1), Linformer outperforms ABCRD by more than 0.5 BLEU.", "We attribute this to the smaller sequence lengths of this dataset.", "ABCMLP outperforms other ABC models by more than 0.4 BLEU, even with smaller memory sizes.", "The trend is similar on document-level translation with IWSLT14 ES-EN (Table 4b), except that ABCMLP slightly underperforms BASE by 0.2 BLEU.", "This suggests that even with longer sequences, ABCMLP is effective despite its bounded memory size.", "Linformer fails to converge even with multiple random seeds, suggesting the limitations of its purely position-based strategy in tasks involving decoding varying-length text.", "Setting. We compare the ABC variants as in 5.1.", "It would be interesting to pretrain ABC from scratch, but we lack the resources to do so.", "Instead, we warm-start from a pretrained RoBERTa-base (Liu et al., 2019) trained with the softmax transformer, swap its attention with ABC variants, and continue pretraining with the masked language modeling (MLM) objective on a concatenation of BookCorpus (Zhu et al., 2015), English Wikipedia, OpenWebText (Gokaslan and Cohen, 2019), and RealNews (Zellers et al., 2019).", "(Our data differs from RoBERTa's, which we do not have access to: we replace CC-News (Nagel, 2016) with RealNews, and drop Stories (Trinh and Le, 2018), whose public access is broken at the time of this work.)", "Then the models are finetuned and evaluated on downstream classification datasets from the GLUE benchmark (Wang et al., 2019).", "This is an appealing setting, since it avoids reinvesting the huge amounts of resources already put into pretraining.", "(In preliminary experiments, we explored swapping in ABC and then directly finetuning on downstream tasks without continued MLM pretraining; all models fail.)", "Results. Table 5 compares downstream text classification performance.", "BASE indicates a baseline that continues pretraining RoBERTa-base on our data.", "(BASE slightly underperforms RoBERTa-base; this could be due to overfitting, or the pretraining data discrepancy.)", "Following standard practice, we report development accuracy.",
"Linformer achieves competitive performance, aligned with Wang et al. (2020b)'s results.", "ABCMLP outperforms Linformer, and performs on par with or better than BASE, affirming the benefits of using contextualized memory organization in MLM.", "ABCRD fails to converge in continued pretraining even with multiple seeds.", "Based on the above results, we think ABCMLP can achieve competitive performance when pretrained from scratch, just as Linformer does (Wang et al., 2020b).", "Further empirical exploration is beyond our budget and left for future work.", "Decoding efficiency over varying sequence lengths. ABC's efficiency gains can be more prominent for long sequences.", "We study ABCMLP's decoding overhead with varying sequence lengths.", "Following Kasai et al. (2021b), we consider a sequence-to-sequence generation experiment.", "Three linear-complexity models are compared: RFA (with 256/128 cross/causal memory sizes; Peng et al., 2021), T2R (32/4; Kasai et al., 2021b), and ABCMLP (32/8).", "The sizes are chosen to maximize efficiency without an accuracy drop.", "T2R needs to be finetuned from a pretrained transformer to match its performance, while the others don't.", "All linear-time models achieve consistent decoding speed for different lengths (Figure 1a), substantially outpacing the softmax attention baseline, especially for long sequences.", "In particular, ABCMLP decodes 1.25 times faster than RFA, another competitive model that can match the transformer's accuracy without a warm start from a pretrained model.", "This can be attributed to the fact that ABCMLP achieves similar accuracy with a much smaller memory.", "T2R's memory sizes are similar to ABCMLP's, but it decodes about 20% faster.", "This is because it does not compute the softmax when calculating the attention output, while ABCMLP does (Eq. 4).",
4).", "These results show that ABCMLP is an appealing modeling choice for decoding tasks, especially when training from scratch is desired.", "ABCMLP also achieves significant savings in terms of memory overhead (Figure 1b).", "ABCMLP , RFA, and T2R's curves are similar.", "Text encoding efficiency.", "We compare the efficiency of ABCMLP against softmax attention and Linformer when used as text encoders.", "The mod-els' sizes mirror those in the MLM experiment (5.3).", "Table 6 summarizes inference time and memory overhead with 512-length inputs, batch size 16.", "Both ABCMLP and Linformer achieve inference speed gains and memory savings over BASE .", "Linformer is faster, since its linear projection is cheaper to compute than ABCMLP 's MLP.", "Inference speed is measured on the same V100 GPU.", "The trend in memory overhead is similar.", "Although ABCMLP slightly underperforms Linformer in terms of inference speed, it can be a more appealing architectural choice in practice: in all of our 5 experiments, ABCMLP outperforms other ABC models in accuracy.", "Linformer, in contrast, fails to converge or yields sub-optimal performance on some tasks.", "This confirms its flexibility and ap-7476 BASE Linformer ABCMLP n 64 128 64 128 Speed 1.0 1.7 1.5 1.5 1.3 Memory 1.0 0.5 0.6 0.5 0.6 Table 6: Text encoding inference speed (higher is better) and memory (lower is better).", "Memory size's impact on accuracy.", "Practically, one may want to minimize the memory size to improve efficiency.", "We use the WMT14 EN-DE experiment to investigate how memory size affects accuracy.", "Using the 5.2's setup, we vary ABCMLP 's cross and causal attention memory sizes and compare their translation quality on the development data.", "They are selected from { 8 , 16 , 32 , 64 } , with cross attention's equal to or larger than causal's: cross attention is more important than causal attention in machine translation (Michel et al., 2019).", "Our results (Table 7) align with this observation: when cross attention memory is large enough, reducing causal attention memory size from 64 to 8 has a minor 0.3 BLEU drop.", "Surprisingly, ABCMLP with 8-8 sized cross-causal memory is only 1.1 BLEU behind the best-performing configuration.", "We presented attention with bounded-memory control (ABC ).", "It provides a unified perspective of several recently-proposed models, and shows that they vary in the organization of the bounded memory.", "ABC reveals new insights into established methods and inspires new architectures.", "We proposed ABCMLP , a particular instance of ABC that learns a contextualized memory control.", "On language modeling, machine translation, and masked language model finetuning, ABCMLP outperforms previous ABC models.", "Compared to the strong transformer baseline, ABCMLP achieves substantial efficiency improvements with no or negligible accuracy loss.", "We would like to thank the ARK group at the University of Washington for their helpful feedback, and the anonymous reviewers for their thoughtful comments.", "This work was supported in part by NSF grant 2113530 and a Google Fellowship.", "Nikolaos Pappas was supported by the Swiss National Science Foundation grant P400P2_183911." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "result", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other", "other" ]
[ "The rapid development of large pre-trained language models has greatly increased the demand for model compression techniques, among which quantization is a popular solution.", "In this paper, we propose BinaryBERT, which pushes BERT quantization to the limit by weight binarization.", "We find that a binary BERT is hard to be trained directly than a ternary counterpart due to its complex and irregular loss landscape.", "Therefore, we propose ternary weight splitting, which initializes BinaryBERT by equivalently splitting from a half-sized ternary network.", "The binary model thus inherits the good performance of the ternary one, and can be further enhanced by fine-tuning the new architecture after splitting.", "Empirical results show that our BinaryBERT has only a slight performance drop compared with the full-precision model while being 24 smaller, achieving the state-of-the-art compression results on the GLUE and SQuAD benchmarks.", "Recent pre-trained language models have achieved remarkable performance improvement in various natural language tasks (Vaswani et al., 2017; Devlin et al., 2019).", "However, the improvement generally comes at the cost of increasing model size and computation, which limits the deployment of these huge pre-trained language models to edge devices.", "Various methods have been recently proposed to compress these models, such as knowledge distillation (Sanh et al., 2019; Sun et al., 2019; Jiao et al., 2020), pruning (Michel et al., 2019; Fan et al., 2019), low-rank approximation (Ma et al., 2019; Lan et al., 2020), weight-sharing (Dehghani et al., 2019; Lan et al., 2020; Huang et al., 2021), dynamic networks with adaptive depth and/or width (Hou et al., 2020; Xin et al., 2020; Zhou et al., 2020), and quantization (Zafrir", "et al., 2019; Shen et al., 2020; Fan et al., 2020; Zhang et al., 2020).", "Among all these model compression approaches, quantization is a popular solution as it does not require designing a smaller model architecture.", "Instead, it compresses the model by replacing each 32-bit floating-point parameter with a low-bit fixed-point representation.", "Existing attempts try to quantize pre-trained models (Zafrir et al., 2019; Shen et al., 2020; Fan et al., 2020) to even as low as ternary values (2-bit) with minor performance drop (Zhang et al., 2020).", "However, none of them achieves the binarization (1-bit).", "As the limit of quantization, weight binarization could bring at most 32 reduction in model size and replace most floating-point multiplications with additions.", "Moreover, quantizing activations to 8-bit or 4-bit further replaces the floating-point addition with int8 and int4 addition, decreasing the energy burden and the area usage on chips (Courbariaux et al., 2015).", "In this paper, we explore to binarize BERT parameters with quantized activations, pushing BERT quantization to the limit.", "We find that directly training a binary network is rather challenging.", "According to Figure 1, there is a sharp performance drop when reducing weight bit-width from 2-bit to 1-bit, compared to other bit configurations.", "To explore the challenges of binarization, we analyze the loss landscapes of models under different precisions both qualitatively and quantitatively.", "It is found that while the full-precision and ternary (2-bit) models enjoy relatively flat and smooth loss surfaces, the binary model suffers from a rather steep and complex landscape, which poses great challenges to the optimization.", "Motivated by the above empirical 
observations, we propose ternary weight splitting, which takes the ternary model as a proxy to bridge the gap between the binary and full-precision models.", "Specifically, ternary weight splitting equivalently converts both the quantized and latent full-precision weights in a well-trained ternary model to initialize BinaryBERT.", "Therefore, BinaryBERT retains the good performance of the ternary model, and can be further refined on the new architecture.", "While neuron splitting is previously studied (Chen et al., 2016; Wu et al., 2019) for full-precision networks, our ternary weight splitting is much more complex due to the additional equivalence requirement of quantized weights.", "Furthermore, the proposed BinaryBERT also supports adaptive splitting.", "It can adaptively perform splitting on the most important ternary modules while leaving the rest as binary, based on efficiency constraints such as model size or floating-point operations (FLOPs).", "Therefore, our approach allows flexible sizes of binary models for various edge devices' demands.", "Empirical results show that BinaryBERT split from a half-width ternary network is much better than a directly-trained binary model with the original width.", "On the GLUE and SQuAD benchmarks, our BinaryBERT has only a slight performance drop compared to the full-precision BERT-base model, while being 24× smaller.", "Moreover, BinaryBERT with the proposed importance-based adaptive splitting also outperforms other splitting criteria across a variety of model sizes.", "In this section, we show that it is challenging to train a binary BERT with conventional binarization approaches directly.", "Before diving into details, we first review the necessary backgrounds.", "We follow the standard quantization-aware training procedure (Zhou et al., 2016).", "Specifically, given weight w ∈ ℝ^n (a.k.a. latent full-precision weights), each forward propagation quantizes it to ŵ = Q(w) by some quantization function Q(·), and then computes the loss ℓ(ŵ) at ŵ.", "During back propagation, we use ∇ℓ(ŵ) to update the latent full-precision weights w due to the non-differentiability of Q(·), which is known as the straight-through estimator (Courbariaux et al., 2015).", "Recent TernaryBERT (Zhang et al., 2020) follows Ternary-Weight-Network (TWN) (Li et al., 2016) to quantize the elements in w to three values {±α, 0}.", "To avoid confusion, we use superscript t and b for the latent full-precision weights and quantized weights in ternary and binary models, respectively.", "Specifically, TWN ternarizes each element w^t_i in the ternary weight w^t as ŵ^t_i = Q(w^t_i) = α·sign(w^t_i) if |w^t_i| ≥ Δ, and 0 if |w^t_i| < Δ, (1) where sign(·) is the sign function and Δ = 0.7‖w^t‖₁/n is the ternarization threshold following TWN.", "Binarization.", "Binarization is first proposed in (Courbariaux et al., 2015) and has been extensively studied in the academia (Rastegari et al., 2016; Hubara et al., 2016; Liu et al., 2018).", "As a representative work, Binary-Weight-Network (BWN) (Hubara et al., 2016) binarizes w^b element-wisely with a scaling parameter α as follows: ŵ^b_i = Q(w^b_i) = α·sign(w^b_i), α = (1/n)‖w^b‖₁. (2)", "Despite the appealing properties of network binarization, we show that it is non-trivial to obtain binary BERT with these binarization approaches.", "To study the performance drop of BERT quantization, we train the BERT model with full-precision, {8,4,3,2,1}-bit weight quantization and 8-bit activations on MRPC and MNLI-m from the GLUE benchmark (Wang et al., 2018)¹.", "We use 
loss-aware weight quantization (LAQ) (Hou and Kwok, 2018) for 8/4/3-bit weight quantization, TWN (Li et al., 2016) for weight ternarization and BWN (Hubara et al., 2016) for weight binarization.", "Meanwhile, we adopt 8-bit uniform quantization for activations.", "We follow the default experimental settings detailed in Section 4.1 and Appendix C.1.", "¹ We conduct more experiments on other GLUE datasets and with different settings in Appendix C.1, and find similar empirical results to MRPC and MNLI-m here.", "From Figure 1, the performance drops mildly from 32-bit to as low as 2-bit, i.e., around 0.", "6% on MRPC and 0.", "2% on MNLI-m.", "However, when reducing the bit-width to one, the performance drops sharply, i.e., 3.", "8% and 0.", "9% on the two tasks, respectively.", "Therefore, weight binarization may severely harm the performance, which may explain why most current approaches stop at 2-bit weight quantization (Shen et al., 2020; Zadeh and Moshovos, 2020; Zhang et al., 2020).", "To further push weight quantization to the limit, a first step is to study the potential reasons behind the sharp drop from ternarization to binarization.", "Visualization.", "To learn about the challenges behind binarization, we first visually compare the loss landscapes of full-precision, ternary, and binary BERT models.", "Following (Nahshan et al., 2019), we extract parameters w_x, w_y from the value layers² of multi-head attention in the first two Transformer layers, and assign the following perturbations to the parameters: w̃_x = w_x + λ_x·1_x, w̃_y = w_y + λ_y·1_y, (3) (² We also extract parameters from other parts of the Transformer in Appendix C.2, and the observations are similar.)", "where λ_x ∈ {0.", "2·w̄_x, 0.", "4·w̄_x, ..., 1.", "0·w̄_x} are perturbation magnitudes based on the absolute mean value w̄_x of w_x, and similar rules hold for λ_y.", "1_x and 1_y are vectors with all elements being 1. For each pair of (λ_x, λ_y), we evaluate the corresponding training loss and plot the surface in Figure 2. 
As can be seen, the full-precision model (Figure", "2(a)) has the lowest overall training loss, and its loss landscape is flat and robust to the perturbation.", "For the ternary model (Figure", "2(b)), although the surface tilts up with larger perturbations, it looks locally convex and is thus easy to optimize.", "This may also explain why the BERT model can be ternarized without a severe accuracy drop (Zhang et al., 2020).", "However, the loss landscape of the binary model (Figure", "2(c)) turns out to be both higher and more complex.", "By stacking the three landscapes together (Figure", "2(d)), the loss surface of the binary BERT stands on top with a clear margin over the other two.", "The steep curvature of the loss surface reflects a higher sensitivity to binarization, which contributes to the training difficulty.", "Steepness Measurement.", "To quantitatively measure the steepness of the loss landscape, we start from a local minimum w* and apply a second-order approximation to the curvature.", "According to Taylor's expansion, the loss increase induced by quantizing w* to ŵ can be approximated by the second-order term ℓ(ŵ) − ℓ(w*) ≈ ½ε⊤Hε ≤ ½λ_max‖ε‖², (4) [Figure 4: The overall workflow of training BinaryBERT.]", "where ε = ŵ − w* is the quantization noise, and λ_max is the largest eigenvalue of the Hessian H at w*.", "Note that the first-order term is skipped due to ∇ℓ(w*) = 0.", "Thus we take λ_max as a quantitative measurement for the steepness of the loss surface.", "Following (Shen et al., 2020) we adopt the power method to compute λ_max.", "As it is computationally expensive to estimate H for all w in the network, we consider them separately as follows: (1) the query/key layers (MHA-QK), (2) the value layer (MHA-V), (3) the output projection layer (MHA-O) in the multi-head attention, (4) the intermediate layer (FFN-Mid), and (5) the output layer (FFN-Out) in the feed-forward network.", "Note that we group key and query layers as they are used together to calculate the attention scores.", "From Figure 3, the top-1 eigenvalues of the binary model are higher both in expectation and standard deviation compared to the full-precision baseline and the ternary model.", "For instance, the top-1 eigenvalues of MHA-O in the binary model are 15× larger than the full-precision counterpart.", "Therefore, the quantization loss increases of the full-precision and ternary models are more tightly bounded than that of the binary model in Equation (4).", "The highly complex and irregular landscape induced by binarization thus poses more challenges to the optimization.", "Given the challenging loss landscape of binary BERT, we propose ternary weight splitting (TWS) that exploits the flatness of the ternary loss landscape as the optimization proxy of the binary model.", "As is shown in Figure 4, we first train the half-sized ternary BERT to convergence, and then split both the latent full-precision weights w^t and the quantized weights ŵ^t into their binary counterparts w^b1, w^b2 and ŵ^b1, ŵ^b2 via the TWS operator.", "To inherit the performance of the ternary model after splitting, the TWS operator requires the splitting equivalency (i.e., the same output given the same input): ŵ^t = ŵ^b1 + ŵ^b2, w^t = w^b1 + w^b2. (5)", "While the solution to Equation (5) is not unique, we constrain the latent full-precision weights after", "splitting, w^b1 and w^b2, to satisfy w^t = w^b1 + w^b2 as", "where a and b are the variables to solve.", "By Equations (6) and (7) with w^t = w^b1 + w^b2, we get a = (Σ_{i∈I} |ŵ^t_i| + Σ_{j∈J} |w^t_j| − Σ_{k∈K} |w^t_k|) / (2 Σ_{i∈I} |w^t_i|), b = ((n/|I|) Σ_{i∈I} |w^t_i| 
− Σ^n_{i=1} |w^t_i|) / (2(|J| + |K|)), (8) where we denote I = {i | ŵ^t_i ≠ 0}, J = {j | ŵ^t_j = 0 and w^t_j > 0} and K = {k | ŵ^t_k = 0 and w^t_k < 0}.", "|·| denotes the cardinality of the set.", "Detailed derivation of Equation (8) is in Appendix A. Quantization Details.", "Following (Zhang et al., 2020), for each weight matrix in the Transformer layers, we use layer-wise ternarization (i.e., one scaling parameter for all elements in the weight matrix).", "For word embedding, we use row-wise ternarization (i.e., one scaling parameter for each row in the embedding).", "After splitting, each of the two split matrices has its own scaling factor.", "Aside from weight binarization, we simultaneously quantize activations before all matrix multiplications, which could accelerate inference on specialized hardware (Shen et al., 2020; Zafrir et al., 2019).", "Following (Zafrir et al., 2019; Zhang et al., 2020), we skip the quantization for all layer-normalization (LN) layers, skip connections, and biases, as their calculations are negligible compared to matrix multiplication.", "The last classification layer is also not quantized to avoid a large accuracy drop.", "Training with Knowledge Distillation.", "Knowledge distillation is shown to benefit BERT quantization (Zhang et al., 2020).", "Following (Jiao et al., 2020; Zhang et al., 2020), we first perform intermediate-layer distillation from the full-precision teacher network's embedding E, layer-wise MHA output M_l and FFN output F_l to the quantized student counterparts Ê, M̂_l, F̂_l (l = 1, 2, ..., L).", "We aim to minimize their mean squared errors, i.e., ℓ_emb = MSE(Ê, E), ℓ_mha = Σ_l MSE(M̂_l, M_l), and ℓ_ffn = Σ_l MSE(F̂_l, F_l).", "Thus the objective function is ℓ_int = ℓ_emb + ℓ_mha + ℓ_ffn.", "We then conduct prediction-layer distillation by minimizing the soft cross-entropy (SCE) between the quantized student logits ŷ and teacher logits y, i.e., ℓ_pred = SCE(ŷ, y).", "Further Fine-tuning.", "After splitting from the half-sized ternary model, the binary model inherits its performance on a new architecture with full width.", "However, the original minimum of the ternary model may not hold in this new loss landscape after splitting.", "Thus we further fine-tune with prediction-layer distillation to look for a better solution.", "We dub the resulting model BinaryBERT.", "Our proposed approach also supports adaptive splitting that can flexibly adjust the width of BinaryBERT, based on the parameter sensitivity to binarization and resource constraints of edge devices.", "Specifically, given the resource constraints C (e.g., model size and computational FLOPs), we first train a mixed-precision model adaptively (with sensitive parts being ternary and the rest being binary), and then split ternary weights into binary ones.", "Therefore, adaptive splitting finally enjoys consistent arithmetic precision (1-bit) for all weight matrices, which is usually easier to deploy than the mixed-precision counterpart.", "Formulation.", "Intuitively, we assign ternary values to weight matrices that are more sensitive to quantization.", "The quantization sensitivity of a weight matrix is empirically measured by the performance gain of not quantizing it compared to the fully-quantized counterpart (details are in Appendix B.1).", "We denote u ∈ ℝ^Z_+ as the sensitivity vector, where Z is the total number of splittable weight matrices in all Transformer layers, the word embedding layer and 
the pooler layer.", "The cost vector c ∈ ℝ^Z_+ stores the additional increase in parameters or FLOPs of each ternary weight matrix against a binary choice.", "The splitting assignment can be represented as a binary vector s ∈ {0, 1}^Z, where s_z = 1 means to ternarize the z-th weight matrix, and vice versa.", "The optimal assignment s* can thus be solved from the following combinatorial optimization problem: max_s u⊤s (11) s.t. c⊤s ≤ C − C_0, s ∈ {0, 1}^Z, where C_0 is the baseline efficiency of the half-sized binary network.", "Dynamic programming can be applied to solve Equation (11) to avoid NP-hardness.", "In this section, we empirically verify our proposed approach on the GLUE (Wang et al., 2018) and SQuAD (Rajpurkar et al., 2016, 2018) benchmarks.", "We first introduce the experimental setup in Section 4.1, and then present the main experimental results on both benchmarks in Section 4.2.", "We compare with other state-of-the-arts in Section 4.3, and finally provide more discussions on the proposed methods in Section 4.4.", "Code is available at https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/BinaryBERT .", "Dataset and Metrics.", "The GLUE benchmark contains multiple natural language understanding tasks.", "We follow Devlin et al. (2019) to evaluate the performance on these tasks: Matthews correlation", "for CoLA, Spearman correlation for STS-B and accuracy for the rest of the tasks: RTE, MRPC, SST-2, QQP, MNLI-m (matched) and MNLI-mm (mismatched).", "For machine reading comprehension on SQuAD, we report the EM (exact match) and F1 score.", "Aside from the task performance, we also report the model size (MB) and computational FLOPs at inference.", "For quantized operations, we follow (Zhou et al., 2016; Liu et al., 2018; Li et al., 2020a) to count the bit-wise operations, i.e., the multiplication between an m-bit number and an n-bit number approximately takes mn/64 FLOPs for a CPU with an instruction size of 64 bits.", "Implementation.", "We take DynaBERT (Hou et al., 2020) sub-networks as backbones as they offer both half-sized and full-sized models for easy comparison.", "We start from training a ternary model of width 0.", "5 with the two-stage knowledge distillation introduced in Section 3.1.", "Then we split it into a binary model with width 1.", "0, and perform further fine-tuning with prediction-layer distillation.", "Each training stage takes the same number of training epochs.", "Following (Jiao et al., 2020; Hou et al., 2020; Zhang et al., 2020), we adopt data augmentation with one training epoch in each stage on all GLUE tasks except for MNLI and QQP.", "Aside from this default setting, we also remove data augmentation and perform vanilla training with 6 epochs on these tasks.", "On MNLI and QQP, we train 3 epochs for each stage.", "We verify our ternary weight splitting (TWS) against vanilla binary training (BWN), the latter of which doubles training epochs to match the overall training time of TWS for a fair comparison.", "More training details are provided in Appendix B. 
Activation Quantization.", "While BinaryBERT focuses on weight binarization, we also explore activation quantization in our implementation, which is beneficial for reducing the computation burden on specialized hardware (Hubara et al., 2016; Zhou et al., 2016; Zhang et al., 2020).", "Aside from the 8-bit uniform quantization (Zhang et al., 2020; Shen et al., 2020) used in past efforts, we further pioneer the study of 4-bit activation quantization.", "We find that uniform quantization can hardly deal with outliers in the activation.", "Thus we use Learned Step-size Quantization (LSQ) (Esser et al., 2019) to directly learn the quantized values, which empirically achieves better quantization performance.", "The main results on the development set are shown in Table 1. For results without data augmenta-", "tion (rows #2-5), our ternary weight splitting method outperforms BWN by a clear margin³.", "For instance, on CoLA, ternary weight splitting achieves gains of 6.", "7% and 9.", "6% with 8-bit and 4-bit activation quantization, respectively.", "While data augmentation (rows 6-9) mostly improves each entry, our approach still overtakes BWN consistently.", "Furthermore, 4-bit activation quantization empirically benefits more from ternary weight splitting (rows 4-5 and 8-9) compared with 8-bit activations (rows 2-3 and 6-7), demonstrating the potential of our approach in extremely low-bit quantized models.", "In Table 2, we also provide the results on the test set of the GLUE benchmark.", "Similar to the observation in Table 1, our approach achieves consistent improvement on both 8-bit and 4-bit activation quantization compared with BWN.", "The results on the development set of SQuAD v1.1 and v2.0 are shown in Table 3.", "Our proposed ternary weight splitting again outperforms BWN w.r.t. both EM and F1 scores on both datasets.", "Similar to previous observations, 4-bit activation enjoys a larger gain in performance from the splitting approach.", "For instance, our approach improves the EM score of 4-bit activation by 1.", "8% and 0.", "6% on SQuAD v1.1 and v2.0, respectively, both of which are higher than those of 8-bit activation.", "³ Note that DynaBERT only squeezes width in the Transformer layers but not the word embedding layer, thus the split binary model has a slightly larger size than BWN.", "The adaptive splitting in Section 3.2 supports the conversion of mixed ternary and binary precisions for more fine-grained configurations.", "To verify its advantages, we name our approach Maximal Gain according to Equation (11), and compare it with two baseline strategies:", "i) Random Gain, which randomly selects weight matrices to split; and", "ii) Minimal Gain, which splits the least important modules according to sensitivity.", "We report the average score over six tasks (QNLI, SST-2, CoLA, STS-B, MRPC and RTE) in Figure 5.", "The end-points of 9.8MB and 16.5MB are the half-sized and full-sized BinaryBERT, respectively.", "As can be seen, adaptive splitting generally outperforms the other two baselines under varying model size, indicating the effectiveness of maximizing the gain in adaptive splitting.", "In Appendix C.4, we provide detailed performance on the six tasks, together with the architecture visualization of adaptive splitting.", "Now we compare our proposed approach with a variety of state-of-the-art counterparts, including Q-BERT (Shen et al., 2020), GOBO (Zadeh and Moshovos, 2020), Quant-Noise (Fan et al., 2020) and TernaryBERT (Zhang et al., 2020).", "Aside from quantization, we also compare with 
other general compression approaches such as DistilBERT (Sanh et al., 2019), LayerDrop (Fan et al., 2019), TinyBERT (Jiao et al., 2020), and ALBERT (Lan et al., 2020).", "The results are taken from the original papers, respectively.", "From Table 4, our proposed BinaryBERT has the smallest model size with the best performance among all quantiza-", "tion approaches.", "Compared with the full-precision model, our BinaryBERT retains competitive performance with a significant reduction of model size and computation.", "For example, we achieve a more than 24× compression ratio compared with BERT-base, with only 0.", "4% and 0.", "0% / 0.", "2% drop on MNLI-m and SQuAD v1.1, respectively.", "We now demonstrate the performance gain by refining the binary model on the new architecture.", "We evaluate the performance gain after splitting from a half-width ternary model (TWN 0.5) to the full-sized model (TWN 1.0) on the development set of SQuAD v1.1, MNLI-m, QNLI and MRPC.", "The results are shown in Table 5.", "As can be seen, further fine-tuning brings consistent improvement on both 8-bit and 4-bit activation.", "Training Curves.", "Furthermore, we plot the training loss curves of BWN, TWN and our TWS on MRPC with data augmentation in Figures", "6(a) and", "6(b).", "Since TWS cannot inherit the previous optimizer due to the architecture change, we reset the optimizer and learning rate scheduler of BWN, TWN and TWS for a fair comparison, despite the slight increase of loss after splitting.", "We find that our TWS attains much lower training loss than BWN, and also surpasses TWN, verifying the advantages of fine-tuning on the wider architecture.", "Optimization Trajectory.", "We also follow (Li et al., 2018; Hao et al., 2019) to visualize the optimization trajectory after splitting in Figures", "6(c) and", "6(d).", "We calculate the first two principal components of the parameters in the final BinaryBERT, which are the basis for the 2-D plane.", "The loss contour is thus obtained by evaluating each grid point in the plane.", "It is found that the binary models are heading towards the optimal solution for both 8/4-bit activation quantization on the loss contour.", "We now study if there are any improved binarization variants that can directly bring better performance.", "Aside from BWN, we compare with LAB (Hou et al., 2017) and BiReal (Liu et al., 2018).", "Meanwhile, we compare with gradual quantization, i.e., BWN training based on a ternary model, denoted as BWN.", "Furthermore, we also try the same scaling factor of BWN with TWN to make the precision change smooth, dubbed BWN.", "From Table 6, we find that our TWS still outperforms various binarization approaches in most cases, suggesting the superiority of splitting in finding better minima than direct binary training.", "Network quantization has been a popular topic with vast literature in efficient deep learning.", "Below we give a brief overview of three research strands: network binarization, mixed-precision quantization and neuron splitting, all of which are related to our proposed approach.", "Network binarization achieves remarkable size reduction and is widely explored in computer vision.", "Existing binarization approaches can be categorized into quantization error minimization (Rastegari et al., 2016; Hou et al., 2017; Zhang et al., 2018), improving training objectives (Martinez et al., 2020; Bai et al., 2020) and reduction of gradient mismatch (Bai et al., 2018; Liu 
et al., 2018, 2020).", "Despite the empirical success of these approaches in computer vision, there is little exploration of binarization in natural language processing tasks.", "Previous works on BERT quantization (Zafrir et al., 2019; Shen et al., 2020; Zhang et al., 2020) push down the bit-width to as low as two, but none of them achieves binarization.", "On the other hand, our work serves as the first attempt to binarize the pre-trained language models.", "Given the observation that neural network layers exhibit different sensitivity to quantization (Dong et al., 2019; Wang et al., 2019), mixed-precision quantization re-allocates layer-wise quantization bit-widths for a higher compression ratio.", "Inspired by neural architecture search (Liu et al., 2019; Wang et al., 2020), common approaches of mixed-precision quantization are primarily based on differentiable search (Wu et al., 2018a; Li et al., 2020b), reinforcement learning (Wu et al., 2018b; Wang et al., 2019), or simply loss curvatures (Dong et al., 2019; Shen et al., 2020).", "While mixed-precision quantized models usually demonstrate better performance than traditional methods under the same compression ratio, they are also harder to deploy (Habi et al., 2020).", "On the contrary, BinaryBERT with adaptive splitting enjoys both the good performance from the mixed precision of ternary and binary values, and the easy deployment given the consistent arithmetic precision.", "There are also works on binary neural architecture search (Kim et al., 2020; Bulat et al., 2020) which have a similar purpose to mixed-precision quantization.", "Nonetheless, such methods are usually time-consuming to train and are prohibitive for large pre-trained language models.", "Neuron splitting is originally proposed to accelerate network training by progressively increasing the width of a network (Chen et al., 2016; Wu et al., 2019).", "The split network equivalently inherits the knowledge from its antecessors and is trained for further improvement.", "Recently, neuron splitting has also been studied in quantization (Zhao et al., 2019; Kim et al., 2019).", "By splitting neurons with large magnitudes, the full-precision outliers are removed and thus the quantization error can be effectively reduced (Zhao et al., 2019).", "Kim et al. (2019) apply neuron splitting to decompose ternary activation into two binary activations based on bias shifting of the batch normalization layer.", "However, such a method cannot be applied in BERT as there is no batch normalization layer.", "Besides, weight splitting is much more complex due to the equivalence constraint on both the quantized and latent full-precision weights.", "In this paper, we propose BinaryBERT, pushing BERT quantization to the limit.", "As a result of the steep and complex loss landscape, we find that directly training a BinaryBERT is hard and incurs a large performance drop.", "We thus propose ternary weight splitting, which splits a trained ternary BERT to initialize BinaryBERT, followed by fine-tuning for further refinement.", "Our approach also supports adaptive splitting that can tailor the size of BinaryBERT based on the edge device constraints.", "Empirical results show that our approach significantly outperforms vanilla binary training, achieving state-of-the-art performance on BERT compression.", "This work was partially supported by the National Key Research and Development Program of China (No. 2018AAA0100204), and Research Grants Council of the Hong Kong Special Administrative Region, China (No. 
CUHK 14210717 of the General Research Fund).", "We sincerely thank all anonymous reviewers for their insightful suggestions." ]
[ "abstain", "objective", "result", "objective", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "method", "result", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "abstain", "result", "other", "other" ]
[ "Abstract Meaning Representation (AMR) research has mostly focused on English.", "We show that it is possible to use AMR annotations for English as a semantic representation for sentences written in other languages.", "We exploit an AMR parser for English and parallel corpora to learn AMR parsers for Italian, Spanish, German and Chinese.", "Qualitative analysis show that the new parsers overcome structural differences between the languages.", "We further propose a method to evaluate the parsers that does not require gold standard data in the target languages.", "This method highly correlates with the gold standard evaluation, obtaining a (Pearson) correlation of 0.95.", "Abstract Meaning Representation (AMR) parsing is the process of converting natural language sentences into their corresponding AMR representations (Banarescu et al., 2013).", "An AMR is a graph with nodes representing the concepts of the sentence and edges representing the semantic relations between them.", "Most available AMR datasets large enough to train statistical models consist of pairs of English sentences and AMR graphs.", "The cross-lingual properties of AMR across languages has been the subject of preliminary discussions.", "The AMR guidelines state that AMR is not an interlingua (Banarescu et al., 2013) and Bojar (2014) categorizes different kinds of divergences in the annotation between English AMRs and Czech AMRs.", "Xue et al. (2014) show that structurally aligning English AMRs with Czech and Chinese AMRs is not always possible but that refined annotation guidelines suffice to resolve some of these cases.", "We extend this line of research by exploring whether divergences among languages can be overcome, i.e., we investigate This is the sovereignty of each country sovereignty country this each Questa `e la sovranit`a di ogni paese :poss :domain :mod Figure 1: AMR alignments for a English sentence and its Italian translation.", "whether it is possible to maintain the AMR annotated for English as a semantic representation for sentences written in other languages, as in Figure 1. 
We implement AMR parsers for Italian, Spanish, German and Chinese using annotation projection, where existing annotations are projected from a source language (English) to a target language through a parallel corpus (e.g., Yarowsky et al., 2001; Hwa et al., 2005; Padó and Lapata, 2009; Evang and Bos, 2016).", "By evaluating the parsers and manually analyzing their output, we show that the parsers are able to recover the AMR structures even when there exist structural differences between the languages, i.e., although AMR is not an interlingua it can act as one.", "This method also provides a quick way to prototype multilingual AMR parsers, assuming that Part-of-speech (POS) taggers, Named Entity Recognition (NER) taggers and dependency parsers are available for the target languages.", "We also propose an alternative approach, where Machine Translation (MT) is used to translate the input sentences into English so that an available English AMR parser can be employed.", "This method is an even quicker solution which only requires translation models between the target languages and English.", "Due to the lack of a gold standard in the target languages, we exploit the English data to evaluate the parsers for the target languages.", "Henceforth, we will use the term target parser to indicate a parser for a target language.", "We achieve this by first learning the target parser from the gold standard English parser, and then inverting this process to learn a new English parser from the target parser.", "We then evaluate the resulting English parser against the gold standard.", "We call this full-cycle evaluation.", "Similarly to Evang and Bos (2016), we also directly evaluate the target parser on silver data, obtained by parsing the English side of a parallel corpus.", "In order to assess the reliability of these evaluation methods, we collected gold standard datasets for Italian, Spanish, German and Chinese by acquiring professional translations of the AMR gold standard data to these languages.", "We hypothesize that the full-cycle score can be used as a more reliable proxy than the silver score for evaluating the target parser.", "We provide evidence to this claim by comparing the three evaluation procedures (silver, full-cycle, and gold) across languages and parsers.", "Our main contributions are: We provide evidence that AMR annotations can be successfully shared across languages.", "We propose two ways to rapidly implement non-English AMR parsers.", "We propose a novel method to evaluate non-English AMR parsers when gold annotations in the target languages are missing.", "This method highly correlates with gold standard evaluation, obtaining a Pearson correlation coefficient of 0.95.", "We release human translations of an AMR dataset (LDC2015E86) to Italian, Spanish, German and Chinese.", "AMR is a semantic representation heavily biased towards English, where labels for nodes and edges are either English words or Propbank frames (Kingsbury and Palmer, 2002).", "The goal of AMR is to abstract away from the syntactic realization of the original sentences while maintaining its underlying meaning.", "As a consequence, different phrasings of one sentence are expected to provide identical AMR representations.", "This canonicalization does not always hold across languages: two sentences that express the same meaning in two different languages are not guaranteed to produce identical AMR structures (Bojar, 2014; Xue et al., 2014).", "However, Xue et al. 
(2014) show that in many cases the unlabeled AMRs are in fact shared across languages.", "We are encouraged by this finding and argue that it should be possible to develop algorithms that account for some of these differences when they arise.", "We therefore introduce a new problem, which we call cross-lingual AMR parsing: given a sentence in any language, the goal is to recover the AMR graph that was originally devised for its English translation.", "This task is harder than traditional AMR parsing as it requires recovering English labels as well as dealing with structural differences between languages, usually referred to as translation divergence.", "We propose two initial solutions to this problem: by annotation projection and by machine translation.", "AMR is not grounded in the input sentence, therefore there is no need to change the AMR annotation when projecting to another language.", "We think of English labels for the graph nodes as ones from an independent language, which incidentally looks similar to English.", "However, in order to train state-of-the-art AMR parsers, we also need to project the alignments between AMR nodes and words in the sentence (henceforth called AMR alignments).", "We use word alignments, similarly to other annotation projection work, to project the AMR alignments to the target languages.", "Our approach depends on an underlying assumption that we make: if a source word is word-aligned to a target word and it is AMR-aligned with an AMR node, then the target word is also aligned to that AMR node.", "More formally, let S = s_1 ... s_|S| be the source language sentence and T = t_1 ... t_|T| be the target language sentence; A_s(·) be the AMR alignment mapping word tokens in S to the set of AMR nodes that are triggered by them; A_t(·) be the same function for T; v be a node in the AMR graph; and finally, W(·) be an alignment that maps a word in S to a subset of words in T.", "Then, the AMR projection assumption is: ∀i, j, v: t_j ∈ W(s_i) ∧ v ∈ A_s(s_i) ⇒ v ∈ A_t(t_j). In the example of Figure 1, Questa is word-aligned with This and therefore AMR-aligned with the node this, and the same logic applies to the other aligned words.", "The words is, the and of do not generate any AMR nodes, so we ignore their word alignments.", "We apply this method to project existing AMR annotations to other languages, which are then used to train the target parsers.", "We invoke an MT system to translate the input sentence into English so that we can use an available English parser to obtain its AMR graph.", "Naturally, the quality of the output graph depends on the quality of the translations.", "If the automatic translation is close to the reference translation, then the predicted AMR graph will be close to the reference AMR graph.", "It is therefore evident that this method is not informative in terms of the cross-lingual properties of AMR.", "However, its simplicity makes it a compelling engineering solution for parsing other languages.", "We now turn to the problem of evaluation.", "Let us assume that we trained a parser for a target language, for example using the annotation projection method discussed in Section 2.1.", "In line with rapid development of new parsers, we assume that the only gold AMR dataset available is the one released for English.", "SILVER We can generate a silver test set by running an automatic (English) AMR parser on the English side of a parallel corpus and use the output AMRs as references.", "However, the silver test set is affected 
by mistakes made by the English AMR parser, therefore it may not be reliable.", "FULL-CYCLE In order to perform the evaluation on a gold test set, we propose full-cycle evaluation: after learning the target parser from the English parser, we invert this process to learn a new English parser from the target parser, in the same way that we learned the target parser from the English parser.", "The resulting English parser is then evaluated against the (English) AMR gold standard.", "We hypothesize that the score of the new English parser can be used as a proxy to the score of the target parser.", "GOLD To show whether the evaluation methods proposed can be used reliably, we also generated gold test AMR datasets for four target languages (Italian, Spanish, German and Chinese).", "In order to do so, we collected professional translations for the English sentences in the AMR test set.¹", "We were then able to create pairs of human-produced sentences with human-produced AMR graphs.", "A diagram summarizing the different evaluation stages is shown in Figure 2. In the case of MT-based systems, the full-cycle corresponds to first translating from English to the target language and then back to English (back-translation), and only then parsing the sentences with the English AMR parser.", "At the end of this process, a noisy version of the original sentence will be returned and its parsed graph will be a noisy version of the graph parsed from the original sentence.", "We run experiments on four languages: Italian, Spanish, German and Chinese.", "We use Europarl (Koehn, 2005) as the parallel corpus for Italian, Spanish and German, containing around 1.9M sentences for each language pair.", "For Chinese, we use the first 2M sentences from the United Nations Parallel Corpus (Ziemski et al., 2016).", "For each target language we extract two parallel datasets of 20,000/2,000/2,000 (train/dev/test) sentences for the two steps of the annotation projection (English → target and target → English).", "These are used to train the AMR parsers.", "The projection approach also requires training the word alignments, for which we use all the remaining sentences from the parallel corpora (Europarl for Spanish/German/Italian and UN Parallel Corpus for Chinese).", "These are also the sentences we use to train the MT models.", "The gold AMR dataset is LDC2015E86, containing 16,833 training sentences, 1,368 development sentences, and 1,371 testing sentences.", "Word alignments were generated using fast_align (Dyer et al., 2013), while AMR alignments were generated with JAMR (Flanigan et al., 2014).", "AMREager (Damonte et al., 2017) was chosen as the pre-existing English AMR parser.", "AMREager is an open-source AMR parser that (¹ These datasets are currently available upon request from the authors.)", "needs only minor modifications for re-use with other languages.²", "It requires tokenization, POS tagging, NER tagging and dependency parsing, which for English, German and Chinese are provided by CoreNLP (Manning et al., 2014).", "We use Freeling (Carreras et al., 2004) for Spanish, as CoreNLP does not provide dependency parsing for this language.", "Italian is not supported in CoreNLP: we use Tint (Aprosio and Moretti, 2016), a CoreNLP-compatible NLP pipeline for Italian.", "In order to experiment with the approach of Section 2.2, we experimented with translations from Google Translate.³", "As Google Translate has access to a much larger training corpus, we also trained baseline MT models using Moses (Koehn et al., 2007) and 
Nematus (Sennrich et al., 2017), with the same training data we use for the projection method and default hyper-parameters.", "Smatch (Cai and Knight, 2013) is used to evaluate AMR parsers.", "It looks for the best alignment between the predicted AMR and the reference AMR and it then computes precision, recall and F1 of their edges.", "The original English parser achieves a 65% Smatch score on the test split of LDC2015E86.", "Full-cycle and gold evaluations use the same dataset, while silver evaluation is performed on the split of the parallel corpora we reserved for testing.", "Results are shown in Table 1. The Google Translate system outperforms all other systems, but is not directly comparable to them, as it has the unfair advantage of being (² The multilingual adaptation of AMREager is available at http://www.github.com/mdtux89/amr-eager-multilingual .)", "A demo is available at http://cohort.inf.ed.ac.uk/amreager.html .", "(³ https://translate.google.com/toolkit .)", "trained on a much larger dataset.", "Due to noisy JAMR alignments and silver training data involved in the annotation projection approach, the MT-based systems give in general better parsing results.", "The BLEU scores of all translation systems are shown in Table 2. There are several sources of noise in the annotation projection method, which affect the parsing results: 1) the parsers are trained on silver data obtained by an automatic parser for English; 2) the projection uses noisy word alignments; 3) the AMR alignments on the source side are also noisy; 4) translation divergences exist between the languages, making it sometimes difficult to project the annotation without loss of information.", "Figure 3 shows examples of output parses⁴ for all languages, including the AMR alignments (a byproduct of the parsing process), which we use to discuss the mistakes made by the parsers.", "In the Italian example, the only evident error is that Infine (Lastly) should be ignored.", "In the Spanish example, the word medida (measure) is wrongly ignored: it should be used to generate a child of the node impact-01.", "Some of the :ARG roles are also not correct.", "In the German example, meines (my) should reflect the fact that the speaker is talking about his own country.", "Finally, in the Chinese example, there are several mistakes including yet another concept identification mistake: intend-01 is erroneously triggered.", "Most mistakes involve concept identification.", "In particular, relevant words are often erroneously ignored by the parser.", "This is directly related to the problem of noisy word alignments in annotation projection: the parser learns what words are likely to trigger a node (or a set of nodes) in the AMR by looking at their AMR alignments (which are induced by the word alignments).", "If an important word consistently remains unaligned, the parser will erroneously learn to discard it.", "More accurate alignments are therefore crucial in order to achieve better parsing results.", "We computed the percentage of words in the training data that are learned to be non-content-bearing in each parser and we found that the Chinese parser, which is our least accurate parser, is the one that suffers most from this, with 33% non-content-bearing words.", "On the other hand, in the German parser, which is the highest scoring, only 26% of the words are non-content-bearing. (⁴ In this section, all parsed graphs were generated with the projection-based system of Section 2.1.)", "In order to investigate the hypothesis that AMR can be shared across 
these languages, we now look at translational divergence and discuss how it affects parsing, following the classification used in previous work (Dorr et al., 2002; Dorr, 1994), which identifies classes of divergences for several languages.", "Sulem et al. (2015) also follow the same categorization for French.", "Figure 4 shows six sentences displaying these divergences.", "The aim of this analysis is to assess how the parsers deal with the different kinds of translational divergences, regardless of the overall quality of the output.", "Categorical.", "This divergence happens when two languages use different POS tags to express the same meaning.", "For example, the English sentence I am jealous of you is translated into Spanish as Tengo envidia de ti (I have jealousy of you).", "The English adjective jealous is translated into the Spanish noun envidia.", "In Figure 4a we note that the categorical divergence does not create problems since the parsers correctly recognized that envidia (jealousy/envy) should be used as the predicate, regardless of its POS.", "Conflational.", "This divergence happens when verbs expressed in a language with a single word can be expressed with more words in another language.", "Two subtypes are distinguished: manner and light verb.", "Manner refers to a manner verb that is mapped to a motion verb plus a manner-bearing word.", "For example, We will answer is translated into the Italian sentence Noi daremo una risposta (We will give an answer), where to answer is translated as daremo una risposta (will give an answer).", "Figure 4b shows that the Italian parser generates a sensible output for this sentence by creating a single node labeled answer-01 for the expression dare una risposta.", "In a light verb conflational divergence, a verb is mapped to a light verb plus an additional meaning unit, such as when I fear is translated as Io ho paura (I have fear) in Italian: to fear is mapped to the light verb ho (have) plus the noun paura (fear).", "Figure 4e shows that this divergence is also dealt with properly by the Italian parser: ho paura correctly triggers the root fear-01.", "Structural.", "This divergence happens when verb arguments result in different syntactic configurations, for example, due to an additional PP attachment.", "When translating He entered the house with Lui è entrato nella casa (He entered in the house), the Italian translation has an additional in preposition.", "Also this parsed graph, in Figure 4c, is structurally correct.", "The missing node he is due to pronoun-dropping, which is frequent in Italian.", "Head swapping.", "This divergence occurs when the direction of the dependency between two words is inverted.", "For example, I like eating, where like is head of eating, becomes Ich esse gern (I eat likingly) in German, where the dependency is inverted.", "Unlike all other examples, in this case, the German parser does not cope well with this divergence: it is unable to recognize like-01 as the main concept in the sentence, as shown in Figure 4d.", "Thematic.", "Finally, the parse of Figure 4f has to deal with a thematic divergence, which happens when the semantic roles of a predicate are inverted.", "In the sentence I like grapes, translated to Spanish as Me gustan uvas, I is the subject in English while Me is the object in Spanish.", "Even though we note an erroneous reentrant edge between grape and I, the thematic divergence does not create problems: the parser correctly recognizes the :ARG0 relationship between 
like-01 and I and the :ARG1 relationship between like-01 and grape.", "In this case, the edge labels are important, as this type of divergence is concerned with the semantic roles.", "Can AMR be shared across these languages?", "As mentioned in Section 2.2, the MT-based systems are not helpful in answering this question and we instead focus on the projection-based parsers.", "Qualitative analysis showed that the parsers are able to overcome translational divergence and that concept identification must be more accurate in order to provide good parsing results.", "We therefore argue that the suboptimal performance of the parsers in terms of Smatch scores is due to the many sources of noise in the annotation projection approach rather than instability of AMR across languages.", "We provide strong evidence that cross-lingual AMR parsing is indeed feasible and hope that the release of the gold standard test sets will motivate further work in this direction.", "Are silver and full-cycle evaluations reliable?", "We computed the Pearson correlation coefficients for the Smatch scores of Table 1 to determine how well silver and full-cycle correlate with gold evaluation.", "Full-cycle correlates better than silver: the Pearson coefficient is 0.95 for full-cycle and 0.47 for silver.", "Figure 5 shows linear regression lines.", "Unlike silver, full-cycle uses the same dataset as gold evaluation and it does not contain parsing mistakes, which makes it more reliable than silver.", "Interestingly, if we ignore the scores obtained for Chinese, the correlation between silver and gold dramatically increases, perhaps indicating that Europarl is more suitable than the UN corpus for this task: the Pearson coefficient becomes 0.97 for full-cycle and 0.87 for silver.", "A good proxy for gold evaluation should rank different systems similarly.", "We hence computed the Kendall-tau score (Kendall, 1945), a measure of similarity between permutations, of the rankings extracted from Table 1. The results further confirm that full-cycle approximates gold better than silver does: the score is 0.40 for silver and 0.82 for full-cycle.", "Full-cycle introduces additional noise but it is not as expensive as gold and is more reliable than silver.", "AMR parsing for languages other than English has made only a few steps forward.", "In previous work (Li et al., 2016; Xue et al., 2014; Bojar, 2014), nodes of the target graph were labeled with either English words or with words in the target language.", "We instead use the AMR annotation used for English for the target language as well, without translating any word.", "To the best of our knowledge, the only previous work that attempts to automatically parse AMR graphs for non-English sentences is by Vanderwende et al. 
(2015).", "Sentences in several languages (French, German, Spanish and Japanese) are parsed into a logical representation, which is then converted to AMR using a small set of rules.", "A comparison with this work is difficult, as the authors do not report results for the parsers (due to the lack of an annotated corpus) or release their code.", "Besides AMR, other semantic parsing frameworks for non-English languages have been investigated (Hoffman, 1992; Cinkova et al., 2009; Ges-mundo et al., 2009; Evang and Bos, 2016).", "Evang and Bos (2016) is the most closely related to our 1152 envy I :domain", "work as it uses a projection mechanism similar to ours for CCG.", "A crucial difference is that, in order to project CCG parse trees to the target languages, they only make use of literal translation.", "Previous work has also focused on assessing the stability across languages of semantic frameworks such as AMR (Xue et al., 2014; Bojar, 2014), UCCA (Sulem et al., 2015) and Propbank (Van der Plas et al., 2010).", "Cross-lingual techniques can cope with the lack of labeled data on languages when this data is available in at least one language, usually English.", "The annotation projection method, which we follow in this work, is one way to address this problem.", "It was introduced for POS tagging, base noun phrase bracketing, NER tagging, and inflectional morphological analysis (Yarowsky et al., 2001) but it has also been used for dependency parsing (Hwa et al., 2005), role labeling (Pado and Lap-ata, 2009; Akbik et al., 2015) and semantic parsing (Evang and Bos, 2016).", "Another common thread of cross-lingual work is model transfer, where parameters are shared across languages (Zeman and Resnik, 2008; Cohen and Smith, 2009; Cohen et al., 2011; McDonald et al., 2011; Sgaard, 2011).", "We introduced the problem of parsing AMR structures, annotated for English, from sentences written in other languages as a way to test the cross-lingual properties of AMR.", "We provided evidence that AMR can be indeed shared across the lan-1153 guages tested and that it is possible to overcome translational divergences.", "We further proposed a novel way to evaluate the target parsers that does not require manual annotations of the target language.", "The full-cycle procedure is not limited to AMR parsing and could be used for other cross-lingual problems in NLP.", "The results of the projection-based AMR parsers indicate that there is a vast room for improvements, especially in terms of generating better alignments.", "We encourage further work in this direction by releasing professional translations of the AMR test set into four languages.", "The authors would like to thank the three anonymous reviewers and Sameer Bansal, Gozde Gul Sahin, Sorcha Gilroy, Ida Szubert, Esma Balkr, Nikos Papasarantopoulos, Joana Ribeiro, Shashi Narayan, Toms Bergmanis, Clara Vania, Yang Liu and Adam Lopez for their helpful comments.", "This research was supported by a grant from Bloomberg and by the H2020 project SUMMA, under grant agreement 688139." ]
[ "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "objective", "abstain", "method", "method", "objective", "result", "abstain", "result", "method", "objective", "method", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "other", "method", "abstain", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "method", "other", "other", "abstain", "method", "objective", "abstain", "abstain", "method", "other", "other" ]
[ "A critical challenge faced by supervised word sense disambiguation (WSD) is the lack of large annotated datasets with sufficient coverage of words in their diversity of senses.", "This inspired recent research on few-shot WSD using meta-learning.", "While such work has successfully applied meta-learning to learn new word senses from very few examples, its performance still lags behind its fully-supervised counterpart.", "Aiming to further close this gap, we propose a model of semantic memory for WSD in a meta-learning setting.", "Semantic memory encapsulates prior experiences seen throughout the lifetime of the model, which aids better generalization in limited data settings.", "Our model is based on hierarchical variational inference and incorporates an adaptive memory update rule via a hypernetwork.", "We show our model advances the state of the art in few-shot WSD, supports effective learning in extremely data scarce (e.g. one-shot) scenarios and produces meaning prototypes that capture similar senses of distinct words.", "Disambiguating word meaning in context is at the heart of any natural language understanding task or application, whether it is performed explicitly or implicitly.", "Traditionally, word sense disambiguation (WSD) has been defined as the task of explicitly labeling word usages in context with sense labels from a pre-defined sense inventory.", "The majority of approaches to WSD rely on (semi-)supervised learning (Yuan et al., 2016; Raganato et al., 2017a,b; Hadiwinoto et al., 2019; Huang et al., 2019; Scarlini et al., 2020; Bevilacqua and Navigli, 2020) and make use of training corpora manually annotated for word senses.", "Typically, these methods require a fairly large number of annotated training examples per word.", "This problem is exacerbated by the dramatic imbalances in sense frequencies, which further increase the need for annotation to capture a diversity of senses and to obtain sufficient training data for rare senses.", "This motivated recent research on few-shot WSD, where the objective of the model is to learn new, previously unseen word senses from only a small number of examples.", "Holla et al. (2020a) presented a meta-learning approach to few-shot WSD, as well as a benchmark for this task.", "Meta-learning makes use of an episodic training regime, where a model is trained on a collection of diverse few-shot tasks and is explicitly optimized to perform well when learning from a small number of examples per task (Snell et al., 2017; Finn et al., 2017; Tri-antafillou et al., 2020).", "Holla et al. (2020a) have shown that meta-learning can be successfully applied to learn new word senses from as little as one example per sense.", "Yet, the overall model performance in settings where data is highly limited (e.g. 
one- or two-shot learning) still lags behind that of fully supervised models.", "In the meantime, machine learning research demonstrated the advantages of a memory component for meta-learning in limited data settings (Santoro et al., 2016a; Munkhdalai and Yu, 2017a; Munkhdalai et al., 2018; Zhen et al., 2020).", "The memory stores general knowledge acquired in learning related tasks, which facilitates the acquisition of new concepts and recognition of previously unseen classes with limited labeled data (Zhen et al., 2020).", "Inspired by these advances, we introduce the first model of semantic memory for WSD in a meta-learning setting.", "In meta-learning, prototypes are embeddings around which other data points of the same class are clustered (Snell et al., 2017).", "Our semantic memory stores prototypical representations of word senses seen during training, generalizing over the contexts in which they are used.", "This rich contextual information aids in learning new senses of previously unseen words that appear in similar contexts, from very few examples.", "The design of our prototypical representation of word sense takes inspiration from prototype theory (Rosch, 1975), an established account of category representation in psychology.", "It stipulates that semantic categories are formed around prototypical members, new members are added based on resemblance to the prototypes and category membership is a matter of degree.", "In line with this account, our models learn prototypical representations of word senses from their linguistic context.", "To do this, we employ a neural architecture for learning probabilistic class prototypes: variational prototype networks, augmented with a variational semantic memory (VSM) component (Zhen et al., 2020).", "Unlike deterministic prototypes in prototypical networks (Snell et al., 2017), we model class prototypes as distributions and perform variational inference of these prototypes in a hierarchical Bayesian framework.", "Unlike deterministic memory access in memory-based meta-learning (Santoro et al., 2016b; Munkhdalai and Yu, 2017a), we access memory by Monte Carlo sampling from a variational distribution.", "Specifically, we first perform variational inference to obtain a latent memory variable and then perform another step of variational inference to obtain the prototype distribution.", "Furthermore, we enhance the memory update of vanilla VSM with a novel adaptive update rule involving a hypernetwork (Ha et al., 2016) that controls the weight of the updates.", "We call our approach α-VSM to denote the adaptive α weight for memory updates.", "We experimentally demonstrate the effectiveness of this approach for few-shot WSD, advancing the state of the art in this task.", "Furthermore, we observe the highest performance gains on word senses with the least training examples, emphasizing the benefits of semantic memory for truly few-shot learning scenarios.", "Our analysis of the meaning prototypes acquired in the memory suggests that they are able to capture related senses of distinct words, demonstrating the generalization capabilities of our memory component.", "We make our code publicly available to facilitate further research (https://github.com/YDU-uva/VSM_WSD).", "Related work: Word sense disambiguation. Knowledge-based approaches to WSD (Lesk, 1986; Agirre et al., 2014; Moro et al., 2014) rely on lexical resources such as WordNet (Miller et al., 1990) and do not require a corpus manually annotated with word senses.", "Alternatively, supervised
learning methods treat WSD as a word-level classification task for ambiguous words and rely on sense-annotated corpora for training.", "Early supervised learning approaches trained classifiers with hand-crafted features (Navigli, 2009; Zhong and Ng, 2010) and word embeddings (Rothe and Schütze, 2015; Iacobacci et al., 2016) as input.", "Raganato et al. (2017a) proposed a benchmark for WSD based on the SemCor corpus (Miller et al., 1994) and found that supervised methods outperform the knowledge-based ones.", "Neural models for supervised WSD include LSTM-based (Hochreiter and Schmidhuber, 1997) classifiers (Kågebäck and Salomonsson, 2016; Melamud et al., 2016; Raganato et al., 2017b), a nearest neighbour classifier with ELMo embeddings (Peters et al., 2018), as well as a classifier based on pretrained BERT representations (Hadiwinoto et al., 2019).", "Recently, hybrid approaches incorporating information from lexical resources into neural architectures have gained traction.", "GlossBERT (Huang et al., 2019) fine-tunes BERT with WordNet sense definitions as additional input.", "EWISE (Kumar et al., 2019) learns continuous sense embeddings as targets, aided by dictionary definitions and lexical knowledge bases.", "Scarlini et al. (2020) present a semi-supervised approach for obtaining sense embeddings with the aid of a lexical knowledge base, enabling WSD with a nearest neighbor algorithm.", "By further exploiting the graph structure of WordNet and integrating it with BERT, EWISER (Bevilacqua and Navigli, 2020) achieves the current state-of-the-art performance on the benchmark by Raganato et al. (2017a):", "an F1 score of 80.1%.", "Unlike few-shot WSD, these works do not fine-tune the models on new words during testing.", "Instead, they train on a training set and evaluate on a test set where words and senses might have been seen during training.", "Meta-learning: Meta-learning, or learning to learn (Schmidhuber, 1987; Bengio et al., 1991; Thrun and Pratt, 1998), is a learning paradigm where a model is trained on a distribution of tasks so as to enable rapid learning on new tasks.", "By solving a large number of different tasks, it aims to leverage the acquired knowledge to learn new, unseen tasks.", "The training set, referred to as the meta-training set, consists of episodes, each corresponding to a distinct task.", "Every episode is further divided into a support set containing just a handful of examples for learning the task, and a query set containing examples for task evaluation.", "In the meta-training phase, for each episode, the model adapts to the task using the support set, and its performance on the task is evaluated on the corresponding query set.", "The initial parameters of the model are then adjusted based on the loss on the query set.", "By repeating the process on several episodes/tasks, the model produces representations that enable rapid adaptation to a new task.", "The test set, referred to as the meta-test set, also consists of episodes with a support and query set.", "The meta-test set corresponds to new tasks that were not seen during meta-training.", "During meta-testing, the meta-trained model is first fine-tuned on a small number of examples in the support set of each meta-test episode and then evaluated on the accompanying query set.", "The average performance on all such query sets measures the few-shot learning ability of the model.", "Metric-based meta-learning methods (Koch et al., 2015; Vinyals et al., 2016; Sung et al., 2018; Snell et al., 2017) learn a
kernel function and make predictions on the query set based on the similarity with the support set examples.", "Model-based methods (Santoro et al., 2016b; Munkhdalai and Yu, 2017a) employ external memory and make predictions based on examples retrieved from the memory.", "Optimization-based methods (Ravi and Larochelle, 2017; Finn et al., 2017; Nichol et al., 2018; Antoniou et al., 2019) directly optimize for generalizability over tasks in their training objective.", "Meta-learning has been applied to a range of tasks in NLP, including machine translation (Gu et al., 2018), relation classification (Obamuyide and Vlachos, 2019), text classification (Yu et al., 2018; Geng et al., 2019), hypernymy detection (Yu et al., 2020), and dialog generation (Qian and Yu, 2019).", "It has also been used to learn across distinct NLP tasks (Dou et al., 2019; Bansal et al., 2019) as well as across different languages (Nooralahzadeh et al., 2020; Li et al., 2020).", "Bansal et al. (2020) show that meta-learning during self-supervised pretraining of language models leads to improved few-shot generalization on downstream tasks.", "Holla et al. (2020a) propose a framework for few-shot word sense disambiguation, where the goal is to disambiguate new words during meta-testing.", "Meta-training consists of episodes formed from multiple words whereas meta-testing has one episode corresponding to each of the test words.", "They show that prototype-based methods, namely prototypical networks (Snell et al., 2017) and first-order ProtoMAML (Triantafillou et al., 2020), obtain promising results, in contrast with model-agnostic meta-learning (MAML) (Finn et al., 2017).", "Memory-based models: Memory mechanisms (Weston et al., 2014; Graves et al., 2014; Krotov and Hopfield, 2016) have recently drawn increasing attention.", "In the memory-augmented neural network (Santoro et al., 2016b), given an input, the memory read and write operations are performed by a controller, using soft attention for reads and a least recently used access module for writes.", "Meta Network (Munkhdalai and Yu, 2017b) uses two memory modules: a key-value memory in combination with slow and fast weights for one-shot learning.", "An external memory was introduced to enhance recurrent neural networks in Munkhdalai et al. (2019), in which memory is conceptualized as an adaptable function and implemented as a deep neural network.", "Semantic memory has recently been introduced by Zhen et al. (2020) for few-shot learning to enhance prototypical representations of objects, where memory recall is cast as a variational inference problem.", "In NLP, Tang et al. (2016) use content and location-based neural attention over external memory for aspect-level sentiment classification.", "Das et al. (2017) use key-value memory for question answering on knowledge bases.", "Mem2Seq (Madotto et al., 2018) is an architecture for task-oriented dialog that combines attention-based memory with pointer networks (Vinyals et al., 2015).", "Geng et al.
(2020) propose Dynamic Memory Induction Networks for few-shot text classification, which utilizes dynamic routing (Sabour et al., 2017) over a static memory module.", "Episodic memory has been used in lifelong learning on language tasks, as a means to perform experience replay (d'Autume et al., 2019; Han et al., 2020; Holla et al., 2020b).", "We treat WSD as a word-level classification problem where ambiguous words are to be classified into their senses given the context.", "In traditional WSD, the goal is to generalize to new contexts of word-sense pairs.", "Specifically, the test set consists of word-sense pairs that were seen during training.", "On the other hand, in few-shot WSD, the goal is to generalize to new words and senses altogether.", "The meta-testing phase involves further adapting the models (on the small support set) to new words that were not seen during training and evaluates them on new contexts (using the query set).", "It deviates from the standard N-way, K-shot classification setting in few-shot learning since the words may have a different number of senses and each sense may have a different number of examples (Holla et al., 2020a), making it a more realistic few-shot learning setup (Triantafillou et al., 2020).", "Dataset: We use the few-shot WSD benchmark provided by Holla et al. (2020a).", "It is based on the SemCor corpus (Miller et al., 1994), annotated with senses from the New Oxford American Dictionary by Yuan et al. (2016).", "The dataset consists of words grouped into meta-training, meta-validation and meta-test sets.", "The meta-test set consists of new words that were not part of the meta-training and meta-validation sets.", "There are four setups varying in the number of sentences in the support set, |S| = 4, 8, 16, 32.", "|S| = 4 corresponds to an extreme few-shot learning scenario for most words, whereas |S| = 32 comes closer to the number of sentences per word encountered in standard WSD setups.", "For |S| = 4, 8, 16, 32, the number of unique words in the meta-training / meta-validation / meta-test sets is 985/166/270, 985/163/259, 799/146/197 and 580/85/129 respectively.", "We use the publicly available standard dataset splits (https://github.com/Nithin-Holla/MetaWSD).", "Episodes: The meta-training episodes were created by first sampling a set of words and a fixed number of senses per word, followed by sampling example sentences for these word-sense pairs.", "This strategy allows for a combinatorially large number of episodes.", "Every meta-training episode has |S| sentences in both the support and query sets, and corresponds to the distinct task of disambiguating between the sampled word-sense pairs.", "The total number of meta-training episodes is 10,000.", "In the meta-validation and meta-test sets, each episode corresponds to the task of disambiguating a single, previously unseen word between all its senses.", "For every meta-test episode, the model is fine-tuned on a few examples in the support set and its generalizability is evaluated on the query set.", "In contrast to the meta-training episodes, the meta-test episodes reflect a natural distribution of senses in the corpus, including class imbalance, providing a realistic evaluation setting.", "We experiment with the same model architectures as Holla et al.
(2020a).", "The model f , with parameters , takes words x i as input and produces a per-word representation vector f ( x i ) for i = 1 , ..., L where L is the length of the sentence.", "Sense predictions are only made for ambiguous words using the corresponding word representation.", "GloVe+GRU Single-layer bi-directional GRU (Cho et al., 2014) network followed by a single linear layer, that takes GloVe embeddings (Pen-nington et al., 2014) as input.", "GloVe embeddings capture all senses of a word.", "We thus evaluate a model's ability to disambiguate from sense-agnostic input.", "ELMo+MLP A multi-layer perception (MLP) network that receives contextualized ELMo embeddings (Peters et al., 2018) as input.", "Their contex-tualised nature makes ELMo embeddings better suited to capture meaning variation than the static ones.", "Since ELMo is not fine-tuned, this model has the lowest number of learnable parameters.", "BERT Pretrained BERTBASE (Devlin et al., 2019) model followed by a linear layer, fully fine-tuned on the task.", "BERT underlies state-of-the-art approaches to WSD.", "Our few-shot learning approach builds upon prototypical networks (Snell et al., 2017), which is widely used for few-shot image classification and has been shown to be successful in WSD (Holla et al., 2020a).", "It computes a prototype z k = 1 K (cid:80) k f ( x k ) of each word sense (where K is the number of examples for each word sense) through an embedding function f , which is realized as the aforementioned architectures.", "It computes a distribution over classes for a query sample x given a distance function d ( , ) as the softmax over its distances to the prototypes in the embedding space: p ( y i = k | x ) = exp( d ( f ( x ) , z k )) (cid:80) k (cid:48) exp( d ( f ( x ) , z k (cid:48) )) (1) However, the resulting prototypes may not be sufficiently representative of word senses as semantic categories when using a single deterministic vector, computed as the average of only a few examples.", "Such representations lack expressiveness and may not encompass sufficient intra-class variance, that is needed to distinguish between different fine-grained word senses.", "Moreover, large uncertainty arises in the single prototype due to the small number of samples.", "Variational prototype network (Zhen et al., 2020) (VPN) is a powerful model for learning latent representations from small amounts of data, where the prototype z of each class is treated as a distribution.", "Given a task with a support set S and query set Q , the objective of VPN takes the following form: LVPN = 1 | Q | | Q | (cid:88) i =1 (cid:104) 1 L z L z (cid:88) l z =1 log p ( y i | x i , z ( l z ) ) + D KL [ q ( z | S ) || p ( z | x i )] (cid:105) (2) where q ( z | S ) is the variational posterior over z , p ( z | x i ) is the prior, and L z is the number of Monte Carlo samples for z .", "The prior and posterior are assumed to be Gaussian.", "The re-parameterization trick (Kingma and Welling, 2013) is adopted to enable back-propagation with gradient descent, i.e., z ( l z ) = f ( S, (cid:15) ( l z ) ) , (cid:15) ( l z ) N (0 , I ) , f ( , ) = (cid:15) ( l z ) z + z , where the mean z and diagonal covariance z are generated from the posterior inference network with S as input.", "The amortization technique is employed for the implementation of VPN.", "The posterior network takes the mean word representations in the support set S as input and returns the parameters of q ( z | S ) .", "Similarly, the prior network produces the parameters of p ( z | x 
i ) by taking the query word representation x i Q as input.", "The conditional predictive log-likelihood is implemented as a cross-entropy loss.", "In order to leverage the shared common knowledge between different tasks to improve disambiguation in future tasks, we incorporate variational semantic memory (VSM) as in Zhen et al. (2020).", "It consists of two main processes: memory recall , which retrieves relevant information that fits with specific tasks based on the support set of the current task; Figure 1: Computational graph of variational semantic memory for few-shot WSD.", "memory update , which effectively collects new information from the task and gradually consolidates the semantic knowledge in the memory.", "We adopt a similar memory mechanism and introduce an improved update rule for memory consolidation.", "Memory recall The memory recall of VSM aims to choose the related content from the memory, and is accomplished by variational inference.", "It introduces latent memory m as an intermediate stochastic variable, and infers m from the addressed memory M .", "The approximate variational posterior q ( m | M, S ) over the latent memory m is obtained empirically by q ( m | M, S ) = | M | (cid:88) a =1 a p ( m | M a ) , (3) where a = exp (cid:0) g ( M a , S ) (cid:1) (cid:80) i exp (cid:0) g ( M i , S ) (cid:1) (4) g ( ) is the dot product, | M | is the number of memory slots, M a is the memory content at slot a and stores the prototype of samples in each class, and we take the mean representation of samples in S .", "where m ( l m ) is a Monte Carlo sample drawn from the distribution q ( m | M, S ) , and l m is the number of samples.", "By incorporating the latent memory m from Eq.", "(3), we achieve the objective for variational semantic memory as follows: LVSM = | Q | (cid:88) i =1 (cid:104) E q ( z | S, m ) (cid:2) log p ( y i | x i , z ) (cid:3) + z DKL (cid:2) q ( z | S, m ) || p ( z | x i ) (cid:3) + m DKL (cid:2) | M | (cid:88) i i p ( m | M i ) || p ( m | S ) (cid:3)(cid:105) (6) where p ( m | S ) is the introduced prior over m , z and m are the hyperparameters.", "The overall computational graph of VSM is shown in Figure 1. Similarly, the posterior and prior over m are also assumed to be Gaussian and obtained by using amortized inference networks; more details are provided in Appendix A.1.", "Memory update The memory update is to be able to effectively absorb new useful information to enrich memory content.", "VSM employs an update rule as follows: M c M c + (1 ) M c , (7) where M c is the memory content corresponding to class c , M c is obtained using graph attention (Velickovic et al., 2017), and (0 , 1) is a hyperparameter.", "Adaptive memory update Although VSM was shown to be promising for few-shot image classification, it can be seen from the experiments by Zhen et al. 
(2020) that different values of α have considerable influence on the performance.", "α determines the extent to which memory is updated at each iteration.", "In the original VSM, α is treated as a hyperparameter obtained by cross-validation, which is time-consuming and inflexible in dealing with different datasets.", "To address this problem, we propose an adaptive memory update rule by learning α from data using a lightweight hypernetwork (Ha et al., 2016).", "To be more specific, we obtain α by a function f(·) implemented as an MLP with a sigmoid activation function in the output layer.", "The hypernetwork takes M̄_c as input and returns the value of α: α = f(M̄_c). (8) Moreover, to prevent the possibility of endless growth of the memory values, we propose to scale down the memory value whenever ||M_c||_2 > 1.", "This is achieved by scaling as follows: M_c = M_c / max(1, ||M_c||_2). (9) When we update the memory, we feed the newly obtained memory M̄_c into the hypernetwork f(·) and output the adaptive α for the update.", "We provide a more detailed implementation of α-VSM in Appendix A.1.", "Experimental setup: The size of the shared linear layer and memory content of each word sense is 64, 256, and 192 for GloVe+GRU, ELMo+MLP and BERT respectively.", "The activation function of the shared linear layer is tanh for GloVe+GRU and ReLU for the rest.", "The inference networks g(·) for calculating the prototype distribution and for calculating the memory distribution are all three-layer MLPs, with the size of each hidden layer being 64, 256, and 192 for GloVe+GRU, ELMo+MLP and BERT.", "The activation function of their hidden layers is ELU (Clevert et al., 2016), and the output layer does not use any activation function.", "Each batch during meta-training includes 16 tasks.", "The hypernetwork f(·) is also a three-layer MLP, with the size of the hidden state consistent with that of the memory contents.", "The linear layer activation function is ReLU for the hypernetwork.", "For BERT and |S| = {4, 8}, λ_z = 0.001,", "λ_m = 0.0001,", "and the learning rate is 5e-6; for |S| = 16, λ_z = 0.0001,", "λ_m = 0.0001,", "and the learning rate is 1e-6; for |S| = 32, λ_z = 0.001,", "λ_m = 0.0001,", "and the learning rate is 1e-5.", "Hyperparameters for other models are reported in Appendix A.2.", "All the hyperparameters are chosen using the meta-validation set.", "The number of slots in memory is consistent with the number of senses in the meta-training set: 2915 for |S| = 4 and 8; 2452 for |S| = 16; 1937 for |S| = 32.", "The evaluation metric is the word-level macro F1 score, averaged over all episodes in the meta-test set.", "The parameters are optimized using Adam (Kingma and Ba, 2014).", "We compare our methods against several baselines and state-of-the-art approaches.", "The nearest neighbor classifier baseline (NearestNeighbor) predicts a query example's sense as the sense of the support example closest in the word embedding space (ELMo and BERT) in terms of cosine distance.", "The episodic fine-tuning baseline (EF-ProtoNet) is one where only meta-testing is (Table 1: Model performance comparison on the meta-test words using different embedding functions; average macro F1 for |S| = 4 / 8 / 16 / 32. MajoritySenseBaseline: 0.247 / 0.259 / 0.264 / 0.261. GloVe+GRU: NearestNeighbor -; EF-ProtoNet 0.522±0.008 / 0.539±0.009 / 0.538±0.003 / 0.562±0.005; ProtoNet 0.579±0.004 / 0.601±0.003 / 0.633±0.008 / 0.654±0.004; ProtoFOMAML 0.577±0.011 / 0.616±0.005 / 0.626±0.005 / 0.631±0.008; α-VSM (Ours) 0.597±0.005 / 0.631±0.004 / 0.652±0.006 / 0.678±0.007. ELMo+MLP: NearestNeighbor 0.624 / 0.641 / 0.645 / 0.654; EF-ProtoNet 0.609±0.008 / 0.635±0.004 / 0.661±0.004 / 0.683±0.003; ProtoNet 0.656±0.006 / 0.688±0.004 / 0.709±0.006 / 0.731±0.006; ProtoFOMAML 0.670±0.005 / 0.700±0.004 / 0.724±0.003 / 0.737±0.007; α-VSM (Ours) 0.679±0.006 / 0.709±0.005 / 0.735±0.004 / 0.758±0.005. BERT: NearestNeighbor 0.681 / 0.704 / 0.716 / 0.741; EF-ProtoNet 0.594±0.008 / 0.655±0.004 / 0.682±0.005 / 0.721±0.009; ProtoNet 0.696±0.011 / 0.750±0.008 / 0.755±0.002 / 0.766±0.003; ProtoFOMAML 0.719±0.005 / 0.756±0.007 / 0.744±0.007 / 0.761±0.005; α-VSM (Ours) 0.728±0.012 / 0.773±0.005 / 0.776±0.003 / 0.788±0.003.)", "performed, starting from a randomly initialized model.", "Prototypical network (ProtoNet) and ProtoFOMAML achieve the highest few-shot WSD performance to date on the benchmark of Holla et al. (2020a).", "Results: In Table 1, we show the average macro F1 scores of the models, with their mean and standard deviation obtained over five independent runs.", "Our proposed α-VSM achieves the new state-of-the-art performance on few-shot WSD with all the embedding functions, across all the setups with varying |S|.", "For GloVe+GRU, where the input is sense-agnostic embeddings, our model improves disambiguation compared to ProtoNet by 1.8%", "for |S| = 4 and by 2.4%", "for |S| = 32.", "With contextual embeddings as input, α-VSM with ELMo+MLP also leads to improvements compared to the previous best ProtoFOMAML for all |S|.", "Holla et al. (2020a) obtained state-of-the-art performance with BERT, and α-VSM further advances this, resulting in a gain of", "0.9% to", "2.2%.", "The consistent improvements with different embedding functions and support set sizes suggest that our α-VSM is effective for few-shot WSD for a varying number of shots and senses as well as across model architectures.", "To analyze the contributions of different components in our method, we perform an ablation study by comparing ProtoNet, VPN, VSM and α-VSM and present the macro F1 scores in Table 2.", "Role of variational prototypes: VPN consistently outperforms ProtoNet with all embedding functions (by around 1% F1 score on average).", "The results indicate that the probabilistic prototypes provide more informative representations of word senses compared to deterministic vectors.", "The highest gains were obtained in the case of GloVe+GRU
(1.7% F1 score with |S| = 8), suggesting that probabilistic prototypes are particularly useful for models that rely on static word embeddings, as they capture uncertainty in contextual interpretation.", "Role of variational semantic memory: We show the benefit of VSM by comparing it with VPN.", "VSM consistently surpasses VPN with all three embedding functions.", "According to our analysis, VSM makes the prototypes of different word senses more distinctive and distant from each other.", "The senses in memory provide more context information, enabling larger intra-class variations to be captured, and thus lead to improvements upon VPN.", "Role of adaptive α: To demonstrate the effectiveness of the hypernetwork for adaptive α, we compare α-VSM with VSM where α is tuned by cross-validation.", "It can be seen from Table 2 that there is a consistent improvement over VSM.", "Thus, the learned adaptive α acquires the ability to determine how much of the contents of memory needs to be updated based on the current new memory.", "VSM enables the memory content of different word senses to be more representative by better absorbing information from data with the adaptive update, resulting in improved performance.", "Variation of performance with the number of senses: In order to further probe into the strengths of α-VSM, we analyze the macro F1 scores of the different models averaged over all the words in the meta-test set with a particular number of senses.", "In Figure 2, we show a bar plot of the scores obtained from BERT for |S| = 16.", "For words with a low number of senses, the task corresponds to a higher number of effective shots and vice versa.", "It can be seen that the different models perform roughly the same for words with fewer senses, i.e., 2 to 4.", "VPN is comparable to ProtoNet in its distribution of scores.", "But with semantic memory, VSM improves the performance on words with a higher number of senses.", "α-VSM further boosts the scores for such words on average.", "The same trend is observed for |S| = 8 (see Appendix A.3).", "Therefore, the improvements of α-VSM over ProtoNet come from tasks with fewer shots, indicating that VSM is particularly effective at disambiguation in low-shot scenarios.", "Visualization of prototypes: To study the distinction between the prototype distributions of word senses obtained by α-VSM, VSM and VPN, we visualize them using t-SNE (Van der Maaten and Hinton, 2008).", "Figure 3 shows prototype distributions based on BERT for the word draw.", "Different colored ellipses indicate the distribution of its different senses obtained from the support set.", "Different colored points indicate the representations of the query examples.", "α-VSM makes the prototypes of different word senses of the same word more distinctive and distant from each other, with less overlap, compared to the other models.", "Notably, the representations of query examples are closer to their corresponding prototype distribution for VSM, thereby resulting in improved performance.", "We also visualize the prototype distributions of similar vs.
dissimilar senses of multiple words in Figure 4 (see Appendix A.4 for example sentences).", "The blue ellipse corresponds to the 'set up' sense of launch from the meta-test samples.", "Green and gray ellipses correspond to a similar sense of the words start and establish from the memory.", "We can see that they are close to each other.", "Orange and purple ellipses correspond to other senses of the words start and establish from the memory, and they are well separated.", "For a given query word, our model is thus able to retrieve related senses from the memory and exploit them to make its word sense distribution more representative and distinctive.", "In this paper, we presented a model of variational semantic memory for few-shot WSD.", "We use a variational prototype network to model the prototype of each word sense as a distribution.", "To leverage the shared common knowledge between tasks, we incorporate semantic memory into the probabilistic model of prototypes in a hierarchical Bayesian framework.", "VSM is able to acquire long-term, general knowledge that enables learning new senses from very few examples.", "Furthermore, we propose adaptive α-VSM, which learns an adaptive memory update rule from data using a lightweight hypernetwork.", "The consistent new state-of-the-art performance with three different embedding functions shows the benefit of our model in boosting few-shot WSD.", "Since meaning disambiguation is central to many natural language understanding tasks, models based on semantic memory are a promising direction in NLP, more generally.", "Future work might investigate the role of memory in modeling meaning variation across domains and languages, as well as in tasks that integrate knowledge at different levels of linguistic hierarchy." ]
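The deterministic prototype computation and softmax-over-negative-distances classifier of Eq. (1) in the record above, which the variational model builds on, can be sketched compactly in PyTorch. This is a hedged illustration rather than the authors' released code: the function name, tensor shapes, and the choice of squared Euclidean distance are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def prototype_log_probs(support_emb, support_labels, query_emb, num_senses):
    # z_k: mean embedding of the support examples of sense k (the class prototype).
    prototypes = torch.stack([
        support_emb[support_labels == k].mean(dim=0) for k in range(num_senses)
    ])
    # d(f(x), z_k): squared Euclidean distance from each query representation to
    # each prototype; softmax over negative distances gives p(y_i = k | x), Eq. (1).
    dists = torch.cdist(query_emb, prototypes).pow(2)
    return F.log_softmax(-dists, dim=-1)

# Tiny usage example with random stand-in embeddings (dimension 8, 3 senses):
support = torch.randn(12, 8)
labels = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2])
queries = torch.randn(4, 8)
print(prototype_log_probs(support, labels, queries, num_senses=3).shape)  # torch.Size([4, 3])
```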
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "objective", "abstain", "abstain", "objective", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "method", "method", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain" ]
[ "Peer-review plays a critical role in the scientific writing and publication ecosystem.", "To assess the efficiency and efficacy of the reviewing process, one essential element is to understand and evaluate the reviews themselves.", "In this work, we study the content and structure of peer reviews under the argument mining framework, through automatically detecting (1) argumentative propositions put forward by reviewers, and (2) their types (e.g., evaluating the work or making suggestions for im-provement).", "We first collect 14 .", "2 K reviews from major machine learning and natural language processing venues.", "400 reviews are annotated with 10 , 386 propositions and corresponding types of EVALUATION , REQUEST , FACT , REFERENCE , or QUOTE .", "We then train state-of-the-art proposition segmentation and classification models on the data to evaluate their utilities and identify new challenges for this new domain, motivating future directions for argument mining.", "Further experiments show that proposition usage varies across venues in amount, type, and topic.", "Peer review is a process where domain experts scrutinize the quality of research work in their field, and it is a cornerstone of scientific discovery (Hettich and Pazzani, 2006; Kelly et al., 2014; Price and Flach, 2017).", "In 2015 alone, approximately 63 .", "4 million hours were spent on peer reviews (Kovanis et al., 2016).", "To maximize their benefit to the scientific community, it is crucial to understand and evaluate the construction and limitation of reviews themselves.", "However, minimal work has been done to analyze reviews' content and structure, let alone to evaluate their qualities.", "As seen in Figure 1, peer reviews resemble arguments: they contain argumentative propositions (henceforth propositions) that convey reReview # 1 ( rating : 5 , # sentences : 11 ) [Quality: This paper demonstrates that convolutional and relational neural networks fail to solve visual relation problems ... ] FACT [This points at important limitations of current neural network architectures where architectures depend mainly on rote memorization. ] EVAL ... [Significance: This work demonstrates failures of relational networks on relational tasks ... ] FACT [Pros: Important message about network limitations. ] EVAL [Cons: Straightforward testing of network performance on specific visual relation tasks. ] EVAL ...", "Review # 2 ( rating : 5 , # sentences : 10 ) [The authors present two autoregressive models ... ] FACT ... [In that context , this work can be viewed as applying deep autoregressive density estimators to policy gradient methods. ] EVAL ... [At least one of those papers ought to be cited. ] REQ [It also seems like a simple, obvious baseline is missing from their experiments ... ] EVAL ... [The method could even be made to capture dependencies between different actions by adding a latent probabilistic layer ... ] EVAL ... 
[A direct comparison against one of the related methods in the discussion section would help ] REQ ...", "viewers' interpretation and evaluation of the research.", "Constructive reviews, e.g., review #2 , often contain in-depth analysis as well as concrete suggestions.", "As a result, automatically identifying propositions and their types would be useful to understand the composition of peer reviews.", "Therefore, we propose an argument mining-based approach to understand the content and structure of peer reviews .", "Argument mining studies the automatic detection of argumentative components and structure within discourse (Peld-szus and Stede, 2013).", "Specifically, argument types (e.g. evidence and reasoning) and their arrangement are indicative of argument quality (Habernal and Gurevych, 2016; Wachsmuth et al., 2017).", "In this work, we focus on two specific tasks: (1) proposition segmentation detecting elementary argumentative discourse units that are propositions, and (2) proposition classification labeling the propositions according to their types (e.g., evaluation vs. request).", "Since there was no annotated dataset for peer reviews, as part of this study, we first collect 14 .", "2 K reviews from major machine learning (ML) and natural language processing (NLP) venues.", "We create a dataset, AMPERE (Argument Mining for PEer REviews), by annotating 400 reviews with 10 , 386 propositions and labeling each proposition with the type of EVALUATION , REQUEST , FACT , REFERENCE , QUOTE , or NON-ARG .", "1 Significant inter-annotator agreement is achieved for proposition segmentation (Cohen's = 0 . 93 ), with good consensus level for type annotation (Krip-pendorf's U = 0 . 61 ).", "We benchmark our new dataset with state-of-the-art and popular argument mining models to better understand the challenges posed in this new domain.", "We observe a significant drop of performance for proposition segmentation on AMPERE, mainly due to its different argument structure.", "For instance, 25% of the sentences contain more than one proposition, compared to that of 8% for essays (Stab and Gurevych, 2017), motivating new solutions for segmentation and classification.", "We further investigate review structure difference across venues based on proposition usage, and uncover several patterns.", "For instance, ACL reviews tend to contain more propositions than those in ML venues, especially with more requests but fewer facts.", "We further find that reviews with extreme ratings, i.e., strong reject or accept, tend to be shorter and make much fewer requests.", "Moreover, we probe the salient words for different proposition types.", "For example, ACL reviewers ask for more examples when making requests, while ICLR reviews contain more evaluation of network and how models are trained.", "We collect review data from three sources: (1) openreview.net an online peer reviewing platform for ICLR 2017, ICLR 2018, and UAI 2018 2 ; (2) reviews released for accepted papers at NeurIPS from 2013 to 2017; and (3) opted-in reviews for ACL 2017 from Kang et al. 
(2018).", "//xinyuhua.github.io/Resources/naacl19/ .", "2 ICLR reviews are downloaded from the public API: https://github.com/iesl/openreview-py .", "UAI reviews are collected by the OpenReview team.", "EVALUATION : Subjective statements, often containing qualitative judgment.", "Ex: This paper shows nice results on a number of small tasks.", "REQUEST : Statements suggesting a course of action.", "Ex: The authors should compare with the following methods.", "FACT : Objective information of the paper or commonsense knowledge.", "Ex: Existing works on multi-task neural networks typically use hand-tuned weights ... REFERENCE : Citations and URLs.", "Ex: see MuseGAN (Dong et al), MidiNet (Yang et al), etc QUOTE : Quotations from the paper.", "Ex: The author wrote where r is lower bound of feature norm'.", "NON-ARG : Non-argumentative statements.", "Ex: Aha, now I understand.", "mining corpora, including # of annotated propositions.", "In total, 14 , 202 reviews are collected (ICLR: 4 , 057 ; UAI: 718 ; ACL: 275 ; and NeurIPS: 9 , 152 ).", "All venues except NeurIPS have paper rating scores attached to the reviews.", "Annotation Process.", "For proposition segmentation, we adopt the concepts from Park et al. (2015) and instruct the annotators to identify elementary argumentative discourse units on sentence or sub-sentence level, based on their discourse functions and topics.", "They then classify the propositions into five types with an additional non-argument category, as explained in Table 1.", "400 ICLR 2018 reviews are sampled for annotation, with similar distributions of length and rating to those of the full dataset.", "Two annotators who are fluent English speakers first label the 400 reviews with proposition segments and types, and a third annotator then resolves disagreements.", "We calculate the inter-annotator agreement between the two annotators.", "A Cohen's of 0 .", "93 is achieved for proposition segmentation, with each review treated as a BIO sequence.", "For classification, unitized Krippendorf's U (Krippendorff, 2004), which considers disagreements among segmentation, is calculated per review and then averaged over all samples, and the value is 0 .", "61 .", "Among the exactly matched proposition segments, we report a Cohen's of 0 .", "64 .", "Statistics.", "Table 2 shows comparison between AMPERE and some other argument mining datasets of different genres.", "We also show the number of propositions in each category in Table 3.", "The most frequent types are evaluation ( 38 . 3% ) and fact ( 36 . 5% ).", "We benchmark AMPERE with popular and state-of-the-art models for proposition segmentation and classification.", "Both tasks can be treated as sequence tagging problems with the setup similar to Schulz et al. (2018).", "For experiments, 320 reviews ( 7 , 999 propositions) are used for training and 80 reviews ( 2 , 387 propositions) are used for testing.", "Following Niculae et al. 
(2017), 5-fold cross-validation on the training set is used for hyperparameter tuning.", "To improve the accuracy of tokenization, we manually replace mathematical formulas, variables, URL links, and formatted citations with special tokens such as <EQN>, <VAR>, <URL>, and <CIT>.", "Parameters, lexicons, and features used for the models are described in the supplementary material.", "We consider three baselines.", "FullSent: treating each sentence as a proposition.", "PDTB-conn: further segmenting sentences when any discourse connective (collected from the Penn Discourse Treebank (Prasad et al., 2007)) is observed.", "RST-parser: segmenting discourse units by the RST parser in Feng and Hirst (2014).", "For learning-based methods, we start with a Conditional Random Field (CRF) (Lafferty et al., 2001) with features proposed by Stab and Gurevych (2017, Table 7), and BiLSTM-CRF, a bidirectional Long Short-Term Memory network (BiLSTM) connected to a CRF output layer and further enhanced with ELMo representations (Peters et al., 2018).", "We adopt the BIO scheme for sequential tagging (Ramshaw and Marcus, 1999), with O corresponding to NON-ARG.", "Finally, we consider jointly modeling segmentation and classification by appending the proposition types to BI tags, e.g., B-fact, with CRF (CRF-joint) and BiLSTM-CRF (BiLSTM-CRF-joint).", "Table 4 shows that BiLSTM-CRF outperforms other methods in F1.", "More importantly, the performance", "on reviews is lower than those reached on existing datasets, e.g., an F1 of 86.7", "is obtained by CRF for essays (Stab and Gurevych, 2017).", "This is mostly due to essays' better structure, with frequent use of discourse connectives.", "With given proposition segments, predicted or gold-standard, we experiment with proposition-level models to label proposition types.", "We utilize two baselines.", "Majority simply assigns the majority type in the training set.", "PropLexicon matches the following lexicons for different proposition types in order, and returns the first corresponding type with a match; if no lexicon is matched, the proposition is labeled as NON-ARG. REFERENCE: <URL>, <CIT>; QUOTE: quotation marks and the apostrophe ('); REQUEST: should, would be nice, why, please, would like to, need; EVALUATION: highly, very, unclear, clear, interesting, novel, well, important, similar, clearly, quite, good; FACT: author, authors, propose, present, method, parameters, example, dataset, same, incorrect, correct", "For supervised models, we employ a linear SVM with a squared hinge loss and group Lasso regularizer (Yuan and Lin, 2006).", "It is trained with the top 500 features selected from Table 9 in (Stab and Gurevych, 2017) by a χ² test.", "We also train a convolutional neural network (CNN) proposed by Kim (2014), with the same setup and pre-trained word embeddings from word2vec (Mikolov et al., 2013).", "Finally, results by joint models of CRF and BiLSTM-CRF are also reported.", "F1 scores for all propositions and each type are reported in Table 5.", "A prediction is correct when both segment and type are matched with the true labels.", "CNN performs better for types with significantly more training samples, i.e., evaluation and fact, indicating the effect of data size on neural models' performance.", "Joint models (CRF-joint and BiLSTM-CRF-joint) yield the best F1 scores for all categories when gold-standard segmentation is unavailable.", "Here we leverage the BiLSTM-CRF-joint model trained on the annotated AMPERE data to identify propositions and their types in unlabeled
reviews from the four venues (ICLR, UAI, ACL, and NeurIPS), to understand the content and structure of peer reviews at a larger scale.", "Proposition Usage by Venue and Rating.", "Figure 2 shows the average number of propositions per review, grouped by venue and rating.", "Scores in 1-10 are scaled to 1-5 by ⌈x/2⌉, with 1 as strong reject and 5 as strong accept.", "ACL and NeurIPS have significantly more propositions than ICLR and UAI.", "Ratings, which reflect a reviewer's judgment of paper quality, also affect proposition usage.", "We find that reviews with extreme ratings, i.e., 1 and 5, tend to have fewer propositions.", "We further study the distribution of proposition type in each venue.", "As observed in Figure 3, ACL (Figure 3: Distribution of proposition type per venue.)", "reviews contain more requests but fewer facts than other venues.", "Specifically, we find that 94.6%", "of ACL reviews have at least one REQUEST proposition, compared to 81.5%", "for ICLR and 84.7%", "for UAI.", "We also show the proposition type distribution based on ratings in Figure 4.", "Reviews with the highest rating tend to use fewer evaluation and reference propositions, while reviews with ratings of 3 to 4 (borderline or weak accept) contain more requests.", "We further observe a sharp decrease of QUOTE usage in rating group 4, and a surge of NON-ARG for rating group 5, while FACT remains consistent across rating ranges.", "Proposition Structure.", "Argumentative structure, which is usually studied as support and attack relations, reveals how propositions are organized into coherent text.", "According to Park and Cardie (2018), 75% of support relations happen between adjacent propositions in user comments.", "We thus plot the proposition transition probability matrix in Figure 5, to show the argument structure in AMPERE.", "The high probabilities along the diagonal line imply that propositions of the same type are often constructed consecutively, with the exception of quote, which is more likely to be followed by evaluation.", "Proposition Type and Content.", "We also probe the salient words used for each proposition type, and the difference of their usage across venues.", "For each venue, we utilize the log-likelihood ratio test (Lin and Hovy, 2000) to identify the (Table 6, partially recoverable: salient words per proposition type, in the column order EVALUATION, REQUEST, FACT, REFERENCE, QUOTE. All Venues: overall, unclear, not, contribution, seem, interesting; please, could, should, if, why, would, more, suggest; think, each, some, data, useful, written, proposes; <URL>, et, al., conference, paper, proceedings, arxiv; paper, we, our. ICLR: network, general, acceptance, convinced, trained; network, appendix, recommend, because, novelty; training, results, work, then, image; deep, nips, ...)", "representative words in each proposition type compared to other types.", "Table 6 shows both the commonly used salient words across venues and the unique words with top frequencies for each venue (α = 0.001, χ² test).", "For evaluation, all venues tend to focus on clarity and contribution, with ICLR discussing more about network and NeurIPS often mentioning equations.", "ACL reviews then frequently request examples.", "There is a growing interest in understanding the content and assessing the quality of peer reviews.", "Authors' feedback such as satisfaction and helpfulness has been adopted as quality indicators (Latu and Everett, 2000; Hart-Davidson et al., 2010; Xiong and Litman, 2011).", "Nonetheless, they suffer from author subjectivity and are often influenced by acceptance decisions (Weber et al., 2002).", "Evaluation by experts or editors proves to be more reliable and informative (van Rooyen et al., 1999), but requires substantial work and knowledge of the field.", "Shallow linguistic features, e.g., sentiment words, are studied in Bornmann et al. (2012) for analyzing language in peer reviews.", "To the best of our knowledge, our work is the first to understand the content and structure of peer reviews via argument usage.", "Our work is also in line with the growing body of research in argument mining (Teufel et al., 1999; Palau and Moens, 2009).", "Most of the work focuses on arguments in social media posts (Park and Cardie, 2014; Wei et al., 2016; Habernal and Gurevych, 2016), online debate portals or Oxford-style debates (Wachsmuth et al., 2017; Hua and Wang, 2017; Wang et al., 2017), and student essays (Persing and Ng, 2015; Ghosh et al., 2016).", "We study a new domain of peer reviews, and identify new challenges for existing models.", "We study the content and structure of peer reviews under the argument mining framework.", "AMPERE, a new dataset of peer reviews, is collected and annotated with propositions and their types.", "We benchmark AMPERE with state-of-the-art argument mining models for proposition segmentation and classification.", "We leverage the classifiers to analyze the proposition usage in reviews across ML and NLP venues, showing interesting patterns in proposition types and content.", "This research is based upon work supported in part by the National Science Foundation through Grants IIS-1566382 and IIS-1813341.", "We are grateful to the OpenReview team, especially Michael Spector, for setting up the API to facilitate review data collection.", "We also thank three anonymous reviewers for their constructive suggestions." ]
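The PropLexicon baseline described in the record above is fully specified by its ordered lexicons, so it can be sketched directly. The snippet below is a hedged reading, not the authors' code: it assumes simple case-insensitive substring matching, and the QUOTE lexicon (whose characters did not survive extraction cleanly) is approximated with plain quotation characters.

```python
# Lexicons in matching priority order, as listed for the PropLexicon baseline.
LEXICONS = [
    ("REFERENCE", ["<URL>", "<CIT>"]),
    ("QUOTE", ['"', "'"]),  # approximation of the garbled quote-character lexicon
    ("REQUEST", ["should", "would be nice", "why", "please", "would like to", "need"]),
    ("EVALUATION", ["highly", "very", "unclear", "clear", "interesting", "novel",
                    "well", "important", "similar", "clearly", "quite", "good"]),
    ("FACT", ["author", "authors", "propose", "present", "method", "parameters",
              "example", "dataset", "same", "incorrect", "correct"]),
]

def prop_lexicon(proposition: str) -> str:
    text = proposition.lower()
    for label, lexicon in LEXICONS:
        if any(cue.lower() in text for cue in lexicon):
            return label  # first lexicon with a match wins
    return "NON-ARG"      # no lexicon matched

print(prop_lexicon("The authors should compare with the following methods."))  # REQUEST
print(prop_lexicon("Aha, now I understand."))  # NON-ARG
```

Note how the priority ordering matters: the first example contains both a REQUEST cue ("should") and FACT cues ("authors"), and the earlier lexicon wins.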
[ "abstain", "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "objective", "abstain", "method", "abstain", "objective", "result", "abstain", "objective", "abstain", "result", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "objective", "abstain", "other", "objective", "method", "abstain", "abstain", "result", "other", "other", "other" ]
[ "When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others.", "Standard practice is to up-sample less resourced languages to increase representation, and the degree of up-sampling has a large effect on the overall performance.", "In this paper, we propose a method that instead automatically learns how to weight training data through a data scorer that is optimized to maximize performance on all test languages.", "Experiments on two sets of languages under both one-to-many and many-to-one MT settings show our method not only consistently outperforms heuristic baselines in terms of average performance, but also offers flexible control over the performance of which languages are optimized.", "1 1 Introduction Multilingual models are trained to process different languages in a single model, and have been applied to a wide variety of NLP tasks such as text classification (Klementiev et al., 2012; Chen et al., 2018a), syntactic analysis (Plank et al., 2016; Ammar et al., 2016), named-entity recognition (Xie et al., 2018; Wu and Dredze, 2019), and machine translation (MT) (Dong et al., 2015; Johnson et al., 2016).", "These models have two particularly concrete advantages over their monolingual counterparts.", "First, deploying a single multilingual model is much more resource efficient than deploying one model for each language under consideration (Arivazhagan et al., 2019; Aharoni et al., 2019).", "Second, multilingual training makes it possible to transfer knowledge from high-resource languages (HRLs) to improve performance on low-resource languages (LRLs) (Zoph et al., 2016; 1 The code is available at https://github.com/ cindyxinyiwang/fairseq/tree/multiDDS . Nguyen and Chiang, 2018; Neubig and Hu, 2018; Wang and Neubig, 2019; Aharoni et al., 2019).", "A common problem with multilingual training is that the data from different languages are both heterogeneous (different languages may exhibit very different properties) and imbalanced (there may be wildly varying amounts of training data for each language).", "Thus, while LRLs will often benefit from transfer from other languages, for languages where sufficient monolingual data exists, performance will often decrease due to interference from the heterogeneous nature of the data.", "This is especially the case for modestly-sized models that are conducive to efficient deployment (Arivazhagan et al., 2019; Conneau et al., 2019).", "To balance the performance on different languages, the standard practice is to heuristically adjust the distribution of data used in training, specifically by over-sampling the training data from LRLs (Johnson et al., 2016; Neubig and Hu, 2018; Arivazhagan et al., 2019; Conneau et al., 2019).", "For example, Arivazhagan et al. (2019) sample training data from different languages based on the dataset size scaled by a heuristically tuned temperature term.", "However, such heuristics are far from per-fect.", "First, Arivazhagan et al. 
"First, Arivazhagan et al. (2019) find that the exact value of this temperature term significantly affects results, and we further show in experiments that the ideal temperature varies significantly from one experimental setting to another.", "Second, this heuristic ignores factors other than data size that affect the interaction between different languages, despite the fact that language similarity has been empirically shown to be important in examinations of cross-lingual transfer learning (Wang and Neubig, 2019; Lin et al., 2019).", "In this paper, we ask the question: is it possible to learn an optimal strategy to automatically balance the usage of data in multilingual model training?", "To this effect, we propose a method that learns a language scorer that can be used throughout training to improve the model performance on all languages.", "Our method is based on the recently proposed approach of Differentiable Data Selection (Wang et al., 2019b, DDS), a general machine learning method for optimizing the weighting of different training examples to improve a pre-determined objective.", "In this work, we take this objective to be the average loss from different languages, and directly optimize the weights of training data from each language to maximize this objective on a multilingual development set.", "This formulation has no heuristic temperatures, and enables the language scorer to consider the interaction between languages.", "Based on this formulation, we propose an algorithm that improves the ability of DDS to optimize multiple model objectives, which we name MultiDDS.", "This is particularly useful in the case where we want to optimize performance on multiple languages simultaneously.", "Specifically, MultiDDS (1) has a more flexible scorer parameterization, (2) is memory efficient when training on multiple languages, and (3) stabilizes the reward signal so that it improves all objectives simultaneously instead of being overwhelmed by a single objective.", "While the proposed methods are model-agnostic and thus potentially applicable to a wide variety of tasks, we specifically test them on the problem of training multilingual NMT systems that can translate many languages in a single model.", "We perform experiments on two sets of languages (one with more similarity between the languages, one with less) and two translation directions (one-to-many and many-to-one, where the one is English).", "Results show that MultiDDS consistently outperforms various baselines in all settings.", "Moreover, we demonstrate that MultiDDS provides a flexible framework that allows the user to define a variety of optimization objectives for multilingual models.", "Monolingual Training Objective: A standard NMT model is trained to translate from a single source language $S$ to a target language $T$.", "The parameters $\theta$ of the model are generally trained by preparing a training dataset $D_{train}$, and defining the empirical distribution of sentence pairs $\langle x, y \rangle$ sampled from $D_{train}$ as $P$.", "We then minimize the empirical risk $J(\theta, P)$, which is the expected value of the loss function $\ell(x, y; \theta)$ over this distribution: $\theta^* = \operatorname{argmin}_{\theta} J(\theta, D_{train})$, where $J(\theta, D_{train}) = \mathbb{E}_{x,y \sim P(X,Y)}[\ell(x, y; \theta)]$ (1). Multilingual Training Formulation: A multilingual NMT model can translate $n$ pairs of languages $\{S_1$-$T_1, S_2$-$T_2, \ldots, S_n$-$T_n\}$, from any source language $S_i$", "to its corresponding target $T_i$.",
"To train such a multilingual model, we have access to $n$ sets of training data $D_{train} = \{D^1_{train}, D^2_{train}, \ldots, D^n_{train}\}$, where $D^i_{train}$ is the training data for language pair $S_i$-$T_i$.", "From these datasets, we can define $P_i$, the distribution of sentences from $S_i$-$T_i$, and consequently also define a risk $J(\theta, P_i)$ for each language following the monolingual objective in Eq.", "1. However, the question now becomes: how do we define an overall training objective given these multiple separate datasets?", "Several different methods to do so have been proposed in the past.", "To discuss all of these different methods in a unified framework, we further define a distribution $P_D$ over the $n$ sets of training data, and define our overall multilingual training objective as $J_{mult}(\theta, P_D, D_{train}) = \mathbb{E}_{i \sim P_D(i)}[J(\theta, D^i_{train})]$ (2).", "In practice, this overall objective can be approximated by selecting a language according to $i \sim P_D(i)$, then calculating gradients with respect to $\theta$ on a batch of data from $D^i_{train}$.", "Evaluation Methods: Another important question is how to evaluate the performance of such multilingual models.", "During training, it is common to use a separate development set for each language, $D_{dev} = \{D^1_{dev}, D^2_{dev}, \ldots, D^n_{dev}\}$, to select the best model.", "Given that the objective of multilingual training is generally to optimize the performance on all languages simultaneously (Arivazhagan et al., 2019; Conneau et al., 2019), we can formalize this objective as minimizing the average of dev risks (footnote 2): $J_{dev}(\theta, D_{dev}) = \frac{1}{n} \sum_{i=1}^{n} J(\theta, D^i_{dev})$ (3).", "(Footnote 2: In reality, it is common to have the loss $\ell$ be a likelihood-based objective but to finally measure another metric such as BLEU score at test time; for simplicity we will assume that these two metrics are correlated.)", "Relation to Heuristic Strategies: This formulation generalizes a variety of existing techniques that define $P_D(i)$ using a heuristic strategy and keep it fixed throughout training.", "Uniform: The simplest strategy sets $P_D(i)$ to a uniform distribution, sampling minibatches from each language with equal frequency (Johnson et al., 2016).", "Proportional: It is also common to sample data in portions equivalent to the size of the corresponding corpora in each language (Johnson et al., 2016; Neubig and Hu, 2018).", "Temperature-based: Finally, because both of the strategies above are extreme (proportional under-weighting LRLs, and uniform causing overfitting by re-sampling sentences from limited-size LRL datasets), it is common to sample according to data size exponentiated by a temperature term $\tau$ (Arivazhagan et al., 2019; Conneau et al., 2019): $P_D(i) = q_i^{1/\tau} / \sum_{k=1}^{n} q_k^{1/\tau}$, where $q_i = |D^i_{train}| / \sum_{k=1}^{n} |D^k_{train}|$.", "(4) When $\tau = 1$ or $\tau = \infty$ this is equivalent to proportional or uniform sampling respectively, and when a number in between is chosen it becomes possible to balance between the two strategies.", "As noted in the introduction, these heuristic strategies have several drawbacks regarding sensitivity to the temperature hyperparameter, and a lack of consideration of similarity between the languages.", "In the following sections we will propose methods to resolve these issues.",
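As a concrete illustration of the temperature-based distribution in Eq. 4 and of the sampling-based approximation of the multilingual objective, here is a minimal Python sketch. It is not from the paper; the function name and the toy corpus sizes are our own illustrative assumptions.

```python
import random

def temperature_distribution(sizes, tau=1.0):
    """Eq. 4: P_D(i) = q_i^(1/tau) / sum_k q_k^(1/tau), with q_i the data fraction."""
    total = sum(sizes)
    q = [s / total for s in sizes]               # proportional data fractions
    unnorm = [qi ** (1.0 / tau) for qi in q]     # temperature scaling
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical corpus sizes for three languages (one LRL, two larger ones).
sizes = [5_000, 50_000, 500_000]
print(temperature_distribution(sizes, tau=1.0))    # proportional sampling
print(temperature_distribution(sizes, tau=5.0))    # a commonly used middle ground
print(temperature_distribution(sizes, tau=1e9))    # approaches uniform sampling

# One step of the approximation of J_mult: draw a language i ~ P_D(i);
# a minibatch from D_train[i] would then be used for a gradient step.
p = temperature_distribution(sizes, tau=5.0)
i = random.choices(range(len(sizes)), weights=p, k=1)[0]
```

Note how tau = 1 recovers proportional sampling while a very large tau approaches the uniform distribution, matching the two extremes discussed above.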
"Now we turn to the question: is there a better way to optimize $P_D(i)$ so that we can achieve our final objective of performing well on a representative development set over all languages, i.e. minimizing $J_{dev}(\theta, D_{dev})$?", "In order to do so, we turn to a recently proposed method of Differentiable Data Selection (Wang et al., 2019b, DDS), a general purpose machine learning method that allows for weighting of training data to improve performance on a separate set of held-out data.", "Specifically, DDS uses a technique called bilevel optimization (Colson et al., 2007): it learns a second set of parameters $\psi$ that modify the training objective used to learn $\theta$, so as to maximize the final objective $J_{dev}(\theta, D_{dev})$.", "Specifically, it proposes to learn a data scorer $P(x, y; \psi)$, parameterized by $\psi$, such that training using data sampled from the scorer optimizes the model performance on the dev set.", "To take the example of learning an NMT system to translate a single language pair $i$ using DDS, the general objective in Eq.", "1 could be rewritten as $\psi^* = \operatorname{argmin}_{\psi} J(\theta^*(\psi), D^i_{dev})$, where $\theta^*(\psi) = \operatorname{argmin}_{\theta} \mathbb{E}_{x,y \sim P(x,y;\psi)}[\ell(x, y; \theta)]$.", "(5) DDS optimizes $\theta$ and $\psi$ iteratively throughout the training process.", "Given a fixed $\psi$, the update rule for $\theta$ is simply $\theta_t \leftarrow \theta_{t-1} - \nabla_{\theta}\, \mathbb{E}_{x,y \sim P(x,y;\psi)}[\ell(x, y; \theta)]$. To update the data scorer, DDS uses reinforcement learning with a reward function that approximates the effect of the training data on the model's dev performance: $R(x, y; \theta_t) \approx \nabla_{\theta} J(\theta_t, D^i_{dev})^{\top} \nabla_{\theta} \ell(x, y; \theta_{t-1}) \approx \cos(\nabla_{\theta} J(\theta_t, D^i_{dev}), \nabla_{\theta} \ell(x, y; \theta_{t-1}))$ (6)", "where $\cos(\cdot, \cdot)$ is the cosine similarity of two vectors.", "This reward can be derived by directly differentiating $J(\theta(\psi), D^i_{dev})$ with respect to $\psi$; intuitively, it indicates that the data scorer should be updated to up-weight the data points whose gradients are similar to the gradient on the dev data.", "According to the REINFORCE algorithm (Williams, 1992), the update rule for the data scorer then becomes $\psi_{t+1} \leftarrow \psi_t + R(x, y; \theta_t)\, \nabla_{\psi} \log P(x, y; \psi)$ (7). 4 DDS for Multilingual Training: In this section, we use the previously described DDS method to derive a new framework that, instead of relying on fixed heuristics, adaptively optimizes usage of multilingual data for the best model performance on multiple languages.",
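A minimal sketch of the single-pair DDS updates in Eqs. 6 and 7, assuming the gradients have already been flattened into vectors; numpy stands in for real backpropagation, and the helper names are ours, not the paper's.

```python
import numpy as np

def cos_sim(u, v, eps=1e-8):
    # cos(., .) as used in Eq. 6
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def dds_reward(dev_grad, train_grad):
    # Eq. 6: the reward approximates the effect of a training example on dev
    # performance via the alignment of its gradient with the dev gradient.
    return cos_sim(dev_grad, train_grad)

def reinforce_step(psi, grad_log_prob, reward, lr=0.01):
    # Eq. 7: psi_{t+1} <- psi_t + R(x, y; theta_t) * grad_psi log P(x, y; psi)
    return psi + lr * reward * grad_log_prob

# Toy example: random vectors stand in for real backpropagated gradients.
rng = np.random.default_rng(0)
dev_grad = rng.normal(size=1000)
train_grad = dev_grad + 0.5 * rng.normal(size=1000)   # partially aligned example
psi = np.zeros(4)
psi = reinforce_step(psi, rng.normal(size=4), dds_reward(dev_grad, train_grad))
```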
"We illustrate the overall workflow in Fig.", "1. First, we note two desiderata for our multilingual training method: 1) generality: the method should be flexible enough so that it can be utilized universally for different multilingual tasks and settings (such as different translation directions for NMT).", "2) scalability: the method should be stable and efficient if one wishes to scale up the number of languages that a multilingual model supports.", "Based on these two properties, we introduce MultiDDS, an extension of the DDS method tailored for multilingual training.", "Method: MultiDDS directly parameterizes the standard dataset sampling distribution for multilingual training with $\psi$: $P_D(i; \psi) = e^{\psi_i} / \sum_{k=1}^{n} e^{\psi_k}$ (8), and optimizes $\psi$ to minimize the dev loss.", "Notably, unlike standard DDS we make the design decision to weight training datasets rather than score each training example $\langle x, y \rangle$ directly, as it is more efficient and also likely easier to learn.", "$\psi^* = \operatorname{argmin}_{\psi} J_{dev}(\theta^*(\psi), D_{dev})$, where $\theta^*(\psi) = \operatorname{argmin}_{\theta} \mathbb{E}_{i \sim P_D(i;\psi)}[J(\theta, D^i_{train})]$ (9)", "In other words, while the general DDS framework evaluates the model performance on a single dev set and optimizes the weighting of each training example, our multilingual training objective evaluates the performance over an aggregation of $n$ dev sets and optimizes the weighting of the $n$ training sets.", "The reward signal for updating $\psi_t$ is $R(i; \theta_t) \approx \cos(\nabla_{\theta} J_{dev}(\theta_t, D_{dev}), \nabla_{\theta} J(\theta_{t-1}, D^i_{train})) = \cos(\frac{1}{n} \sum_{k=1}^{n} \nabla_{\theta} J(\theta_t, D^k_{dev}), \nabla_{\theta} J(\theta_{t-1}, D^i_{train}))$ (10), where $J_{dev}(\cdot)$ defines the combination of the $n$ dev sets, and we simply plug in its definition from Eq.", "3.", "Intuitively, Eq.", "10 implies that we should favor the training language $i$ if its gradient aligns with the gradient of the aggregated dev risk of all languages.", "Implementing the Scorer Update: The pseudo-code for the training algorithm using MultiDDS can be found in Algorithm 1.", "Notably, we do not update the data scorer on every training step, because it is too computationally expensive for NMT training (Wang et al., 2019b).", "Instead, after training the multilingual model for a certain number of steps, we update the scorer for all languages.", "This implementation is not only efficient, but also allows us to [Figure 1: An illustration of the MultiDDS algorithm.]", "Solid lines represent updates for $\theta$, and dashed lines represent updates for $\psi$.", "The scorer defines the distribution over the $n$ training languages, from which training data is sampled to train the model.", "The scorer is updated to favor the datasets whose gradients are similar to the gradient of the aggregated dev sets.", "re-estimate more frequently the effect of languages that have low probability of being sampled.", "In order to do so, it is necessary to calculate the effect of each training language on the current model, namely $R(i; \theta_t)$.", "We estimate this value by sampling a batch of data from each $D^i_{train}$ to get the training gradient for $\theta_t$, and use this to calculate the reward for this language.", "This process is detailed in step (3) of Algorithm 1.",
"Unlike the algorithm in DDS, which requires storing $n$ model gradients (footnote 3), this approximation does not require extra memory even if $n$ is large, which is important given recent efforts to scale multilingual training to 100+ (Arivazhagan et al., 2019; Aharoni et al., 2019) or even 1000+ languages (Ostling and Tiedemann, 2017; Malaviya et al., 2017).", "In our initial attempts to scale DDS to highly multilingual training, we found that one challenge was that the reward for updating the scorer became unstable.", "This is because the gradient of a multilingual dev set is less consistent and of higher variance than that of a monolingual dev set, which influences the fidelity of the data scorer reward (footnote 4).", "(Footnote 3: The NMT algorithm in Wang et al. (2019b) estimates the reward by storing the moving average of $n$ training gradients, which is not memory efficient; see Line", "7 of Alg.", "2 in Wang et al. (2019b).)", "In the preliminary experiments, our approximation performs as well as the moving-average approximation (see App. A.1).", "Thus, we use our approximation method as the component for MultiDDS for the rest of the experiments.", "(Footnote 4: Suppose the dev set gradient of language $k$ has variance $\mathrm{var}(g^k_{dev}) = \sigma$, and that the dev gradients of each language are independent; continued below.) Algorithm 1: Training with MultiDDS. Input: $D_{train}$; $M$, the amount of data used to train the multilingual model before updating $\psi$. Output: the converged multilingual model $\theta^*$. Initialize $P_D(i; \psi)$ to be proportional to dataset size: $P_D(i; \psi) \leftarrow |D^i_{train}| / \sum_{j=1}^{n} |D^j_{train}|$. While not converged: (1) load training data with $\psi$: set $X, Y \leftarrow \emptyset$ and, while $|X, Y| < M$, sample $i \sim P_D(i; \psi_t)$ and $(x, y) \sim D^i_{train}$, adding $(x, y)$ to $X, Y$; (2) train the NMT model for multiple steps: for each $(x, y)$ in $X, Y$, $\theta \leftarrow \mathrm{GradientUpdate}(\theta, \nabla_{\theta} \ell(x, y; \theta))$; (3) estimate the effect $R(i; \theta)$ of each language: for $i = 1..n$, sample $(x', y') \sim D^i_{train}$, compute $g_{train} \leftarrow \nabla_{\theta} \ell(x', y'; \theta)$ and $\theta' \leftarrow \mathrm{GradientUpdate}(\theta, g_{train})$, then set $g_{dev} \leftarrow 0$ and, for $j = 1..n$, sample $(x_d, y_d) \sim D^j_{dev}$ and accumulate $g_{dev} \leftarrow g_{dev} + \nabla_{\theta'} \ell(x_d, y_d; \theta')$, finally setting $R(i; \theta) \leftarrow \cos(g_{dev}, g_{train})$; (4) optimize $\psi$: $d_{\psi} \leftarrow \sum_{i=1}^{n} R(i; \theta)\, \nabla_{\psi} \log(P_D(i; \psi))$ and $\psi \leftarrow \mathrm{GradientUpdate}(\psi, d_{\psi})$. Thus, instead of using the gradient alignment between the training data and the aggregated loss of the $n$ dev sets as the reward, we propose a second approach: first calculate the gradient-alignment reward between the data and each of the $n$ dev sets, then take the average of these as the final reward.", "(Footnote 4, continued: the per-language dev gradients $\{g^1_{dev}, \ldots, g^n_{dev}\}$ are assumed independent.)", "(Then the sum of the gradients from the $n$ languages has a variance of $\mathrm{var}(\sum_{k=1}^{n} g^k_{dev}) = n\sigma$.)", "This can be expressed mathematically as follows: $R'(i; \theta_t) \approx \frac{1}{n} \sum_{k=1}^{n} \cos(\nabla_{\theta} J(\theta_t, D^k_{dev}), \nabla_{\theta} J(\theta_{t-1}, D^i_{train}))$, in place of $\cos(\frac{1}{n} \sum_{k=1}^{n} \nabla_{\theta} J(\theta_t, D^k_{dev}), \nabla_{\theta} J(\theta_{t-1}, D^i_{train}))$ (11). To implement this, we can simply replace the standard reward calculation in step (3) of Algorithm 1 with the stable reward.", "We name this setting MultiDDS-S.", "In 6.6 we show that this method has less variance than the reward in Eq.", "10.",
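Putting Algorithm 1 and the stabilized reward of Eq. 11 together, a compressed Python sketch of the scorer side of one outer iteration might look as follows. The NMT model and its gradients are stubbed out with random vectors; only the scorer logic (Eq. 8 softmax, per-language reward estimation, REINFORCE update) is spelled out, and all names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, lr_psi = 4, 100, 0.1
psi = np.zeros(n)                          # scorer parameters (logits of Eq. 8)

def p_data(psi):
    e = np.exp(psi - psi.max())            # Eq. 8: P_D(i; psi) = softmax(psi)
    return e / e.sum()

def cos_sim(u, v, eps=1e-8):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

grad_train = lambda i: rng.normal(size=dim)   # stub: grad of a batch from D_train[i]
grad_dev = lambda j: rng.normal(size=dim)     # stub: grad of a batch from D_dev[j]

for outer_step in range(20):
    probs = p_data(psi)
    # Steps (1)-(2): sample languages i ~ P_D(i; psi) and train theta on M
    # examples (omitted; this is where the NMT model would be updated).
    # Step (3) with the stabilized reward R'(i) of Eq. 11: average the
    # per-dev-set cosines instead of using the averaged dev gradient (Eq. 10).
    rewards = np.empty(n)
    for i in range(n):
        g_tr = grad_train(i)
        rewards[i] = np.mean([cos_sim(grad_dev(j), g_tr) for j in range(n)])
    # Step (4), REINFORCE: d_psi = sum_i R'(i) * grad_psi log P_D(i; psi),
    # where grad_psi log softmax(psi)_i = onehot(i) - softmax(psi).
    d_psi = sum(rewards[i] * (np.eye(n)[i] - probs) for i in range(n))
    psi = psi + lr_psi * d_psi
```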
"We use the 58-languages-to-English parallel data from Qi et al. (2018).", "A multilingual NMT model is trained for each of the two sets of language pairs with different levels of language diversity: Related: 4 LRLs (Azerbaijani: aze, Belarusian: bel, Galician: glg, Slovak: slk) and a related HRL for each LRL (Turkish: tur, Russian: rus, Portuguese: por, Czech: ces); Diverse: 8 languages with varying amounts of data, picked without consideration for relatedness (Bosnian: bos, Marathi: mar, Hindi: hin, Macedonian: mkd, Greek: ell, Bulgarian: bul, French: fra, Korean: kor). Statistics of the datasets are in A.3.", "For each set of languages, we test two varieties of translation: 1) many-to-one (M2O): translating 8 languages to English; 2) one-to-many (O2M): translating English into 8 different languages.", "A target language tag is added to the source sentences for the O2M setting (Johnson et al., 2016).", "All translation models use standard transformer models (Vaswani et al., 2017) as implemented in fairseq (Ott et al., 2019) with 6 layers and 4 attention heads.", "All models are trained for 40 epochs.", "We preprocess the data using sentencepiece (Kudo and Richardson, 2018) with a vocabulary size of 8K for each language.", "The complete set of hyperparameters can be found in A.2.", "The model performance is evaluated with BLEU score (Papineni et al., 2002), using sacreBLEU (Post, 2018).", "Baselines: We compare with the three standard heuristic methods explained in 2: 1) Uniform ($\tau = \infty$): datasets are sampled uniformly, so that LRLs are over-sampled to match the size of the HRLs; 2) Temperature: scales the proportional distribution by $\tau = 5$ (following Arivazhagan et al. (2019)) to slightly over-sample the LRLs; 3) Proportional ($\tau = 1$): datasets are sampled proportionally to their size, so that there is no over-sampling of the LRLs.", "Ours: we run MultiDDS with either the standard reward (MultiDDS) or the stabilized reward proposed in Eq.", "11 (MultiDDS-S).", "The scorer for MultiDDS simply maps the ID of each dataset to its corresponding probability (see Eq. 8); the scorer has $N$ parameters for a dataset with $N$ languages. 6.3 Main Results: We first show the average BLEU score over all languages for each translation setting in Tab.", "2. First, comparing the baselines, we can see that there is no consistently strong strategy for setting the sampling ratio, with proportional sampling being best in the M2O setting, but worst in the O2M setting.", "Next, we can see that MultiDDS outperforms the best baseline in three of the four settings and is comparable to proportional sampling in the last M2O-Diverse setting.", "With the stabilized reward, MultiDDS-S consistently delivers better overall performance than the best baseline, and outperforms MultiDDS in three settings.", "From these results, we can conclude that MultiDDS-S provides a stable strategy to train multilingual systems over a variety of settings.", "Next, we look closer at the BLEU score of each language pair for MultiDDS-S and the best baseline.", "The results for all translation settings are in Tab.", "1.
In general, MultiDDS-S outperforms the baseline on more languages.", "In the best case, for the O2M-Related setting, MultiDDS-S brings significant gains for five of the eight languages, without hurting the remaining three.", "The gains for the Related group are larger than for the Diverse group, likely because MultiDDS can take better advantage of language similarities than the baseline methods.", "It is worth noting that MultiDDS does not impose a large training overhead.", "For example, for our M2O system, the standard method needs around 19 hours and MultiDDS needs around 20 hours for convergence.", "The change in training time is not significant because MultiDDS only optimizes a simple distribution over the training datasets.", "Prior works on multilingual models generally focus on improving the average performance of the model on all supported languages (Arivazhagan et al., 2019; Conneau et al., 2019).", "The formulation of MultiDDS reflects this objective by defining the aggregation of the $n$ dev sets using Eq.", "3, which is simply the average of dev risks.", "However, average performance might not be the most desirable objective under all practical usage settings.", "For example, it may be desirable to create a more egalitarian system that performs well on all languages, or a more specialized system that does particularly well on a subset of languages.", "In this section, we examine the possibility of using MultiDDS to control the priorities of the multilingual model by defining different dev set aggregation methods that reflect these priorities.", "To do so, we first train the model for 10 epochs using regular MultiDDS, then switch to a different dev set aggregation method.", "Specifically, we compare MultiDDS with three different priorities: Regular: this is the standard MultiDDS that optimizes all languages throughout training using the average dev risk aggregation in Eq.", "3; Low: a more egalitarian system that optimizes the average of the four languages with the worst dev perplexity, so that MultiDDS can focus on optimizing the low-performing languages; High: a more specialized system that optimizes the four languages with the best dev perplexity, for MultiDDS to focus on optimizing the high-performing languages. We performed experiments with these aggregation methods on the Diverse group, mainly because there is more of a performance trade-off among these languages.",
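The three priorities just listed differ only in how the per-language dev risks are aggregated into a single objective; a minimal sketch of this aggregation (our own, with k = 4 following the description above):

```python
import numpy as np

def aggregate_dev_risk(dev_risks, mode="regular", k=4):
    """Regular: average over all languages (Eq. 3).
    Low: average of the k languages with the worst (highest) dev risk.
    High: average of the k languages with the best (lowest) dev risk."""
    r = np.sort(np.asarray(dev_risks, dtype=float))
    if mode == "regular":
        return r.mean()
    if mode == "low":
        return r[-k:].mean()
    if mode == "high":
        return r[:k].mean()
    raise ValueError(f"unknown mode: {mode}")

# Hypothetical dev risks for the 8 languages of the Diverse group.
risks = [3.1, 5.7, 2.4, 8.9, 4.2, 6.5, 2.0, 7.3]
for mode in ("regular", "low", "high"):
    print(mode, aggregate_dev_risk(risks, mode))
```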
"First, in Tab.", "3 we show the average BLEU over all languages, and find that MultiDDS with different optimization priorities still maintains competitive average performance compared to the baseline.", "More interestingly, in Fig.", "2, we plot the BLEU score difference of High and Low compared to Regular for all 8 languages.", "The languages are ordered on the x-axis from left to right in decreasing perplexity.", "Low generally performs better on the low-performing languages on the left, while High generally achieves the best performance on the high-performing languages on", "Table 1 : BLEU scores of the best baseline and MultiDDS-S for all translation settings.", "MultiDDS-S performs better on more languages.", "For each setting, bold indicates the highest value, and a marker means the gains are statistically significant with p < 0.", "05.", "Table 2 : Average BLEU for the baselines and our methods.", "Bold indicates the highest value.", "Table 3 : Average BLEU of the best baseline and three MultiDDS-S settings for the Diverse group.", "MultiDDS-S always outperforms the baseline.", "the right, with results most consistent in the O2M setting.", "This indicates that MultiDDS is able to prioritize different predefined objectives.", "It is also worth noting that low-performing languages are not always low-resource languages.", "For example, Korean ( kor ) has the largest amount of training data, but its BLEU score is among the lowest.", "This is because it is typologically very different from English and the other training languages.", "Fig. 2 shows that Low is still able to focus on improving kor , which aligns with the predefined objective.", "This fact is not considered in baseline methods that only consider data size when sampling from the training datasets.", "In Fig. 3, we visualize the language distribution learned by MultiDDS throughout the training process.", "Under all settings, MultiDDS gradually increases the usage of LRLs.", "Although initialized", "Figure 2 : The difference between Low and High optimization objectives compared to Regular for the Diverse language group.", "MultiDDS successfully optimizes for different priorities.", "left : M2O; right : O2M.", "with the same distribution for both one-to-many and many-to-one settings, MultiDDS learns to up-sample the LRLs more in the one-to-many setting, likely due to the increased importance of learning language-specific decoders in this setting.", "For the Diverse group, MultiDDS learns to decrease the usage of Korean (kor) the most, probably because it is very different from other languages in the group.", "Next, we study the effect of the stabilized reward proposed in", "2. In Fig.",
"4, we plot the regular reward (used by MultiDDS) and the stable reward (used by MultiDDS-S) throughout training.", "For all settings, the rewards in MultiDDS and MultiDDS-S follow a similar trend, while the stable reward used in MultiDDS-S has consistently less variance.", "MultiDDS-S also results in smaller variance in the final model performance.", "We run MultiDDS and MultiDDS-S with 4 different random seeds, and record the mean and variance of the average BLEU score.", "Tab.", "4 shows results for the Diverse group, which indicate that the model performance achieved using MultiDDS-S has lower", "Figure 3 : Language usage by training step.", "Left : many-to-one; Right : one-to-many; Top : related language group; Bottom : diverse language group.", "Table 4 : Mean and variance of the average BLEU score for the Diverse group.", "The models trained with MultiDDS-S perform better and have less variance.", "variance and a higher mean than MultiDDS.", "Additionally, we compare the learned language distribution of MultiDDS-S and MultiDDS in Fig. 5.", "The learned language distribution in both plots fluctuates similarly, but MultiDDS has more drastic changes than MultiDDS-S.", "This is also likely due to the reward of MultiDDS-S having less variance than that of MultiDDS.", "Our work is related to multilingual training methods in general.", "Multilingual training has a rich history (Schultz and Waibel, 1998; Mimno et al., 2009; Shi et al., 2010; Tackstrom et al., 2013), but has become particularly prominent in recent years due to the ability of neural networks to easily perform multi-task learning (Dong et al., 2015; Plank et al., 2016; Johnson et al., 2016).", "As stated previously, recent results have demonstrated the importance of balancing HRLs and LRLs during multilingual training (Arivazhagan et al., 2019; Conneau et al., 2019), which is largely done with heuristic sampling using a temperature term; MultiDDS provides a more effective and less heuristic method. [Figure 4: Variance of reward. Left: M2O; Right: O2M; Top: Related language group; Bottom: Diverse language group. Figure 5: Language usage for the M2O-Diverse setting. Left: MultiDDS-S; Right: MultiDDS. The two figures follow similar trends while MultiDDS changes more drastically.]", "Wang and Neubig (2019); Lin et al. (2019) choose languages from multilingual data to improve the performance on a particular language, while our work instead aims to train a single model that handles translation between many languages.", "Zaremoodi et al. (2018) and Wang et al. (2018, 2019a) propose improvements to the model architecture to improve multilingual performance, while MultiDDS is model-agnostic and optimizes multilingual data usage.", "Our work is also related to machine learning methods that balance multitask learning (Chen et al., 2018b; Kendall et al., 2018).", "For example, Kendall et al.
(2018) propose to weight the training loss from a multitask model based on the uncertainty of each task.", "Our method focuses on optimizing the multilingual data usage, and is both somewhat orthogonal to and less heuristic than such loss weighting methods.", "Finally, our work is related to meta-learning, which is used in hyperparameter optimization (Baydin et al., 2018), model initialization for fast adaptation (Finn et al., 2017), and data weighting (Ren et al., 2018).", "Notably, Gu et al. (2018) apply meta-learning to learn an NMT model initialization for a set of languages, so that it can be quickly fine-tuned for any language.", "This is different in motivation from our method because it requires an adapted model for each of the languages, while our method aims to optimize a single model to support all languages.", "To our knowledge, our work is the first to apply meta-learning to optimize data usage for multilingual objectives.", "In this paper, we propose MultiDDS, an algorithm that learns a language scorer to optimize multilingual data usage to achieve good performance on many different languages.", "We extend and improve over previous work on DDS (Wang et al., 2019b), with a more efficient algorithmic instantiation tailored for the multilingual training problem and a stable reward to optimize multiple objectives.", "MultiDDS not only outperforms prior methods in terms of overall performance on all languages, but also provides a flexible framework to prioritize different multilingual objectives.", "Notably, MultiDDS is not limited to NMT, and future work may consider applications to other multilingual tasks.", "In addition, there are other conceivable multilingual optimization objectives beyond those we explored in 6.4.", "The first author is supported by a research grant from the Tang Family Foundation.", "This work was supported in part by NSF grant IIS-1812327.", "The authors would like to thank Amazon for providing GPU credits." ]
[ "method", "abstain", "objective", "result", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "method", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "other", "abstain", "method", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "abstain", "other", "method", "method", "other", "objective", "objective", "objective", "abstain", "abstain", "abstain", "objective", "other", "other", "other" ]
[ "We present a corpus of anaphoric information (coreference) crowdsourced through a game-with-a-purpose.", "The corpus, containing annotations for about 108,000 markables, is one of the largest corpora for coreference for English, and one of the largest crowdsourced NLP corpora, but its main feature is the large number of judgments per markable: 20 on average, and over 2.2M in total.", "This characteristic makes the corpus a unique resource for the study of disagreements on anaphoric interpretation.", "A second distinctive feature is its rich annotation scheme, covering singletons, expletives, and split-antecedent plurals.", "Finally, the corpus also comes with labels inferred using a recently proposed probabilistic model of annotation for coreference.", "The labels are of high quality and make it possible to successfully train a state of the art coreference resolver, including training on singletons and non-referring expressions.", "The annotation model can also result in more than one label, or no label, being proposed for a markable, thus serving as a baseline method for automatically identifying ambiguous markables.", "A preliminary analysis of the results is presented.", "A number of datasets for anaphora resolution / coreference now exist (Poesio et al., 2016), including ONTONOTES that has been the de facto standard since the CONLL shared tasks in 2011 and 2012 (Pradhan et al., 2012), and the just introduced and very substantial PRECO corpus (Chen et al., 2018).", "None of these datasets however take into account the research challenging the idea that a gold standard' interpretation can be obtained through adjudication, in particular for anaphora (Poesio and Artstein, 2005b; Wong and Lee, 2013; Aroyo and Welty, 2015).", "Virtually every project devoted to large-scale annotation of discourse or semantic phenomena has reached the conclusion that genuine disagreements are widespread.", "This has long been known for anaphora (Poesio and Artstein, 2005b; Versley, 2008; Recasens et al., 2011) (see also the analysis of disagreements in ONTONOTES in (Pradhan et al., 2012)) and word-senses (Passonneau et al., 2012), but more recent work has provided evidence that disagreements are frequent for virtually every aspect of language interpretation, not just in subjective tasks such as sentiment analysis (Kenyon-Dean et al., 2018), but even in the case of tasks such as part-of-speech tagging (Plank et al., 2014).", "In fact, researchers in the CrowdTruth project view disagreement as positive, arguing that disagreement is signal, not noise (Aroyo and Welty, 2015).", "In this paper we present what to our knowledge is the largest corpus containing alternative anaphoric judgments: 20.6 judgments per markable on average (up to 90 judgments in some cases) for about 108,000 markables.", "We are not aware of any comparable resource for studying disagreement and ambiguity in anaphora or indeed any other area of NLP .", "We present some preliminary analysis in the paper.", "The corpus presented in this paper is also the largest corpus for anaphora / coreference entirely created through crowdsourcing, and one of the largest corpus of coreference information for English in terms of markables.", "So far, only fairly small coreference corpora have been created using crowdsourcing (Chamberlain et al.; Guha et al., 2015).", "The corpus presented here provides annotations for about 108,000 markables, 55% of the number of markables in ONTONOTES .", "Another novelty is that the corpus was created through a quasi' 
"Another novelty is that the corpus was created through a 'quasi' Game-With-A-Purpose (GWAP) (von Ahn, 2006; Lafourcade et al., 2015), Phrase Detectives (Poesio et al., 2013), and is, to our knowledge, the largest GWAP-created corpus for NLP.", "(So far, the success of GWAPs in other areas of science (Clery, 2011; Cooper et al., 2010) has not been replicated in NLP", ".) Finally, the corpus is notable for a richer annotation scheme than the other large coreference corpora.", "Singletons were marked as well as mentions participating in coreference chains (the omission of singletons being one of the main problems with ONTONOTES ).", "Non-referring expressions were also annotated: both expletives (not annotated either in ONTONOTES or PRECO ) and predicative NPs.", "Finally, all types of plurals were annotated, including also split-antecedent plurals as in John met with Mary, and they went to dinner , which again are not annotated either in ONTONOTES or PRECO .", "Turning a crowdsourced corpus into a high-quality dataset suitable to train and evaluate NLP systems requires, however, an aggregation method appropriate to the data and capable of achieving sufficient quality, something that simple majority voting typically cannot guarantee (Dawid and Skene, 1979; Hovy et al., 2013).", "What made it possible to extract such a dataset from the collected judgments was the recent development of a probabilistic method for aggregating coreference annotations called MPA (Paun et al., 2018b).", "MPA extracts silver labels from a coreference annotation and associates them with a probability, allowing for multiple labels in cases of ambiguity.", "As far as we know, ours is the first use of MPA to create a large-scale dataset.", "We show in the paper that MPA can be used to extract from the judgments a high quality coreference dataset that can be used to develop standard coreference resolvers, as well as to investigate disagreements on anaphora.", "Since the two CONLL shared tasks (Pradhan et al., 2012), ONTONOTES has become the dominant resource for anaphora resolution research (Fernandes et al., 2014; Bjorkelund and Kuhn, 2014; Martschat and Strube, 2015; Clark and Manning, 2015, 2016a,b; Lee et al., 2017, 2018).", "ONTONOTES contains documents in three languages, Arabic (300K tokens), Chinese (950K) and English (1.6M), from several genres but predominantly news.", "One frequently discussed limitation of ONTONOTES is the absence of singletons (De Marneffe et al., 2015; Chen et al., 2018), which makes it harder to train models for mention detection (Poesio et al., 2018).", "Another limitation is that expletives are not annotated.", "As a consequence, downstream applications such as machine translation (Guillou and Hardmeier, 2016) that require pronoun interpretation have to adopt various workarounds.", "Because of these two restrictions, ONTONOTES only has 195K markables, and a low markable density (0.12 markables/token).", "A number of smaller corpora provide linguistically richer information (Poesio et al., 2016).", "Examples include ANCORA for Spanish (Recasens and Martí, 2010), TUBA-D/Z for German (Hinrichs et al., 2005), the Prague Dependency Treebank for Czech and English (Nedoluzhko et al., 2009), and ARRAU for English (Uryupina et al., To Appear).", "In ARRAU , for example, singletons and expletives are annotated as well, as are split antecedent plurals, generic coreference, discourse deixis, and bridging references.",
"The ARRAU corpus is relatively small in terms of tokens (350K), but has a higher markable density than ONTONOTES (0.29 markables/token), so it has around 100K markables, half the number of ONTONOTES .", "ARRAU was recently used in the CRAC 2018 shared task (Poesio et al., 2018) to evaluate a number of anaphora resolution tasks.", "The recently introduced PRECO corpus (Chen et al., 2018) is the largest existing coreference corpus, consisting of 35,000 documents for a total of 12.5M tokens and 3.8M markables, half of which are singletons.", "However, the corpus is not intended as a general purpose dataset, as only the 3000 most common English words appear in the documents (the majority, 2/3 of the documents, are from Chinese high-school English tests).", "The corpus's annotation scheme mainly follows the ONTONOTES guidelines, with a few important differences: singleton mentions and generic coreference are annotated, event anaphora is not, and predicative NPs are annotated as co-referring with their argument, as previously done in the MUC (Grishman and Sundheim, 1995; Chinchor, 1998) and ACE (Doddington et al., 2004) corpora (footnote 1).", "As one could expect, the corpus is relatively easy for coreference systems.", "The Peters et al. (2018) system trained and tested on PRECO achieves an average... (Footnote 1: An example of a predicative NP is 24 degrees in The temperature is 24 degrees .)", "As discussed by van Deemter and Kibble (2000), annotating The temperature and 24 degrees as coreferent would result in nonsensical coreference chains for sentences like The temperature was 24 degrees but it is 27 degrees now.", "As a result, such markables were annotated as predicative in recent corpora.", "It's not clear why we find a return to the old practice in PRECO .", "Crowdsourcing has revolutionized the way language annotation tasks are carried out (Howe, 2008; Snow et al., 2008).", "Crowdsourcing comes in many forms, including citizen science and microworking .", "A third approach is to use a game-with-a-purpose (GWAP) to aggregate data from non-expert players for collective decisions similar to those from an expert (von Ahn, 2006).", "The game-based approach to collecting language data is initially costly, but once a game is deployed it can continue to collect data with very little financial support, especially if there is an active community.", "GWAPs such as Phrase Detectives (Poesio et al., 2013), JeuxDesMots (Joubert and Lafourcade, 2008) and Zombie Lingo (Fort et al., 2014) have been used in NLP to collect data on specific linguistic features; broader platforms such as Wordrobe (Venhuizen et al., 2013) have been used to gamify the entire text annotation pipeline.", "Crowdsourcing is the most realistic approach to collect a large number of judgments about phenomena such as anaphora.", "Games in particular are the one type of crowdsourcing scalable to the goal of, for example, a 100M word corpus.", "So far, however, only small and medium scale resources for NLP have been created via crowdsourcing.", "For coreference we are only aware of two, both around 50K tokens in size (Chamberlain et al.; Guha et al., 2015).", "The Groningen Meaning Bank being collected through the Wordrobe platform (Bos et al., 2017) includes many more documents, but so far only very few interpretations have been obtained through the games (e.g., only around 4K judgments have been collected for anaphora).", "In most of the best known efforts at creating anaphoric corpora for English and other languages substantial disagreements between the coders were observed, but none of the resulting resources contains multiple anaphoric interpretations.", "Systematic analyses of the disagreements
among coders observed in such annotation efforts were provided for ANCORA by Recasens et al. (2011) and for TUBA-D/Z by Versley (2008).", "The entire ONTONOTES corpus was double annotated, finding disagreements on around 20% of the markables, i.e., around 40,000 cases.", "An analysis of such disagreements can be found in (Pradhan et al., 2012), but ultimately only the result of adjudication was included in the corpus.", "Most of the PRECO corpus was doubly annotated and the results adjudicated, but only the result of adjudication is released.", "We are aware of only two corpus annotation schemes which explicitly allowed the annotation of anaphoric ambiguity: ARRAU and the Potsdam Commentary Corpus (Krasavina and Chiarcos, 2007).", "Most of the ARRAU corpus was single-annotated by a highly experienced annotator, who was allowed to mark a variety of cases of ambiguity (Poesio and Artstein, 2005b).", "It is known, however, that such explicit marking of ambiguity is difficult (Poesio and Artstein, 2005b; Recasens et al., 2012), and indeed not many cases of ambiguity were marked in this way in ARRAU .", "In this Section we discuss what types of judgments were collected, and how.", "The gamified online platform Phrase Detectives (footnote 2) (Chamberlain et al., 2008; Poesio et al., 2013) was used to collect the judgments about anaphoric reference included in the corpus.", "Phrase Detectives is articulated around a number of tasks centered around the detective metaphor and uses scoring, progression and a variety of other mechanisms to make the activity enjoyable.", "In annotation mode ( Name the Culprit ), the participant provides an anaphoric judgment about a highlighted markable (the possible judgments according to the annotation scheme are discussed next).", "If different participants enter different interpretations for a markable then each interpretation is presented to other participants in validation mode ( Detectives Conference ), in which the participants have to agree or disagree with the interpretation.", "One of the key differences between Phrase Detectives and GWAPs such as those developed by von Ahn and his lab (von Ahn, 2006) is the much greater complexity of judgments required.", "Yet clearly we cannot expect participants to be experts about anaphora, or to be willing to read a manual explaining the annotation scheme, so all the training still has to be done while playing the game.", "(Footnote 2: http://www.phrasedetectives.com) Therefore, we developed a number of mechanisms that could help in this respect: giving suggestions and tips (global, contextual and FAQ), comparing decisions with the gold standard, and showing agreement with other players in Validation Mode.", "When participants begin to play they are shown training texts (in which the answer is known from a gold standard) and get feedback as to whether their decisions agree with the gold standard.", "Once the player has completed all training tasks they are given a user rating (the percentage of correct decisions out of the total number of training tasks).", "As of 17th of March 2019, 60,822 individuals have participated in Phrase Detectives over ten years and using different platforms, providing over 4.26 million judgments, about half of which are included in the present release.", "The judgments asked of the participants in Phrase Detectives follow a simplified version of the ARRAU annotation scheme, which is the result of extensive tests for intercoder agreement (Uryupina et al., To Appear).",
"The participants are asked to make two basic distinctions: whether a markable is referring or not, and if referring, whether it is Discourse-Old ( DO ), i.e., it refers to an entity already mentioned (in which case the players were asked to indicate the latest mention of that entity), or Discourse-New ( DN ), i.e., it introduces a new entity in the discourse.", "The anaphoric references marked include split-antecedent anaphora, as in John met Mary, and they went out for drinks , where the antecedent for they is the set consisting of the separately introduced John and Mary .", "Two types of non-referring expressions were marked: expletives , as in It's five o'clock or There was a fireplace in the room ; and predicative NPs , as in The temperature is 24 degrees .", "In the case of predicative NPs, players were asked to mark the nearest mention of the entity that the predication applied to, following in this case the ONTONOTES approach instead of ARRAU 's.", "The key difference between this corpus and any other existing corpus for anaphora / coreference, with the exception of ARRAU , is that the corpus was designed to collect information about disagreement.", "The main difference from ARRAU is that no attempt was made to ask players to identify ambiguity, as that has proven hard or impossible to do (Poesio and Artstein, 2005b).", "Instead of explicit (marking of) ambiguity , the developers relied on implicit ambiguity : that genuine ambiguity would emerge if enough players supplied judgments.", "All the judgments produced by the players were therefore stored, without attempting to choose among them at collection time.", "The differences between the four corpora being compared are summarized in Table 1, modelled on a similar table in (Chen et al., 2018).", "In the Phrase Detectives corpus predication and coreference are clearly distinguished, as in ONTONOTES and ARRAU but unlike in PRECO .", "Singletons are considered markables.", "Expletives and split-antecedent plurals are marked, unlike in either ONTONOTES or PRECO .", "Most importantly, ambiguity of anaphoric interpretation (as in the example from the TRAINS corpus (Poesio and Artstein, 2005b)) is marked, but implicitly, i.e., by asking the judgment of at least 8 players per markable, as opposed to explicitly, as attempted in ARRAU (with little success).", "Following standard practice in anaphoric annotation and GWAPs, the markables to be annotated were not identified by the participants themselves; instead, markable identification was carried out semi-automatically.", "Each document would first be processed by a pipeline combining off-the-shelf tools (sentence splitting and tokenization using the OpenNLP pipeline (footnote 3) and parsing using the Berkeley Parser (Petrov and Klein, 2007)) and custom preprocessing and post-processing heuristic steps to correct the output.", "(See (Poesio et al., 2013) for more details about the pipeline and its",
Then one of the administrators would carry out a quick check of the document removing the most obvious mistakes before uploading it.", "After the document was uploaded, participants could report markable errors, which would then be corrected by hand.", "4 4 The corpus 4.1 Basic statistics This second release of the Phrase Detectives corpus consists of a total of 542 documents contain-3 http://opennlp.apache.org 4 As participants report over 10,000 errors per year, it became quickly apparent that carrying out the corrections ourselves was unfeasible.", "In subsequent work, we developed a gamified approach to markable identification and correction centered around the TileAttack!", "GWAP (Madge et al., 2017).", "ing 408K tokens and 108K markables from two main genres: Wikipedia articles and fiction from the Gutenberg collection.", "This corpus is divided in two subsets.", "The subset we refer to as PD silver consists of 497 documents, for a total of 384K tokens and 101K markables, whose annotation was completedi.e. 8 judgments per markable were collected, and 4 validations per interpretationas of 12th of October 2018.", "In these documents, an aggregated (silver') label obtained through MPA (see next Section) is also provided.", "45 additional documents were also gold-annotated by two experts annotators.", "We refer to the subset of the corpus for which both gold and silver annotations are available as PD gold , as it is intended to be used as test set.", "5 The gold subset consists of a total of 23K tokens and 6K markables.", "The contents of the corpus are summarized in Table", "2. By comparison, the English corpus used for the CONLL 2011 and 2012 shared tasks consists of 3493 documents, for a total of 1.6M tokens and 5 PD gold is the dataset released in 2016 as Phrase Detectives corpus, Release 1 (Chamberlain et al.).", "194480 markables.", "In other words, although the current release of the corpus is only about 25% of the CONLL corpus in terms of tokens, it is 55.5% of its size in terms of annotated markables, i.e., actual training / testing items.", "In total, 2,235,664 judgments from 1958 players are included in the current release, of which 1,358,559 annotations and 867,844 validations.", "On average, 20.6 judgments were collected per markable: 12.6 annotations and 8 validations.", "In addition, around 10K expert judgments were collected for the gold portion of the corpus from two expert annotators.", "This compares with 600K estimated judgments for the entire ONTONOTES corpus, about 3 per markable (total number of annotators not known), and around 10M for PRECO , also 3 per markable, from about 80 annotators.", "The raw' statistics about disagreement in the corpus are shown in Table", "3. In total, only 35.7% of the markables in the corpus (38,579) were assigned only one interpretation by the participants, whereas 64.3% received more than one interpretation.", "This figure would seem to suggest massive ambiguity, but we are not saying that 64.3% of markables in the corpus are ambiguous.", "As already pointed out e.g. 
in (Pradhan et al., 2012), there are a number of reasons for disagreements among coders / players apart from ambiguity.", "In the case of ONTONOTES , the causes for the 20,000 observed disagreements include: Ambiguity proper, i.e., unclear interpretation ('Genuine Ambiguity' in (Pradhan et al., 2012)) and/or disagreement on reference (31% of the disagreements in ONTONOTES , around 7% of all markables); Annotator error (another 25% of the cases of disagreement in ONTONOTES ); Various limitations of the coding scheme: unclarity in the guidelines, inability to mark certain types of coreference e.g., between generics, etc. (36.5% of the cases of disagreement in ONTONOTES ).", "Some of the disagreements due to other causes and in particular annotation errorscan be filtered through validation, i.e., by excluding those interpretations of a markable for which the validation score (annotations + agreements disagreements) falls below a threshold.", "For example, if only interpretations with a validation score > 0 are considered, we find that 51,075 / 107,971 markables have at least two such interpretations, i.e., 47.3% of the total, which is considerably less than the 64.3% of markables with more than one interpretation, but it's still a large number.", "We will discuss a more sophisticated method for automatically identifying plausible interpretations, as well as the results of a preliminary hand-analysis of the disagreements in a few documents in our corpus, in Section", "7. 5 Aggregation 5.1 Probabilistic Aggregation Methods The data collected via Phrase Detectives require an aggregation method to help choose between the different interpretations provided by the players.", "Simple heuristics such as majority voting are known to underperform compared to probabilistic models of annotation (Whitehill et al., 2009; Raykar et al., 2010; Quoc Viet Hung et al., 2013; Sheshadri and Lease, 2013; Hovy et al., 2013; Passonneau and Carpenter, 2014; Paun et al., 2018a).", "The models offer a rich framework of interpretation and can employ distinct prior and likelihood structures (pooled, unpooled, and partially pooled) and a diverse set of effects (annotator ability, item difficulty, or a subtractive relationship between the two).", "However, most work on models of annotation assumes that the set of classes the annotators can choose from is fixed across the annotated items, which is not the case for anaphoric annotation.", "More specifically, in Phrase Detectives the participants can classify a markable as non-referring (expletive or predicative); as introducing a new discourse entity; or as discourse-old, in which case they link it to the most recent mention of its antecedentand coreference chains are document-specific and not fixed in number (see Section 3.2 for more details on the annotation scheme).", "Recently, however, Paun et al. 
"Recently, however, Paun et al. (2018b) developed a probabilistic model (MPA) able to aggregate such crowdsourced anaphoric annotations.", "In MPA, the term label is used to refer to a specific interpretation provided by a player, and the term class to refer to general interpretation categories such as discourse-old, discourse-new, expletive, or predicative NP.", "Note that under this formalism each label belongs to a class: antecedents belong to the discourse-old category, while the other possible labels (e.g., discourse-new) coincide with the classes they belong to.", "The model assumes a preprocessing step in which the markable-level annotations are transformed into a series of binary decisions with respect to each candidate label.", "MPA then models these (label-level) decisions as the result of the sensitivity (the true positive rate) and specificity (the true negative rate) of the annotators, which it assumes are class-dependent.", "This latter assumption allows inferring different levels of annotator ability for each class (thus capturing, for instance, the fact that whereas most participants are generally able to recognize discourse-new mentions, they are much less good at identifying correct antecedents).", "We use the MPA model as a component in a standard mention-pair framework to extract coreference clusters: 1) link each markable with the most likely label as identified by the model, and 2) follow the link structure to build the coreference chains (a sketch of this construction is given at the end of this passage).", "We next evaluate both of these components against expert annotations.", "6 Using the corpus for coreference resolution. Some NLP researchers may question the usefulness of the information about disagreements for coreference resolution (or other NLP tasks).", "Table 4 shows a per-class evaluation of the aggregated interpretations from the PD gold subset.", "The results indicate overall better agreement with the expert annotations for MPA than for a simple majority voting (MAJVOTE) baseline.", "This is because MAJVOTE makes the implicit assumption that the annotators have equal expertise, which is not true in general even with data crowdsourced on microworking platforms, and even more so with data collected through GWAPs (Paun et al., 2018a).", "After inferring the mention pairs, coreference chains can be extracted and their quality assessed using standard coreference metrics.", "Table 5 presents the evaluation against gold chains in PD gold.", "We compare the chains produced from the mention pairs inferred by MPA and by MAJVOTE, and the chains produced by the STANFORD deterministic coreference system (Lee et al., 2011) (for which we switched off post-processing so that it outputs singleton clusters).", "The results indicate far better quality for the chains produced using MPA than for the alternative methods.", "Another interesting result is that even a simple MAJVOTE baseline based on crowdsourced annotations performed far better than the STANFORD system, underlining the advantage of crowdsourced annotations for coreference over automatically produced annotations.", "In this Section, we demonstrate that even those purely interested in CONLL-style coreference resolution can use the Phrase Detectives corpus aggregated with MPA as a dataset.", "We use PD silver to train a coreference system able to simultaneously identify non-referring expressions and build coreference chains (including singletons).", "As no other system of this type exists at the moment, we developed one ourselves.",
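The chain construction described above (step 1 links each markable to its most probable label; step 2 follows the links) can be sketched as below. This is an illustrative reimplementation, not the authors' code: the representation of the aggregated output as a mapping from markable ids to either an antecedent id or a discourse-new marker, and the union-find bookkeeping, are assumptions.

```python
def build_chains(best_label):
    """best_label maps each markable id to "DN" (discourse-new) or to the
    id of its most probable antecedent, as chosen by the aggregation model.
    Returns coreference chains as lists of markable ids."""
    parent = {m: m for m in best_label}

    def find(m):
        # Follow parent links to the chain's current root.
        root = m
        while parent[root] != root:
            root = parent[root]
        parent[m] = root          # shortcut for subsequent lookups
        return root

    for m, label in best_label.items():
        if label != "DN":         # discourse-old: link to the antecedent
            parent.setdefault(label, label)
            parent[find(m)] = find(label)

    chains = {}
    for m in parent:
        chains.setdefault(find(m), []).append(m)
    return list(chains.values())

# Example: markable 1 is new, 2 links to 1, 3 links to 2 -> one chain [1, 2, 3].
print(build_chains({1: "DN", 2: 1, 3: 2}))
```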
"The system trained and tested on the corpus is a cluster-ranking system that does mention detection and coreference resolution jointly.", "The system uses the mention representation from the state-of-the-art system of Lee et al. (2018), but replaces their mention-ranking model with a cluster-ranking model.", "Our cluster-ranking model forms clusters by going through the candidate mentions in text order and adding them to clusters, taking into consideration the relative importance of the mentions.", "An attention mechanism is used to assign salience scores to the mentions within the clusters, and the clusters are represented as weighted sums of the mention representations.", "Separate classifiers are used to identify non-referring markables and singletons.", "We randomly chose 1/20 of PD silver as a development set and used the rest as the training set; PD gold was used as the test set.", "To get a baseline, we compare the results of our system on a simplified version of the corpus without singletons and expletives with those obtained by the current state-of-the-art system on ONTONOTES (Lee et al., 2018), trained and tested on the same data.", "Table 6 shows the results of both systems on the simplified corpus.", "Our cluster-ranking system achieved an average CONLL score of 60.5%, outperforming the Lee et al. (2018) system by 2 percentage points.", "Note that the Lee et al. (2018) system achieved a higher score on the CONLL data, which suggests that the present corpus is different from that dataset.", "In the same Table, we also report the results obtained by training our system on the full corpus, including both non-referring expressions and singletons.", "This version of the system achieves an average CONLL score of 72.7%.", "(The Extended Coreference Scorer developed for the 2018 CRAC shared task (Poesio et al., 2018) was used to evaluate coreference chains on a corpus with singletons and to assess the identification of non-referring expressions.)", "Note that although this score is on system mentions, it is very close to the score (74.3%) achieved by the Stanford deterministic system evaluated with gold mentions (see Table 5 in Section 5).", "Also, this model trained on the full corpus including singletons achieves a gain of 1 percentage point compared with the model trained on the simplified corpus, even when evaluated in a singleton-excluded setting.", "This indicates that the availability of the singletons is also helpful for resolving non-singleton clusters.", "In total, with singletons excluded, this model achieved a CONLL score 3 percentage points better than our baseline.", "Regarding the task of identifying non-referring mentions, our model achieved an F1 score of 54.6% (see Table 7).", "The scores of the system on distinct types of non-referring expressions are presented in the following two rows of Table 7.",
"Our model achieved a higher F1 score of 72.3% on expletives, and a lower score (48.7%) on predicative NPs.", "Overall, these results (the first results on system mentions for PD gold) suggest that the silver corpus is sufficient to train a state-of-the-art system and achieve reasonably good performance.", "They also suggest that training a model on a corpus enhanced with singletons and non-referring markables results in a better CONLL score compared with a model trained on the simplified corpus.", "In the previous Section we showed that MPA can be used to extract from the annotations a silver standard that is suitable for training a CONLL-style coreference resolver, or an extended coreference resolver also attempting the identification of singletons and non-referring expressions.", "The key property of the corpus, however, is the information it provides about disagreements.", "The second useful contribution of MPA is that it can be used to obtain an assessment of the ambiguity of markables which is more refined than that discussed in Section 4.3.", "For each markable, MPA assigns a probability to each interpretation.", "Given that the model does not assume the existence of a 'gold' interpretation, there are three possible cases for each markable: either only one interpretation has a probability above a certain threshold (say, 0.5); or more than one interpretation is above that threshold; or none is.", "This assessment of ambiguity according to MPA is summarized in Table 8 (interpretations above the threshold: None / One / Two or more / Zero or two or more; PD gold: 2.3% / 93.4% / 4.3% / 6.6%; PD silver: 3.5% / 94% / 2.4% / 5.9%).", "The assessment appears to suggest a prevalence of ambiguity in our corpus similar to that found in ONTONOTES in the already mentioned analysis by Pradhan et al. (2012).", "In order to verify this, two experts hand-analyzed 2 documents in PD gold containing a total of 900 markables: Little Red Cap (LRC) and Robber Bridegroom (RG).", "Given that each markable has on average 20 interpretations, and that player errors are frequent (there is at least one player error for almost every markable), it was not possible to use the same categories as Pradhan et al.", "Instead, we simply attempted to assign markables to one of the categories: Genuine ambiguity (GA), Interface or Coding Scheme Problem (ICP), or Other (O).", "The results are summarized in Table 9, which has one row per document.", "The first column lists the total number of markables in a document; the second (Dis) the percentage of markables on which there is disagreement; the third (GA) the percentage of the total number of markables which are cases of genuine ambiguity; and the fourth (ICP) the percentage which are cases of Interface or Coding Scheme Problem.", "As we can see from the Table, 9% of the total number of markables in these documents (80 out of 865) are genuinely ambiguous, i.e., 12.6% of the disagreements (80 out of 633) are cases of genuine ambiguity.", "These are only preliminary figures, and we suspect that the ultimate figures on the prevalence of ambiguity are going to be much higher, given that Recasens et al.
(2012) report that 12-15% of coreference relations in their corpus are cases of quasi-coreference, and that Poesio and Artstein (2005a) report a figure of 42.6% once ambiguity in discourse-deictic reference is taken into account.", "We next checked the extent to which MPA can correctly predict genuine ambiguity.", "The results suggest that MPA is good at removing spurious ambiguity, but as a predictor of ambiguity it only has a recall of around 20% and a precision of slightly under 50%.", "Table 9: Analysis of disagreements in two corpus documents (Total / Dis / GA / ICP): LRC 401 / 79.1% / 7% (28) / 7.7% (31); RG 464 / 68.3% / 11.2% (52) / 12.9% (60); Average 73.7% / 9.1% / 10.3%.", "Improving these results is one of the objectives of our current research.", "We presented a novel resource for anaphora that, because of its annotation scheme and size, should at the very least be useful to those in the community interested in developing systems able to perform a more comprehensive form of anaphora resolution, including, for instance, expletive detection and split-antecedent resolution.", "The key property of this new resource, however, is that it provides a large number of judgments about each anaphoric expression, thus enabling the development of systems that do not make the assumption that a 'gold standard' exists, an assumption questioned by all the studies associated with the creation of the current resources for the task.", "The dataset is also, to our knowledge, the first solid evidence that the games-with-a-purpose approach can be successfully deployed to obtain substantial resources for NLP.", "The corpus is freely available from the Linguistic Data Consortium and from http://www.dali-ambiguity.org .", "It is distributed in three formats: an XML format including all the judgments, suitable for the analysis of disagreements and/or the development of systems taking disagreement into account; and CONLL and CRAC 18 formats, with only the gold annotation or the silver label extracted, for those interested in using the corpus as an alternative resource for developing coreference systems only.", "This research was supported by the DALI project, funded by the European Research Council (ERC), Grant agreement ID: 695662." ]
[ "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "objective", "abstain", "objective", "other", "abstain", "other" ]
[ "Learning Source Phrase Representations for Neural Machine Translation Hongfei Xu 1 , 2 Josef van Genabith 1 , 2 Deyi Xiong 3 Qiuhui Liu 4 Jingyi Zhang 2 1 Saarland University / Saarland, Germany 2 German Research Center for Artificial Intelligence / Saarland, Germany 3 Tianjin University / Tianjin, China 4 China Mobile Online Services / Henan, China [email protected], Josef.Van [email protected], [email protected], [email protected], [email protected] Abstract The Transformer translation model (Vaswani et al., 2017) based on a multi-head attention mechanism can be computed effectively in parallel and has significantly pushed forward the performance of Neural Machine Translation (NMT).", "Though intuitively the attentional network can connect distant words via shorter network paths than RNNs, empirical analysis demonstrates that it still has difficulty in fully capturing long-distance dependencies (Tang et al., 2018).", "Considering that modeling phrases instead of words has significantly improved the Statistical Machine Translation (SMT) approach through the use of larger translation blocks (phrases) and its reordering ability, modeling NMT at phrase level is an intuitive proposal to help the model capture long-distance relationships.", "In this paper, we first propose an attentive phrase representation generation mechanism which is able to generate phrase representations from corresponding token representations.", "In addition, we incorporate the generated phrase representations into the Transformer translation model to enhance its ability to capture long-distance relationships.", "In our experiments, we obtain significant improvements on the WMT 14 English-German and English-French tasks on top of the strong Transformer baseline, which shows the effectiveness of our approach.", "Our approach helps Transformer Base models perform at the level of Transformer Big models, and even significantly better for long sentences, but with substantially fewer parameters and training steps.", "The fact that phrase representations help even in the big setting further supports our conjecture that they make a valuable contribution to long-distance relations.", "years (Sutskever et al., 2014; Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017).", "Compared to plain SMT (Brown et al., 1993; Koehn et al., 2003; Chiang, 2005), a neural language model decoder (Sutskever et al., 2014) is better at long-distance re-ordering, and attention mechanisms (Bahdanau et al., 2015; Vaswani et al., 2017) have been proven effective in modeling long-distance dependencies, while these two issues were both challenging for SMT.", "The Transformer (Vaswani et al., 2017), which has outperformed previous RNN/CNN based translation models (Bahdanau et al., 2015; Gehring et al., 2017), is based on multi-layer multi-head attention networks and can be trained in parallel very efficiently.", "Though attentional networks can connect distant words via shorter network paths than RNNs, empirical results show that its ability in capturing long-range dependencies does not significantly outperform RNNs, and it is still a problem for the Transformer to fully model long-distance dependencies (Tang et al., 2018).", "Using phrases instead of words enables conventional SMT to condition on a wider range of context, and results in better performance in reordering and modeling long-distance dependencies.", "It is intuitive to let the NMT model additionally condition on phrase level representations to capture long-distance 
dependencies better, but there are two main issues which prevent NMT from directly using phrases: there are more phrases than tokens, and the phrase table is much larger than the word vocabulary, which is not affordable for NMT; and the distribution over phrases is much sparser than that over words, which may lead to data sparsity and hurt the performance of NMT.", "Instead of using phrases directly in NMT, in this work, we address the issues above with the following contributions: to address the large phrase table issue, we propose an attentive feature extraction model and generate phrase representations based on token representations.", "Our model first summarizes the representation of a given token sequence with mean or max-over-time pooling, then computes the attention weight of each token based on the token representation and the summarized representation, and generates the phrase representation as a weighted combination of token representations; to help the Transformer translation model better model long-distance dependencies, we let both encoder layers and decoder layers of the Transformer attend to the phrase representation sequence, which is shorter than the token sequence, in addition to the original token representations.", "Since the phrase representations are produced and attended at each encoder layer, the encoding of each layer is also enhanced with phrase-level attention computation; to the best of our knowledge, our work is the first to model phrase representations and incorporate them into the Transformer.", "Our approach empirically brings about significant and consistent improvements over the strong Transformer model (both base and big settings).", "We conducted experiments on the WMT 14 English-German and English-French news translation tasks, and obtained +1.29 and +1.37 BLEU improvements respectively on top of the strong Transformer Base baseline, which demonstrates the effectiveness of our approach.", "Our approach helps Transformer Base models perform at the level of Transformer Big models, and even significantly better for long sentences, but with substantially fewer parameters and training steps.", "It is also effective in the Transformer Big setting.", "We also conducted a length analysis, and the results show how our approach improves the capturing of long-distance dependencies, which supports our conjecture that phrase representation sequences can help the model capture long-distance relations better.", "Most previous work focuses on utilizing phrases from SMT in NMT to address its coverage problem (Tu et al., 2016).", "Dahlmann et al. (2017) suggested that SMT usually performs better in translating rare words and profits from using phrasal translations, even though NMT achieves better overall translation quality.", "They introduced a hybrid search algorithm for attention-based NMT which extended the beam search of NMT with phrase translations from SMT.", "Wang et al. (2017a) proposed that while NMT generally produces fluent but often inadequate translations, SMT yields adequate though less fluent translations.", "They incorporated SMT into NMT by utilizing recommendations from SMT in each decoding step to address the coverage issue and the unknown-word issue of NMT.", "Wang et al.
(2017b) suggested that phrases play a vital role in machine translation, and proposed to translate phrases in NMT by integrating target phrases from an SMT system with a phrase memory, given that it is hard to integrate phrases into NMT, which reads and generates sentences token by token.", "The phrase memory is provided by the SMT model, which dynamically picks phrases relevant to the partial translation from the NMT decoder in each decoding step.", "Our research is based on the Transformer translation model (Vaswani et al., 2017) shown in Figure 1, which significantly outperforms the previous recurrent sequence-to-sequence approach and can be efficiently computed in parallel.", "The Transformer includes an encoder and a decoder.", "Both the encoder and the decoder are stacks of 6 layers.", "Besides the embedding matrix and positional embedding matrix in both the encoder and the decoder, the decoder also has a softmax classifier layer to produce translated tokens.", "The weights of the softmax classifier are normally tied to the target embedding matrix.", "Both encoder layers and decoder layers make use of the multi-head attention mechanism.", "(Figure 1: The Transformer Translation Model.)", "The multi-head attention mechanism calculates attention results for given queries on corresponding keys and values.", "It first projects queries, keys, and values with 3 independent linear transformations, then splits the transformed query, key, and value embeddings into several chunks of d_k-dimensional vectors, where each chunk is called a head (d_k is 64 for both the Transformer Base and the Transformer Big, and their numbers of heads are 8 and 16 respectively), and scaled dot-product attention is independently applied in each head: Attn(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V (1), where Q, K, and V stand for the query vectors, key vectors, and value vectors (a small sketch of this computation is given at the end of this passage).", "Finally, the network concatenates the outputs of all heads and transforms them into the target space with another linear layer.", "The self-attention network uses the query sequence also as the key sequence and the value sequence in the computation, while the cross-attention network attends to another vector sequence, which serves as keys and values.", "Comparing the computation of the attentional network with RNNs, it is obvious that the attention computation connects distant words with a shorter network path, and intuitively it should perform better in capturing long-distance dependencies.", "However, empirical results show that its ability to model long-range dependencies does not significantly exceed that of RNNs.", "Compared to previous work using RNN-based NMT (He et al., 2016; Wang et al., 2017a,b; Dahlmann et al., 2017), our proposed approach is based on the Transformer model, with the following further important differences:", "our approach aims to improve the long-distance dependency modeling ability of NMT instead of coverage (Tu et al., 2016); our approach does not require training an SMT system or extracting aligned phrase translations from the training corpus, which makes it efficient and avoids potential error propagation from the SMT system.", "The phrase representation learning model is a neural model that is deeply integrated in the translation model, and the whole neural model is end-to-end trainable; we iteratively and dynamically generate phrase representations from token vectors.",
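For concreteness, Equation 1 (scaled dot-product attention) referenced above can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation; the single-head, unbatched shapes are simplifying assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -> (n_q, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values

# Tiny usage example with random vectors (d_k = 4).
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 4)), rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```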
"Previous work does not use SMT phrases in this way.", "In more recent work, Wang et al. (2019) augment self-attention with structural position representations to model the latent structure of the input sentence; Hao et al. (2019) propose multi-granularity self-attention, which performs phrase-level attention with several attention heads.", "For the segmentation of phrases, given that N-gram phrases are effective for tensor libraries, we first try to cut a token sequence into a phrase sequence with a fixed phrase length which varies with the sequence length (we implement this as ntok = max(min(8, seql / 6), 3), where ntok and seql stand for the number of tokens in each phrase and the length of a sentence respectively; a sketch is given at the end of this passage).", "We pad the last phrase in case it does not have sufficient tokens; thus we can transform the whole sequence into a tensor.", "The N-gram phrase segmentation is efficient and simple, and we suggest that the drawbacks of such casual segmentation boundaries can be alleviated by the self-attention computation across the whole sequence and by the attention mechanism applied in the generation of phrase representations, which values tokens differently to a large extent, given that neural models have been proven good at learning competitively effective representations with gate or attention mechanisms even without modeling linguistic structures (Cho et al., 2014; Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017; Devlin et al., 2019).", "In our experiments we also explore phrases extracted from the Stanford Parser (Socher et al., 2013) as an alternative to our simple segmentation strategy.", "The maximum number of tokens allowed is consistent with the simple segmentation approach, and, for efficiency, we try to use as a phrase the tokens from the largest sub-tree that complies with the maximum token limitation, or from several adjacent sub-trees of the same depth.", "Our algorithm to extract phrases from parse trees is shown in Algorithm 1.", "Algorithm 1: Extracting Phrases from a Parse Tree. Input: a parse tree T, the maximum number of tokens allowed in a phrase n. Output: the extracted phrase sequence S. 1: while T is not empty do; 2: initialize a phrase sequence p = [] and the maximum number of tokens allowed in this phrase mt = n; 3: find the largest sub-tree ST with nst tokens (nst < n) and depth dst from the right side of T; 4: add the token sequence in ST into p; 5: remove ST from T; 6: while mt > 0 do; 7: find the adjacent sub-tree STA of depth dst with nsta tokens from the right side of T; 8: if STA exists and nsta <= mt then; 9: insert the token sequence of STA at the beginning of p; 10: remove STA from T; 11: mt = mt - nsta; 12: else; 13: break; 14: end if; 15: end while; 16: append p to S; 17: end while; 18: reverse S; 19: return S.",
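A minimal Python sketch of the simple length-dependent N-gram segmentation described above, including the padding of the last phrase; the integer division in phrase_length and the PAD symbol are assumptions made for illustration.

```python
PAD = "<pad>"  # assumed padding symbol

def phrase_length(seq_len):
    # ntok = max(min(8, seql / 6), 3): between 3 and 8 tokens per phrase.
    return max(min(8, seq_len // 6), 3)

def segment(tokens):
    """Cut a token sequence into fixed-length phrases, padding the last one."""
    n = phrase_length(len(tokens))
    phrases = [tokens[i:i + n] for i in range(0, len(tokens), n)]
    if phrases and len(phrases[-1]) < n:
        phrases[-1] = phrases[-1] + [PAD] * (n - len(phrases[-1]))
    return phrases

# Example: a 20-token sentence is cut into 3-token phrases, last one padded.
print(segment([f"w{i}" for i in range(20)]))
```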
"To efficiently parallelize parser-based phrases of various lengths in a batch of data, we pad short phrases to the length of the longest phrases in the batch of sentences, so that a batch of sequences of phrases can be stored in a tensor.", "But significantly more <pad> tokens will be introduced, and the model is slightly slower than with the simple approach.", "Merging several token vectors into one is very likely to incur information loss, and introducing an importance evaluation mechanism is better than treating the tokens equally.", "To highlight the most important features in a segmented phrase chunk, we introduce an attentive phrase representation generation model that values tokens differently according to their importance in the phrase.", "The model first roughly extracts features from all tokens into a vector, then assigns a score to each token by comparing each token vector with the extracted feature vector, and produces the weighted accumulation of all token vectors according to their scores.", "Phrase representations are generated in every encoder layer; for the k-th encoder layer, we generate the phrase representation R^{ke}_{phrase} from the layer's input representation.", "Assume the phrase contains m tokens {t_1, ..., t_m}, and that {R^{ke}_{t_1}, R^{ke}_{t_2}, ..., R^{ke}_{t_m}} are the corresponding input vectors to the encoder layer; we first generate a summary representation R^{ke}_{all} = F_glance(R^{ke}_{t_1}, ..., R^{ke}_{t_m}) (2), where F_glance is a function that extracts the features of the vector sequence into a fixed-dimension vector; we explore both the element-wise mean operation and the max-over-time pooling operation in our work.", "After the summarized representation is produced, we calculate a score for each token in the phrase; the score s^k_i of the i-th token is calculated as s^k_i = W^k_2 sigma(W^k_1 [R^{ke}_{t_i} | R^{ke}_{all}] + b^k_1) + b^k_2 (3), where sigma is the sigmoid activation function and | denotes the concatenation of vectors.", "The rationale for designing this approach is further explained below.", "Then we normalize the score vector to weights with the softmax function; the probability of the i-th token is p^k_i = e^{s^k_i} / sum_{j=1}^{m} e^{s^k_j} (4).", "Finally, the representation of the phrase in the k-th encoder layer is generated as the correspondingly weighted combination of all token vectors, R^{ke}_{phrase} = sum_{i=1}^{m} p^k_i R^{ke}_{t_i} (5).", "(Figure 2: The Encoder/Decoder Layer of the Transformer Model with Phrase Representation.)", "The representations of the phrase sequence can be computed efficiently in parallel.", "Each encoder layer will produce a vector sequence as the phrase representation.", "We do not use multi-head attention in the computation of the phrase-representation attention for two reasons: multi-head attention calculates weights through dot-products, and we suggest that a 2-layer neural network might be more powerful at semantic-level feature extraction and is less likely to be affected by positional embeddings, which are likely to vote up adjacent vectors; and though we employ a 2-layer neural network, it only has one linear transformation and a vector to calculate the attention weights, which contains fewer parameters than the multi-head attention model with its 4 linear transformations.",
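The pooling-scoring-combination steps of Equations 2-5 above can be sketched in NumPy as follows. This is an illustrative single-phrase sketch with max-over-time pooling as F_glance; the parameter shapes, and treating W^k_2 as a vector, are assumptions made for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def phrase_representation(R, W1, b1, w2, b2):
    """R: (m, d) token vectors of one phrase; W1: (h, 2d), b1: (h,),
    w2: (h,), b2: scalar. Returns the (d,) phrase vector."""
    r_all = R.max(axis=0)                                   # Eq. 2: F_glance
    concat = np.concatenate(
        [R, np.repeat(r_all[None, :], R.shape[0], axis=0)], axis=1)
    s = sigmoid(concat @ W1.T + b1) @ w2 + b2               # Eq. 3: token scores
    p = np.exp(s - s.max()); p /= p.sum()                   # Eq. 4: softmax weights
    return p @ R                                            # Eq. 5: weighted sum

# Tiny usage example: m = 4 tokens, d = 8, hidden size h = 6.
rng = np.random.default_rng(0)
R = rng.normal(size=(4, 8))
W1, b1, w2, b2 = rng.normal(size=(6, 16)), np.zeros(6), rng.normal(size=6), 0.0
print(phrase_representation(R, W1, b1, w2, b2).shape)       # (8,)
```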
"Recent studies show that different encoder layers capture linguistic properties of different levels (Peters et al., 2018), and that aggregating layers is of profound value for better fusing semantic information (Shen et al., 2018; Dou et al., 2018; Wang et al., 2018; Dou et al., 2019).", "We assume that different decoder layers may value information of different levels, i.e., the representations of different encoder layers, differently; thus, for each decoder layer, we combine the phrase representations from every encoder layer in a weighted manner with the Transparent Attention (TA) mechanism (Bapna et al., 2018).", "For decoder layer j, the phrase representation R^{jd}_{phrase} fed into that layer is calculated as R^{jd}_{phrase} = sum_{i=0}^{d} w^j_i R^{ie}_{phrase} (6), where the w^j_i are softmax-normalized parameters trained jointly with the full model to learn the importance of the encoder layers for the j-th decoder layer.", "d is the number of encoder layers, and index 0 corresponds to the embedding layer (a sketch of this combination is given at the end of this passage).", "After the phrase representation sequence for each encoder layer and decoder layer is calculated with the approach described above, we propose an attentive combination network to incorporate the phrase representation for each layer into the Transformer translation model, to aid it in modeling long-distance dependencies.", "The attentive combination network is inserted in each encoder layer and each decoder layer to bring in information from the phrase representation.", "The structures of the encoder layer and the decoder layer of the Transformer model with phrase representation are shown in Figure 2.", "For an encoder layer, the new computation order is: cross-attention to phrases, then self-attention over tokens, then a feed-forward neural network to process the collected features; for a decoder layer it is: self-attention over decoded tokens, then cross-attention to source phrases, then cross-attention to source tokens, then a feed-forward neural network to process the collected features.", "Compared to the computation order of the standard Transformer, the new computation order performs additional attention at the phrase level before attending to source token representations at the token level.", "We conjecture that attending at the phrase level should be easier than at the token level, and that attention results at the phrase level may aid the attention computation at the token level.", "For a given input sequence x and a phrase vector sequence R_phrase, the attentive combination network first attends to the phrase representation sequence and computes the attention output out_phrase = Attn_MH(x, R_phrase) (7), where Attn_MH is a multi-head cross-attention network with x as the queries and R_phrase as the corresponding keys and values.", "The attention result is then combined again with the original input sequence x by a 2-layer neural network, which aims to make up for potential information loss in the phrase representation with the original token representation: out = W_4 sigma(W_3 [x | out_phrase] + b_3) + b_4 (8).", "We also employ a residual connection around the attentive combination layer, followed by layer normalization to stabilize training.", "Since the phrase representations are produced inside the Transformer model and utilized as inputs to its layers, and all related computations are differentiable, the attentive phrase representation model is simply and effectively trained as part of the whole model through backpropagation.",
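A NumPy sketch of the Transparent Attention combination in Equation 6 above; the stacking of the per-layer phrase representations into a single array, and the name w_logits for the trainable layer weights, are assumptions made for illustration.

```python
import numpy as np

def transparent_phrase_input(phrase_reps, w_logits, j):
    """phrase_reps: (d_enc + 1, n_phrases, dim) stacked phrase representations
    (index 0 is the embedding layer); w_logits: (d_dec, d_enc + 1) trainable
    weights; j: decoder layer index. Returns the (n_phrases, dim) mix."""
    w = np.exp(w_logits[j] - w_logits[j].max())
    w /= w.sum()                                  # softmax over encoder layers
    return np.tensordot(w, phrase_reps, axes=1)   # weighted sum of layer outputs

# Usage example: 6 encoder layers + embedding layer, 5 phrases, dim 8.
rng = np.random.default_rng(0)
reps = rng.normal(size=(7, 5, 8))
logits = rng.normal(size=(6, 7))
print(transparent_phrase_input(reps, logits, j=0).shape)  # (5, 8)
```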
"To compare with Vaswani et al. (2017), we conducted our experiments on the WMT 14 English-to-German and English-to-French news translation tasks.", "We implemented our approaches based on the Neutron implementation (Xu and Liu, 2019) of the Transformer translation model.", "We applied joint Byte-Pair Encoding (BPE) (Sennrich et al., 2016) with 32k merge operations on both datasets to address the unknown-word problem.", "We only kept sentences with a maximum of 256 subword tokens for training.", "Training sets were randomly shuffled in every training epoch.", "The concatenation of newstest 2012 and newstest 2013 was used for validation, and newstest 2014 was used as the test set for both tasks.", "(Table 1: Results on WMT 14 En-De and En-Fr (BLEU). Transformer Base: 27.38 / 39.34; +PR: 28.67 / 40.71; Transformer Big: 28.49 / 41.36; +PR: 29.60 / 42.45.)", "The number of warm-up steps was set to 8k, and each training batch contained at least 25k target tokens.", "Our experiments ran on 2 GTX 1080 Ti GPUs, and a large batch size was achieved through gradient accumulation.", "We used a dropout of 0.1 for all experiments except for the Transformer Big on the En-De task, for which it was 0.3.", "The training steps for Transformer Base and Transformer Big were 100k and 300k respectively, following Vaswani et al. (2017).", "The other settings were the same as in Vaswani et al. (2017), except that we did not tie the embeddings between the encoder and the decoder, for efficiency.", "We used a beam size of 4 for decoding, and evaluated tokenized case-sensitive BLEU with the averaged model of the last 5 checkpoints for Transformer Base and the last 20 checkpoints for Transformer Big, saved at intervals of 1,500 training steps (Vaswani et al., 2017).", "We also conducted significance tests (Koehn, 2004).", "We applied our approach to both the Transformer Base setting and the Transformer Big setting, and conducted experiments on both tasks to validate its effectiveness.", "Since parsing a large training set (specifically, the En-Fr dataset) is slow, we did not use phrases from parse results in this experiment (reported in Table 1).", "Results are shown in Table 1, where the markers indicate p < 0.01 compared to the baseline in the significance test.", "Table 1 shows that modeling phrase representations can bring consistent and significant improvements on both tasks, and benefits both the Transformer Base model and the stronger Transformer Big model.", "+PR is the Transformer with Phrase Representation, corresponding to the +Max+Attn+TA setting of the ablation study below (Table 2).", "(BLEU was computed with https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl .)",
+TA indicates use of the Transparent Attention mechanism to fuse information generated from every encoder layer for different decoder layers, 4 otherwise only outputs of the last encoder layer are fed into all decoder layers.", "+Parse means using phrases extracted from parse results with Algorithm", "1. Table 2 shows that introducing phrase representation can significantly improve the strong Transformer Base baseline, even only with a simple element-wise mean operation over token repre-4 This only introduces an additional 7 6 parameter matrix, which does not show significant influence in view of the amount of parameters.", "sentations brings about a +0 .", "61 BLEU improvement ( p < 0 . 01 ).", "Summarizing representations with max-over-time pooling performs slightly better than with the element-wise mean operation.", "Our attentive phrase representation generation approach can bring further improvements over the max-over-time pooling approach.", "Though utilizing phrases from the parser can make use of linguistic knowledge and obtains most improvements, our simple and effective segmenting approach performs competitively, and we interpret these comparisons to show the positive effects of collapsing token sequences into shorter phrase sequences on the modeling of long-distance dependencies.", "Though a significant amount of parameters are introduced for incorporating phrase representation into the Transformer model, our approach (+Max+Attn+TA) improved the performance of the Transformer Base model by +1 .", "29 BLEU on the WMT 14 En-De news task, and the proposed Transformer model with phrase representation still performs competitively compared to the Transformer Big model with only about half the number of parameters and 1 / 3 of the training steps.", "Thus, we suggest our improvements are not only because of introducing parameters, but also due to the modeling and utilization of phrase representation.", "To analyze the effects of our phrase representation approach on performance with increasing input length, we conducted a length analysis on the news test set of the WMT 14 En-De task.", "Following Bahdanau et al. (2015) and Tu et al. (2016), we grouped sentences of similar lengths together and computed BLEU scores of Transformers and Transformers 25 26 27 28 29 30 31 32 Transformer Base Base+PR Transformer Big Big+PR Figure 3: BLEU scores with respect to various input sentence lengths.", "Figure 3 shows that our approach incorporating phrase representation into the Transformer significantly improves its performance in all length groups, and longer sentences show significantly more improvements than shorter sentences.", "In the Transformer Base setting, our approach improved the group with sentences of more than 45 tokens by +1 .", "72 BLEU, almost twice of the improvements for sentences with less than 15 tokens which was +0 .", "93 BLEU.", "The effects of incorporating phrase representations into the Transformer is more significant especially when compared to the Transformer Big which has about twice the number of parameters than our approach and consumes 3 times the training steps.", "According to Tang et al. 
(2018), the number of attention heads in Transformers impacts their ability to capture long-distance dependencies; specifically, many-headed multi-head attention is essential for modeling long-distance phenomena with only self-attention.", "The Transformer Big model, with twice the number of heads in the multi-head attention network compared to the Transformer Base model, should therefore be better at capturing long-distance dependencies.", "However, compared with the Transformer Base, the improvement of the Transformer Big on long sentences (+1.20 BLEU for sentences with more than 45 tokens) was similar to that on short sentences (+1.14 BLEU for sentences with no more than 15 tokens), while our approach of modeling phrases in the Transformer brings significantly (p < 0.01) larger improvements (+1.72 BLEU) on longer sentences in the Transformer Base setting (8 heads) than the Transformer Big with 16 heads.", "(Figure 4: Subject-Verb Agreement Analysis: accuracy against the number of tokens between the subject and the verb, for Transformer Base, Base+PR, Transformer Big, and Big+PR.)", "The length analysis result is consistent with our conjecture to some extent, given that there are likely to be more long-distance dependencies in longer source sentences.", "We suggest that phrase sequences, which are shorter than the corresponding token sequences, can help the model capture long-distance dependencies better, and that modeling phrase representations for the Transformer can enhance its performance on long sequences.", "Intuitively, in translating longer sentences we should encounter more long-distance dependencies than in short sentences.", "To verify whether our method can improve the capability of the NMT model to capture long-distance dependencies, we also conducted a linguistically informed verb-subject agreement analysis on the Lingeval97 dataset (Sennrich, 2017), following Tang et al. (2018).", "In German, subjects and verbs must agree with one another in grammatical number and person.", "In Lingeval97, each contrastive translation pair consists of a correct reference translation and a contrastive example that has been minimally modified to introduce one translation error.", "The accuracy of a model is the number of times it assigns a higher score to the reference translation than to the contrastive one, relative to the total number of predictions.", "Results are shown in Figure 4.",
"Figure 4 shows that our approach can improve the accuracy on long-distance subject-verb dependencies, especially in cases where there are more than 10 tokens between the verb and the corresponding subject, when comparing Base+PR with the Transformer Big.", "Considering that the strong Transformer translation model still has difficulty in fully capturing long-distance dependencies (Tang et al., 2018), and that using a shorter phrase sequence (in addition to the original token sequence) is an intuitive approach to help the model capture long-distance features, in this paper we first propose an attention mechanism to generate phrase representations by merging corresponding token representations.", "In addition, we incorporate the generated phrase representations into the Transformer translation model to help it capture long-distance relationships.", "We obtained statistically significant improvements on the WMT 14 English-German and English-French tasks over the strong Transformer baseline, which demonstrates the effectiveness of our approach.", "Our further analysis shows that the Transformer with phrase representations improves performance especially in long-distance dependency learning.", "We thank the anonymous reviewers for their insightful comments and helpful advice.", "Hongfei Xu acknowledges the support of the China Scholarship Council ([2018]3101, 201807040056).", "Deyi Xiong is supported by the National Natural Science Foundation of China (Grant No. 61861130364), the Natural Science Foundation of Tianjin (Grant No. 19JCZDJC31400) and the Royal Society (London) (NAF\\R1\\180122).", "Hongfei Xu, Josef van Genabith and Jingyi Zhang are supported by the German Federal Ministry of Education and Research (BMBF) under the funding code 01IW17001 (Deeplee)." ]
[ "other", "abstain", "abstain", "objective", "method", "result", "method", "objective", "abstain", "other", "abstain", "abstain", "abstain", "other", "objective", "other", "objective", "result", "result", "abstain", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "objective", "result", "other", "other", "other", "other" ]
[ "We study the task of toxic spans detection, which concerns the detection of the spans that make a text toxic, when detecting such spans is possible.", "We introduce a dataset for this task, TOXICSPANS , which we release publicly.", "By experimenting with several methods, we show that sequence labeling models perform best.", "Moreover, methods that add generic rationale extraction mechanisms on top of classifiers trained to predict if a post is toxic or not are also surprisingly promising.", "Finally, we use TOXICSPANS and systems trained on it, to provide further analysis of state-of-the-art toxic to non-toxic transfer systems, as well as of human performance on that latter task.", "Our work highlights challenges in finer toxicity detection and mitigation.", "In social media and online fora, toxic content can be defined as rude, disrespectful, or unreasonable posts that would make users want to leave the conversation (Borkan et al., 2019).", "Although several toxicity detection datasets (Wulczyn et al., 2017; Borkan et al., 2019) and models (Schmidt and Wie-gand, 2017; Pavlopoulos et al., 2017c; Zampieri et al., 2019) exist, most of them classify whole posts, without identifying the specific spans that make a text toxic .", "But highlighting such toxic spans can assist human moderators (e.g., news portal moderators) who often deal with lengthy comments, and who prefer attribution instead of a system-generated unexplained toxicity score per post.", "Locating toxic spans within a text is thus a major step towards successful semi-automated moderation and healthier online discussions.", "toxic spans, called TOXICSPANS .", "1 We discuss how it was created and propose an evaluation framework for toxic spans detection.", "We consider methods that", "(i) perform sequence labeling (tag words) or", "(ii) rely on an attentional binary classifier to predict if a post is toxic or not, then invoke its attention at inference time to obtain toxic spans as in rationale extraction.", "The latter approach allows leveraging larger existing training datasets, which provide gold labels indicating which posts are toxic or not, without providing gold toxic span annotations.", "Although sequence labeling performed overall better, the binary attentional classifier performed surprisingly well too, despite having been trained on data without span annotations.", "We then study some characteristics of supervised and self-supervised toxic-to-civil transfer models (Laugier et al., 2021) by comparing them on several datasets, including a recently released parallel toxic-to-civil dataset (Dementieva et al., 2021) and the new TOXICSPANS dataset.", "Using the latter, we introduce a measure to evaluate the elimination of explicit toxicity, and we use this measure to compare the behavior and performance of toxic-to-civil models.", "Lastly, by applying toxic span detection systems, we assess the performance of human crowdworkers on the toxic-to-civil task.", "Toxicity detection systems (Schmidt and Wiegand, 2017; Pavlopoulos et al., 2017c; Zampieri et al., 2019) are typically trained on datasets annotated at the post level (a text is annotated as toxic or not) (Wulczyn et al., 2017; Borkan et al., 2019).", "Our work differs from general toxicity detection 1 Our code and dataset are publicly available at https: //github.com/ipavlopoulos/toxic_spans with a CC0 licence.", "Part of the dataset was also used in the SemEval-2021 Task 5 (Pavlopoulos et al., 2021).", "in that we detect toxic spans , instead of assigning toxicity 
labels to entire texts.", "Toxic spans detection can be seen as a case of attribution or rationale extraction (Li et al., 2016; Ribeiro et al., 2016; Lei et al., 2016; Zhang et al., 2021; Jain et al., 2020; DeYoung et al., 2020), but specifically for toxic posts, a task that has never been considered in general toxicity detection before.", "Detecting spans, instead of entire posts, was recently also considered in propaganda (Martino et al., 2020) and hate speech detection (Mathew et al., 2021).", "Although the ground truth type is similar (spans), propaganda detection is a different task from ours.", "Hate speech is a particular type of toxicity (Borkan et al., 2019), which can be tackled by more general toxicity detectors (Van Aken et al., 2018), but not the other way round; i.e., we address a broader problem.", "This probably explains why a pattern-matching baseline, based on the data of Mathew et al. (2021), achieved only slightly better results than a random baseline on our dataset.", "Suggesting civil rephrases of posts found to be toxic (Nogueira dos Santos et al., 2018; Laugier et al., 2021) is the next step towards healthier online discussions, and can be viewed as style transfer (Shen et al., 2017; Fu et al., 2018; Lample et al., 2019).", "We show how toxic spans detection can contribute in the assessment of toxic-to-civil transfer, linking the two tasks together for the first time.", "We used posts (comments) from the publicly available Civil Comments dataset (Borkan et al., 2019), which already provides whole-post toxicity annotations.", "We followed the toxicity definition that was used in Civil Comments, i.e., we use toxic' as an umbrella term that covers abusive language phenomena, such as insults, hate speech, identity attack, or profanity.", "This definition of toxicity has been used extensively in previous work (Hosseini et al., 2017; Van Aken et al., 2018; Karan and na-jder, 2019; Han and Tsvetkov, 2020; Pavlopoulos et al., 2020).", "We asked crowd annotators to highlight the spans that constitute anything that is rude, disrespectful, or unreasonable that would make someone want to leave a conversation.", "Besides toxicity our annotators were also asked to select a subtype for each highlighted span, choosing between insult, threat, identity-based attack, profane/obscene, or other toxicity.", "Asking the annotators to also select a category was intended as a priming exercise to increase their engagement, but it may have also helped them align their notions of toxicity further, increasing inter-annotator agreement.", "For the purposes of our experiments, we collapsed all the subtypes into a single toxic class, and we did not study them further; but the subtypes are included in the new dataset we release.", "Annotation From the original Civil Comments dataset (1.2M posts), we retained only posts that had been found toxic by at least half of the crowd-raters.", "This left approx.", "30k toxic posts.", "We selected a random 11k subset of the 30k posts for toxic spans annotation.", "We used the crowd-annotation platform of Appen.", "2 We employed three crowd-raters per post, all of whom were warned for explicit content.", "Raters were selected from the smallest group of the most experienced and accurate contributors.", "The raters were asked to mark the toxic word sequences (spans) of each post by highlighting each toxic span on their screen.", "For each post, the dataset includes the spans of all three raters.", "If the raters believed a post was not actually toxic, or that 
the entire post would have to be annotated, they were instructed to select appropriate tick-boxes in the interface, without highlighting any span.", "The tick-boxes were separate and the dataset shows when (if) any of the two were ticked.", "Hence, when no toxic spans are provided (for a particular post by a particular rater), it is clear if the rater thought that the post was not actually toxic, or that the entire post would have to be annotated.", "message being conveyed may be inherently toxic (e.g., a sarcastic post indirectly claiming that people of a particular origin are inferior) and, hence, it may be difficult to attribute the toxicity of those posts to particular spans.", "In such cases, the posts may end up having no toxic span annotations, according to the guidelines given to the annotators; see the last post of Table 1 for an example.", "In other cases, however, it is easier to identify particular spans (possibly multiple per post) that make a post toxic, and these toxic spans often cover only a small part of the post (see Table 1 for examples).", "Agreement We measured inter-annotator agreement on 87 randomly selected posts of our dataset, using 5 crowd-annotators per post in this case.", "We calculated the mean pairwise (for a pair of annotators) Cohen's kappa per post, using character offsets as instances being classified as toxic (included in a toxic span) or non-toxic; we then averaged over the posts.", "Although our dataset contains only posts found toxic by at least half of the original crowd-raters, only 31 of the 87 posts were found toxic by all five of our annotators, and 51 were found toxic by the majority of our annotators; this is an indicator of the well-known subjectivity of toxicity detection.", "On the 31, 51, and 87 posts, the average kappa score was 65%, 55%, 48%, respectively, indicating that when the raters agree (at least by majority) about the toxicity of the post, there is also reasonable agreement regarding the toxic spans.", "Note that the toxic spans are typically short.", "This leads to class imbalance (most offsets are marked as non-toxic), increases agreement by chance (on the non-toxic offsets), and leads to low kappa scores (kappa adjusts for chance agreement).", "Another reason behind this modest (compared to other tasks) inter-annotator agreement is the inherent subjectivity of deciding if a post is toxic or not.", "Our kappa score is in fact slightly higher than in previous work on toxicity detection, classifying posts as toxic or not (Sap et al., 2020; Pavlopoulos et al., 2017a), and in that sense our inter-annotator agreement can be seen as an improvement.", "Ground truth To obtain the ground truth of our dataset, we averaged the labels per character of the annotators per post.", "We used the following process: for each post t , first we mapped each annotated span of each rater to its character offsets.", "We then assigned a toxicity score to each character offset of t , computed as the fraction of raters who annotated that character offset as toxic (included it in their toxic spans).", "We retained only character offsets with toxicity scores higher than 50%; i.e., at least two raters must have included each character offset in their spans.", "Table 1 shows examples.", "The dataset TOXICSPANS contains the 11,035 posts we annotated for toxic spans.", "The unique posts are actually 11,006, since a few were duplicates and were removed in subsequent experiments.", "A few other posts were used as quiz questions to check the reliability of 
"Exploratory analysis Although we instructed the crowd-raters to click the appropriate tick-box and not highlight any span when the whole post would have to be highlighted, the ground truth of 34 out of the 11k posts covers the entire post.", "However, 14 out of the 34 posts are single-word texts, while the other posts are very short (Appendix A shows more details); it seems that in very short posts the raters sometimes did not realize they ended up highlighting the entire post.", "Furthermore, about 5k of the 11k posts have an empty ground truth set of toxic character offsets (as in the last post of Table 1), even though all the posts of our dataset had been found toxic by the original raters.", "This is partly due to the fact that we include in the ground truth only character offsets that were included in the toxic spans of the majority of our annotators.", "It also confirms that it is not always possible to attribute (at least not by consensus) the toxicity of a post to particular toxic spans.", "In almost all posts, the ground truth covers less than half of the post; and in the vast majority, less than 20% of the post.", "A dense toxic span of a post is a maximal sequence of contiguous toxic characters.", "There exist posts with more than one dense toxic span, but most posts include only one.", "Table 2 provides further statistics.", "Table 2: TOXICSPANS statistics.
                            Mean    Min    Max
  Post length              208.14     4  1,000
  Dense toxic span length    7.01     3     87
  # Dense toxic spans        0.58     0      8", "For the newly introduced toxic spans detection task, we evaluate systems in terms of F1 score, as in the work of Da San Martino et al. (2019).", "Given a test post t, let system A_i return a set $S_t^{A_i}$ of character offsets, for parts of the post found to be toxic.", "Let $S_t^G$ be the character offsets of the ground truth annotations of t.", "We compute the F1 score of system A_i with respect to the ground truth G for post t: $F_1^t(A_i, G) = \frac{2 \cdot P^t(A_i, G) \cdot R^t(A_i, G)}{P^t(A_i, G) + R^t(A_i, G)}$ (1).", "If $S_t^G$ is empty for some post t (no gold spans are given for t), we set $F_1^t(A_i, G) = 1$ if $S_t^{A_i}$ is also empty, and $F_1^t(A_i, G) = 0$ otherwise.", "We average $F_1^t(A_i, G)$ over all test posts t to obtain a single score for system A_i.", "We use F1 as the main evaluation measure in the experiments reported below.", "TRAIN-MATCH is a simple lookup-based model that classifies as toxic any tokens encountered inside toxic spans of the training data.", "HATE-MATCH operates similarly, but the lookup is within the hateful/offensive spans of the data of Mathew et al. (2021)."
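A minimal sketch of the per-post evaluation of Eq. 1 above, including the empty-set convention; predicted and gold spans are represented as sets of character offsets, and the names are illustrative.

```python
def span_f1(pred: set, gold: set) -> float:
    """Per-post F1 of Eq. 1 over character offsets. Empty-set convention:
    if there are no gold offsets, score 1 when the prediction is also
    empty, and 0 otherwise."""
    if not gold:
        return 1.0 if not pred else 0.0
    if not pred:
        return 0.0
    tp = len(pred & gold)                 # offsets returned by both
    precision = tp / len(pred)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def system_score(all_pred, all_gold):
    """Average the per-post F1 over all test posts: the single system score."""
    return sum(span_f1(p, g) for p, g in zip(all_pred, all_gold)) / len(all_gold)

print(round(span_f1({1, 2, 3, 4}, {3, 4, 5}), 3))  # 0.571
```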
(2021).", "A naive baseline, RAND-SEQ , randomly classifies tokens as toxic or not.", "Toxic spans detection can be seen as sequence labeling (tagging words).", "As a baseline of this kind, we employ SPACY ' S Convolutional Neural Network, which is pre-trained for tagging, parsing, entity recognition (Honnibal and Montani, 2017).", "We call this model CNN-SEQ and fine-tune it on dense toxic spans, treated as entities'.", "We also train a bidirectional LSTM ( BILSTM-SEQ ), 3 and fine-tune BERT (Devlin et al., 2019) and SPANBERT (Joshi et al., 2020) for toxic spans ( BERTSEQ , SPAN-BERT-SEQ ).", "4 These methods require training data manually annotated with toxic spans.", "We trained binary classifiers to predict the toxicity label of each post, and we employed attention as a rationale extraction mechanism at inference to obtain toxic spans, an approach Pavlopoulos et al. (2017b) found to work reasonably well in toxicity detection.", "5 We experimented with two classifiers: 3 We used the probabilistic ground truth for training and mean square error as the loss function of BILSTM-SEQ , which yielded best results in preliminary experiments.", "4 More details can be found in the Appendix A.3.", "a BILSTM with deep self-attention as in the work of Pavlopoulos et al. (2017b), but training with a regression objective and probabilistic labels following D'Sa et al. (2020) and Wulczyn et al. (2017); and BERT with a dense layer and sigmoid on the [ CLS ] embedding.", "To detect toxic spans, we used the attention scores of the BILSTM and the attention scores from the heads of BERT 's last layer averaged over the heads, respectively.", "In both cases, we obtain a sequence of binary decisions (toxic, nontoxic) for the tokens of the post (inherited by their character offsets) by using a probability threshold (tuned on development data) applied to the attention scores.", "We refer to these two attention-based rationale extraction methods as BILSTM + ARE and BERT + ARE , respectively.", "These methods require training posts annotated only with toxicity labels per post (no toxic span annotations).", "We used a 5-fold Monte Carlo cross-validation (5 random training/development/test splits) on the 11k posts of TOXICSPANS .", "In each fold, we use 10% of the data for testing, 10% for development, and 80% for training.", "In ARE -based methods, which rely on an underlying classifier to predict if a post is toxic or not, the classifier is trained on the training part of the fold (which contains only toxic posts, ignoring the toxic span annotations) and a randomly selected but not in toxicity detection.", "See also Wiegreffe and Pinter (2019), Kobayashi et al. 
"When measuring the (binary) classification performance of the underlying classifier, the classifier is evaluated on a new equally balanced test set of 3k randomly sampled unseen posts from Civil Comments.", "Both look-up methods (TRAIN-MATCH, HATE-MATCH) outperform the random baseline (Table 3).", "However, TRAIN-MATCH performs much better, which agrees with our hypothesis that toxicity detection is a broader problem than hate speech detection.", "Both look-up methods are outperformed by the sequence labeling models (-SEQ), especially SPAN-BERT-SEQ, which is pre-trained to predict spans.", "These results show that the tokens of toxic spans are context-dependent and their meaning is not captured well by context-unaware look-up lexicons.", "An error analysis of the best-performing SPAN-BERT-SEQ showed that mistakes include both false negatives (e.g., incorrectly returning an empty span, 1st row of Table 4) and false positives (2nd and 3rd row).", "BERT+ARE performs worse than BILSTM+ARE, despite the fact that the underlying BERT classifier is much better (ROC AUC 96.1%) at separating toxic from non-toxic posts than the underlying BILSTM (90.9%).", "Interestingly, the BILSTM binary toxicity classifier with the attention-based toxic span detection mechanism (Pavlopoulos et al., 2017b) is close in performance to BILSTM-SEQ, despite the fact that the latter is directly trained on toxic span annotations, whereas the former is trained with binary post-level annotations only (toxic, non-toxic post).", "Several large datasets with post-level toxicity annotations are publicly available (Pavlopoulos et al., 2019).", "Therefore, attribution-based toxic span detectors, such as BILSTM+ARE, can in principle perform even better if the underlying binary classifier is trained on a larger existing dataset.", "To investigate this, we increased the training set of the underlying BILSTM classifier of BILSTM+ARE.", "We added to the training set of each cross-validation fold 80k further toxic and non-toxic posts (still equally balanced, without toxic spans) from the dataset of Borkan et al. (2019), excluding posts used in TOXICSPANS."
"The ROC AUC score of the underlying BILSTM (in the task of separating toxic from non-toxic posts) improved from 90.9% to 94.2%, and the F1 score of BILSTM+ARE (in toxic spans detection) improved from 57.7% to 58.8%, almost reaching the performance of BILSTM-SEQ.", "Appendix A reports results for less added data.", "Toxic spans in toxic-to-civil transfer As shown in Section 6, a toxic span detection method can be used to highlight toxic parts of a post, to assist, for instance, human moderators.", "The new TOXICSPANS dataset and toxic span detection methods, however, can assist in more ways.", "This section describes how we combined the new dataset and the best-performing toxic span detector (SPAN-BERT-SEQ) to show how they can be useful in toxic-to-civil text transfer (Nogueira dos Santos et al., 2018; Laugier et al., 2021).", "In the context of detoxifying comments to nudge users towards healthier conversations online, this task aims at suggesting civil rephrasings of toxic posts.", "More specifically, we study the following research question: Can TOXICSPANS data and toxic span detectors be used to assess the mitigation of explicit toxicity in toxic-to-civil transfer?", "To answer this question, we proceeded in two ways:", "(i) evaluating the transfer of toxic spans in system-detoxified posts, and", "(ii) studying any remaining toxic spans in human-detoxified posts.", "We first compare the performance of two toxic-to-civil transfer models, CAE-T5 and SED-T5, both based on the T5 transformer encoder-decoder architecture (Raffel et al., 2019); they both fine-tune the weights of the same pre-trained model, namely T5-large.", "CAE-T5 (Laugier et al., 2021) is a self-supervised Conditional Auto-Encoder, fine-tuned on a large non-parallel (NP) dataset based on preprocessed posts from the Civil Comments (CC) dataset, the dataset (with post-level annotations) that TOXICSPANS was also based on.", "SED-T5 is a Supervised Encoder-Decoder; we fine-tuned it on a smaller parallel (P) dataset created by Dementieva et al. (2021), consisting of pairs of comments: a toxic comment and a detoxified paraphrase written by a crowdworker.", "Table 5 summarizes statistics of the two datasets (P, NP) and highlights a trade-off between the level of supervision and the number of samples: there is a 1:40 ratio between toxic comments in P (direct supervision, parallel data) and NP (indirect supervision, no parallel data).", "Table 6 shows our experimental results.", "Following Laugier et al. (2021), we report accuracy (ACC), perplexity (PPL), similarity (SIM), and the geometric mean (GM) of ACC, 1/PPL, and SIM.", "Accuracy measures the rate of successful transfers from toxic to civil, and computes the fraction of posts whose civil version is classified as non-toxic by a BERT toxicity classifier; we used the BERT-based toxicity classifier of Laugier et al. (2021).", "Perplexity is used here as a measure of fluency and is computed with GPT-2 (Radford et al., 2019).", "Similarity measures content preservation between the original toxic text and its system-rephrased civil version (selfSIM) or the gold (human) civil rephrasing (refSIM, only for P); in both cases, it is computed as the cosine similarity between the single-vector representations of the two texts, produced by the universal sentence encoder of Cer et al. (2018)."
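A small sketch of the GM aggregation of the three automatic scores; plugging in the CAE-T5 values for the NP dataset from Table 6 reproduces the reported GM(self) of 0.466.

```python
def geometric_mean_score(acc: float, ppl: float, sim: float) -> float:
    """Geometric mean of ACC, 1/PPL and SIM."""
    return (acc * (1.0 / ppl) * sim) ** (1.0 / 3.0)

# CAE-T5 on the NP test set (Table 6): ACC = 75.0%, PPL = 5.2, selfSIM = 70.0%
print(round(geometric_mean_score(0.75, 5.2, 0.70), 3))  # 0.466
```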
(2018).", "As can be seen in Table 6, CAE-T 5 has better aggregated results (higher GM ) than SED-T 5 in all three datasets, which are due to lower perplexity and (in NP and TOXICSPANS ) higher accuracy.", "However, SED-T 5 learned to preserve content better (higher SIM in all three datasets), because of the parallel data ( P , with gold rephrases) it was trained on.", "By contrast, CAE-T 5 was trained without parallel data ( NP ) using a cycle-consistency loss, which leads to more frequent hallucinations of content that was not present in the original post (Laugier et al., 2021).", "These hallucinations may also help CAE-T 5 obtain better perplexity scores, by gener-Evaluation Dataset Metric CAE-T 5 SED-T 5 Non-Parallel ( NP ) ACC 75.0% 52.2% ACC 2 83.4% 67.3% PPL 5.2 11.8 selfSIM 70.0% 87.9% GM (self) 0.466 0.338 ACC 3 86.7% 64.1% ACC 4 83.2% 59.5% Parallel ( P ) ACC 94.3% 94.3% ACC 2 94.7% 94.3% PPL 9.1 38.3 refSIM 27.6 % 65.3% selfSIM 32.6 % 65.6% GM (ref) 0.306 0.252 GM (self) 0.323 0.252 ACC 3 98.8% 94.3% ACC 4 94.7% 91.9% TOXICSPANS ACC 92.9% 65.6% ACC 2 92.5% 63.7% PPL 7.2 24.9 selfSIM 34.5% 82.1% GM (self) 0.355 0.279 ACC 3 96.9% 62.0% ACC 4 92.0 % 54.7% Table 6: Automatic evaluation scores of CAE-T 5 (trained on NP 's training subset) and SED-T 5 (trained on P 's training subset), when the test sets are from NP , P , and TOXICSPANS .", "ACC 2, ACC 3, ACC 4 also consider toxic spans (Section 7.2).", "ating fluent civil rephrases' that do not preserve, however, the original semantics.", "Also, although the general trends are similar in all three datasets ( SED-T 5 preserves content better, CAE-T 5 is better in perplexity and GM ), there are several differences too across the three datasets.", "For example, CAET 5 is much better than SED-T 5 in accuracy (posts detoxified) on NP and TOXICSPANS , but both systems have the same accuracy on P ; and the scores of the systems vary a lot across the three datasets.", "These considerations motivated us to seek ways to further analyse the behavior of toxic-to-civil transfer models.", "TOXICSPANS and toxic span detectors are an opportunity to move towards this direction, by studying how well transfer models cope with explicit toxicity , i.e., spans that can be explicitly pointed to as sources of toxicity.", "We leave for future work the flip side of this study, i.e., studying cases where transfer models rephrase spans not explicitly marked (by toxic span detectors or human annotators) as explicitly toxic.", "Recall that the accuracy ( ACC ) scores of Table 6 measure the percentage of toxic posts that the transfer models ( CAE-T 5, SED-T 5) rephrased to forms that a ( BERT -based) toxicity classifier considered non-toxic.", "One could question, however, if it is possible (even for humans) to produce a civil rephrase 3726 of a toxic post when it is impossible to point to particular spans of the post that cause its toxicity (as in the last post of Table 1).", "Detoxifying posts of this kind may constitute a mission impossible for most models (possibly even for humans); the only way to produce a non-toxic rephrase' may be to change the original post beyond recognition, which may be rewarding systems like CAE-T 5 that often hallucinate in their rephrases, as already discussed.", "Hence, it makes sense to focus on posts that contain explicit toxic spans, marked by human annotators (for TOXICSPANS ) or our best toxic span detector ( SPAN-BERT-SEQ ).", "Using these toxic spans, we define three additional variants of accuracy: ACC 2 is the same as ACC 
"Table 6 shows that restricting ACC to consider only posts with at least one toxic span (ACC2) substantially improves the performance of both models on the NP dataset, indicating that it contains many 'mission impossible' instances (posts with no toxic spans) that the original ACC considers.", "By contrast, switching from ACC to ACC2 leads to mostly negligible changes on the P and TOXICSPANS datasets, which is in accordance with the fact that they contain fewer posts with no toxic spans (11.5% and 48.7%, respectively, compared to 67.4% for NP).", "Another interesting observation is that ACC4 is always substantially lower than ACC3 (for both systems, on all three datasets), indicating that the models often successfully detect toxic spans and try to rephrase them, but the rephrases are still toxic, at least according to the toxicity classifier.", "In this experiment, we wished to study the extent to which humans rephrase known toxic spans, when asked to produce civil rephrases of toxic posts.", "We used the P dataset, the only one of the three considered that contains human rephrases.", "We used all the P data, since no training was involved.", "Since P does not contain gold toxic spans, we again employed SPAN-BERT-SEQ to add toxic spans to the source posts and retained only the 1,354 (out of 2,778 in total) source-target pairs of posts with at least one toxic span in their source post.", "In all but 6 of the 1,354 posts, the humans rephrased (in the gold target post they provided) all the toxic spans of the source post.", "The 6 posts were mainly cases where the human changed the context to mitigate toxicity, while retaining the original toxic span.", "For example, 'he's not that stupid' became 'he's not stupid' (original toxic span shown in bold); in this case, removing the 'that' from the context arguably makes the post less offensive.", "Overall, we conclude that humans did rephrase almost all cases of explicit toxicity in the toxic posts they were given.", "We also applied SPAN-BERT-SEQ to the gold target (rephrased) posts that the humans provided, to check if any explicit toxicity remained or was introduced by the rephrases.", "This flagged 93 gold target posts as comprising at least one toxic span.", "A manual inspection of the 93 posts revealed that they fall into two main categories.", "The first category comprises cases where a toxic span of the source post was rephrased, but the rephrase might not be considered totally civil; e.g., 'how freaking narcissistic do you have to be?' became 'how narcissistic do you have to be?', where SPAN-BERT-SEQ marked the 'narcissistic' of the rephrase as a toxic span.", "The second category comprises cases where SPAN-BERT-SEQ produced false positives; e.g., the source post 'most of the information is total garbage' became 'most of the information is totally useless', but SPAN-BERT-SEQ marked (arguably incorrectly) 'useless' as a toxic span."
"We also applied the BERT-based text toxicity classifier of Laugier et al. (2021) to the 2,778 posts of the P dataset, dividing them into two sets: posts that comprised at least one toxic span detected by SPAN-BERT-SEQ (1,354 posts with explicit toxicity) and the rest (implicit toxicity).", "The BERT-based toxicity classifier considered the 1,354 posts of the first set more toxic (higher average toxicity score) than the second one, i.e., it was more confident that the posts of the first set (explicit toxicity) were toxic, as one might expect.", "By resampling 1,000 subsets (of 50 posts each) from the two sets, we confirmed that this is a statistically significant difference (P = 0.001).", "The difference in the average predicted toxicity score between the two sets is 14% (from 0.94 down to 0.80).", "The most frequent spans were 'sh*t', 'st*p*d', 'f*ck'.", "The posts we annotated for toxic spans were extracted from an already heavily studied public domain benchmark dataset (Civil Comments) that has been examined by thousands of teams in a Kaggle competition (shorturl.at/hqEJ3), and that has been cited in over 50 academic publications.", "The Civil Comments dataset was filtered to remove any potential personally identifiable information before it was released.", "Our annotation cost was $21,089 for 59,486 judgements, paying $0.30 per item.", "All raters were warned about the explicit content of the job, and only high-accuracy raters were selected (70+%), based on performance on quiz questions.", "The most common countries of origin of our crowd-annotators were Venezuela and the USA (Fig. 6 in Appendix A.1).", "In the contributor satisfaction survey, 51 participants gave an overall task rating of 3.6/5.0, with pay and test question fairness rated slightly higher than ease of job and clarity of instructions.", "We note that it is more difficult and costly (approximately 3 times more) to manually annotate toxic spans, instead of just labeling entire posts as toxic or not.", "This is why we also explored adding rationale extraction components on top of toxicity classifiers trained on existing, much larger datasets.", "We showed that BILSTM+ARE has the potential to reach the performance of BILSTM-SEQ, which is important for future work aiming to build toxic span detectors without any toxic span annotations in the training data.", "This may be particularly useful in low-resourced languages with limited resources for text toxicity (Zampieri et al., 2020).", "Having two separate systems, one for toxicity detection and one for toxic spans identification, is more easily compatible with existing deployed toxicity detectors.", "One can simply add a component for toxic spans at the end of a pipeline for toxicity detection, and the new component would be invoked only when toxicity is detected, leaving the rest of the existing pipeline unchanged.", "Since the vast majority of posts in real-world applications are non-toxic (Borkan et al., 2019), this pipeline approach would only increase the computational load for the relatively few posts classified as toxic.", "Using only toxic posts in this study was also a way to simplify this first approach to toxic spans detection, assuming an oracle system achieved the first step (deciding which posts are toxic).", "However, we note that future work could study adding non-toxic posts to our dataset and requiring systems to first detect toxic posts, then extract toxic spans for toxic posts.", "A direct comparison (in terms of size) of TOXICSPANS with other existing toxicity datasets is only possible if one focuses on the toxic class, typically the minority one, since our dataset contains only toxic posts."
"By adding non-toxic posts, much larger versions of our dataset can be compiled, of sizes similar to those of existing previous datasets (that provide post-level annotations only).", "Hence, our TOXICSPANS dataset is released in the following versions: First, only toxic posts included (11,006 posts), which is the version we discuss in this work.", "Second, the previous version will be augmented with the same number of randomly selected non-toxic Civil Comments posts.", "Third, a version similar to the previous one, but where the ratio of toxic to non-toxic posts will be 1:40 to be closer to that of real-world datasets (325,499 posts).", "As shown in Section 7, the TOXICSPANS dataset and toxic span detectors can also help study and evaluate explicit toxicity removal when rephrasing toxic posts to be civil.", "In this case, toxic spans can be used to get a better understanding of how toxic-to-civil models operate, by showing the toxic spans and their context, along with their rephrases.", "The toxic span detection systems we consider are trained (the sequence-labeling ones) and tested (all systems) on posts with binary ground-truth character offset labels (toxic or not), reflecting the majority opinion of the annotators (Section 3).", "This runs the risk of ignoring the opinions of minorities, who may also be minorities among crowd-annotators.", "To address this issue, we also release the toxic spans of all the annotators and the pseudonymous rater identities, not just the spans that reflect the majority opinion, to allow different label binarisation strategies and further studies.", "Toxic span detection systems are intended to assist the decision making of moderators, not to replace moderators.", "When they operate correctly, systems of this kind are expected to ease decision making (reject/accept a post).", "Incorrect results could be of two types: toxic spans that were not highlighted, and non-toxic spans that were highlighted.", "Mistakes of both types, especially of the first one, may mislead a moderator working under time pressure.", "As with other content filtering systems (e.g., spam filters, phishing detectors), toxic span detectors may trigger an adversarial reaction of malicious users, who may study which types of toxic expressions evade the detectors (esp. publicly available ones) and may gradually start using more implicit toxic language (e.g., irony, false claims), which may be more difficult to detect.", "However, this is a danger that concerns any toxicity detection system, including systems that classify user content at the post level (without detecting toxic spans).", "We studied toxic spans detection, which aims to identify the spans of a user post that make it toxic.", "Our work is the first of this kind in general toxicity detection.", "We constructed and released a dataset for the new task, along with baselines and models.", "Fine-tuning the SPAN-BERT sequence labelling model of Joshi et al. (2020) yielded the best results."
"A post-level BILSTM toxicity classifier that was combined with an attention-based attribution method, not trained on annotations at the span level, performed well for the task.", "By leveraging the dataset of posts annotated as toxic or non-toxic (without spans), we showed that this method can reach the performance of a BILSTM sequence labelling approach that was trained on the more costly toxic spans annotations.", "This result is particularly interesting for future work aiming to perform toxic spans detection by using only datasets with whole-post toxicity annotations.", "In a final experiment, we examined toxic-to-civil transfer, showing how toxic spans can help shed more light on this task too, by helping assess how well systems and humans address explicit toxicity.", "In future work we plan to study toxic span detection in multiple languages and in context-dependent toxic posts.", "We thank Lucas Dixon for discussions, insight, and useful comments.", "We also thank the anonymous reviewers for their comments.", "This research was funded in part by an unrestricted gift from Google." ]
[ "method", "abstain", "result", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "other", "abstain", "other", "other", "other", "method", "abstain", "other", "objective", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "result", "abstain", "result", "method", "other", "other", "other" ]
[ "Using natural language as a hint can supply an additional reward for playing sparse-reward games.", "Achieving a goal should involve several different hints, while the given hints are usually incomplete.", "Those unmentioned latent hints still rely on the sparse reward signal, and make the learning process difficult.", "In this paper, we propose semi-supervised initialization (SSI) that allows the agent to learn from various possible hints before training under different tasks.", "Experiments show that SSI not only helps to learn faster ( 1.2x ) but also has a higher success rate ( 11% relative improvement) of the final policy.", "Most Reinforcement Learning (RL) methods (Mnih et al., 2013, 2016) rely on an agent to explore and maximize the feedback reward.", "Since designing a reward for each step is impractical, a common setting is only to give out the achieved signal.", "In Atari Grand Challenge (Kurin et al., 2017), only if achieving the goal, the environmental reward is 1.", "However, this sparse-reward setting makes the agent difficult to learn (Vecerik et al., 2017).", "ExtLang (Goyal et al., 2019) incorporates language hints as an additional reward to overcome the sparse-reward issue.", "They first build 45 different tasks under Montezuma's Revenge game, where each task consists of a starting state, an unknown goal position, and a given hint.", "They also collect demo clips, which are partial playing records (states and actions) that each corresponds to a hint, as shown in Fig. 1.", "To provide an additional reward, ExtLang pre-trains a reward module to reflect the relevance between agent actions and the given hint when exploring a task.", "In this way, they can supply a hint reward instead of only the sparse environmental reward to make the agent easier to explore.", "Though providing an additional reward, the hints are usually incomplete (Kuhlmann et al., 2004).", "Considering a task in Fig. 1, to achieve the goal, Figure 1: For a task with hint jump over the skull\", both climb up the ladder\" and jump to get the key\" are useful latent hints and can be learned during SSI. the agent should jump over the skull,\" climb up the ladder,\" and jump to get the key.\"", "However, the given hint only contains the first one, and learning those latent hints still relies on the sparse environmental reward. To deal with this issue, we propose semi-supervised initialization (SSI) that enables the agent to experience various possible hints in advance. We adopt a hint module to generate possible hints for random states and allow the agent to learn from them. With SSI, agents have a better-initialized policy during training for each task. From another point of view, in this paper, we propose a semi-supervised initialization method and investigate the abilities of NLP on controlling complex actions in game environments (Narasimhan et al., 2015; Ammanabrolu and Riedl, 2019). We perform SSI first and train-evaluate on those tasks built from ExtLang. Experimental results show that with SSI, better-initialized policy not only learns faster but also has a higher success rate. 2 Approach 2.1 Architecture Fig. 2 illustrates our semi-supervised initialization (SSI). First, the hint module H generates possible hints l for random states s . With s , the policy module P rollouts and step actions a . Then, the Figure 2: Overview of our semi-supervised initialization (SSI). 
"Then, the reward module R updates P based on the relevance between a and l.", "With different s, P has the opportunity to learn from various possible hints, and finally serves as a better-initialized policy.", "Hint Module (H) H generates a possible hint l for a state s.", "H adopts a CNN to extract the visual feature v of s and an attention-based (Bahdanau et al., 2015) GRU (Chung et al., 2014) as the decoder to produce a series of words w as a hint l: $v = \mathrm{CNN}(s)$, $h_t = \mathrm{GRU}(w_{t-1}, h_{t-1})$, $w_t \sim \mathrm{FC}([h_t, \sum \mathrm{softmax}(h_t W v^T)\, v])$, $l = \{w_1, w_2, \ldots, w_L\}$ (1), where W is a learnable attention matrix.", "Each example in the demo clips D consists of a hint l and a playing record $\{(s_1, a_1), (s_2, a_2), \ldots\}$.", "We randomly select s and pre-train H with the (s, l) pairs.", "Policy Module (P) The policy module P is a recurrent action selector which steps a_t for a state s_t at time step t.", "P applies a CNN (Krizhevsky et al., 2012) to extract the visual feature v_t of s_t, a GRU to model the previous history of s as h, and a fully connected layer (FC) to decide which action to step.", "By rollout, we get a_{1:T}: $v_t = \mathrm{CNN}(s_t)$, $h_t = \mathrm{GRU}(v_t, h_{t-1})$, $a_t \sim \mathrm{FC}(h_t)$ (2).", "Reward Module (R) R is a binary classifier which reflects the relevance between l and a, as in Fig. 3.", "Though more input (e.g., state frames) may make R more robust, to compare with ExtLang fairly, we use the same setting (the action frequency vector f) as the input.", "Figure 3: The reward module is a binary classifier which reflects the relevance between the hint and actions.", "Similar to ExtLang (Goyal et al., 2019), we first transform the actions a_{1:T} into an action frequency vector f, where each value is the ratio of that action in a_{1:T}.", "R utilizes an LSTM (Hochreiter and Schmidhuber, 1997) to encode l as e_l and an FC to extract e_f for f.", "Then, another FC serves as the binary classifier based on e_l and e_f: $f = \mathrm{Frequency}(a_{1:T})$, $e_l = \mathrm{LSTM}(l)$, $e_f = \mathrm{FC}(f)$, $r = \mathrm{FC}([e_l, e_f])$ (3), where r is the output of the binary classification and represents the relevance between l and a.", "2.2 Semi-Supervised Initialization (SSI) For a random state s, we adopt H to generate a possible hint l.", "With starting state s, the agent rolls out and steps actions a_{1:T} by P.", "Then, R provides the hint reward r^l_t for time step t as follows: $r^l_t = R(l, a_{1:t}) - \gamma\, R(l, a_{1:t-1})$ (4), where $\gamma$ is a discount factor.", "This hint reward motivates P to step actions relevant to l.", "To update P, we adopt the widely used Proximal Policy Optimization (PPO) (Schulman et al., 2017) to maximize r^l during SSI.", "In this way, the agent learns from various possible hints under different states in advance and obtains a better initialization for the subsequent task-training.", "2.3 Task-Training A task consists of a starting state s, an unknown goal position g, and a given hint l.", "The agent explores the environment, starting from s, and receives the environmental reward r^E.", "When achieving g, r^E is 1; otherwise, it is 0 for all other steps.", "With the better initialization, we further train P for each task.", "Similar to our SSI, during task-training we also have r^l to reflect how relevant the actions a from P are to the given hint l for this task."
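A sketch of the hint-reward shaping of Eq. 4 (as reconstructed above), assuming R is any callable that scores the relevance of an action prefix to the hint in [0, 1]; the toy relevance function is purely illustrative.

```python
def hint_rewards(R, hint, actions, gamma=0.99):
    """Per-step hint reward r^l_t = R(l, a_{1:t}) - gamma * R(l, a_{1:t-1}),
    rewarding the agent for increasing relevance to the hint."""
    rewards, prev = [], 0.0          # relevance of the empty prefix, assumed 0
    for t in range(1, len(actions) + 1):
        cur = R(hint, actions[:t])
        rewards.append(cur - gamma * prev)
        prev = cur
    return rewards

# Toy relevance: fraction of the actions so far that appear in the hint text.
toy_R = lambda l, a: sum(act in l for act in a) / len(a)
print(hint_rewards(toy_R, "jump climb", ["jump", "left", "climb"]))
```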
"Therefore, during task-training there are two kinds of reward, the sparse environmental reward r^E and the hint reward r^l: $a_t = P(s_t)$, $s_{t+1}, r^E_t = \mathrm{Env}(a_t)$, $r^l_t = R(l, a_{1:t}) - \gamma\, R(l, a_{1:t-1})$ (5). Finally, we optimize P by maximizing r^E + r^l for this task, also using PPO.

Algorithm 1 Learning of SSI and Task-Training
 1: Env: the environment
 2: P: policy module, R: reward module, H: hint module
 3:
 4: while DO_SSI do
 5:   s ← Env                        ▷ random starting state
 6:   l ← H(s)                       ▷ generate a hint by H
 7:   a_{1:T} ← P(s)                 ▷ rollout s by P
 8:   r^l_t ← hint reward            ▷ Eq. 4
 9:   Update P by maximizing r^l using PPO
10: end while
11:
12: while DO_Task-Training do
13:   s, l ← Task                    ▷ starting state and hint of the task
14:   a_{1:T} ← P(s)                 ▷ rollout s by P
15:   r^l_t ← hint reward            ▷ Eq. 5
16:   r^E_t ← environmental reward   ▷ Eq. 5
17:   Update P by maximizing (r^l + r^E) using PPO
18: end while

Learning Process of SSI and Task-Training Alg. 1 describes the learning process of our SSI and task-training. During SSI, P is updated to step actions relevant to the generated l from H. Thus, P can consider different hints in advance. During task-training, with the better initialization, P is optimized by both the environmental reward r^E and the hint reward r^l to achieve the final goal. 3 Experiments Experimental Settings To compare fairly with the baseline ExtLang (Goyal et al., 2019), we conduct the experiments on the same 45 tasks they built under the Montezuma's Revenge environment. H is pre-trained on the same demo clips D. We collect the same 160,000 (f, l) pairs as ExtLang to pre-train R. Then, H and R are fixed during SSI and task-training. A task consists of a starting state and a hint, and the agent explores the environment to achieve the unknown goal. We apply a 3-layer CNN to extract the visual feature of a state. Both the LSTM and the GRU contain 128 hidden units. We utilize PPO for optimization during SSI and task-training, with learning rate 7e-4. As a baseline, ExtLang consists of the same P and R to provide an additional hint reward during task-training. However, without H and SSI, ExtLang explores with a randomly initialized policy. We compare ExtLang with our ExtLang-SSI (ExtLang with semi-supervised initialization). All results are averaged over 45 tasks and 5 experiment runs. Quantitative Results Fig. 4 demonstrates the learning curves of ExtLang and our ExtLang-SSI. The x-axis is the number of PPO training steps. The upper figure shows the success rate, and the lower one shows the accumulated successful episodes.", "Figure 4: Comparison between learning curves of ExtLang and our ExtLang-SSI.", "The results show that under the same training step, ExtLang-SSI can succeed in more episodes than ExtLang.", "With SSI to learn from possible latent hints in advance, ExtLang-SSI can learn faster than a randomly initialized policy.", "In detail, ExtLang-SSI succeeds in 2720 episodes using only 420K training steps, where ExtLang requires a total of 500K.", "With the better-initialized policy, ExtLang-SSI brings a 1.2x speedup during task-training and succeeds in a higher total of 3465.6 episodes.", "A similar tendency can be found for the success rate.", "ExtLang-SSI has a higher success rate than ExtLang under the same training step.", "Apart from the learning curve, we also evaluate the final policy for both ExtLang and ExtLang-SSI.", "The final success rate of ExtLang-SSI is higher at 26.95%, outperforming ExtLang with an 11% relative improvement."
"With a better initialization, ExtLang-SSI can lead to a better final policy.", "The results on both accumulated successful episodes and success rates show that our proposed SSI not only accelerates the learning process but also helps to achieve a higher final success rate.", "An interesting insight is that during the early training (before 100K training steps), ExtLang is slightly better than our ExtLang-SSI.", "Because of learning from various hints in advance, ExtLang-SSI explores the environment based on different latent hints at first.", "Then, ExtLang-SSI can train faster by experiencing those latent hints useful for the task, and finally achieves more successful episodes and a higher success rate.", "Since it is an \"accumulated\" number, it will keep increasing with more training steps; note that the training of both ExtLang and ExtLang-SSI has converged.", "Figure 5: The learning curves for tasks 5 and 7.", "Figure 6: Comparison between learning curves of ExtLang-SSI under different iterations during SSI.", "Fig. 5 presents the learning curves of the success rate for tasks 5 and 7.", "Task 5 requires the agent to get down and jump over a spider; task 7 needs the agent to turn left, jump, and get a key.", "For task 5, ExtLang has about a 35% success rate at the end, but our ExtLang-SSI exceeds 35% very early in training, which means SSI helps to learn faster.", "Task 7 is more difficult, such that ExtLang almost fails even with the hint reward.", "In contrast, by learning from various latent hints, ExtLang-SSI can finally achieve a 40% success rate.", "Analysis of SSI To investigate our proposed SSI, we analyze the detailed effectiveness of ExtLang-SSI.", "Fig. 6 illustrates the learning curves under different numbers of iterations during SSI.", "Similar to Fig. 4, the x-axis is the training step of task-training, and each line corresponds to a number of SSI iterations (250K-500K).", "We can see that when using 350K iterations for SSI, ExtLang-SSI can succeed in more than 3000 episodes within 500K training steps.", "In general, more iterations during SSI enable the agent to access more latent hints under different starting states and help the agent to learn faster.", "Besides, SSI also benefits the policy by providing a better initialization.", "Thus, more SSI also yields a higher success rate under task-training.", "We also evaluate the final policy.", "Fig. 7 shows the relative improvement of ExtLang-SSI's final policy under different iterations during SSI.", "Figure 7: The relative improvement of ExtLang-SSI's final policy under different iterations during SSI.", "Note that the x-axis in Fig. 7 represents the number of iterations during SSI.", "ExtLang-SSI has a 6.0% relative improvement when applying SSI for 250K iterations.", "Similar to the learning curves, more SSI brings a larger relative improvement, reaching 11% under 500K SSI iterations.", "Analysis of Generated Hints We randomly select 100 generated hints and ask people to check if they are relevant to the state.", "The result shows that 73 are fully corresponding, 21 are partially corresponding, and only 6 are not corresponding.", "Our H can indeed generate an appropriate hint for a given state, so that SSI can help P towards a better initialization.", "We create noisy hints during SSI by randomly pairing a state with any other generated hint.", "The success rate under different noise rates is shown in Table 1.", "Table 1: The success rate under different noise rates of SSI hints (baseline: 24.01%).
  Noise Rate   0%       10%      30%      50%
  Suc. Rate    26.95%   26.38%   24.52%   23.91%"
"We can see that a high noise rate makes SSI less robust.", "Moreover, if the hints are too noisy, it can even hurt the performance (from the 24.01% baseline down to 23.91%).", "However, we have verified that our H can provide accurate hints.", "Therefore, SSI benefits the initialization, leading to a better success rate.", "Qualitative Results Fig. 8 demonstrates some examples of hints l generated by our H.", "By updating with hints like \"climb down the ladder\" or \"wait at the bridge appears\", P can learn those latent but useful hints before task-training in a semi-supervised scenario.", "In this paper, we propose semi-supervised initialization, which makes the agent learn from various possible hints in advance before playing games with language hints.", "With semi-supervised initialization, the agent can have a better-initialized policy, which benefits further task-training.", "The experiments show that semi-supervised initialization not only helps the agent to learn faster but also achieves a higher success rate for the final policy.", "Our presented SSI can benefit future vision-and-language research for practical applications.", "In terms of negative impact, since the initialization is learned from those instructions, if there is bias in the original dataset, it may have some potential issues.", "Acknowledgments.", "Research was sponsored by the U.S. Army Research Office and was accomplished under Contract Number W911NF-19-D-0001 for the Institute for Collaborative Biotechnologies.", "The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Technical support problems are often long and complex.", "They typically contain user descriptions of the problem, the setup, and steps for attempted resolution.", "Often they also contain various non-natural language text elements like outputs of commands, snippets of code, error messages or stack traces.", "These elements contain potentially crucial information for problem resolution.", "However, they cannot be correctly parsed by tools designed for natural language.", "In this paper, we address the problem of segmentation for technical support questions.", "We formulate the problem as a sequence labelling task, and study the performance of state of the art approaches.", "We compare this against an intuitive contextual sentence-level classification baseline, and a state of the art supervised text-segmentation approach.", "We also introduce a novel component of combining contextual embeddings from multiple language models pre-trained on different data sources, which achieves a marked improvement over using embeddings from a single pre-trained language model.", "Finally, we also demonstrate the usefulness of such segmentation with improvements on the downstream task of answer retrieval.", "Problems, reported by users of software or hardware products called tickets or cases , are often long and complex.", "Along with a description of the problem, users often report the setup, steps they have tried at mitigating the problem, and explicit requests.", "These problems also contain various nonnatural language elements like snippets of code or commands tried, outputs of commands or software tools, error messages or stack traces, contents of log files or configuration files, and lists of key-value Work done at IBM Research during a summer internship Now at Google Figure 1: Various non-natural language segments labelled from a problem on AskUbuntu pairs.", "Figure 1 shows a sample support problem from AskUbuntu 1 , where all such segments are labeled.", "While these segments are important sources of information for the human reader, they are difficult to handle for systems built to automatically answer support problems.", "As noted in Gupta et al. (2018), the non-natural language segments lead to parsing mistakes, and errors in the understanding of support problems.", "Correctly identifying these segments can also augment problem understanding.", "For instance, a retrieval engine with error messages and their solutions indexed in distinct fields would return better results with a fielded query containing just the error message from the ticket.", "Specialized tools for log analysis (He et al., 2016) could also be 1 https://askubuntu.com/ run specifically on the identified log segment of problems.", "In this paper, we aim to address the problem of identifying and extracting these non-natural language segments from support tickets.", "In particular, we choose to focus on the following six segment labels which appear often in support tickets (also shown in Figure 1): Command / Code : Includes terminal commands and programming code snippets Command Output / Info Message : Includes outputs of successful command/code executions Error Message / Stack Trace : Includes error traces resulting from unsuccessful com-mand/code executions File Content (Not Code) : Includes contents of log files, configuration files, etc. 
which do not contain programming source code; Path / URL: includes file paths or webpage URLs; and Semi-Structured Information: includes text which is structured in the form of key-value pairs, lists, etc., often used to convey system configurations or lists of components.", "We formulate the problem as a sequence labelling task, with word-level tags used to encode segments.", "To leverage the rich literature of supervised approaches in this framework, we also create a dataset with segments tagged for questions from AskUbuntu.", "Data available at https://github.com/kushalchauhan98/ticket-segmentation.", "Our contributions are as follows: 1. We introduce a novel task towards understanding technical support problems, which has implications for a variety of downstream applications.", "We also release a tagged dataset of problems for the task.", "2. We benchmark the performance of state of the art sequence labelling models on the task, studying their performance and limitations.", "This hopefully provides direction for future research.", "3. Given the relatively small size of tagged data, we also explore pre-training based approaches.", "Our model leverages activations from multiple language models pre-trained on different data sources, and we show how they can be used to improve performance on the task.", "Understanding technical support problems is a particularly difficult task, owing to the long text of problems.", "In Gupta et al. (2018), the authors propose that understanding can be approached by extracting attributes of the ticket that correspond to the description of the problem (symptom), steps taken for mitigation (attempt), and explicit requests (intent).", "They also propose a dependency parser-based approach for extracting these attributes.", "However, while this approach pays attention to the semantics of the problem, the syntactical idiosyncrasies are ignored.", "The idea of segmenting questions for improvements on downstream tasks is not new.", "In Wang et al. (2010), the authors propose an unsupervised graph-based approach for segmenting questions from Community Question Answering (cQA) websites into sub-questions and their related context sentences.", "The authors demonstrate improvements in question retrieval by using these segments for more granular similarity matching.", "Chrupała (2013) uses representations from a character-level language model for segmenting code spans in Stack Overflow posts.", "The author uses <code> tags in HTML sources of posts for supervised training of a character-level sequence labelling model.", "However, the <code> tags in the posts usually include all forms of non-natural language text like code snippets, command outputs, error messages or stack traces, and file paths (see Fig. 2).", "The resulting level of granularity is thus insufficient for effective application in downstream tasks such as automated problem resolution.", "The task of text segmentation in itself has been well studied in the literature, with popular unsupervised approaches like TextTiling (Hearst, 1997) and C99 (Choi, 2000).", "While the problem of ticket segmentation, as defined by us, involves both segmenting and identifying segment types, we compare the performance of a more recent supervised segmentation approach (Koshorek et al., 2018) against our proposed model.", "Significant amount of work has been done on using pre-trained representations for sequence labelling tasks.", "Figure 2: Example questions whose <code> tags contain various kinds of non-natural language text (panels (a) and (b)).", "In Wang et al.
(2018) the authors use ELMo embeddings and a biLSTM-CRF based architecture with self-attention for the task of neural discourse segmentation.", "We adopt a similar architecture, and explore the effect of using pre-trained contextual embeddings on our task.", "Given the fact that different segments in technical support problems have very different vocabularies, we also explore leveraging pre-trained language models on a variety of different datasets.", "Our dataset is derived from English questions on Ask Ubuntu.", "Questions posted on the website are similar to proprietary tech support tickets (in terms of question length, number of keywords/noun phrases, etc.).", "We would like to point out that while posts on the website support the <code> HTML tag, it is not granular enough for our downstream tasks.", "These tags are also often abused to present snippets of command outputs, error messages, file paths, etc.", "Figure 2 shows examples of such questions.", "Figure 3: Relative frequencies of each tag in the dataset.", "We also do not use other metadata available (like turn-based information) with the data dump, because these are not available with proprietary tickets.", "Tagging is performed at the word level, and we use the BIO tagging scheme.", "We have a pair of Begin and Inside tags for each of the 6 non-natural language segments, and natural language segments are labelled O, totalling 13 tags.", "We use the Doccano tool (https://github.com/chakki-works/doccano) for labelling, which provides better support for labelling long chunks in big documents compared to other popular sequence labelling annotation tools.", "We obtain labelling for 1,317 questions."
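The 13-tag BIO scheme described above can be written out explicitly; the segment identifiers below are illustrative short forms, not the exact label strings of the released dataset.

```python
SEGMENTS = [
    "Command_Code", "Command_Output", "Error_Message",
    "File_Content", "Path_URL", "Semi_Structured",
]

# One Begin and one Inside tag per non-natural language segment,
# plus the single O tag for natural language text: 6 * 2 + 1 = 13.
TAGS = ["O"] + [f"{prefix}-{seg}" for seg in SEGMENTS for prefix in ("B", "I")]
print(len(TAGS))  # 13
```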
7637 ) illustrates the inherent difficulty of the task.", "At this point, it's important to note that while there's some confusion in identifying labels for these segments, the need for these separate labels stems from downstream tasks.", "Given a technical support question, we formulate the segmentation problem as a sequence labelling task.", "It is an intuitive choice, given its efficacy for similar text segmentation problems like discourse segmentation (Wang et al., 2018) and chunking (Pe-ters et al., 2017).", "Figure 5 presents an overview of our model.", "We explore different embeddings for each word (character-based embeddings, pre-trained embeddings, and pre-trained contextual em-beddings).", "These word embeddings are then fed to a bi-directional GRU for encoding context.", "On the output of the GRU layer, we explore the effect of attention.", "Finally, the representations are passed to a CRF to decode the segment labels.", "We also study the impact of combining pre-trained contextual embeddings from multiple language models, trained on different data sources.", "In the rest of this section we detail individual components of the model.", "For distributed word representations, we use skip-gram based word2vec embeddings (Mikolov et al., 2013) trained on all the questions from Ask Ubuntu.", "We also look at fastText word embeddings (Bo-janowski et al., 2017), which enrich word vectors by using subword information, emitting plausible word representations for unseen or rare words, giving us a significant gain.", "We use a 300-dimensional embedding from both word2vec and fastText.", "In addition to the word-level features we also use bi-directional LSTM based character-level features similar to Chiu and Nichols (2016), Lample et al. (2016), and Ma and Hovy (2016).", "These features encode rich character level information which can improve performance, especially in syntactic tasks.", "We obtain an 80-dimensional representation for each word through the character bi-LSTM, which is the concatenation of the last hidden state of the forward and backward LSTMs.", "Pre-trained contextual embeddings have been shown to work well on a wide variety of NLP tasks.", "In domains with relatively small task-specific training data, the gains have been substantial (McCann et al., 2017; Akbik et al., 2018; Peters et al., 2017).", "We also include contextual embeddings from the pre-trained bi-directional language model in ELMo (Peters et al., 2018).", "We observe that the non-natural language segments exhibit wide differences in syntactic and semantic structure, as is evident from Fig 1. We propose contextual embeddings from multiple language models; each trained on a different data source English text, code snippets, config/log file contents.", "We hypothesize that combined embeddings from language models trained on separate data sources can capture word relationships better and can give richer word representations, as opposed to a single model trained on a large English corpora.", "For combining multiple contextual embeddings, we explore two techniques (1) a naive concatenation, and (2) a weighted sum, with weights learned from context-independent DME (Dynamic Meta-Embeddings) and context-dependent CDME (Contextualised Dynamic Meta-Embeddings) self-attention mechanisms as proposed by Kiela et al. (2018).", "When using embeddings from n different LMs for a training instance with s tokens { t j } sj =1 , we get contextual embeddings { w i,j } sj =1 R d i ( i = 1 , 2 , . . . 
"In addition to the pre-trained ELMo model, we train three additional language models on different data sources.", "Each of these is also trained with the ELMo architecture.", "The pre-trained model emits word embeddings of size 1024, while each of our domain-specific models emits embeddings of size 256.", "Code LM: this LM was trained on a concatenation of all text inside the <code> tags of Ask Ubuntu, Super User, and Unix Stack Exchange posts.", "The total size of this corpus was approximately 400 MB.", "Prog LM: this LM was trained on a corpus containing programming source code that was compiled from various code repositories on GitHub.", "Approximately 275 MB in size, it includes sources in most popular languages such as C, C++, Python, Java, Go, JavaScript, and Bash.", "Config LM: this LM was trained on a corpus of configuration and log files present in the system folders of Mac OS and Ubuntu installations.", "The total size of the corpus was about 60 MB.", "In Wang et al. (2018), the authors experiment with a restricted attention mechanism on top of the LSTM hidden representations.", "This is not appropriate for our task since the questions are fairly long (averaging around 900 words) and signals indicating the start or end of a segment might appear far away.", "Since RNNs are known to be poor at modelling very long-distance dependencies, we also experiment with the inclusion of the Scaled Dot-Product Attention layer (Vaswani et al., 2017) on top of the bi-directional GRU.", "This attention layer requires the computation of 3 matrices (Key, Query, Value) from the RNN hidden states, which entails a large number of extra parameters to be learned.", "Therefore, we also try a version of attention where all three matrices are set equal to the hidden states of the GRU.",
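The parameter-free variant can be sketched as scaled dot-product attention with the GRU states reused as Query, Key, and Value; this is a minimal illustration of the idea, not the exact training code.

```python
import math
import torch

def unweighted_attention(h):
    # h: (batch, seq, hidden) -- bi-directional GRU outputs used as Q, K, and V
    scores = torch.matmul(h, h.transpose(1, 2)) / math.sqrt(h.size(-1))
    weights = torch.softmax(scores, dim=-1)
    return torch.matmul(weights, h)  # same shape as h, passed on to the CRF
```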
"With the setup above, we study the performance of various model components on the task of segmenting support problems.", "To put the performance in perspective, we also compare against three baselines detailed in Section 5.1.", "The evaluation metrics are carefully selected, avoiding an exact evaluation of such long and noisy segments, and rewarding partial retrieval of segments.", "The chosen evaluation metric is discussed in Section 5.2.", "Finally, to demonstrate the usefulness of the task, we evaluate the performance of answer retrieval with segmentation (Section 5.3).", "All baselines and sequence labelling models are trained on the train split, and fine-tuned on the validation split.", "For the baselines, we only tune the regularization strength parameter.", "For the sequence labelling model, we tune the dropout and recurrent dropout parameters, as well as the learning rate.", "Our best performing models have a dropout of 0.3, recurrent dropout of 0, and a learning rate of 1e-3.", "All results are then reported on the test split.", "The task of segmenting technical support problems can be thought of as comprising two distinct subtasks: (1) segmentation of the text, and (2) identification of the segment label.", "With these in mind, we propose 3 baseline methods.", "1. Sentence Only Baseline: segmentation is done trivially, with newlines and sentence boundaries serving as segment boundaries; the label for a segment is determined using just the current sentence as input.", "2. Sentence Context Baseline: segmentation is done identically to the Sentence Only baseline; the label for a segment is determined using the immediate neighbouring sentences along with the current sentence as input.", "3. Supervised Text Segmentation Baseline: segments are identified with the supervised algorithm for segmenting text described in Koshorek et al. (2018); the label for each segment is identified with all the text contained in it as input.", "For training the supervised text segmentation model from Koshorek et al. (2018), we use the whole data dump from AskUbuntu, with the <code> and </code> HTML tags serving as segment boundaries.", "For identifying segments (in all three baselines) we use a Logistic Regression classifier with representations from ELMo as input features.", "Segment representations are created by mean-pooling the contextual representations of the comprising words from ELMo.", "Segments in our dataset are typically quite long, therefore evaluation based on an exact match is quite harsh.", "Keeping this in mind, we resort to soft precision and recall metrics.", "We adopt proportional-overlap based metrics, used for the task of opinion expression detection, as proposed by Johansson and Moschitti (2010).", "Towards the calculation of soft precision and recall, consider two spans $s$ and $s'$ with labels $l$ and $l'$ respectively.", "The span coverage $c$ is defined as how well $s'$ is covered by $s$: $c(s, s') = \frac{|s \cap s'|}{|s'|}$ if $l = l'$, and $0$ otherwise (5).", "Using span coverage, the span set coverage of a set of spans $S$ with respect to another set of spans $S'$ is computed as follows: $C(S, S') = \sum_{s_j \in S} \sum_{s'_k \in S'} c(s_j, s'_k)$ (6).", "Using the span set coverage, we can now define the soft precision $P$ and recall $R$ of a predicted set of spans $\hat{S}$ with respect to the gold-standard set of spans $S$: $P(S, \hat{S}) = \frac{C(S, \hat{S})}{|\hat{S}|}$, $R(S, \hat{S}) = \frac{C(\hat{S}, S)}{|S|}$ (7).", "In this equation, the operator $|\cdot|$ counts the number of spans in the set.",
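The metrics of Eqs. (5)-(7) are straightforward to compute; a short sketch follows, assuming spans are (start, end, label) triples with token indices and an exclusive end.

```python
def coverage(s, s_prime):
    # Eq. (5): c(s, s') = |s intersect s'| / |s'| when labels match, else 0
    if s[2] != s_prime[2]:
        return 0.0
    overlap = max(0, min(s[1], s_prime[1]) - max(s[0], s_prime[0]))
    return overlap / (s_prime[1] - s_prime[0])

def set_coverage(spans, spans_prime):
    # Eq. (6): sum span coverage over all pairs
    return sum(coverage(s, sp) for s in spans for sp in spans_prime)

def soft_precision_recall(gold, predicted):
    # Eq. (7): soft precision and recall from span set coverage
    precision = set_coverage(gold, predicted) / len(predicted)
    recall = set_coverage(predicted, gold) / len(gold)
    return precision, recall
```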
"An important task in the automation of technical support is the retrieval of the most relevant answer document for a given ticket (from a corpus of product documentation, FAQ docs, and frequent procedures).", "In this experiment we demonstrate the usefulness of segmenting support tickets towards this goal.", "We index the text of about 250,000 answers from AskUbuntu with ElasticSearch (https://www.elastic.co/products/elasticsearch).", "Answers with a large number of downvotes, as well as very short answers, are ignored.", "We use questions from our annotated dataset as search queries.", "We then compare the retrieval performance of querying with the whole question against a query with separate fields corresponding to each segment.", "In the fielded query, we set different boost values for the identified segments.", "Boosting a specific segment of the question with a higher value causes it to have more significance in the relevance score calculation in ElasticSearch.", "To decide the boost values, we calculate the average percentage word overlap between a segment in the question and its correct answer from AskUbuntu on the train and val sets.", "To compare retrieval performance, we evaluate the Mean Reciprocal Rank (MRR) of the correct answer for questions in the test set.",
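A fielded, boosted query of the kind described above might look as follows; the index name, field layout, and boost values here are invented for illustration (the paper derives its boosts from word-overlap statistics).

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()

def boosted_search(segments, boosts):
    # segments: {"command_code": "...", "error_message": "...", ...}
    # One match clause per identified segment, weighted by its boost value.
    should = [
        {"match": {"body": {"query": text, "boost": boosts[name]}}}
        for name, text in segments.items() if text
    ]
    return es.search(index="askubuntu_answers",
                     query={"bool": {"should": should}})

# e.g., boosted_search(segs, {"error_message": 3.0, "command_code": 2.0,
#                             "natural_language": 1.0})
```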
"Table 2 presents evaluation metrics for the three baselines against three variants of our sequence labelling model.", "The first variant does not use pre-trained embeddings from language models, the second uses just pre-trained ELMo, while the third combines pre-trained embeddings from multiple language models using CDME.", "All three variants use fastText for word embeddings (refer to Section 6.1) and character-based embeddings, and do not have an attention mechanism before the final CRF layer (refer to Section 6.2).", "As one would expect, the Context Baseline performs much better than the Sentence Only Baseline.", "The sequence labelling models, however, outperform both baselines by a huge margin, demonstrating the effectiveness of the model on the task.", "Specifically, the best performance is achieved by combining pre-trained embeddings from multiple language models trained on different data sources.", "It significantly outperforms the model using embeddings from a single pre-trained model on English (explored in Section 6.3).", "In the following sections we present results for the various model components we explored.", "Rows 1 and 4 in Table 3 present the comparison between models using word embeddings from word2vec and fastText.", "Both word2vec and fastText embeddings are trained on all posts in the Ask Ubuntu dataset.", "As we can see, fastText gives a marked improvement over using embeddings from word2vec.", "This is probably due to the nature of the vocabulary in our task.", "Since large portions of questions are spans of command output or error messages, a lot of tokens appear very rarely.", "In fact, out of the 62,501 unique tokens in the dataset, 57% appear just once, and 78% appear 3 or fewer times.", "However, the characters in these tokens are probably very informative (for example, http in a token would signal that the token is a URL).", "Therefore, fastText, which uses n-grams from a token to compute embeddings, would emit more meaningful representations.", "As a simple experiment, we check the similarity of two URLs from the dataset that appear just once: http://paste.ubuntu.com/1403448/ and http://paste.ubuntu.com/14545476/ .", "While the cosine similarity of the word2vec vectors for the two is 0.07, the similarity between the fastText vectors is 0.99.",
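This contrast is easy to reproduce in spirit with gensim; the toy corpus below is ours, so the exact similarity values will differ from the 0.99/0.07 reported on the full Ask Ubuntu corpus.

```python
from gensim.models import FastText, Word2Vec

u1 = "http://paste.ubuntu.com/1403448/"
u2 = "http://paste.ubuntu.com/14545476/"
corpus = [["see", u1, "for", "logs"], ["output", "at", u2, "here"]]

ft = FastText(corpus, vector_size=50, min_count=1)
w2v = Word2Vec(corpus, vector_size=50, min_count=1)

# fastText composes vectors from character n-grams, so two near-identical
# URLs land close together even though each occurs only once.
print(ft.wv.similarity(u1, u2))    # high
print(w2v.wv.similarity(u1, u2))   # essentially random for singleton tokens
```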
"Given the long tickets in our dataset, and the unreasonably long spans for labels like command output or error messages, we explored the usefulness of attention in our model.", "We used the Scaled Dot-Product Attention as in (Vaswani et al., 2017).", "Rows 2 and 3 in Table 3 present the results of using attention.", "We find that weighted attention actually hurts performance.", "This could be because of the large number of extra parameters introduced in the calculation of the Key, Value, and Query matrices.", "While the un-weighted version gets around this by using the bi-directional GRU hidden states as all 3 matrices, it doesn't improve results significantly either.", "As detailed in Section 4.3, we explore the impact of pre-trained contextual embeddings.", "We also test our hypothesis that combining pre-trained embeddings from different data sources would perform better on our task than using embeddings from a language model trained on a single data source.", "The combination is performed in two ways: a naive concatenation of embeddings from all language models, and a weighted combination using DME and CDME as in Kiela et al. (2018).", "Table 4 summarizes these results.", "For the simple concatenation method, we present results for the best n-way combination of embeddings from different data sources, for each n (1, 2, 3, and 4).", "We find that combining embeddings from multiple language models trained on different data sources considerably outperforms using embeddings from a single pre-trained model (using both the naive concatenation and CDME).", "This is an artifact of the support problems containing large sections of non-natural language text.", "We also find that contextual weighting does better than a simple concatenation.", "Table 5 presents results for the retrieval experiment.", "We show that weighting identified segments of the question with separate boost values improves retrieval of the correct answer over a query with all tokens from the question.", "We also present results from the gold annotations of segments for these questions, as an upper bound of the performance improvement we can hope to achieve.", "In this paper, we introduce and address an important problem towards a better understanding of support tickets: the segmentation of various non-natural language segments.", "We create an annotated dataset for the task, on questions from the publicly available website Ask Ubuntu.", "We also study the performance of the most recent Recurrent Neural Network-based approaches to sequence labelling on this task.", "In the end, we propose the novel idea of combining pre-trained embeddings from language models trained on different data sources, which substantially improves performance.", "We also demonstrate the usefulness of the task with improvements in retrieval of the correct answer.", "Our future research directions include a thorough study of the differences between this dataset and actual tickets, and the potential for transfer.", "It is still valuable to study models on open datasets, however, as these are readily available to the community." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "result", "method", "objective", "abstain", "method", "objective", "other", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "result", "abstain", "method", "method", "objective", "objective", "abstain", "abstain" ]
[ "Multilingual question answering over knowledge graph (KGQA) aims to derive answers from a knowledge graph (KG) for questions in multiple languages.", "To be widely applicable, we focus on its zero-shot transfer setting.", "That is, we can only access training data in a high-resource language, while need to answer multilingual questions without any labeled data in target languages.", "A straightforward approach is resorting to pre-trained multilingual models (e.g., mBERT) for cross-lingual transfer, but there is a still significant gap of KGQA performance between source and target languages.", "In this paper, we exploit unsupervised bilingual lexicon induction (BLI) to map training questions in source language into those in target language as augmented training data, which circumvents language inconsistency between training and inference.", "Furthermore, we propose an adversarial learning strategy to alleviate syntax-disorder of the augmented data, making the model incline to both language-and syntax-independence.", "Consequently, our model narrows the gap in zero-shot cross-lingual transfer.", "Experiments on two multilingual KGQA datasets with 11 zero-resource languages verify its effectiveness.", "With the advance of large-scale human-curated knowledge graphs (KG), e.g., DBpedia (Auer et al., 2007) and Freebase (Bollacker et al., 2008), question answering over knowledge graph (KGQA) has become a crucial natural language processing (NLP) task to answer factoid questions.", "It has been integrated into real-world applications like search engines and personal assistants, so it attracts more attention from both academia and industry (Liang et al., 2017; Hu et al., 2018; Shen et al., 2019).", "to focus on multilingual KGQA.", "However, building a large-scale KG, as well as annotating QA data, is costly for each new language, not to mention many minority languages with a few native annotators.", "Therefore, we adopt a zero-shot cross-lingual transfer setting a KGQA model is developed to perform inference on multilingual questions with the only access to training data and associated KG in a high-resource language (e.g., English).", "Providing the success of pre-trained monolingual encoders (Peters et al., 2018; Liu et al., 2019), some works (e.g., mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020)) pre-train a Transformer encoder (Vaswani et al., 2017) on large-scale non-parallel multilingual corpora in a self-supervised manner.", "Then given an NLP task, a general paradigm for zero-shot cross-lingual transfer is to fine-tune a pre-trained multilingual encoder on the data in a data-rich ( source ) language.", "And the fine-tuned model is generalizable enough to perform inference in other low-resource ( target ) languages with surprising quality of prediction.", "This paradigm can be adapted to KGQA to build symbolic logical forms (e.g., query graph (Yih et al., 2015)) for KG query.", "However, it is witnessed that there is a considerable KGQA performance gap between source and target languages, which is consistent with the empirical results on a wide range of other tasks by prior works (Conneau et al., 2020).", "To bridge the gap, translation approaches are proven effective on multilingual benchmarks (Hu et al., 2020; Liang et al., 2020).", "As a way of data augmentation, they perform source-to-target translation to obtain multilingual training data.", "Further with advanced techniques (Cui et al., 2019; Fang et al., 2020), they achieve state-of-the-art effectiveness.", "But these 
"But these approaches rely heavily on a well-performing translator.", "The translator is not always available, especially for a minority language, since its training requires a large volume of parallel bilingual corpora.", "Therefore, to be applicable to more languages, we assume that neither translators nor parallel corpora are available in this work.", "In this paper, to adapt the translation approaches to our zero-resource scenario, we naturally propose to replace the fully-supervised machine translator with unsupervised bilingual lexicon induction (BLI) for word-level translation.", "Specifically, as in prior works (Lample et al., 2018b; Artetxe et al., 2018), a BLI model is first trained on non-parallel bilingual corpora.", "Then, via the bilingual word alignments in BLI, we map the training questions in the source language into those in target languages to obtain augmented multilingual training data.", "Consequently, even simply learning a KGQA model on the augmented data can circumvent language inconsistency between training and inference and thus bridge the performance gap in zero-shot cross-lingual transfer.", "To explain why BLI is competent, it is observed that KGQA mainly involves phrase-level semantics (Berant et al., 2013).", "Compared to other tasks depending on sentence-level contextualization, KGQA is insensitive to long-term dependency but benefits from language consistency.", "Moreover, we propose an adversarial strategy to mitigate the syntax-disorder caused by BLI.", "Specifically, we present a discriminator on top of the encoder, which is trained to distinguish whether the input is a grammatical question in the source language or a BLI-translated one in a target language.", "Meanwhile, jointly with the KGQA goal, the encoder is fine-tuned to fool the discriminator so that the questions' representations are both language- and syntax-agnostic.", "So the trained KGQA model is robust to syntax-disorder and becomes insensitive to the question language, leading to superior performance on multilingual KGQA.", "Experiments conducted on two multilingual KGQA datasets with 11 zero-resource languages verify the effectiveness of our approach.", "We give a background of monolingual KGQA, followed by multilingual KGQA and its data format.", "Monolingual KGQA.", "A knowledge graph $\mathcal{G}$ is comprised of a set of directed triples $(h, p, t)$, where $h \in \mathcal{E}$ denotes a head entity, $t \in \mathcal{E} \cup \mathcal{L}$ denotes a tail entity or literal value, and $p \in \mathcal{P}$ denotes a predicate between $h$ and $t$.", "KGQA aims at generating answers for a natural language question $q$ based on $\mathcal{G}$.", "Usually a model $\mathcal{M}$ first parses the question $q$ into an intermediate logical form, which is then transformed into a SPARQL query, and the answer is derived by executing the SPARQL query on $\mathcal{G}$.", "An example is shown in Figure 1: the question at the bottom, the intermediate logical form in the upper right, and the corresponding SPARQL query at the top.",
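To make the pipeline concrete, the following is a hedged reconstruction of a SPARQL query matching the query-graph walkthrough of Figure 1 given later in this section; the predicates, prefixes, and endpoint are assumptions, and the entity IRI is abbreviated in the source as dbr:Ven.-Ram.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed predicates (leaderName, prizes) and type (dbo:Scientist) follow the
# later explanation of Figure 1; a real query would use the full entity IRI.
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbp: <http://dbpedia.org/property/>
SELECT (COUNT(?x) AS ?count) WHERE {
  ?y dbo:leaderName <http://dbpedia.org/resource/Ven.-Ram.> .
  ?x dbp:prizes ?y .
  ?x a dbo:Scientist .
}
"""
endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
answer = endpoint.query().convert()
```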
"Following Maheshwari et al. (2019), we take a restricted subset of λ-calculus query graphs as the intermediate logical form.", "Typically, a query graph consists of four types of nodes: grounded entity(s) (in a rounded rectangle), existential variable(s) ?y (in a circle), a lambda variable ?x (in a shaded circle), and an aggregation function (in a diamond).", "Considering that entity linking is a standalone system with many available tools, we assume the grounded entities in a question are given.", "This avoids the uncertainty caused by entity linking, and lets us focus on the query graph construction process.", "Multilingual KGQA.", "We focus on a zero-shot cross-lingual transfer setting of KGQA.", "That is, we only have a labeled dataset $\mathcal{D}^{src} = \{(q^{src}_l, s^{src}_l)\}_{l=1}^{N}$, as well as the associated knowledge graph $\mathcal{G}$, in a high-resource language src, where $q^{src}_l$ and $s^{src}_l$ denote a natural language question and a formal query, respectively.", "We will omit the subscript $l$ of the example index in $\mathcal{D}^{src}$.", "Multilingual KGQA is to learn a model $\mathcal{M}$ which can answer questions $q^{tgt}$ in multiple target languages tgt.", "A recent baseline is to fine-tune pre-trained multilingual models (e.g., mBERT) in src and directly perform inference in tgt.", "This section starts with a base framework for monolingual KGQA, followed by our proposed multilingual solutions.", "Lastly, details about training and inference are elaborated.", "Following Maheshwari et al. (2019), we present a base pipeline framework as in Figure 1 to construct query graphs.", "It consists of three modules: 1) inferential chain ranking, 2) type constraint ranking, and 3) aggregator classification.", "Inferential Chain Ranking.", "An inferential chain (IC) refers to a sequence of directed predicates from a grounded entity to the lambda variable ?x.", "Given an entity $e$ grounded from the question $q$, we first search its chain candidates $C^e = (c^e_1, \ldots, c^e_n)$ by exploring legitimate predicate sequences starting from $e$ in $\mathcal{G}$.", "[Figure 1: Base framework for monolingual KGQA, consisting of three modules to construct a query graph.]", "Following previous works (Yih et al., 2015; Maheshwari et al., 2019), we fetch the chains whose length ≤ 2.", "For example, as in the middle left of Figure 1, chain candidates are generated from the entity <dbr:Ven.-Ram.> within 2 hops on $\mathcal{G}$.", "Then, a model is presented to measure the semantic relatedness between the question $q$ and each candidate inferential chain $c^e_i$, i.e., $a^e_i = \mathrm{SemMatch}(q, c^e_i; \theta^{(IC)}),\ i = 1, \ldots, n$ (1), where $a^e_i$ is a score for their relatedness, and the $\theta^{(IC)}$-parameterized $\mathrm{SemMatch}(\cdot)$ can be any model for pairwise relatedness, such as a Co-Attention network (Chen et al., 2019) or BERT-based matching (Devlin et al., 2019).", "Finally, the result of this module is the top-1 ranked inferential chain, i.e., $c^{e} = \arg\max_{c^e_i}(a^e_i,\ i = 1, \ldots, n)$ (2).",
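Greedy inferential chain ranking (Eqs. (1)-(2)) reduces to scoring each candidate and taking the argmax; sem_match below stands in for whatever pairwise relatedness model is plugged in.

```python
def rank_chains(question, chains, sem_match):
    # Eq. (1): a_i = SemMatch(q, c_i); Eq. (2): keep the top-1 chain
    scores = [sem_match(question, chain) for chain in chains]
    best = max(range(len(chains)), key=scores.__getitem__)
    return chains[best], scores[best]
```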
"Note, if there are multiple grounded entities in $q$, we will predict an inferential chain for each entity.", "Type Constraint Ranking.", "Type constraints (TC) refer to the entity types specified in the question for each variable on an inferential chain.", "They can be used to disambiguate the entities and thus boost KGQA performance.", "For example, the answer entity(s) to the example question in Figure 1 are constrained by the type Scientist.", "Hence, type constraint ranking is proposed to capture such information, which is also achieved by a semantic matching model.", "Specifically, given the resulting inferential chain $c^{e}$, we first enumerate type candidates $T^{ey} = \{t^{ey}_1, \ldots\}$ for the existential variable and $T^{ex} = \{t^{ex}_1, \ldots\}$ for the lambda variable.", "Then, because the gold type constraints scarcely overlap between the two variables, a single semantic matching model is adequate for both.", "Thus, we define the model to derive relatedness scores as $b^{e\gamma}_j = \mathrm{SemMatch}(q, t^{e\gamma}_j; \theta^{(TC)})$ (3), where $\gamma \in \{y, x\}$ and $j = 1, 2, \ldots$.", "Finally, we get the type constraints for the existential and lambda variables with a threshold $\tau^{(thresh)}$, i.e., $T^{e\gamma} = \{t^{e\gamma}_j \mid b^{e\gamma}_j > \tau^{(thresh)},\ j = 1, 2, \ldots\}$ (4).", "Aggregator Classification.", "Given several answer formats in the dataset, aggregator classification (AC) is presented to distinguish the format among Bool, Count, and Entity(s).", "The principle of each is detailed in the middle right of Figure 1.", "Formally, a simple text classifier suffices, i.e., $p^{(AC)} = \mathrm{Classifier}(q; \theta^{(AC)}) \in \mathbb{R}^3$ (5), where $\mathrm{Classifier}(\cdot)$ is composed of a contextualized encoder, a pooler, and an MLP with softmax.", "Once the above is completed, their results can compose a query graph, which is transformed into SPARQL and then executed on $\mathcal{G}$ for the answer.", "Built upon the base framework detailed before, we extend it with a multilingual inference capability, i.e., multilingual KGQA.", "We are in line with a recent popular zero-shot transfer paradigm (Conneau et al., 2020; Fang et al., 2020): a pre-trained multilingual encoder is only fine-tuned in src, and a translation-based data augmentation technique is integrated to narrow the performance gap between src and tgt.", "To emphasize the gap in KGQA: a 65% F1 score in English (src) vs. 54% in Italian (tgt) is observed by mBERT zero-shot transfer in our pipeline without any multilingual augmentation.", "Distinct from prior works in this paradigm requiring well-trained translators, we propose a fully unsupervised way for wide applicability, with neither tgt KGQA data nor src-tgt parallel corpora.", "It is natural to resort to bilingual lexicon induction (BLI) with unsupervised training and acceptable word-level translation quality.", "In the following, we first present a BLI-based augmentation for multilingual training data, followed by our adaptation of the monolingual base framework (§3.1) to the augmented data.", "Finally, we propose an adversarial learning strategy coupled with BLI-based augmentation for robust cross-lingual transfer.", "An illustration of our proposed semantic matching model with symbolic candidates is in Figure 2.", "3.2.1 BLI-based Multilingual Augmentation",
"We leverage the BLI model by Lample et al. (2018b).", "First, it pre-trains monolingual word embeddings $U^{src} \in \mathbb{R}^{d \times |V^{src}|}$ and $U^{tgt} \in \mathbb{R}^{d \times |V^{tgt}|}$ in src and tgt respectively.", "Then, it learns a linear transformation to unsupervisedly align the word embeddings in the two languages into one space, i.e., $W^{\star} = \arg\min_{W \in M_d(\mathbb{R})} \sum_{(k,l)} \mathrm{Distance}(W U^{src}_{:,k}, U^{tgt}_{:,l})$ (6).", "The unsupervised alignment between the $k$-th src word and the $l$-th tgt word is captured by adversarial learning, and $\mathrm{Distance}(\cdot)$ is implemented by cross-domain similarity local scaling (CSLS).", "Please refer to (Lample et al., 2018b) for its details.", "Based on the BLI model, we can build a word-by-word translator, $\mathrm{BLI}^{(trans)}_{src \to tgt}$, from src to an arbitrary tgt, as long as its monolingual corpus is available.", "Note, when performing word-level translation, we also employ CSLS to mitigate the hubness problem and find the most likely alignment.", "Then, we translate each question $q^{src}$ in $\mathcal{D}^{src}$ into other languages: $q^{tgt} = \mathrm{BLI}^{(trans)}_{src \to tgt}(q^{src})$ (7), where src denotes English (en) in our experiments while tgt can be one of 11 other languages, such as Farsi (fa), Italian (it), etc.", "Consequently, $q^{tgt}$ is the augmented multilingual data for model training.", "Remark: Although BLI provides multilingual data, open questions still remain.", "1) Why is BLI competent here: It is observed that KGQA mainly involves word-/phrase-level semantics of symbolic candidates, rather than the sentence-level semantics of most other NLP tasks.", "As in Modules 1 and 2 in Figure 1, the matching only involves morphological similarity (e.g., scientist vs. <dbo:Scientist>), synonymy (e.g., won an award vs. <dbp:prizes>), etc.", "Thus, KGQA is less sensitive to long-term context than other tasks.", "This has been leveraged by Berant et al. (2013) to propose a phrase matching model for monolingual KGQA.", "2) Will BLI lead to error propagation: Since the BLI model achieves a high Precision@10 but a relatively low Precision@1, a wrong translation and the corresponding ground truth are semantically similar.", "Intuitively, their word embeddings are spatially close to each other, so wrong word-level translation is equivalent to applying tiny noise to the word embeddings, which hardly leads to error propagation when a robust pre-trained Transformer-based encoder is used.",
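A numpy sketch of CSLS-based word-by-word translation (Eqs. (6)-(7)) is given below, assuming the source embeddings have already been mapped by W into the shared space.

```python
import numpy as np

def csls_lexicon(src_vecs, tgt_vecs, src_vocab, tgt_vocab, k=10):
    s = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    t = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sims = s @ t.T                                      # cosine similarities
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)  # hub penalty (source)
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)  # hub penalty (target)
    csls = 2 * sims - r_src[:, None] - r_tgt[None, :]
    best = csls.argmax(axis=1)
    return {src_vocab[i]: tgt_vocab[j] for i, j in enumerate(best)}

def translate(question, lexicon):
    # Eq. (7): word-by-word mapping; unknown words are kept as-is (our choice)
    return " ".join(lexicon.get(w, w) for w in question.split())
```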
"Symbolic Candidate Processing.", "For an inferential chain, we enrich each predicate on the chain by 1) transforming each camel-case phrase into a sequence of words, 2) prefixing +/- for directional information, and 3) concatenating the top-frequent types under the local closed-world assumption (Krompaß et al., 2015).", "For a type constraint, we simply transform each camel-case phrase into a sequence of words.", "In the following, we denote the text of a processed symbolic candidate as $z$, no matter whether it is a chain or a type.", "Multilingual Semantic Matching Model.", "As detailed in §3.1, both the inferential chain ranking and type constraint ranking modules are built upon a semantic matching model between the question $q$ and a symbolic candidate $z$.", "Note, $z$ is always in src while $q$ can be in either src or BLI-translated tgt.", "Following common practice, we first concatenate $q$ and $z$ with special tokens (Devlin et al., 2019), and pass the result into a pre-trained multilingual Transformer encoder, i.e., $v = \mathrm{Pool}(\mathrm{Transformer}(text))$ (8), where $text = ([\mathrm{CLS}], q, [\mathrm{SEP}], z, [\mathrm{SEP}])$.", "$\mathrm{Pool}(\cdot)$ denotes using the contextualized embedding of [CLS] to represent the entire input.", "In this paper, the encoder alternates between mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020).", "Lastly, a 1-way multi-layer perceptron (MLP) built upon $v$ is presented to calculate the matching score in Eq. (1) or Eq. (3).", "Multilingual Classification Model.", "As detailed in §3.1, a text classification model is required to identify the aggregator.", "To fit our zero-resource multilingual scenario, the model, consisting of a pre-trained multilingual encoder and an MLP-based prediction layer, can be directly fine-tuned on the augmented questions, i.e., $q^{src}$ and $q^{tgt}$.", "Although training the KGQA model on BLI-augmented multilingual data circumvents language inconsistency, it inevitably introduces syntax disorder and grammatical problems, which could hurt performance.", "We thus present an adversarial strategy paired with the BLI-augmented data to push the Transformer encoder to derive language- and syntax-independent representations.", "Formally, a discriminator is built upon the single-vector representation $v$ produced by the Transformer encoder: $p^{(src)} = \mathrm{Sigmoid}(\mathrm{MLP}(v; \theta^{(dis)}))$ (9), where $p^{(src)}$ is the probability that the question is in the source language.", "Here, $I^{(tgt)}$ denotes whether the question is in BLI-translated tgt, and $\theta^{(enc)}$ is the encoder's parameters in each module: the discriminator loss $\mathcal{L}^{(dis)}(\theta^{(dis)})$ trains the discriminator to separate the two cases, while the adversarial loss $\mathcal{L}^{(adv)}(\theta^{(enc)})$ trains the encoder to fool it.", "Before constructing the objectives, we conduct uniform negative sampling for the two ranking models, with the maximum number of negatives limited to 100.", "First, the gold labels of a $q$ for the three modules stem from the formal query $s^{src}$.", "A margin-based hinge loss is defined for inferential chain ranking: $\mathcal{L}^{(IC)} = \frac{1}{|\mathcal{D}|} \sum_{\mathcal{D}} \frac{1}{|\mathcal{N}|} \sum_{i=1}^{|\mathcal{N}|} \max(0,\ \delta - a^{e} + a^{e}_i)$ (12), where $\mathcal{D}$ is the augmented dataset, $\mathcal{N}$ is a set of negative chains, $\delta$ is a margin, $a^{e}$ is derived from the gold chain, and $a^{e}_i$ is derived from a negative chain.", "Similarly, the loss defined for type constraint ranking is $\mathcal{L}^{(TC)} = \frac{1}{|\mathcal{D}|} \sum_{\mathcal{D}} \frac{1}{2|\mathcal{N}|} \sum_{\gamma \in \{y,x\}} \sum_{j=1}^{|\mathcal{N}|} \max(0,\ \delta - b^{e\gamma} + b^{e\gamma}_j)$.", "Lastly, the loss of aggregator classification is $\mathcal{L}^{(AC)} = -\frac{1}{|\mathcal{D}|} \sum_{\mathcal{D}} \log p^{(AC)}[i = g]$ (13), where $p^{(AC)}[i = g]$ denotes the probability corresponding to the gold aggregator class.", "During training, the adversarial loss is added to the loss function of each module to compose the final training objective, i.e., $\mathcal{L}(\theta) = \mathcal{L}^{(\ast)} + \lambda \mathcal{L}^{(adv)}(\theta^{(enc)})$, $\ast \in \{IC, TC, AC\}$.",
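One way to realize the discriminator of Eq. (9) and the combined objective is the alternating scheme sketched below; whether the original implementation uses alternating updates or gradient reversal is not specified here, so treat this as an assumption.

```python
import torch.nn as nn

discriminator = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()

def adversarial_step(v, is_translated, task_loss, lam=5e-4):
    # v: (batch, 768) [CLS] vectors; is_translated: float tensor of 0/1 flags
    # (a) train the discriminator to output p(src): target 1 for src questions
    d_loss = bce(discriminator(v.detach()).squeeze(-1), 1.0 - is_translated)
    # (b) the encoder additionally minimizes a "fooling" term (flipped labels),
    #     weighted by lambda as in the final objective
    adv_loss = bce(discriminator(v).squeeze(-1), is_translated)
    return d_loss, task_loss + lam * adv_loss
```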
"As in Algorithm 1, we provide a detailed procedure for model inference in a target language.", "We also provide an explanation of the query graph in Figure 1.", "In the example query graph shown on the right of the figure: a topic entity is first grounded as $e$ = <dbr:Ven.-Ram.> (in a rounded rectangle); an existential variable (in a circle) denotes the intermediate entity set ?y = {h | (h, leaderName, e)}; a lambda variable (in a shaded circle) denotes the answer entity set ?x = {h | (h, prizes, e′), e′ ∈ ?y}; and an aggregator COUNT is finally applied to ?x, which is constrained by the entity type <dbo:Scientist>.", "Note that the existential variable does not exist if only a 1-hop relation is expressed in a question, and if multiple topic entities are grounded, the multiple ?x sets will be merged by intersection.",
Algorithm 1: Inference in Target Language
Require: a question q in tgt and its grounded topic entities E_q; KG G; models θ^(IC), θ^(TC), θ^(AC)
1: Search the chain candidates C_e on G, for each e ∈ E_q
2: Rank each C_e by Eq. (1), and keep the top 3 in C_e
3: C_e ← {c_e | c_e ∈ C_e and Size(?x_{c_e}) > 0}
4: c_e ← Null
5: if Size(C_e) > 0 then c_e ← the top-1 inferential chain in C_e
6: end if
7: Merge the chains {c_e | e ∈ E_q and c_e is not Null}
8: Rank the type constraint candidates by Eq. (3) and apply the top-1 constraint with score > τ^(thresh)
9: Generate SPARQL and execute it on G for the answer entity set A
10: Identify the aggregator for q by Eq. (5)
11: A ← Aggregate(A), following Figure 1
12: return A
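Algorithm 1 condenses into a few lines of Python; every helper below (search_chains, rank, execute_sparql, ...) is an assumed stand-in for the corresponding component defined earlier.

```python
def infer(question, topic_entities, kg, models, thresh=0.7):
    chains = []
    for e in topic_entities:
        cands = search_chains(kg, e)                       # step 1
        cands = rank(models.ic, question, cands)[:3]       # step 2: keep top-3
        cands = [c for c in cands if answer_set(kg, c)]    # step 3: ?x non-empty
        if cands:                                          # steps 4-6
            chains.append(cands[0])
    graph = merge(chains)                                  # step 7
    t, score = best_type(models.tc, question, graph)       # step 8
    if score > thresh:
        graph = apply_type(graph, t)
    answers = execute_sparql(kg, graph)                    # step 9
    aggregator = models.ac.predict(question)               # step 10
    return aggregate(answers, aggregator)                  # steps 11-12
```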
"4 Experiments", "4.1 Datasets and Evaluation Metrics", "We evaluate the proposed approach on two datasets, LC-QuAD (Trivedi et al., 2017) and QALD-multilingual (Usbeck et al., 2018), both of which contain questions with corresponding SPARQL queries over DBpedia (we use the 2016-10 version, which can be downloaded at https://wiki.dbpedia.org/downloads-2016-10).", "DBpedia is a large-scale knowledge graph extracted from Wikipedia pages, with 6 million entities, 60 thousand predicates, and 13 billion triples in the English edition.", "LC-QuAD.", "LC-QuAD is a large-scale complex question answering dataset, which contains 5000 English question-SPARQL pairs (https://github.com/AskNowQA/LC-QuAD).", "We follow the official split with 1000 questions in the test set, and further split the original training set into training/validation with 3500/500 questions.", "To evaluate the effectiveness of multilingual KGQA, questions in the test set are translated into 10 languages (fa, de, ro, it, ru, fr, nl, es, hi, pt) using Google Translator (https://translate.google.com/); the translations are released at https://github.com/yczhou001/Multilingual-KBQA-Dataset/tree/main/LC-QuAD.", "QALD-multilingual.", "QALD is a series of evaluation campaigns on question answering over linked data (https://github.com/ag-sc/QALD).", "We collect all multilingual questions along with their SPARQL queries from QALD-4 to QALD-9 and filter out some out-of-scope ones (the filtered set is available at https://github.com/yczhou001/Multilingual-KBQA-Dataset/tree/main/QALD).", "There are overall 429 distinct question-SPARQL pairs, and most are expressed in 12 languages (en, fa, de, ro, it, ru, fr, nl, es, hi_IN, pt, pt_BR).", "Considering the small size of this dataset, we take all QALD-multilingual questions as the test set, and use the training data of LC-QuAD for model training.", "Evaluation Metrics.", "We adopt two widely-used metrics following (Maheshwari et al., 2019), i.e., inferential chain accuracy (ICA) and macro F1 score.", "The former is used to measure the accuracy (i.e., Precision@1) of the inferential chain model, and is defined as the percentage of correctly-predicted inferential chains.", "The macro F1 score is used to measure the performance of final answers.", "Please refer to (Maheshwari et al., 2019) for the details.", "We evaluate our approach with 2 multilingual encoding models, i.e., mBERT-base and XLM-R-base.", "The embedding and hidden sizes in both models are set to 768.", "We use the Adam optimizer (Kingma and Ba, 2015) to optimize the KGQA loss with a learning rate of $5 \times 10^{-5}$ and a linear warmup (Vaswani et al., 2017).", "The maximum training epochs, warm-up epochs, and batch size are set to 35, 3, and 32.", "The discriminator is trained along with each module's objective, with $\lambda$ set to $5 \times 10^{-4}$ for learning to fool.", "The discriminator is optimized via the Adam optimizer with a learning rate of $5 \times 10^{-5}$.", "$\tau^{(thresh)}$ for the type constraint model is set to 0.7.", "We follow (Maheshwari et al., 2019) and use the same values for other parameters in model training.", "We compare our approach with a natural, widely-used baseline, which fine-tunes a pre-trained multilingual model (e.g., mBERT, XLM-R) on the source language, and then directly applies it to target languages.", "The comparisons on QALD-multilingual and LC-QuAD with mBERT are reported in Tables 1 and 2 respectively.", "It is shown that our approach outperforms the baseline significantly on both datasets for all languages.", "ICA is improved by 1%-4%, and by 2.9% on average, on the QALD dataset.", "The improvement on LC-QuAD is even larger, i.e., the averaged ICA and F1 score over all languages are increased by around 7% and 4% respectively.", "Notably, with the BLI-augmented data and syntax-agnostic adversarial learning, the performance on source-language (i.e., English) questions is also increased by a large margin, i.e., the F1 score increases from 65% to 66.7% on QALD, and from 80% to 85% on LC-QuAD.",
Table 1: Comparison on QALD-multilingual using mBERT.
ICA       en    fa    de    ro    it    ru    fr    nl    es    hi_IN  pt    pt_BR  Avg   Avg w/o en
Baseline  80.7  76.0  77.8  76.8  76.5  80.4  76.9  78.5  77.6  79.3   80.9  86.3   79.0  78.8
Ours      83.7  77.6  80.5  79.2  80.5  83.1  80.3  80.5  81.7  82.5   85.3  87.4   81.9  81.7
Lift      +3.1  +1.6  +2.8  +2.5  +4.0  +2.7  +3.4  +2.0  +4.0  +3.2   +4.4  +1.1   +2.9  +2.9
F1        en    fa    de    ro    it    ru    fr    nl    es    hi_IN  pt    pt_BR  Avg   Avg w/o en
Baseline  65.0  58.0  60.8  60.2  53.7  60.5  59.8  64.3  55.2  59.3   60.5  70.0   60.6  60.2
Ours      66.7  60.0  62.2  62.1  57.7  63.5  63.6  65.9  58.8  62.6   63.5  70.0   63.0  62.7
Lift      +1.7  +2.0  +1.4  +2.0  +4.0  +3.0  +3.8  +1.7  +3.7  +3.2   +3.1  +0.0   +2.5  +2.5
"We also evaluate the proposed approach using XLM-R as the multilingual encoder.", "The comparison on QALD-multilingual is shown in Table 3.", "We can observe similar improvements as with mBERT, where both the averaged ICA and F1 score are increased by around 1%, verifying the effectiveness of our proposed approach.", "Our approach consists of two important components: BLI-based data augmentation and a syntax-agnostic learning strategy.", "We conduct an ablation study to investigate the effect of each component.", "Table 4 reports the averaged results over all target languages on QALD-multilingual and LC-QuAD-multilingual.", "From the table we can see that, with BLI-based data augmentation, our approach increases the ICA score on QALD by 1.7%, and the syntax-agnostic adversarial learning further improves it by 1.2%.", "Similar improvements are observed on LC-QuAD, which verifies the effectiveness of both components in our approach.",
"Impact of BLI Accuracy.", "We assess the impact of BLI accuracy on five Romance languages (i.e., it, fr, es, pt, and ro) by injecting noise into the BLI results.", "Specifically, when mapping source-language words into a target language via BLI, we randomly replace translated words with wrong ones with a probability of $p$ (10%, 20%, 30%, 40%, and 50%).", "[Figure 3: Impact of BLI Accuracy in our approach — ICA and F1 plotted against the noise of BLI, for BLI-only vs. Baseline.]", "The averaged performance of our approach on the five languages is reported in Figure 3.", "It is observed that, with more noise added, the performance of our approach drops, which is in accordance with intuition.", "But even when 50% of the translated words are noisy, our method still outperforms the baseline model.", "For example, it is superior to the baseline by 1% in terms of ICA with 50% noise, showing the robustness of our approach.",
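The probe itself is simple; a sketch of the lexicon corruption, under our assumed dictionary representation, is:

```python
import random

def corrupt_lexicon(lexicon, p, tgt_vocab, rng=random):
    noisy = {}
    for src_word, tgt_word in lexicon.items():
        if rng.random() < p:  # replace the translation with a wrong target word
            noisy[src_word] = rng.choice([w for w in tgt_vocab if w != tgt_word])
        else:
            noisy[src_word] = tgt_word
    return noisy
```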
"Deep Dive into Adversarial Learning.", "We take the inferential chain ranking model as an example, and take a deep dive into the impact of syntax-agnostic adversarial learning.", "The adversarial learning involves a discriminator to distinguish whether a question is grammatical or syntax-disordered, and an inferential chain ranking model to identify the gold chain.", "Their loss values, i.e., $\mathcal{L}^{(dis)}(\theta^{(dis)})$ and $\mathcal{L}^{(IC)}$, are plotted in Figure 4.",
Table 3: Comparison on QALD-multilingual using XLM-R.
ICA                     en    fa    de    ro    it    ru    fr    nl    es    hi_IN  pt    pt_BR  Avg   Avg w/o en
Baseline (XLM-R base)   81.5  76.9  75.6  77.7  76.7  80.9  76.5  78.8  77.4  80.2   80.4  84.2   78.9  78.7
Ours (XLM-R base)       84.0  78.1  77.5  79.0  77.1  80.9  77.8  79.4  78.1  81.3   80.9  85.3   79.9  79.6
Lift                    +2.5  +1.2  +1.9  +1.4  +0.4  +0.0  +1.3  +0.7  +0.7  +1.2   +0.5  +1.1   +1.1  +0.9
F1                      en    fa    de    ro    it    ru    fr    nl    es    hi_IN  pt    pt_BR  Avg   Avg w/o en
Baseline (XLM-R base)   63.4  57.1  54.7  58.8  50.1  59.4  56.3  61.3  51.2  59.2   57.5  66.1   57.9  57.4
Ours (XLM-R base)       64.6  57.6  56.1  61.4  50.9  59.4  58.2  62.1  52.2  60.6   57.4  66.1   58.9  58.4
Lift                    +1.2  +0.5  +1.4  +2.6  +0.8  +0.0  +1.9  +0.8  +1.0  +1.3   -0.1  +0.0   +1.0  +0.9
"We can see that the classification loss of the discriminator quickly drops and then slowly goes up, indicating that the discriminator first reaches good performance and is later fooled by the language-/syntax-agnostic embeddings generated by mBERT.", "Meanwhile, the inferential chain ranking loss drops quickly and stays very small in the following epochs, showing that while mBERT is generating syntax-agnostic embeddings, it also supports inferential chain ranking very well.", "We take several examples of inferential chain ranking to show how our approach works.", "We use t-SNE (Maaten and Hinton, 2008) to map the embedding of a question-chain pair into a two-dimensional data point.", "A question in a specific language is paired with its golden inferential chain and the top-1 ranked negative candidate.", "[Figure 5: Case study via t-SNE visualization.]", "Figure 5 compares the baseline with our approach for two questions.", "Positive and negative examples of the same question in different languages are plotted in the same figure.", "We can see that the baseline model cannot distinguish positive inferential chains from negative ones well, while our approach can learn a language-agnostic representation that focuses more on ranking inferential chain candidates.",
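The visualization can be reproduced along these lines; pair_embeddings (the pooled question-chain vectors) and the boolean mask is_positive are assumed inputs.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

points = TSNE(n_components=2).fit_transform(pair_embeddings)  # (n, 2)
plt.scatter(points[is_positive, 0], points[is_positive, 1], label="positive")
plt.scatter(points[~is_positive, 0], points[~is_positive, 1], label="negative")
plt.legend()
plt.show()
```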
"There are mainly two categories of approaches to handle the monolingual question answering over knowledge graph (KGQA) task.", "(1) Information retrieval-based approaches align a question with its answer candidates in the same semantic space, where the candidates usually stem from the KG neighbors of the topic entity detected in the question (Bordes et al., 2014b,a; Dong et al., 2015; Jain, 2016; Xu et al., 2016; Hao et al., 2017; Chen et al., 2019).", "(2) Semantic parsing-based approaches first translate a question into the corresponding logical form, e.g., a program (Guo et al., 2018; Shen et al., 2019) or a query graph (Yih et al., 2015; Jia and Liang, 2016; Xiao et al., 2016; Dong and Lapata, 2016; Liang et al., 2017; Dong and Lapata, 2018; Maheshwari et al., 2019), and then execute the logical form over the KG to derive the final answer.", "Note that a logical form is usually composed of a series of grammars or operators pre-defined by experts.", "This paper is in line with the second category, generating query graphs for KG execution.", "To the best of our knowledge, there are only a few works targeting multilingual KGQA (Hakimov et al., 2017; Veyseh, 2016), which rely on extensive multilingual training data with hand-crafted features and are inapplicable to the zero-shot transfer scenario.", "So we adopt the pipeline by Maheshwari et al. (2019) for the monolingual scenario as our base model, but update the encoders with the Transformer (Vaswani et al., 2017) to strengthen their expressive power and facilitate recent pre-trained multilingual initializations.", "Given task-specific data in a source language, cross-lingual models are trained to perform inference in target languages in a low- or zero-resource scenario.", "Typically, cross-lingual models are proposed in two paradigms.", "1) The universal encoding based paradigm represents multilingual natural language text as language-agnostic embeddings in the same semantic space.", "Early works focus on aligning multilingual word embeddings (Mikolov et al., 2013; Faruqui and Dyer, 2014; Xu et al., 2018), while recent efforts are mainly made on large-scale pre-trained multilingual encoders, such as mBERT (Devlin et al., 2019), XLM (Conneau and Lample, 2019), Unicoder (Huang et al., 2019a), XLM-R (Conneau et al., 2020), InfoXLM (Chi et al., 2020), and ALM (Yang et al., 2020).", "They can perform zero-shot cross-lingual transfer by training in the source language while directly performing inference in the target language.", "2) The translation-based paradigm employs well-trained machine translators to map the training or test examples in the source language to those in the target language.", "Recent common practice tends to leverage the second paradigm to generate multilingual data to narrow the zero-shot cross-lingual performance gap in the first paradigm, which leads to state-of-the-art results on several cross-lingual benchmarks.", "In contrast, we consider a zero-resource scenario where translators are unavailable, and we thus resort to unsupervised BLI in light of KGQA's characteristics.", "As a branch of universal encoding at the word level, bilingual lexicon induction (BLI) (a.k.a. cross-lingual word embeddings, CLWE) is learned to align bilingual word embeddings in the same space, where the embeddings are pre-trained on monolingual corpora and the alignment is trained in either a (semi-)supervised or unsupervised manner (Smith et al., 2017; Lample et al., 2018b; Artetxe et al., 2018, 2019; Huang et al., 2019b; Patra et al., 2019; Karan et al., 2020; Zhao et al., 2020; Ren et al., 2020).", "To alleviate the hubness problem (Dinu and Baroni, 2015) in BLI, alternatives to the nearest-neighbor (NN) distance measurement during alignment have been proposed, such as inverted softmax (Smith et al., 2017) and CSLS (Lample et al., 2018b).", "In addition to building a bilingual dictionary via word-level translation, a well-trained BLI model can serve as a weak baseline for sentence-level translation (Lample et al., 2018a), a seed model for unsupervised translation (Lample et al., 2018a), or a bilingual variant of the copy mechanism in summarization (Zhu et al., 2020).", "Moreover, adversarial training is usually integrated into cross-lingual models for language-agnostic representation learning, such as unsupervised BLI (Lample et al., 2018b; Zhang et al., 2017), unsupervised translation (Lample et al., 2018a), cross-lingual sequence labeling (Kim et al., 2017; Huang et al., 2019c), and cross-lingual classification (Dong et al., 2020).", "In contrast, our adversarial strategy not only considers language-agnostic representations but also aims at making the model insensitive to syntax-disorder, and is thus competent in the zero-resource scenario.",
"We propose a novel approach for zero-shot cross-lingual transfer in multilingual KGQA, which augments training data by bilingual lexicon induction, and leverages a syntax-agnostic adversarial learning strategy to alleviate the syntax-disorder problem caused by BLI.", "Experimental results on two multilingual KGQA datasets in 11 zero-resource languages verify its effectiveness." ]
[ "abstain", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "method", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "objective", "abstain" ]
[ "The Mixture-of-Experts (MoE) technique can scale up the model size of Transformers with an affordable computational overhead.", "We point out that existing learning-to-route MoE methods suffer from the routing fluctuation issue, i.e., the target expert of the same input may change along with training, but only one expert will be activated for the input during inference.", "The routing fluctuation tends to harm sample efficiency because the same input updates different experts but only one is finally used.", "In this paper, we propose STABLEMOE with two training stages to address the routing fluctuation problem.", "In the first training stage, we learn a balanced and cohesive routing strategy and distill it into a lightweight router decoupled from the backbone model.", "In the second training stage, we utilize the distilled router to determine the token-to-expert assignment and freeze it for a stable routing strategy.", "We validate our method on language modeling and multilingual machine translation.", "The results show that STABLEMOE outperforms existing MoE methods in terms of both convergence speed and performance.", "The code is available at https://github.", "com/Hunter-DDM/stablemoe .", "In recent years, large-scale Transformers (Devlin et al., 2019; Dong et al., 2019; Raffel et al., 2020; Clark et al., 2020; Bao et al., 2020; Brown et al., 2020) have shown a striking ability to model languages.", "However, with the model scale growing, the training speed will go slower, and the extremely large memory requirement also introduces a heavy burden of engineering.", "Mixture of Experts (MoE) (Jacobs et al., 1991; Jordan and Jacobs, 1994; Shazeer et al., 2017), in a much easier way, enables Transformers to scale up the number of parameters meanwhile introducing an affordable Contribution during internship at Microsoft Research.", "computational overhead.", "MoE-based Transformers have a set of expert modules, and only a few experts will be activated for each input token.", "In this way, we can expand the model scale by adding expert modules, which will keep the computational and memory overhead within a tolerable range.", "Most existing MoE methods (Lepikhin et al., 2021; Fedus et al., 2021; Lewis et al., 2021) decide the token-to-expert routing according to the dynamically changing token representations.", "However, we point out that they face the routing fluctuation problem.", "As shown in Figure 1, the same input may be assigned to different experts along with training.", "However, during inference, only one expert will be activated for the input.", "The routing fluctuation problem tends to harm sample efficiency because the same input updates different experts while only one is finally used.", "Taking BASE Layer (Lewis et al., 2021) as an example, during the whole training process, we examine the token-to-expert assignment for tokens in the validation set.", "For an input token, we define the last fluctuation step as the last step where its target expert is different from the final step.", "We plot the cumulative token percentage with regard to the last fluctuation step (annotated as its percentage accounting for all training steps) in Figure", "2. 
"We find that the last fluctuation step of 40.9% of tokens exceeds 20%, which means 40.9% of tokens do not have a stable target expert when 20% of all training steps have been done.", "Furthermore, 29.1% of tokens still change their target experts after half of the whole training process, and 15.4% of tokens even change the target expert after 80% of all training steps, which is near the end of training.", "These statistics prove that the routing fluctuation problem indeed exists in previous MoE methods.", "In this paper, we propose STABLEMOE with two training stages to address the routing fluctuation problem.", "[Figure 1: Illustration of the routing fluctuation problem.]", "In the first training stage, we follow the learning-to-route paradigm and aim to learn a balanced and cohesive routing strategy.", "We design a balance loss to guarantee the assignment is balanced.", "In addition, inspired by Lewis et al. (2021), we adopt a sigmoid gating mechanism, which enables the task objective to propagate a supervised signal back to the routing strategy, to facilitate learning a more cohesive assignment.", "As the routing strategy is being learned, we synchronously distill it into a lightweight router decoupled from the backbone model.", "In the second training stage, we utilize the distilled router to determine the token-to-expert assignment.", "The distilled router is frozen in this stage to provide a stable routing strategy, which addresses the routing fluctuation problem in the remaining training.", "We conduct experiments on language modeling and multilingual machine translation.", "The results show that STABLEMOE outperforms existing MoE methods in terms of both convergence speed and performance.", "(1) We point out the routing fluctuation problem in existing learning-to-route MoE methods.", "(2) We propose STABLEMOE to address the routing fluctuation problem.", "(3) We conduct substantial experiments under various settings to show the advantages of STABLEMOE over existing MoE methods.", "We first introduce the MoE mechanism designed for Transformers (Vaswani et al., 2017).", "Given a standard $L$-layer Transformer model and an input sequence $X$ containing $T$ tokens, the Transformer output $H^L$ is calculated by $H^L = [h_1^L; h_2^L; \ldots; h_T^L]$ (1), $h_t^l = \mathrm{FFN}(u_t^l) + u_t^l$ (2), $u_{1:T}^l = \mathrm{Self\text{-}Att}(h_{1:T}^{l-1}) + h_{1:T}^{l-1}$ (3), where $h_t^l$ is the hidden state of the $t$-th token after the $l$-th layer, $\mathrm{Self\text{-}Att}(\cdot)$ is the self-attention module, and $\mathrm{FFN}(\cdot)$ is short for the feed-forward network.", "For simplicity, we omit the layer normalization.", "We implement MoE for Transformers by inserting MoE layers, which are composed of a set of expert FFNs, between two neighboring Transformer blocks.", "At an MoE layer, for each input token, only one or a few experts will be activated, controlled by a gating function $g(\cdot)$: $h_t^l = \sum_{i=1}^{N} g_i(h_t^{l-1}) \, \mathrm{FFN}_i(h_t^{l-1}) + h_t^{l-1}$ (4), where $N$ is the total number of experts and $\mathrm{FFN}_i$ is the $i$-th expert.", "Here, the gating function $g_i(\cdot)$ is sparse for computational efficiency.", "For simplicity, we omit the layer normalization.",
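Read as code, Eq. (4) with the greedy top-1 routing and sigmoid gate that Eqs. (5)-(7) introduce next looks roughly like this sketch (an illustrative reading, not the released implementation):

```python
import torch
import torch.nn as nn

class Top1MoELayer(nn.Module):
    def __init__(self, d_model, d_ff, num_experts):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        self.centroids = nn.Parameter(torch.randn(num_experts, d_model))  # E

    def forward(self, h):
        # h: (tokens, d_model); s_{t,i} = E_i^T h_t
        scores = h @ self.centroids.t()
        gate, expert_idx = scores.max(dim=-1)     # greedy top-1 per token
        out = h.clone()
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():                        # sigmoid-gated expert + residual
                out[mask] = torch.sigmoid(gate[mask]).unsqueeze(-1) \
                            * expert(h[mask]) + h[mask]
        return out
```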
"In the first training stage, we follow the learning-to-route paradigm and aim to learn a balanced and cohesive routing strategy.", "As the routing strategy is being learned, we synchronously distill it into a lightweight router decoupled from the backbone model.", "In the second training stage, we utilize the distilled router to determine the token-to-expert assignment.", "The distilled router is frozen in this stage to provide a stable routing strategy.", "During inference, we also use the frozen distilled router for consistent routing.", "3.1 Training Stage 1: Learn Routing Strategy. Let $h_t^{l-1} \in \mathbb{R}^d$ be the input representation of token t and $E \in \mathbb{R}^{N \times d}$ be the centroids of the N experts.", "For each MoE layer, we assign each token to one expert FFN (Fedus et al., 2021; Lewis et al., 2021; Roller et al., 2021).", "The assignment score is $s_{t,i} = E_i^\top h_t^{l-1}$ (5), where $s_{t,i}$ is the assignment score between token t and expert i, indicating their affinity.", "We use a greedy assignment algorithm, i.e., sending each token to the expert with the highest affinity.", "Then, we calculate the expert FFN output as $a_t = \arg\max_i (s_{t,i})$ (6), $h_t^l = \sigma(s_{t,a_t})\,\mathrm{FFN}_{a_t}(h_t^{l-1}) + h_t^{l-1}$ (7), where $a_t$ is the index of the expert that token t is sent to, and $\sigma(\cdot)$ is the sigmoid gate (Lewis et al., 2021).", "Considering the sigmoid gate $\sigma(s_{t,a_t})$: if $\mathrm{FFN}_{a_t}$ is beneficial for token t, optimizing the training objective (e.g., minimizing the cross-entropy loss for language modeling) will urge the gate to be greater; otherwise, the gate will tend to be smaller.", "The gate signal urges similar tokens to be assigned to the same expert that is beneficial to them, thus producing cohesive token-to-expert assignments.", "Balance Loss. We design a balance loss $\mathcal{L}_{bal}$ to avoid imbalanced assignments, which would create a computational bottleneck in the MoE layer and thus limit computational efficiency: $\mathcal{L}_{bal} = \alpha \sum_{i=1}^{N} \frac{|\mathcal{A}_i| - \bar{n}}{\bar{n}} \sum_{t \in \mathcal{A}_i} \sigma(s_{t,i})$ (8), where $\alpha$ is a hyper-parameter, $\mathcal{A}_i$ denotes the set of tokens assigned to expert i, and $\bar{n}$ denotes the average number of tokens per expert.", "Intuitively, if an expert is overloaded, the balance loss will urge its assignment scores to be smaller.", "Otherwise, if an expert is unoccupied, the balance loss will increase its assignment scores to capture more tokens.", "Distilled Router. As the routing strategy is being learned, we synchronously distill it into a lightweight router, decoupled from the backbone model, that mimics the original routing strategy.", "Let X be the input sequence and $\hat{E}$ be the distilled expert centroids; we use word embeddings $D(\cdot)$ to extract the routing features.", "We use the cross-entropy loss as the distillation loss $\mathcal{L}_{dis}$: $\hat{h}_t^{l-1} = D(X_t)$, $\hat{s}_{t,i} = \hat{E}_i^\top \hat{h}_t^{l-1}$ (9), $\mathcal{L}_{dis} = -\sum_{t=1}^{T} \log \frac{\exp(\hat{s}_{t,a_t})}{\sum_{i=1}^{N} \exp(\hat{s}_{t,i})}$ (10), where $\hat{h}_t^{l-1}$ is the distilled routing feature of token t, $\hat{s}_{t,i}$ is the distilled assignment score between token t and expert i, and $a_t$ is the index of the expert that token t is actually sent to.", "In practice, $D(\cdot)$ can also be another feature extractor such as a CNN or a Transformer (we investigate other variants of distilled routers in Section 4.4.3), but the word embedding is the fastest and achieves the best performance.", "Table 1: Comparison of three core elements among STABLEMOE and existing MoE-based Transformers. Columns: Assignment Algorithm / Gating Function / Balance Loss. Switch Transformer: Greedy / softmax / Yes. BASE Layer: Auction (Bertsekas, 1992) / sigmoid / No. Hash Layer: Fixed (hashing) / {0, 1} / No. STABLEMOE Training Stage 1: Greedy / sigmoid / Yes. STABLEMOE Training Stage 2: Fixed (routing) / sigmoid / No.",
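A compact sketch of how Eqs. (8)–(10) can be computed over a batch of tokens. This is our own code with our own names; note that the paper sums the distillation loss over the sequence, whereas `F.cross_entropy` averages by default.

```python
import torch
import torch.nn.functional as F


def balance_loss(scores: torch.Tensor, assign: torch.Tensor,
                 alpha: float = 0.3) -> torch.Tensor:
    """Eq. (8). scores: (T, N) assignment scores s_{t,i}; assign: (T,)
    greedy expert indices a_t. Overloaded experts (|A_i| > n_bar) get a
    positive weight, pushing their scores down; underused ones negative."""
    num_tokens, num_experts = scores.shape
    n_bar = num_tokens / num_experts                   # avg tokens/expert
    counts = torch.bincount(assign, minlength=num_experts).float()
    weight = (counts - n_bar) / n_bar                  # (|A_i| - n_bar)/n_bar
    gate = torch.sigmoid(scores)
    per_expert = torch.zeros(num_experts, device=scores.device)
    # Sum sigma(s_{t,i}) over the tokens assigned to each expert i.
    per_expert.scatter_add_(0, assign,
                            gate[torch.arange(num_tokens), assign])
    return alpha * (weight * per_expert).sum()


def distillation_loss(word_emb: torch.Tensor, centroids_hat: torch.Tensor,
                      assign: torch.Tensor) -> torch.Tensor:
    """Eqs. (9)-(10): the lightweight router (word embeddings D(.) and
    centroids E_hat) learns to reproduce the teacher's assignment."""
    scores_hat = word_emb @ centroids_hat.t()          # s_hat_{t,i}
    return F.cross_entropy(scores_hat, assign)         # mean over tokens
```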
"At the end of training stage 1, we freeze all parameters of the distilled router (i.e., $D(\cdot)$ and $\hat{E}$) to prepare a stable routing strategy for training stage 2 and the inference stage.", "Given the frozen $D(\cdot)$ and $\hat{E}$, in training stage 2 we directly use them for a stable routing strategy.", "Keeping other processes the same as in training stage 1, we calculate the output of the MoE layer as follows: $\hat{h}_t^{l-1} = D(X_t)$, $\hat{s}_{t,i} = \hat{E}_i^\top \hat{h}_t^{l-1}$ (12), $\hat{a}_t = \arg\max_i (\hat{s}_{t,i})$ (13), $h_t^l = \sigma(s_{t,\hat{a}_t})\,\mathrm{FFN}_{\hat{a}_t}(h_t^{l-1}) + h_t^{l-1}$ (14).", "Notice that the sigmoid gate $\sigma(\cdot)$ still takes the original assignment score $s_{t,\hat{a}_t}$ as input, so the gate signal can also be learned in training stage 2.", "Since the routing strategy is fixed in training stage 2, we no longer need the balance loss and the distillation loss.", "Therefore, the training loss for training stage 2 contains only the task loss: $\mathcal{L}_{S2} = \mathcal{L}_{task}$.", "During inference, we also use the frozen distilled router for routing.", "The fixed routing strategy, which is consistent with training stage 2, allows the information learned in the MoE layers to be utilized more thoroughly and thus leads to better performance.", "We compare three core elements, including the assignment algorithm, the gating function, and the balance loss, among STABLEMOE and existing MoE-based Transformers.", "In Table 1, we summarize their differences.", "Assignment Algorithm. Switch Transformer and training stage 1 of STABLEMOE simply assign each token to the expert with the highest affinity.", "BASE Layer adopts the auction algorithm (Bertsekas, 1992) to find a globally balanced assignment with the maximum affinity sum.", "Hash Layer and training stage 2 of STABLEMOE have token-level fixed routing strategies, which have good stability.", "Gating Function. Hash Layer uses a hard gating function, which means an expert is either fully activated or not activated at all, with no intermediate state.", "Switch Transformer, BASE Layer, and STABLEMOE have soft gating functions, which can judge the affinity between a token and its target expert and determine a proper ratio at which to use the expert.", "Soft gating mechanisms also urge models to learn a more cohesive token-to-expert assignment.", "Balance Loss. BASE Layer and Hash Layer do not apply any balance loss.", "By contrast, Switch Transformer and training stage 1 of STABLEMOE design balance losses to control the balance of the token-to-expert assignment.", "In summary, combining the two training stages, STABLEMOE has a stable, cohesive, and balanced routing strategy, while the other three MoE methods cannot meet all three properties simultaneously.", "Language Modeling. Following Lewis et al. (2021) and Roller et al. (2021), we use the combination of the corpora in RoBERTa (Liu et al., 2019) and the English subset of the CC100 corpus (Conneau et al., 2020).", "The corpus contains about 100B tokens, and we randomly sample 5M tokens for validation and 20M tokens for test.",
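Putting the two stages together, the training schedule can be sketched as follows. This is a schematic of our own; `model` and `router` are hypothetical stand-ins for the backbone and the lightweight distilled router, not the authors' released training code.

```python
def train_stablemoe(model, router, optimizer, batches,
                    stage1_steps: int, total_steps: int):
    """Schematic two-stage schedule of STABLEMOE (hypothetical API)."""
    for step, batch in zip(range(total_steps), batches):
        if step == stage1_steps:            # entering stage 2:
            router.requires_grad_(False)    # freeze D(.) and E_hat
        if step < stage1_steps:
            # Stage 1: route with the backbone's expert centroids E,
            # learn the routing, and distill it into the light router.
            task_loss, scores, assign = model(batch)
            loss = (task_loss
                    + balance_loss(scores, assign)         # Eq. (8)
                    + router.distill_loss(batch, assign))  # Eq. (10)
        else:
            # Stage 2: the token-to-expert assignment comes from the
            # frozen router; only the task loss remains (L_S2 = L_task).
            task_loss, _, _ = model(batch, assign=router.route(batch))
            loss = task_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```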
"Multilingual Machine Translation. We follow Wang et al. (2020) and Ma et al. (2020) and use a collection of parallel data in different languages from the WMT datasets (http://www.statmt.org).", "The dataset contains 32.5 million parallel sentence pairs between English and 9 other languages: French (Fr), Czech (Cs), German (De), Finnish (Fi), Latvian (Lv), Estonian (Et), Romanian (Ro), Hindi (Hi), and Turkish (Tr).", "In our experiments, we combine the original parallel data with 180 million back-translated sentence pairs as described in Ma et al. (2020), and call the augmented dataset WMT for short.", "We conduct experiments based on fairseq (https://github.com/facebookresearch/fairseq).", "All experiments are conducted on NVIDIA V100 GPUs with 32 GB memory.", "Language Modeling. We adopt the tokenizer of GPT-2 (Radford et al., 2019), which uses byte-pair encoding (Sennrich et al., 2016) with a vocabulary size of 50,257.", "We set up two settings for STABLEMOE, a base one and a large one.", "For both settings, we insert one MoE layer after the middle Transformer block.", "We train the model for 60K steps in total (6K for training stage 1 and 54K for training stage 2).", "The dimension of the distilled routing features is 50, which brings 2.51M extra parameters for routing.", "The balance factor $\alpha$ is set to 0.3.", "We use Adam (Kingma and Ba, 2015) with $\beta_1 = 0.9$ and $\beta_2 = 0.98$ as the optimizer.", "The rest of the hyper-parameters are summarized in Appendix A.", "Multilingual Machine Translation. Following Ma et al. (2020), we use the SentencePiece (Kudo and Richardson, 2018) model to tokenize sentences.", "The vocabulary is learned from the training set and consists of 64,000 tokens.", "We insert two MoE layers, one after the third encoder block and one after the third decoder block.", "We train the model for 352K steps in total (30K for training stage 1 and 322K for training stage 2).", "The dimension of the distilled routing features is also set to 50.", "The balance factor $\alpha$ is set to 0.3.", "We use Adam with $\beta_1 = 0.9$ and $\beta_2 = 0.98$ as the optimizer.", "The rest of the hyper-parameters are summarized in Appendix B.",
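For quick reference, the two training configurations above can be collected in one place. This is a convenience summary we assembled from the setup described in this section, with our own key names; it is not a fairseq configuration file.

```python
# Summary of the experimental setup (names are ours, values from the text).
STABLEMOE_CONFIGS = {
    "language_modeling": dict(
        tokenizer="GPT-2 BPE", vocab_size=50_257,
        total_steps=60_000, stage1_steps=6_000,
        moe_layers="1, after the middle Transformer block",
        distilled_routing_dim=50, balance_factor=0.3,
        optimizer=dict(name="Adam", beta1=0.9, beta2=0.98),
    ),
    "multilingual_mt": dict(
        tokenizer="SentencePiece", vocab_size=64_000,
        total_steps=352_000, stage1_steps=30_000,
        moe_layers="2, after the 3rd encoder and 3rd decoder blocks",
        distilled_routing_dim=50, balance_factor=0.3,
        optimizer=dict(name="Adam", beta1=0.9, beta2=0.98),
    ),
}
```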
"4.3 Results. 4.3.1 Language Modeling. We compare STABLEMOE with Switch Transformer, BASE Layer, Hash Layer, and the standard Transformer.", "All MoE models have the same number of shared parameters as the standard Transformer.", "In addition, under the base setting, we compare against two larger dense Transformers that add FFNs in a dense manner to reach the same number of total parameters as the MoE models.", "The deeper model stacks more FFNs, while the wider model uses FFNs with a larger hidden size.", "The floating point operations (FLOPs) per sequence are profiled with the torchprofile toolkit.", "Table 3: X → En test BLEU on WMT. Columns: # Params / FLOPs / De / Ro / Fr / Cs / Et / Hi / Tr / Fi / Lv / Avg. Standard Transformer: 77M / 290B / 39.8 / 36.0 / 32.5 / 29.1 / 27.2 / 24.5 / 23.6 / 21.8 / 20.3 / 28.31. Larger Transformer: 90M / 317B / 40.6 / 36.9 / 33.7 / 29.8 / 27.8 / 25.4 / 24.6 / 22.2 / 20.9 / 29.10. Switch Transformer: 480M / 317B / 42.3 / 37.1 / 33.8 / 31.0 / 28.6 / 26.0 / 24.3 / 23.0 / 21.2 / 29.70. BASE Layer: 480M / 317B / 42.6 / 37.8 / 34.2 / 31.0 / 29.0 / 26.9 / 25.1 / 23.2 / 21.6 / 30.16. Hash Layer: 480M / 317B / 42.7 / 37.0 / 34.6 / 31.3 / 28.7 / 26.5 / 23.9 / 23.1 / 21.7 / 29.94. STABLEMOE: 480M / 317B / 43.0 / 37.4 / 34.7 / 31.5 / 29.3 / 26.8 / 24.7 / 23.6 / 21.9 / 30.32.", "Under the base setting, STABLEMOE outperforms existing MoE methods on both the validation and the test sets by 0.3-0.8 perplexity.", "Compared with dense models, STABLEMOE achieves about 3.7 lower perplexity than the standard Transformer, and about 1.3 higher perplexity than the deeper larger model.", "Under the large setting, STABLEMOE consistently outperforms the other MoE methods, and achieves about 2.6 lower perplexity than the standard Transformer.", "We also compare the convergence speed of different models under the base setting.", "The results are plotted in Figure 4, which takes the validation perplexity as the y-axis and the training wall time as the x-axis.", "Although larger dense models achieve better validation perplexity in the end, their training speed is quite slow.", "With regard to convergence speed, MoE-based Transformers usually exceed dense models.", "Further, among the MoE methods, STABLEMOE has the fastest convergence speed.", "Multilingual Machine Translation. We compare STABLEMOE with Switch Transformer, BASE Layer, Hash Layer, the standard Transformer, and a larger Transformer.", "All MoE-based models have the same number of shared parameters as the standard Transformer.", "[Figure 5: Comparison of MoE-based Transformers (BASE Layer, Hash Layer, StableMoE) with 16, 32, and 64 experts; the y-axis is validation perplexity.]", "Except for the standard Transformer, the other models have the same FLOPs.", "We translate the other languages to English (X → En) and report the test BLEU on WMT in Table 3.",
"STABLEMOE achieves the best average test BLEU among the compared MoE methods.", "Keeping the same FLOPs, STABLEMOE outperforms the dense model by 1.22 test BLEU.", "With the MoE technique, we expand the number of parameters by 523% while the FLOPs increase by just 9.3%.", "Number of Experts. Figure 5 shows the results of BASE Layer, Hash Layer, and STABLEMOE with different numbers of experts.", "As the number of experts grows, the validation perplexity of each model tends to descend further.", "Consistently, STABLEMOE performs the best across different numbers of experts.", "[Figure 6: Comparison of MoE models with different numbers of expert sublayers (i.e., numbers of parameters): 3 sublayers (454M) vs. 10 sublayers (1.51B); the y-axis is validation perplexity.]", "In addition, it is worth noting that STABLEMOE with 16 experts outperforms BASE Layer with 32 experts, and STABLEMOE with 32 experts achieves a perplexity similar to BASE Layer with 64 experts.", "Number of Expert Parameters. We compare MoE models with different numbers of expert parameters by varying the number of expert sublayers.", "Models with 3 and 10 expert sublayers have 454M and 1.51B expert parameters, respectively.", "From Figure 6, we observe that more expert parameters bring better performance, and STABLEMOE consistently performs the best under both settings.", "Position of MoE Layers. We investigate the effect of the insertion position of the MoE layer.", "By default, the MoE layer stacks 3 MoE sublayers and is inserted after the (L/2)-th Transformer block (middle).", "We also attempt to insert the MoE layer before the first Transformer block (bottom) and after the last Transformer block (top).", "In addition, we investigate the effect of scattering the 3 MoE sublayers uniformly into the standard Transformer, i.e., after the (L/4)-th, (2L/4)-th, and (3L/4)-th blocks, respectively.", "As shown in Table 4, among the above four settings, inserting the stacked MoE sublayers into the middle position allows STABLEMOE to achieve the best performance.", "Table 5: Effects of the fixed routing strategy (validation PPL). BASE Layer: 20.04; + Fixed Routing Strategy (Stage 2): 19.41 (0.63 lower). STABLEMOE with Only Stage 1: 19.48; + Fixed Routing Strategy (Stage 2): 19.28 (0.20 lower).", "Ratio Between the Two Training Stages. We investigate the balance point of the ratio between the two training stages in STABLEMOE.", "Given a fixed number of total steps, allocating more steps to training stage 1 can help to learn and distill a better routing strategy.", "On the other hand, a larger ratio of training stage 2 means longer stable training.", "Under the base setting of language modeling, we attempt to allocate 6K, 15K, and 30K steps to training stage 1 and show the results in Table 6.", "We find that if we use word embeddings as the distilled router, allocating 6K steps (10% of the total steps) to training stage 1 is a good balance point.", "We speculate that the word embedding is simple enough to be learned fast, so longer stable training is more important for achieving better performance.", "Based on the base setting of language modeling, we design two experiments to investigate how much performance improvement the fixed routing strategy can bring.", "On the one hand, we equip BASE Layer with a stable routing strategy to address its routing fluctuation problem.", "Specifically, as in STABLEMOE, we use word embeddings to distill the routing strategy of BASE Layer in the first 6K training steps, and freeze the distilled router for stable routing in the remaining training.",
"As shown in Table 5, the fixed routing strategy decreases the validation perplexity of BASE Layer by 0.63.", "On the other hand, we attempt to disable training stage 2 in STABLEMOE and always train the model as in training stage 1.", "As a result, the validation perplexity of STABLEMOE becomes 0.20 higher than that of the full version with a fixed routing strategy.", "These two cases support that the fixed routing strategy, which addresses the routing fluctuation problem, can bring better performance for MoE-based Transformers.", "In Table 6, in addition to the word embedding, we also investigate four variants of the distilled router, including a CNN and three Transformers with different numbers of layers.", "We allocate 15K steps to training stage 1 for all of them.", "From the table, we find that using the word embedding achieves the best performance, while the 3-layer Transformer does not perform well.", "For the routing strategy distillation, the distillation signal from a 32-category classification objective may not be informative enough to learn a complex router.", "By contrast, it is more suitable for simpler routers.", "Therefore, we recommend using the word embedding, which is simple and effective, as the distilled router in STABLEMOE.", "We compare the degree of routing fluctuation between STABLEMOE and BASE Layer to show our advantage with regard to routing stability.", "During the 60K training steps, we examine the token-to-expert assignment for tokens in the validation set every 500 steps.", "For each token, we define the last fluctuation step as the last step where its target expert is different from the final step.", "We plot the cumulative token percentage with regard to the last fluctuation step in Figure 7.", "For ease of reading, we annotate the x-axis as its percentage of all training steps.", "From the figure, we find that the routing fluctuation problem is notable for BASE Layer.", "By contrast, for STABLEMOE, there is no routing fluctuation in training stage 2 since we apply a fixed routing strategy.", "Jacobs et al. (1991); Jordan and Jacobs (1994) propose Mixture of Experts (MoE) to compute different examples with independent expert modules.",
"Shazeer et al. (2017) introduce MoE to build large-scale language models based on LSTMs (Hochreiter and Schmidhuber, 1997).", "Recently, as Transformers have become popular, many works design MoE versions of the FFN to build MoE-based Transformers.", "GShard (Lepikhin et al., 2021), Switch Transformer (Fedus et al., 2021), and BASE Layer (Lewis et al., 2021) follow the learning-to-route paradigm and dynamically learn how to route each input token to experts.", "However, we point out that these learning-to-route methods face the routing fluctuation problem.", "Hash Layer (Roller et al., 2021) proposes a non-parametric routing strategy, which uses a pre-designed token-level hash table to determine the token-to-expert assignment.", "The static routing strategy does not fluctuate, but the randomly determined hash table limits the upper bound of its performance.", "Our work retains the advantage of learning-to-route methods, namely learning a balanced and cohesive routing strategy, and further addresses the routing fluctuation problem by applying a frozen lightweight router that mimics the original routing strategy.", "In this paper, we point out the routing fluctuation problem that exists in previous learning-to-route MoE methods.", "In order to address this problem, we propose STABLEMOE with two training stages.", "We first learn a balanced and cohesive routing strategy and synchronously distill it into a lightweight router decoupled from the backbone model.", "Then, we freeze the distilled router for a stable routing strategy in the remaining training.", "We validate STABLEMOE on language modeling and multilingual machine translation.", "The results show that STABLEMOE outperforms existing MoE methods in terms of both convergence speed and performance.", "William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. CoRR, abs/2101.03961.", "Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. 1991. Adaptive mixtures of local experts. Neural Computation, 3(1):79-87.", "Michael I. Jordan and Robert A. Jacobs. 1994. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2):181-214.", "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015.", "Damai Dai, Zhifang Sui, and Baobao Chang are supported by the National Key Research and Development Program of China 2020AAA0106701 and NSFC project U19A2065." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "objective", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "result", "abstain", "other", "method", "method", "other", "other", "method", "objective", "abstain", "method", "other", "method", "other", "abstain", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "other", "objective", "abstain", "method", "method", "other", "other", "abstain", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "method", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Answering natural language questions over tables is usually seen as a semantic parsing task.", "To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms.", "However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation.", "In this paper, we present TAPAS , an approach to question answering over tables without generating logical forms.", "TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection.", "TAPAS extends BERT's architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end.", "We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55 .", "1 to 67 .", "2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture.", "We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48 .", "7 accuracy, 4 .", "2 points above the state-of-the-art.", "Question answering from semi-structured tables is usually seen as a semantic parsing task where the question is translated to a logical form that can be executed against the table to retrieve the correct denotation (Pasupat and Liang, 2015; Zhong et al., 2017; Dasigi et al., 2019; Agarwal et al., 2019).", "Semantic parsers rely on supervised training data that pairs natural language questions with logical forms, but such data is expensive to annotate.", "the burden of data collection for semantic parsing, including paraphrasing (Wang et al., 2015), human in the loop (Iyer et al., 2017; Lawrence and Rie-zler, 2018) and training on examples from other domains (Herzig and Berant, 2017; Su and Yan, 2017).", "One prominent data collection approach focuses on weak supervision where a training example consists of a question and its denotation instead of the full logical form (Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013).", "Although appealing, training semantic parsers from this input is often difficult due to the abundance of spurious logical forms (Berant et al., 2013; Guu et al., 2017) and reward sparsity (Agarwal et al., 2019; Muhlgay et al., 2019).", "In addition, semantic parsing applications only utilize the generated logical form as an intermediate step in retrieving the answer.", "Generating logical forms, however, introduces difficulties such as maintaining a logical formalism with sufficient expressivity, obeying decoding constraints (e.g. 
well-formedness), and the label bias problem (Andor et al., 2016; Lafferty et al., 2001).", "In this paper we present TAPAS (for Ta ble Pa r s er), a weakly supervised question answering model that reasons over tables without generating logical forms.", "TAPAS predicts a minimal program by selecting a subset of the table cells and a possible aggregation operation to be executed on top of them.", "Consequently, TAPAS can learn operations from natural language, without the need to specify them in some formalism.", "This is implemented by extending BERT's architecture (Devlin et al., 2019) with additional embeddings that capture tabular structure, and with two classification layers for selecting cells and predicting a corresponding aggregation operator.", "Importantly, we introduce a pre-training method for TAPAS , crucial for its success on the end task.", "We extend BERT's masked language model objective to structured data, and pre-train the model over millions of tables and related text segments crawled from Wikipedia.", "During pre-training, the model masks some tokens from the text segment and from the table itself, where the objective is to predict the original masked token based on the textual and tabular context.", "Finally, we present an end-to-end differentiable training recipe that allows TAPAS to train from weak supervision.", "For examples that only involve selecting a subset of the table cells, we directly train the model to select the gold subset.", "For examples that involve aggregation, the relevant cells and the aggregation operation are not known from the denotation.", "In this case, we calculate an expected soft scalar outcome over all aggregation operators given the current model, and train the model with a regression loss against the gold denotation.", "In comparison to prior attempts to reason over tables without generating logical forms (Neelakantan et al., 2015; Yin et al., 2016; M uller et al., 2019), TAPAS achieves better accuracy, and holds several advantages: its architecture is simpler as it includes a single encoder with no auto-regressive decoding, it enjoys pre-training, tackles more question types such as those that involve aggregation, and directly handles a conversational setting.", "We find that on three different semantic parsing datasets, TAPAS performs better or on par in comparison to other semantic parsing and question answering models.", "On the conversational SQA (Iyyer et al., 2017), TAPAS improves state-of-the-art accuracy from 55 .", "1 to 67 .", "2 , and achieves on par performance on WIKISQL (Zhong et al., 2017) and WIKITQ (Pasupat and Liang, 2015).", "Transfer learning, which is simple in TAPAS , from WIKISQL to WIKITQ achieves 48.7 accuracy, 4 .", "2 points higher than state-of-the-art.", "Our code and pre-trained model are publicly available at https: //github.com/google-research/tapas .", "Our model's architecture (Figure 1) is based on BERT's encoder with additional positional embeddings used to encode tabular structure (visualized in Figure 2).", "We flatten the table into a sequence of words, split words into word pieces (tokens) and concatenate the question tokens before the table tokens.", "We additionally add two classification layers for selecting table cells and aggregation operators that operate on the cells.", "We now describe these modifications and how inference is performed.", "Additional embeddings We add a separator token between the question and the table, but unlike Hwang et al. 
(2019) not between cells or rows.", "Instead, the token embeddings are combined with table-aware positional embeddings before feeding them to the model.", "We use different kinds of positional embeddings: Position ID is the index of the token in the flattened sequence (same as in BERT).", "Segment ID takes two possible values: 0 for the question, and 1 for the table header and cells.", "Column / Row ID is the index of the column/row that this token appears in, or 0 if the token is part of the question.", "Rank ID: if column values can be parsed as floats or dates, we sort them accordingly and assign an embedding based on their numeric rank (0 for not comparable, 1 for the smallest item, i + 1 for an item with rank i).", "This can assist the model when processing questions that involve superlatives, as word pieces may not represent numbers informatively (Wallace et al., 2019).", "Previous Answer: given a conversational setup where the current question might refer to the previous question or its answers (e.g., question 5 in Figure 3), we add a special embedding that marks whether a cell token was the answer to the previous question (1 if the token's cell was an answer, or 0 otherwise).", "[Figure 2: Encoding of the question 'query?' and a simple table using the special embeddings of TAPAS; token, position, segment, column, row, and rank embeddings are combined for every word piece.]",
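The extra IDs above are straightforward to derive from a flattened "[question] [table]" input. Below is a minimal sketch of our own (not the released TAPAS code); rank and previous-answer IDs are omitted for brevity.

```python
def tapas_index_ids(question_tokens, table):
    """table: list of rows, each row a list of cells, each cell a list
    of word pieces; row 0 is the header (row ID 0, like the question).
    Returns parallel ID sequences, one entry per token."""
    n_q = len(question_tokens)
    segment = [0] * n_q        # 0 = question
    column = [0] * n_q         # 0 = question
    row = [0] * n_q            # 0 = question (and header)
    for r, cells in enumerate(table):          # header row has r == 0
        for c, cell in enumerate(cells):
            for _ in cell:                     # one entry per word piece
                segment.append(1)
                column.append(c + 1)           # columns are 1-based
                row.append(r)
    position = list(range(len(segment)))       # same as BERT
    return {"position": position, "segment": segment,
            "column": column, "row": row}
```

Each ID sequence indexes its own learned embedding table, and the resulting vectors are summed with the token embeddings, as in Figure 2.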
"Cell selection: this classification layer selects a subset of the table cells.", "Depending on the selected aggregation operator, these cells can be the final answer or the input used to compute the final answer.", "Cells are modelled as independent Bernoulli variables.", "First, we compute the logit for a token using a linear layer on top of its last hidden vector.", "Cell logits are then computed as the average over the logits of the tokens in that cell.", "The output of the layer is the probability $p_s^{(c)}$ of selecting cell c.", "We additionally found it useful to add an inductive bias to select cells within a single column.", "We achieve this by introducing a categorical variable to select the correct column.", "The model computes the logit for a given column by applying a new linear layer to the average embedding of the cells appearing in that column.", "We add an additional column logit that corresponds to selecting no column or cells.", "We treat this as an extra column with no cells.", "The output of the layer is the probability $p_{col}^{(co)}$ of selecting column co, computed using a softmax over the column logits.", "We set cell probabilities $p_s^{(c)}$ outside the selected column to 0.", "Aggregation operator prediction: semantic parsing tasks require discrete reasoning over the table, such as summing numbers or counting cells.", "To handle these cases without producing logical forms, TAPAS outputs a subset of the table cells together with an optional aggregation operator.", "The aggregation operator describes an operation to be applied to the selected cells, such as SUM, COUNT, AVERAGE or NONE.", "The operator is selected by a linear layer followed by a softmax on top of the final hidden vector of the first token (the special [CLS] token).", "We denote this layer as $p_a(op)$, where op is some aggregation operator.", "Inference: we predict the most likely aggregation operator together with a subset of the cells (using the cell selection layer).", "To predict a discrete cell selection, we select all table cells whose probability is larger than 0.5.", "These predictions are then executed against the table to retrieve the answer, by applying the predicted aggregation over the selected cells.",
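For intuition, here is an illustrative PyTorch sketch (ours, not the released TAPAS code; requires PyTorch >= 1.12 for `scatter_reduce`) of this inference procedure: token logits are averaged per cell, cell probabilities are thresholded at 0.5, and the argmax aggregation operator is applied. The column restriction described above is omitted for brevity.

```python
import torch


def predict_answer(token_logits, agg_logits, cell_of_token, cell_values,
                   threshold=0.5):
    """token_logits: (T,); agg_logits: (4,) over [NONE, COUNT, SUM,
    AVERAGE]; cell_of_token: (T,) int64 cell index of each table token;
    cell_values: (num_cells,) numeric cell values."""
    num_cells = cell_values.numel()
    # Cell logit = mean of the logits of its tokens.
    cell_logits = torch.zeros(num_cells).scatter_reduce(
        0, cell_of_token, token_logits, reduce="mean", include_self=False)
    selected = torch.sigmoid(cell_logits) > threshold
    op = ["NONE", "COUNT", "SUM", "AVERAGE"][agg_logits.argmax().item()]
    vals = cell_values[selected]
    if op == "COUNT":
        return float(selected.sum())
    if op == "SUM":
        return float(vals.sum())
    if op == "AVERAGE":
        return float(vals.mean()) if vals.numel() else float("nan")
    # NONE: the selected cells are the answer (numeric values here,
    # for simplicity; real cells may be text).
    return vals.tolist()
```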
"Following the recent success of pre-training models on textual data for natural language understanding tasks, we wish to extend this procedure to structured data, as an initialization for our table parsing task.", "To this end, we pre-train TAPAS on a large number of tables from Wikipedia.", "This allows the model to learn many interesting correlations between the text and the table, and between the cells of a column and their header.", "We create pre-training inputs by extracting text-table pairs from Wikipedia.", "We extract 6.2M tables: 3.3M of class Infobox (en.wikipedia.org/wiki/Help:Infobox) and 2.9M of class WikiTable.", "We consider tables with at most 500 cells.", "All of the end task datasets we experiment with only contain horizontal tables with a header row containing column names.", "Therefore, we only extract Wiki tables of this form, using the <th> tag to identify headers.", "Furthermore, we transpose Infoboxes into a table with a single header row and a single data row.", "The tables created from Infoboxes are arguably not very typical, but we found them to improve performance on the end tasks.", "As a proxy for questions that appear in the end tasks, we extract the table caption, article title, article description, segment title, and the text of the segment the table occurs in as relevant text snippets.", "In this way we extract 21.3M snippets.", "We convert the extracted text-table pairs to pre-training examples as follows.", "Following Devlin et al. (2019), we use a masked language model pre-training objective.", "We also experimented with adding a second objective of predicting whether the table belongs to the text or is a random table, but did not find this to improve the performance on the end tasks.", "This is aligned with Liu et al. (2019), who similarly did not benefit from a next sentence prediction task.", "For pre-training to be efficient, we restrict our word piece sequence length to a certain budget (e.g., we use 128 in our final experiments).", "That is, the combined length of the tokenized text and table cells has to fit into this budget.", "To achieve this, we randomly select a snippet of 8 to 16 word pieces from the associated text.", "To fit the table, we start by only adding the first word of each column name and cell.", "We then keep adding words turn-wise until we reach the word piece budget.", "For every table we generate 10 different snippets in this way.", "We follow the masking procedure introduced by BERT.", "We use whole word masking (https://github.com/google-research/bert/blob/master/README.md) for the text, and we find it beneficial to apply whole cell masking (masking all the word pieces of a cell if any of its pieces is masked) to the table as well.", "We note that we additionally experimented with data augmentation, which shares a similar goal with pre-training.", "We generated synthetic pairs of questions and denotations over real tables via a grammar, and added these to the end tasks' training data.", "As this did not improve end task performance significantly, we omit these results.", "Overview: we formally define table parsing in a weakly supervised setup as follows.", "Given a training set of N examples $\{(x_i, T_i, y_i)\}_{i=1}^{N}$, where $x_i$ is an utterance, $T_i$ is a table and $y_i$ is a corresponding set of denotations, our goal is to learn a model that maps a new utterance x to a program z, such that when z is executed against the corresponding table T, it yields the correct denotation y.", "The program z comprises a subset of the table cells and an optional aggregation operator.", "The table T maps a table cell to its value.", "As a pre-processing step, described in Section 5.1, we translate the set of denotations y for each example to a tuple (C, s) of cell coordinates C and a scalar s, which is only populated when y is a single scalar.", "We then guide training according to the content of (C, s).", "For cell selection examples, for which s is not populated, we train the model to select the cells in C.", "For scalar answer examples, where s is populated but C is empty, we train the model to predict an aggregation over the table cells that amounts to s.", "We now describe each of these cases in detail.", "Cell selection: in this case y is mapped to a subset of the table cell coordinates C (e.g., question 1 in Figure 3).", "For this type of example, we use a hierarchical model that first selects a single column and then cells from within that column only.", "We directly train the model to select the column col that has the highest number of cells in C.", "For our datasets, the cells C are contained in a single column, so this restriction on the model provides a useful inductive bias.", "If C is empty, we select the additional empty column, corresponding to empty cell selection.", "The model is then trained to select the cells $C \cap col$ and not to select $(T \setminus C) \cap col$.",
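To make this supervision concrete, here is an illustrative PyTorch sketch of the two training cases, the hierarchical cell-selection loss just described and the scalar-answer loss with soft aggregation that the following paragraphs formalize. All names are ours; the scaling hyperparameters and the NONE-operator term for cell-selection examples are omitted for brevity.

```python
import torch
import torch.nn.functional as F


def cell_selection_loss(col_logits, cell_logits, cell_to_col, gold_cells):
    """col_logits: (num_cols + 1,) with index 0 = the extra empty column;
    cell_logits: (num_cells,); cell_to_col: (num_cells,) 1-based column
    of each cell; gold_cells: bool (num_cells,) mask of the gold cells C."""
    # Train the column layer to pick the column with the most gold cells.
    col = (torch.mode(cell_to_col[gold_cells]).values
           if gold_cells.any() else torch.tensor(0))
    p_col = F.softmax(col_logits, dim=-1)
    target = F.one_hot(col, num_classes=col_logits.numel()).float()
    j_columns = F.binary_cross_entropy(p_col, target)    # avg binary CE
    # Binary cross-entropy over the cells of the chosen column only.
    in_col = cell_to_col == col
    if in_col.any():
        j_cells = F.binary_cross_entropy_with_logits(
            cell_logits[in_col], gold_cells[in_col].float())
    else:
        j_cells = cell_logits.new_zeros(())
    return j_columns + j_cells


def scalar_answer_loss(agg_logits, p_cells, cell_values, s, delta=1.0):
    """agg_logits: (4,) over [NONE, COUNT, SUM, AVERAGE]; p_cells:
    (num_cells,) selection probabilities; s: gold scalar (0-dim tensor)."""
    p_a = F.softmax(agg_logits, dim=-1)
    p_hat = p_a[1:] / p_a[1:].sum()          # renormalize, excluding NONE
    count = p_cells.sum()                     # soft COUNT
    total = (p_cells * cell_values).sum()     # soft SUM
    avg = total / count.clamp_min(1e-6)       # soft AVERAGE
    s_pred = (p_hat * torch.stack([count, total, avg])).sum()
    j_scalar = F.huber_loss(s_pred, s, delta=delta)
    j_aggr = -torch.log(p_a[1:].sum() + 1e-9)  # discourage NONE
    return j_aggr + j_scalar
```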
"The loss is composed of three components: (1) the average binary cross-entropy loss over column selections, $J_{columns} = \frac{1}{|Cols|} \sum_{co \in Cols} \mathrm{CE}(p_{col}^{(co)}, \mathbb{1}_{co = col})$, where the set of columns Cols includes the additional empty column, CE(·) is the cross-entropy loss, and $\mathbb{1}$ is the indicator function.", "(2) The average binary cross-entropy loss over the cell selections of the chosen column, $J_{cells} = \frac{1}{|Cells(col)|} \sum_{c \in Cells(col)} \mathrm{CE}(p_s^{(c)}, \mathbb{1}_{c \in C})$, where Cells(col) is the set of cells in the chosen column.", "(3) As no aggregation occurs for cell selection examples, we define the aggregation supervision to be NONE (assigned to $op_0$), and the aggregation loss is $J_{aggr} = -\log p_a(op_0)$.", "The total loss is then $J_{CS} = J_{columns} + J_{cells} + \alpha J_{aggr}$, where $\alpha$ is a scaling hyperparameter.", "Scalar answer: in this case y is a single scalar s which does not appear in the table (i.e., $C = \emptyset$; e.g., question 2 in Figure 3).", "This usually corresponds to examples that involve an aggregation over one or more table cells.", "In this work we handle aggregation operators that correspond to SQL, namely COUNT, AVERAGE and SUM; however, our model is not restricted to these.", "[Figure 3: Example questions with their answers and example types, e.g., 'Which wrestler had the most number of reigns?']", "For these examples, the table cells that should be selected and the aggregation operator type are not known, as these cannot be directly inferred from the scalar answer s.", "To train the model given this form of supervision, one could search offline (Dua et al., 2019; Andor et al., 2019) or online (Berant et al., 2013; Liang et al., 2018) for programs (table cells and aggregation) that execute to s.", "In our table parsing setting, the number of spurious programs that execute to the gold scalar answer can grow quickly with the number of table cells (e.g., when s = 5, each COUNT over any five cells is potentially correct).", "As learning can easily fail with this approach, we avoid it.", "Instead, we make use of a training recipe where no search for correct programs is needed.", "Our approach results in end-to-end differentiable training, similar in spirit to Neelakantan et al. (2015).", "We implement a fully differentiable layer that latently learns the weights for the aggregation prediction layer $p_a(\cdot)$, without explicit supervision for the aggregation type.", "Specifically, we recognize that the result of executing each of the supported aggregation operators is a scalar.", "We then implement a soft differentiable estimation for each operator (Table 1), given the token selection probabilities and the table values: $\mathrm{compute}(op, p_s, T)$.", "Given the results for all aggregation operators, we then calculate the expected result according to the current model: $s_{pred} = \sum_{i \geq 1} \hat{p}_a(op_i) \cdot \mathrm{compute}(op_i, p_s, T)$, where $\hat{p}_a(op_i) = \frac{p_a(op_i)}{\sum_{i \geq 1} p_a(op_i)}$ is a probability distribution normalized over aggregation operators excluding NONE.", "We then calculate the scalar answer loss with the Huber loss (Huber, 1964), given by $J_{scalar} = 0.5 \cdot a^2$ if $a \leq \delta$, and $\delta \cdot a - 0.5 \cdot \delta^2$ otherwise, where $a = |s_{pred} - s|$ and $\delta$ is a hyperparameter.", "Like Neelakantan et al.
(2015), we find this loss is more stable than the squared loss.", "In addition, since a scalar answer implies some aggregation operation, we also define an aggregation loss that penalizes the model for assigning probability mass to the NONE class: J aggr = log( (cid:88) i =1 p a ( op i )) The total loss is then JSA = J aggr + J scalar , where is a scaling hyperparameter.", "As for some examples J scalar can be very large, which leads to unstable model updates, we introduce a cutoff hy-perparameter.", "Then, for a training example where J scalar > cutoff , we set J = 0 to ignore the example entirely, as we noticed this behaviour correlates with outliers.", "In addition, as computation done during training is continuous, while that being done during inference is discrete, we further add a temperature that scales token logits such that p s would output values closer to binary ones.", "Ambiguous answer A scalar answer s that also appears in the table (thus C (cid:54) = ) is ambiguous, as in some cases the question implies aggregation (question 3 in Figure 3), while in other cases a table WIKISQL WIKITQ SQA Logical Form (cid:51) (cid:55) (cid:55) Conversational (cid:55) (cid:55) (cid:51) Aggregation (cid:51) (cid:51) (cid:55) Examples 80654 22033 17553 Tables 24241 2108 982 Table 2: Dataset statistics.", "cell should be predicted (question 4 in Figure 3).", "Thus, in this case we dynamically let the model choose the supervision ( cell selection or scalar answer ) according to its current policy.", "Concretely, we set the supervision to be of cell selection if p a ( op 0 ) S , where 0 < S < 1 is a threshold hyperparameter, and the scalar answer supervision otherwise.", "This follows hard EM (Min et al., 2019), as for spurious programs we pick the most probable one according to the current model.", "We experiment with the following semantic parsing datasets that reason over single tables (see Table 2).", "WIKITQ (Pasupat and Liang, 2015) This dataset consists of complex questions on Wikipedia tables.", "Crowd workers were asked, given a table, to compose a series of complex questions that include comparisons, superlatives, aggregation or arithmetic operation.", "The questions were then veri-fied by other crowd workers.", "SQA (Iyyer et al., 2017) This dataset was constructed by asking crowd workers to decompose a subset of highly compositional questions from WIKITQ, where each resulting decomposed question can be answered by one or more table cells.", "The final set consists of 6 , 066 question sequences ( 2 . 
9 question per sequence on average).", "WIKISQL (Zhong et al., 2017) This dataset focuses on translating text to SQL.", "It was constructed by asking crowd workers to paraphrase a template-based question in natural language.", "Two other crowd workers were asked to verify the quality of the proposed paraphrases.", "As our model predicts cell selection or scalar answers, we convert the denotations for each dataset to (cid:104) question, cell coordinates, scalar answer (cid:105) triples.", "SQA already provides this information (gold cells for each question).", "For WIKISQL and WIKITQ, we only use the denotations.", "Therefore, we derive cell coordinates by matching the denotations against the table contents.", "We fill scalar answer information if the denotation contains a single element that can be interpreted as a float, otherwise we set its value to NaN .", "We drop examples if there is no scalar answer and the denotation can not be found in the table, or if some denotation matches multiple cells.", "We apply the standard BERT tokenizer on questions, table cells and headers, using the same vocabulary of 32k word pieces.", "Numbers and dates are parsed in a similar way as in the Neural Programmer (Neelakantan et al., 2017).", "The official evaluation script of WIKITQ and SQA is used to report the denotation accuracy for these datasets.", "For WIKISQL, we generate the reference answer, aggregation operator and cell coordinates from the reference SQL provided using our own SQL implementation running on the JSON tables.", "However, we find that the answer produced by the official WIKISQL evaluation script is incorrect for approx.", "2% of the examples.", "Throughout this paper we report accuracies against our reference answers, but we explain the differences and also provide accuracies compared to the official reference answers in Appendix A. We start pre-training from BERT-Large (see Appendix B for hyper-parameters).", "We find it beneficial to start the pre-training from a pre-trained standard text BERT model (while randomly initializing our additional embeddings), as this enhances convergence on the held-out set.", "We run both pre-training and fine-tuning on a setup of 32 Cloud TPU v3 cores with maximum sequence length 512.", "In this setup pre-training takes around 3 days and fine-tuning around 10 hours for WIKISQL and WIKITQ and 20 hours for SQA (with the batch sizes from table 12).", "The resource requirements of our model are essentially the same as BERT-large 3 .", "For fine-tuning, we choose hyper-parameters using a black box Bayesian optimizer similar to Google Vizier (Golovin et al., 2017) for WIKISQL and WIKITQ.", "For SQA we use grid-search.", "We discuss the details in Appendix B. 3 https://github.com/google-research/ bert/blob/master/README.md#out-of-memory-issues Model Dev Test Liang et al. (2018) 71.8 72.4 Agarwal et al. (2019) 74.9 74.8 Wang et al. (2019) 79.4 79.3 Min et al. 
(2019) 84.4 83.9 TAPAS 85.1 83.6 TAPAS (fully-supervised) 88.0 86.4 Table 3: WIKISQL denotation accuracy 4 .", "All results report the denotation accuracy for models trained from weak supervision.", "We follow Niven and Kao (2019) and report the median for 5 independent runs, as BERT-based models can degenerate.", "We present our results for WIKISQL and WIKITQ in Tables 3 and 4 respectively.", "Table 3 shows that TAPAS , trained in the weakly supervised setting, achieves close to state-of-the-art performance for WIKISQL ( 83 .", "6 vs 83 .", "9 (Min et al., 2019)).", "If given the gold aggregation operators and selected cell as supervision (extracted from the reference SQL), which accounts as full supervision to TAPAS , the model achieves 86 .", "4 .", "Unlike the full SQL queries, this supervision can be annotated by non-experts.", "For WIKITQ the model trained only from the original training data reaches 42 .", "6 which surpass similar approaches (Neelakantan et al., 2015).", "When we pre-train the model on WIKISQL or SQA (which is straight-forward in our setup, as we do not rely on a logical formalism), TAPAS achieves 48 .", "7 and 48 .", "8 , respectively.", "For SQA, Table 5 shows that TAPAS leads to substantial improvements on all metrics: Improving all metrics by at least 11 points, sequence accuracy from 28 .", "1 to 40 .", "4 and average question accuracy from 55 .", "1 to 67 .", "2 .", "Model ablations Table 6 shows an ablation study on our different embeddings.", "To this end we pretrain and fine-tune models with different features.", "As pre-training is expensive we limit it to 200 , 000 steps.", "For all datasets we see that pre-training on tables and column and row embeddings are the most important.", "Positional and rank embeddings are also improving the quality but to a lesser extent.", "We additionally find that when removing the scalar answer and aggregation losses (i.e., setting JSA =0 ) from TAPAS , accuracy drops for both datasets.", "For WIKITQ, we observe a substantial drop in performance from 29 .", "0 to 23 .", "1 when removing aggregation.", "For WIKISQL performance drops from 84 .", "7 to 82 .", "6 .", "The relatively small decrease for WIKISQL can be explained by the fact that most examples do not need aggregation to be answered.", "In principle, 17% of the examples of 4 As explained in Section 5.2, we report TAPAS numbers comparing against our own reference answers.", "Appendix A contains numbers WRT the official WIKISQL eval script.", "the dev set have an aggregation ( SUM , AVERAGE or COUNT ), however, for all types we find that for more than 98% of the examples the aggregation is only applied to one or no cells.", "In the case of SUM and AVERAGE , this means that most examples can be answered by selecting one or no cells from the table.", "For COUNT the model without aggregation operators achieves 28 .", "2 accuracy (by selecting 0 or 1 from the table) vs. 
66 .", "5 for the model with aggregation.", "Note that 0 and 1 are often found in a special index column.", "These properties of WIKISQL make it challenging for the model to decide whether to apply aggregation or not.", "For WIKITQ on the other hand, we observe a substantial drop in performance from 29 .", "0 to 23 .", "1 when removing aggregation.", "Qualitative Analysis on WIKITQ We manually analyze 200 dev set predictions made by TAPAS on WIKITQ.", "For correct predictions via an aggregation, we inspect the selected cells to see if they match the ground truth.", "We find that 96% of the correct aggregation predictions where also correct in terms of the cells selected.", "We further find that 14% of the correct aggregation predictions had only one cell, and could potentially be achieved by cell selection, with no aggregation.", "We also perform an error analysis and identify the following exclusive salient phenomena:", "(i) 12% are ambiguous ( Name at least two labels that released the group's albums. ), have wrong labels or missing information ;", "(ii) 10% of the cases require complex temporal comparisons which could also not be parsed with a rich formalism such as SQL ( what country had the most cities founded in the 1830's? ) ;", "(iii) in 16% of the cases the gold denotation has a textual value that does not appear in the table, thus it could not be predicted without performing string operations over cell values ;", "(iv) on 10% , the table is too big to fit in 512 tokens ;", "(v) on 13% of the cases TAPAS selected no cells, which suggests introducing penalties for this behaviour ;", "(vi) on 2% of the cases, the answer is the difference between scalars, so it is outside of the model capabilities ( how long did anne churchill/spencer live? ) ;", "(vii) the other 37% of the cases could not be classified to a particular phenomenon.", "Pre-training Analysis In order to understand what TAPAS learns during pre-training we analyze its performance on 10,000 held-out examples.", "We split the data such that the tables in the held-out all text header cell all 71.4 68.8 96.6 63.4 word 74.1 69.7 96.9 66.6 number 53.9 51.7 83.6 53.2 Table 7: Mask LM accuracy on held-out data, when the target word piece is located in the text, table header, cell or anywhere (all) and the target is anything, a word or number.", "data do not occur in the training data.", "Table 7 shows the accuracy of masked word pieces of different types and in different locations.", "We find that average accuracy across position is relatively high (71.4).", "Predicting tokens in the header of the table is easiest (96.6), probably because many Wikipedia articles use instances of the same kind of table.", "Predicting word pieces in cells is a bit harder (63.4) than predicting pieces in the text (68.8).", "The biggest differences can be observed when comparing predicting words (74.1) and numbers (53.9).", "This is expected since numbers are very specific and often hard to generalize.", "The soft-accuracy metric and example (Appendix C) demonstrate, however, that the model is relatively good at predicting numbers that are at least close to the target.", "Limitations TAPAS handles single tables as context, which are able to fit in memory.", "Thus, our model would fail to capture very large tables, or databases that contain multiple tables.", "In this case, the table(s) could be compressed or filtered, such that only relevant content would be encoded, which we leave for future work.", "In addition, although TAPAS can parse 
compositional structures (e.g., question 2 in Figure 3), its expressivity is limited to a form of an aggregation over a subset of table cells.", "Thus, structures with multiple aggregations such as number of actors with an average rating higher than 4 could not be handled correctly.", "Despite this limitation, TAPAS succeeds in parsing three different datasets, and we did not encounter this kind of errors in Section 5.3.", "This suggests that the majority of examples in semantic parsing datasets are limited in their compositionality.", "Semantic parsing models are mostly trained to produce gold logical forms using an encoder-decoder approach (Jia and Liang, 2016; Dong and Lapata,", "2016).", "To reduce the burden in collecting full logical forms, models are typically trained from weak supervision in the form of denotations.", "These are used to guide the search for correct logical forms (Clarke et al., 2010; Liang et al., 2011).", "Other works suggested end-to-end differentiable models that train from weak supervision, but do not explicitly generate logical forms.", "Neelakantan et al. (2015) proposed a complex model that sequentially predicts symbolic operations over table segments that are all explicitly predefined by the authors, while Yin et al. (2016) proposed a similar model where the operations themselves are learned during training.", "Muller et al. (2019) proposed a model that selects table cells, where the table and question are represented as a Graph Neural Network, however their model can not predict aggregations over table cells.", "Cho et al. (2018) proposed a supervised model that predicts the relevant rows, column and aggregation operation sequentially.", "In our work, we propose a model that follow this line of work, with a simpler architecture than past models (as the model is a single encoder that performs computation for many operations implicitly) and more coverage (as we support aggregation operators over selected cells).", "Finally, pre-training methods have been designed with different training objectives, including language modeling (Dai and Le, 2015; Peters et al., 2018; Radford et al., 2018) and masked language modeling (Devlin et al., 2019; Lample and Con-neau, 2019).", "These methods dramatically boost the performance of natural language understanding models (Peters et al., 2018, inter alia ).", "Recently, several works extended BERT for visual question answering, by pre-training over text-image pairs while masking different regions in the image (Tan and Bansal, 2019; Lu et al., 2019).", "As for tables, Chen et al. 
(2019) experimented with rendering a table into natural language so that it can be handled with a pre-trained BERT model.", "In our work we extend masked language modeling for table representations, by masking table cells or text segments.", "In this paper we presented TAPAS , a model for question answering over tables that avoids generating logical forms.", "We showed that TAPAS effectively pre-trains over large scale data of text-table pairs and successfully restores masked words and table cells.", "We additionally showed that the model can fine-tune on semantic parsing datasets, only using weak supervision, with an end-to-end differentiable recipe.", "Results show that TAPAS achieves better or competitive results in comparison to state-of-the-art semantic parsers.", "In future work we aim to extend the model to represent a database with multiple tables as context, and to effectively handle large tables.", "We would like to thank Yasemin Altun, Srini Narayanan, Slav Petrov, William Cohen, Massimo Nicosia, Syrine Krichene, Jordan Boyd-Graber and the anonymous reviewers for their constructive feedback, useful comments and suggestions.", "This work was completed in partial fulfillment for the PhD degree of the first author, which was also supported by a Google PhD fellowship." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "abstain", "method", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "method", "result", "result", "abstain", "objective", "other", "other" ]
[ "Critical to natural language generation is the production of correctly inflected text.", "In this paper, we isolate the task of predicting a fully inflected sentence from its partially lemmatized version.", "Unlike traditional morphological inflection or surface realization, our task input does not provide gold tags that specify what morphological features to realize on each lemmatized word; rather, such features must be inferred from sentential context.", "We develop a neural hybrid graphical model that explicitly reconstructs morphological features before predicting the inflected forms, and compare this to a system that directly predicts the inflected forms without relying on any morphological annotation.", "We experiment on several typologically diverse languages from the Universal Dependencies treebanks, showing the utility of incorporating linguistically-motivated latent variables into NLP models.", "NLP systems are often required to generate grammatical text, e.g., in machine translation, summarization, dialogue, and grammar correction.", "One component of grammaticality is the use of contextually appropriate closed-class morphemes.", "In this work, we study contextual inflection , which has been recently introduced in the CoNLL-SIGMORPHON 2018 shared task (Cotterell et al., 2018) to directly investigate context-dependent morphology in NLP.", "There, a system must inflect partially lemmatized tokens in sentential context.", "For example, in English, the system must reconstruct the correct word sequence two cats are sitting from the partially lemmatized sequence two _cat_ are sitting .", "Among other things, this requires: (1) identifying cat as a noun in this context, (2) recognizing that cat should be inflected as plural to agree with the nearby verb and numeral, and (3) realizing this inflection as the suffix -s .", "Most past work in supervised computational morphology, including the previous CoNLL-SIGMORPHON shared tasks on morphological reinflection (Cotterell et al., 2017), has focused mainly on step (3) above.", "As the task has been introduced into the literature only recently, we provide some background.", "Contextual inflection amounts to a highly constrained version of language modeling.", "Language modeling predicts all words of a sentence from scratch, so the usual training and evaluation metric, perplexity, is dominated by the language model's ability to predict content , which is where most of the uncertainty lies.", "Our task focuses on just the ability to reconstruct certain missing parts of the sentence: inflectional morphemes and their orthographic realization.", "This refocuses the modeling effort from semantic coherence to morphosyntactic coherence, an aspect of language that may take a back seat in current language models (see Linzen et al., 2016; Belinkov et al., 2017).", "Contextual inflection does not perfectly separate grammaticality modeling from content modeling: as illustrated in Tab. 1, mapping two cats _be_ sitting to the fully-inflected two cats were sitting does not require full knowledge of English grammar: the system does not have to predict the required word order nor the required auxiliary verb be , as these are supplied in the input.", "Conversely, this example does still require predicting some content: the semantic choice of past tense is not given by the input and must be guessed by the system (this morphological feature is inherent in the sense of Booij, 1996).", "The primary contribution of this paper is a novel structured neural model for contextual inflection.", "The model first predicts the sequence of morphological 
tags from the partially lemmatized sequence and then uses the predicted tag and lemma to inflect the word.", "We use this model to evince a simple point: models are better off jointly predicting morphological tags from context than directly learning to inflect lemmata from sentential context.", "Indeed, none of the participants in the 2018 shared task jointly predicted tags with the inflected forms.", "Comparing our new model to several competing systems, we show our model has the best performance on the majority of languages.", "We take this as evidence that predicting morphological tags jointly with inflecting is a better method for this task.", "Furthermore, we provide an analysis discussing the role of morphological complexity in model performance.", "Given a language, let $M$ be a set of morphological tags in accordance with the Universal Dependencies annotation (Nivre et al., 2016).", "Each $m \in M$ has the form $m = \langle t, \sigma \rangle$, where $t$ is a part of speech and the slot $\sigma$ is a set of attribute-value pairs that represent morphosyntactic information, such as number, case, tense, gender, person, and others.", "We take $t \in T$, the set of universal parts of speech described by Petrov et al. (2012).", "A sentence consists of a finite word sequence $\mathbf{w}$ (we use boldface for sequence variables).", "For every word $w_i$ in the sequence, there is a corresponding analysis in terms of a morphological tag $m_i \in M$ and a lemma $\ell_i$.", "In general, $w_i$ is determined by the pair $\langle \ell_i, m_i \rangle$ (although $w_i$ can sometimes be computed by concatenating $\ell_i$ with $m_i$-specific affixes, it can also be irregular).", "Using this notation, Cotterell et al. (2018)'s shared task is to predict a sentence $\mathbf{w}$ from its partially lemmatized form $\boldsymbol{\ell}$, inferring $\mathbf{m}$ as an intermediate latent variable.", "Our dataset (Section 3) has all three sequences for each sentence.", "Consider an extreme case when all words are lemmatized (in the case of a partially lemmatized sequence, we still train the model to predict the tags over the entire sequence, but evaluate it only for lemmatized slots).", "We introduce a structured neural model $p(\mathbf{w}, \mathbf{m} \mid \boldsymbol{\ell}) = p(\mathbf{m} \mid \boldsymbol{\ell}) \prod_{i=1}^{n} p(w_i \mid \ell_i, m_i)$ (1).", "In other words, the distribution is over interleaved sequences of one-to-one aligned inflected words and morphological tags, conditioned on a lemmatized sequence, all of length $n$.", "This distribution is drawn as a hybrid (directed-undirected) graphical model (Koller and Friedman, 2009) in Fig. 1.", "We define the two conditional distributions in the model in Sections 2.2 and 2.3, respectively.", "The distribution $p(\mathbf{m} \mid \boldsymbol{\ell})$ is defined to be a conditional random field (CRF; Lafferty et al., 2001).", "In this work, our CRF is a conditional distribution over morphological taggings of an input sequence.", "We define this conditional distribution as $p(\mathbf{m} \mid \boldsymbol{\ell}) = \frac{1}{Z(\boldsymbol{\ell})} \prod_{i=1}^{n} \psi(m_i, m_{i-1}, \boldsymbol{\ell})$ (2), where $\psi(\cdot, \cdot, \cdot) \geq 0$ is an arbitrary potential, $Z(\boldsymbol{\ell})$ normalizes the distribution, and $m_0$ is a distinguished start-of-sequence symbol.", "In this work, we opt for a recurrent neural potential; specifically, we adopt a parameterization similar to the one given by Lample et al. (2016).", "Our potential is computed as follows.", "First, the sequence $\boldsymbol{\ell}$ is encoded into a sequence of word vectors using the strategy described by Ling et al. (2015): word vectors are passed to a bidirectional LSTM (Graves et al., 2005), where the corresponding hidden states are concatenated at each time step.", "We simply refer to the hidden state $\mathbf{h}_i \in \mathbb{R}^d$ as the result of said concatenation at the $i$-th step.", "Using $\mathbf{h}_i$, we can define the potential function as $\psi(m_i, m_{i-1}) = \exp(A_{m_i, m_{i-1}} + \mathbf{o}_{m_i}^{\top} \mathbf{h}_i)$, where $A_{m_i, m_{i-1}}$ is a transition weight matrix and $\mathbf{o}_{m_i} \in \mathbb{R}^d$ is a morphological tag embedding; both are learned."
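A minimal PyTorch sketch of this potential, together with the Viterbi step used later in the greedy decoding pipeline; class and parameter names are illustrative assumptions, and tag index 0 is taken to be the start-of-sequence symbol $m_0$.

```python
import torch
import torch.nn as nn

class CRFTagger(nn.Module):
    """Sketch of the CRF potential psi(m_i, m_{i-1}) = exp(A[m_i, m_{i-1}] + o[m_i]^T h_i).
    Names and the BOS-at-index-0 convention are illustrative, not the authors' code."""

    def __init__(self, num_tags: int, hidden_dim: int):
        super().__init__()
        self.A = nn.Parameter(torch.zeros(num_tags, num_tags))            # transition weights A
        self.o = nn.Parameter(torch.randn(num_tags, hidden_dim) * 0.01)   # tag embeddings o

    def log_potentials(self, h: torch.Tensor) -> torch.Tensor:
        """h: (n, hidden_dim) concatenated BiLSTM states for the lemmatized sentence.
        Returns (n, num_tags, num_tags) log-potentials indexed [i, m_{i-1}, m_i]."""
        emit = h @ self.o.T                               # (n, num_tags): o[m_i]^T h_i
        return self.A.T.unsqueeze(0) + emit.unsqueeze(1)  # add transitions A[m_i, m_{i-1}]

    def viterbi(self, h: torch.Tensor) -> list:
        """First half of the greedy pipeline: m* = argmax_m log p(m | l)."""
        logpsi = self.log_potentials(h)
        score, back = logpsi[0, 0], []                    # previous tag is the BOS symbol
        for i in range(1, len(logpsi)):
            total = score.unsqueeze(1) + logpsi[i]        # [prev, cur] path scores
            best, ptr = total.max(dim=0)
            score, back = best, back + [ptr]
        tags = [int(score.argmax())]
        for ptr in reversed(back):
            tags.append(int(ptr[tags[-1]]))
        return tags[::-1]                                 # m* to feed into the inflector
```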
"The conditional distribution $p(w_i \mid \ell_i, m_i)$ is parameterized by a neural encoder-decoder model with hard attention from Aharoni and Goldberg (2017).", "The model was one of the top performers in the 2016 SIGMORPHON shared task (Cotterell et al., 2016); it achieved particularly high accuracy in the low-resource setting.", "Hard attention is motivated by the observation that alignment between the input and output sequences is often monotonic in inflection tasks.", "In the model, the input lemma is treated as a sequence of characters and encoded using a bidirectional LSTM (Graves and Schmidhuber, 2005), to produce vectors $x_j$ for each character position $j$.", "Next, the word $w_i = \mathbf{c} = c_1 \cdots c_{|w_i|}$ is generated by the decoder character by character: $p(c_j \mid c_{<j}, \ell_i, m_i) = \mathrm{softmax}(W \phi(z_1, \ldots, z_j) + b)$ (3), where $z_j$ is the concatenation of the currently attended input $x_j$ alongside the morphological features $m_i$ and an embedding of the previously generated symbol $c_{j-1}$; finally, $\phi$ is an LSTM over the sequence of $z_j$ vectors.", "The decoder additionally predicts a type of operation (the model can be viewed as a transition system trained over aligned character-level strings to learn sequences of operations, write or step ).", "The distribution in Eq. (3), strung together with the other conditionals, yields a joint distribution over the entire character sequence: $p(\mathbf{c} \mid \ell_i, m_i) = \prod_{j=1}^{|w_i|} p(c_j \mid c_{<j}, \ell_i, m_i)$ (4).", "For instance, to map the lemma talk to its past form talked , we feed in POS=V;Tense=PAST <w> t a l k </w> and train the network to output <w> t a l k e d </w> , where we have augmented the orthographic character alphabet with the feature-attribute pairs that constitute the morphological tag $m_i$.", "We optimize the log-likelihood of the training data with respect to all model parameters.", "As Eq. (1) is differentiable, this is achieved with standard gradient-based methods.", "For decoding we use a greedy strategy where we first decode the CRF, that is, we solve the problem $\mathbf{m}^{\star} = \operatorname{argmax}_{\mathbf{m}} \log p(\mathbf{m} \mid \boldsymbol{\ell})$ using the Viterbi (1967) algorithm.", "We then use this decoded $\mathbf{m}^{\star}$ to generate forms from the inflector.", "Note that finding the one-best string under our neural inflector is intractable, and for this reason we use greedy search.", "Dataset.", "We use the Universal Dependencies v1.2 dataset (Nivre et al., 2016) for our experiments.", "We include all the languages with information on their lemmata and fine-grained grammar tag annotation that also have fasttext embeddings (Bojanowski et al., 2017), which are used for word embedding initialization (we also choose mainly non-Wikipedia datasets to reduce any possible intersection with the data used for FastText model training).", "Evaluation.", "We evaluate our model's ability to predict: (i) the correct morphological tags from the lemma context, and (ii) the correct inflected forms.", "As our evaluation metric, we report 1-best accuracy for both tag and word form prediction.", "Configuration.", "We use word and character embedding dimensionalities of 300 and 100, respectively.", "The hidden state dimensionality is set to 200.", "All models are trained with Adam (Kingma and Ba, 2014), with a learning rate of 0.001 for 20 epochs.", "Baselines.", "We use two baseline systems: (1) the CoNLL-SIGMORPHON 2018 subtask 2 neural encoder-decoder with an attention mechanism (SM; Cotterell et al. (2018)), where the encoder represents a target form context as a concatenation of its lemma, its left and right word forms, their lemmata, and tag representations, and then the decoder generates the target inflected form character by character; and (2) a monolingual version of the best performing system of the shared task (CPH; Kementchedjhieva et al. (2018)) that augments the above encoder-decoder with full (sentence-level) left and right contexts (comprising forms, their lemmata, and morphological tags) as well as predicts morphological tags for a target form as an auxiliary task (this has been shown to improve the model's performance).", "In both cases, the hyperparameters are set as described in Cotterell et al. (2018).", "We additionally evaluate the SIGMORPHON baseline system on prediction of the target form without any information on morphological tags ( DIRECT ).", "Table 2: tag and form accuracy per language for the JOINT (GOLD), JOINT, DIRECT, SM, and CPH systems.", "Tab. 2 presents the accuracy of our best model across all languages (the accuracy numbers are on average higher than those achieved in the CoNLL-SIGMORPHON 2018 subtask 2, since we did not filter out tokens that are typically not inflected, such as articles or prepositions).", "Below we highlight two main lessons from our error analysis that apply to a wider range of generation tasks, e.g., machine translation and dialog systems.", "Directly Predicting Morphology.", "Tab. 2 indicates that all systems that make use of morphological tags outperform the DIRECT baseline on most languages.", "The comparison of our hybrid model with latent morphological tags to the direct form generation baseline in SM suggests that we should be including linguistically-motivated latent variables into models of natural language generation.", "We observe in Tab. 2 that predicting the tag together with the form (joint) often improves performance.", "The most interesting comparison here is with the multi-task CPH method, which includes morphology in the model without joint modeling; our model achieves higher results on 7/10 languages.", "Morphological Complexity Matters.", "We observed that for languages with rich case systems, e.g., the Slavic languages (which exhibit a lot of fusion), the agglutinative Finno-Ugric languages, and Basque, performance is much worse.", "These languages present a broader decision space and often require inferring which morphological categories need to be in agreement in order to make an accurate prediction.", "This suggests that generation in languages with more morphological complexity will be a harder problem for neural models to solve.", "Indeed, this problem is under-explored, as the field of NLP tends to fixate on generating English text, e.g., in machine translation or dialogue system research.", "Error Analysis.", "We focused error analysis on prediction of agreement categories.", "Our analysis of adjective-noun agreement category prediction suggests that our model is able to infer adjective number, gender, and case from its head noun.", "Verb gender, which 
appears only in the past tense of many Slavic languages, seems to be harder to predict.", "Given that the linear distance between the subject and the verb may be longer, we suspect the network struggles to learn longer-distance dependencies, consistent with the findings of Linzen et al. (2016).", "Overall, automatic inference of agreement categories is an interesting problem that deserves more attention, and we leave it for future work.", "We also observe that most uncertainty comes from morphological categories such as noun number, noun definiteness (which is expressed morphologically in Bulgarian), and verb tense, all of which are inherent (Booij, 1996); such categories exist in most languages that exhibit some degree of morphological complexity, and they typically cannot be predicted from sentential context if they do not participate in agreement (unless there is a strong signal within the sentence, such as yesterday , tomorrow , or ago in the case of tense).", "On the other hand, aspect, although being closely related to tense, is well predicted since it is mainly expressed as a separate lexeme.", "But, in general, it is still problematic to make a prediction in languages where aspect is morphologically marked or highly mixed with tense.", "We additionally compared 1-best and 10-best predictions for tags.", "Most mispredictions in the 1-best lists are due to the inherent categories mentioned above (which allow multiple plausible options that can fit the sentential context).", "Indeed, the problem is usually solved by allowing the system to output 10-best lists.", "There, precision@10 is on average 8 points higher than precision@1.", "Finally, our analysis of case category prediction on nouns shows that more common cases such as the nominative, accusative, and genitive are predicted better, especially in languages with fixed word order.", "On the other hand, cases that appear less frequently and in shifting positions (such as the instrumental), as well as those not associated with specific prepositions, are less well predicted.", "In addition, we evaluated the model's performance when all forms are replaced by their corresponding lemmata (as in two cat be sit ).", "For freer word order languages such as Polish or Latin, we observe a substantial drop in performance because most information on inter-word relations and their roles (expressed by means of the case system) is lost.", "The primary evaluation for most contemporary language and translation modeling research is perplexity, BLEU (Papineni et al., 2002), or METEOR (Banerjee and Lavie, 2005).", "Undoubtedly, such metrics are necessary for extrinsic evaluation and comparison.", "However, relatively few studies have focused on intrinsic evaluation of a model's mastery of grammaticality.", "Recently, Linzen et al. 
(2016) investigated the ability of an LSTM language model to capture sentential structure, by evaluating subject-verb agreement with respect to number, and showed that under strong supervision, the LSTM is able to approximate dependencies.", "Taking it from the other perspective, a truer measure of grammatical competence would be a task of mapping a meaning representation to text, where the meaning representation specifies all necessary semantic content (content lemmata, dependency relations, and inherent closed-class morphemes, i.e., semantic features such as noun number, noun definiteness, and verb tense), and the system is to realize this content according to the morphosyntactic conventions of a language, which means choosing word order, agreement morphemes, function words, and the surface forms of all words.", "Such tasks have been investigated to some extent: generating text from tectogrammatical trees (Hajič et al., 2002; Ptáček and Žabokrtský, 2006) or from an AMR graph (Song et al., 2017).", "Belz et al. (2011) organized a related surface realization shared task on mapping unordered and uninflected dependency trees to properly ordered inflected sentences.", "The generated sentences were afterwards assessed by human annotators, making the task less scalable and more time-consuming.", "Although our task is not perfectly matched to grammaticality modeling, the upside is that it is a lightweight task that works directly on text.", "No meaning representation is required.", "Thus, training and test data in any language can be prepared simply by lemmatizing a naturally occurring corpus.", "Finally, as a morphological inflection task, the form generation task is closely related to previous SIGMORPHON shared tasks (Cotterell et al., 2016, 2017).", "There, most neural models achieve high accuracy on many languages at type-level prediction of the form from its lemma and slot.", "The current task is more challenging in that the model has to perform token-level form generation and inherently infer the slot from the contextual environment.", "Our findings are in line with those from the CoNLL-SIGMORPHON 2018 shared task (Cotterell et al., 2018) and provide extra evidence of the utility of morphosyntactic features.", "This work proposed a method for contextual inflection using a hybrid architecture.", "Evaluation over several diverse languages showed consistent improvements over the state of the art.", "Our analysis demonstrated that contextual inflection can be a highly challenging task, and the inclusion of morphological feature prediction is an important element in such a system.", "We also highlighted two types of morphological categories, contextual and inherent, in which the former relies on agreement and the latter comes from a speaker's intention.", "We thank all anonymous reviewers for their comments.", "The first author would like to acknowledge the Google PhD fellowship.", "The second author would like to acknowledge a Facebook Fellowship." ]
[ "abstain", "method", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "objective", "abstain", "objective", "result", "other", "other", "other" ]
[ "Text-based games simulate worlds and interact with players using natural language.", "Recent work has used them as a testbed for autonomous language-understanding agents, with the motivation being that understanding the meanings of words, or semantics, is a key component of how humans understand, reason, and act in these worlds.", "However, it remains unclear to what extent artificial agents utilize semantic understanding of the text.", "To this end, we perform experiments to systematically reduce the amount of semantic information available to a learning agent.", "Surprisingly, we find that an agent is capable of achieving high scores even in the complete absence of language semantics, indicating that the currently popular experimental setup and models may be poorly designed to understand and leverage game texts.", "To remedy this deficiency, we propose an inverse dynamics decoder to regularize the representation space and encourage exploration, which shows improved performance on several games including ZORK I.", "We discuss the implications of our findings for designing future agents with stronger semantic understanding.", "Text adventure games such as ZORK I (Figure 1 (a)) have been a testbed for developing autonomous agents that operate using natural language.", "Since interactions in these games (input observations, action commands) are through text, the ability to understand and use language is deemed necessary and critical to progress through such games.", "Previous work has deployed a spectrum of methods for language processing in this domain, including word vectors (Fulda et al., 2017), recurrent neural networks (Narasimhan et al., 2015; Hausknecht et al., 2020), pre-trained language models (Yao 
et al., 2020), open-domain question answering systems (Ammanabrolu et al., 2020), knowledge graphs (Ammanabrolu and Hausknecht, 2020; Ammanabrolu et al., 2020; Adhikari et al., 2020), and reading comprehension systems (Guo et al., 2020).", "(Work partly done during an internship at Microsoft Research. Project page: https://blindfolded.cs.princeton.edu .)", "Meanwhile, most of these models operate under the reinforcement learning (RL) framework, where the agent explores the same environment in repeated episodes, learning a value function or policy to maximize the game score.", "From this perspective, text games are just special instances of a partially observable Markov decision process (POMDP) $(S, T, A, O, R, \gamma)$, where players issue text actions $a \in A$, receive text observations $o \in O$ and scalar rewards $r = R(s, a)$, and the underlying game state $s \in S$ is updated by the transition $s' = T(s, a)$.", "However, what distinguishes these games from other POMDPs is the fact that the actions and observations are in language space $L$.", "Therefore, a certain level of decipherable semantics is attached to text observations $o \in O \subset L$ and actions $a \in A \subset L$.", "Ideally, these texts not only serve as observation or action identifiers , but also provide clues about the latent transition function $T$ and reward function $R$.", "For example, issuing an action jump based on an observation on the cliff would likely yield a subsequent observation such as you are killed along with a negative reward.", "Human players often rely on their understanding of language semantics to inform their choices, even in games they have never played before, while replacing texts with non-semantic identifiers such as their corresponding hashes (Figure 1 (c)) would likely render games unplayable for people.", "However, would this type of transformation affect current RL agents for such games?", "In this paper, we ask the following question: To what extent do current reinforcement learning agents leverage semantics in text-based games?", "We investigate the Deep Reinforcement Relevance Network (DRRN) (He et al., 2016), a top-performing RL model that uses gated recurrent units (GRU) (Cho et al., 2014) to encode texts.", "We conduct three experiments on a set of interactive fiction games from the Jericho benchmark (Hausknecht et al., 2020) to probe the effect of different semantic representations on the functioning of DRRN.", "These include (1) using just a location phrase as the input observation (Figure 1 (b)), (2) hashing text observations and actions (Figure 1 (c)), and (3) regularizing vector representations using an auxiliary inverse dynamics loss.", "While reducing observations to location phrases leads to decreased scores and enforcing inverse dynamics decoding leads to increased scores on some games, hashing texts to break semantics surprisingly matches or even outperforms the baseline DRRN on almost all games considered.", "This implies current RL agents for text-based games might not be sufficiently leveraging the semantic structure of game texts to learn good policies, and points to the need for developing better experiment setups and agents that have a finer grasp of natural language.", "DRRN Baseline: Our baseline RL agent DRRN (He et al., 2016) learns a Q-network $Q_{\theta}(o, a)$ parametrized by $\theta$.", "The model encodes the observation $o$ and each action candidate $a$ using two separate GRU encoders $f_o$ and $f_a$, and then aggregates the representations to derive the Q-value through an MLP decoder $g$: $Q_{\theta}(o, a) = g(\mathrm{concat}(f_o(o), f_a(a)))$ (1)."
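Concretely, Eq. (1) and the TD objective in Eq. (2) below can be sketched in a few lines of PyTorch; the layer sizes, single-layer GRUs, and detached bootstrap target are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DRRN(nn.Module):
    """Sketch of Eq. (1): Q(o, a) = g(concat(f_o(o), f_a(a))).
    Vocabulary handling and dimensions are illustrative assumptions."""

    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.f_o = nn.GRU(embed_dim, hidden_dim, batch_first=True)  # observation encoder
        self.f_a = nn.GRU(embed_dim, hidden_dim, batch_first=True)  # action encoder
        self.g = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim),
                               nn.ReLU(),
                               nn.Linear(hidden_dim, 1))            # MLP decoder g

    def forward(self, obs_ids: torch.Tensor, act_ids: torch.Tensor) -> torch.Tensor:
        _, h_o = self.f_o(self.embed(obs_ids))   # final GRU state encodes the text
        _, h_a = self.f_a(self.embed(act_ids))
        return self.g(torch.cat([h_o[-1], h_a[-1]], dim=-1)).squeeze(-1)

def td_loss(q: torch.Tensor, q_next_max: torch.Tensor, r: torch.Tensor,
            gamma: float = 0.9) -> torch.Tensor:
    """Eq. (2): (r + gamma * max_a' Q(o', a') - Q(o, a))^2, with a detached target."""
    return ((r + gamma * q_next_max.detach() - q) ** 2).mean()
```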
"For learning $\theta$, tuples $(o, a, r, o')$ of observation, action, reward, and next observation are sampled from an experience replay buffer, and the following temporal difference (TD) loss is minimized: $L_{TD}(\theta) = (r + \gamma \max_{a' \in A} Q_{\theta}(o', a') - Q_{\theta}(o, a))^2$ (2).", "During gameplay, a softmax exploration policy is used to sample an action: $\pi(a \mid o) = \exp(Q_{\theta}(o, a)) / \sum_{a' \in A} \exp(Q_{\theta}(o, a'))$ (3).", "Note that when the action space $A$ is large, (2) and (3) become intractable.", "A valid action handicap (Hausknecht et al., 2020) or a language model (Yao et al., 2020) can be used to generate a reduced action space for efficient exploration.", "For all the modifications below, we use the DRRN with the valid action handicap as our base model.", "Reducing Semantics via Minimizing Observation (MIN-OB): Unlike other RL domains such as video games or robotics control, at each step of text games the (valid) action space is constantly changing, and it reveals useful information about the current state.", "For example, knowing unlock box is valid leaks the existence of a locked box.", "Also, action semantics sometimes indicate an action's value even unconditioned on the state, e.g. pick gold usually seems good.", "Given these, we minimize the observation to only a location phrase $o \mapsto \mathrm{loc}(o)$ (Figure 1 (b)) to isolate the action semantics: $Q^{loc}_{\theta}(o, a) = g(f_o(\mathrm{loc}(o)), f_a(a))$.", "Breaking Semantics via Hashing (HASH): GRU encoders $f_o$ and $f_a$ in the Q-network (1) generally ensure that similar texts (e.g. differing by a single word) are given similar representations, and therefore similar values.", "To study whether such semantic continuity is useful, we break it by hashing observation and action texts.", "Table 1: Final/maximum score of different models (columns: DRRN, MIN-OB, HASH, INV-DY, Max): balances 10/10, 10/10, 10/10, 10/10, 51; deephome 57/66, 8.5/27, 58/67, 57.6/67, 300; detective 290/337, 86.3/350, 290/317, 290/323, 360; dragon -5.0/6, -5.4/3, -5.0/7, -2.7/8, 25; enchanter 20/20, 20/40, 20/30, 20/30, 400; inhumane 21.1/45, 12.4/40, 21.9/45, 19.6/45, 90; library 15.7/21, 12.8/21, 17/21, 16.2/21, 30; ludicorp 12.7/23, 11.6/21, 14.8/23, 13.5/23, 150; omniquest 4.9/5, 4.9/5, 4.9/5, 5.3/10, 50; pentari 26.5/45, 21.7/45, 51.9/60, 37.2/50, 70; zork1 39.4/53, 29/46, 35.5/50, 43.1/87, 350; zork3 0.4/4.5, 0.0/4, 0.4/4, 0.4/4, 7; Avg.Norm .21/.38, .12/.35, .25/.39, .23/.40.", "Specifically, given a hash function from strings to integers $h: L \to \mathbb{Z}$ and a pseudo-random generator $G: \mathbb{Z} \to \mathbb{R}^d$ that turns an integer seed into a random Gaussian vector, a hashing encoder $\bar{f} = G \circ h : L \to \mathbb{R}^d$ can be composed.", "While $f_o$ and $f_a$ in (1) are trainable, $\bar{f}$ is fixed throughout RL, and ensures that two texts that differ by only a word have completely different representations.", "In this sense, hashing breaks semantics and only serves to identify different observations and actions in an abstract MDP problem (Figure 1 (c)): $Q^{hash}_{\theta}(o, a) = g(\bar{f}(o), \bar{f}(a))$."
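The experimental setup later notes that HASH processes text as a tuple of token IDs with Python's built-in hash and seeds PyTorch with the result; the sketch below follows that recipe, with the vector dimension and the example token IDs as assumptions.

```python
import torch

def hash_encode(token_ids: tuple, dim: int = 128) -> torch.Tensor:
    """Fixed hashing encoder f-bar = G o h: hash the token-ID tuple (h: L -> Z),
    then seed a pseudo-random generator to emit a Gaussian vector (G: Z -> R^d)."""
    seed = hash(token_ids) % (2 ** 31)           # Python built-in hash, as in the paper
    gen = torch.Generator().manual_seed(seed)    # deterministic per text, never trained
    return torch.randn(dim, generator=gen)

# Texts differing by a single token get unrelated vectors, breaking semantic continuity:
obs_vec = hash_encode((12, 7, 7, 42))
act_vec = hash_encode((12, 7, 7, 43))
```

Because these vectors are fixed, only the MLP decoder $g$ is trained; the encoder serves purely to identify states and actions.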
"Regularizing Semantics via Inverse Dynamics Decoding (INV-DY): The GRU representations in DRRN, $f_o(o)$ and $f_a(a)$, are only optimized for the TD loss (2).", "As a result, text semantics can degenerate during encoding, and the text representation might arbitrarily overfit to the Q-values.", "To regularize and encourage more game-related semantics to be encoded, we take inspiration from Pathak et al. (2017) and propose an inverse dynamics auxiliary task during RL.", "Given representations of the current and next observations, $f_o(o)$ and $f_o(o')$, we use an MLP $g_{inv}$ to predict the action representation, and a GRU decoder $d$ to decode the action back to text (directly defining an L1/L2 loss between $f_a(a)$ and $g_{inv}(\mathrm{concat}(f_o(o), f_o(o')))$ in the representation space would collapse text representations together).", "The inverse dynamics loss is defined as $L_{inv}(\theta, \phi) = -\log p_d(a \mid g_{inv}(\mathrm{concat}(f_o(o), f_o(o'))))$, where $\phi$ denotes the weights of $g_{inv}$ and $d$, and $p_d(a \mid x)$ is the probability of decoding the token sequence $a$ using the GRU decoder $d$ with initial hidden state $x$.", "To also regularize the action encoding, action reconstruction from $f_a$ is used as an additional loss term: $L_{dec}(\theta, \phi) = -\log p_d(a \mid f_a(a))$.", "An intrinsic reward $r^{+} = L_{inv}(\theta, \phi)$ is also used to explore toward where the inverse dynamics is not yet learned well.", "All in all, the aim of INV-DY is threefold: (1) regularize both action and observation representations to avoid degeneration by decoding back to the textual domain, (2) encourage $f_o$ to encode action-relevant parts of observations, and (3) provide intrinsic motivation for exploration.", "Setup: We train on 12 games from the Jericho benchmark (Hausknecht et al., 2020).", "These human-written interactive fictions are rich, complex, and diverse in semantics.", "For each game, we train DRRN asynchronously on 8 parallel instances of the game environment for $10^5$ steps, using a prioritized replay buffer.", "Following prior practice (Hausknecht et al., 2020), we augment observations with location and inventory descriptions by issuing the 'look' and 'inventory' commands.", "We train three independent runs for each game and report their average score.", "For HASH, we use the Python built-in hash function to process text as a tuple of token IDs, and implement the random vector generator $G$ by seeding PyTorch with the hash value.", "For INV-DY, we use $\lambda_1 = \lambda_2 = 1$.", "Scores: Table 1 reports the final score (the average score of the final 100 episodes during training) and the maximum score seen in each game for the different models (we omit games where DRRN cannot score).", "The average normalized score (raw score divided by game total score) over all games is also reported.", "Compared to the base DRRN, MIN-OB turns out to explore similar maximum scores on most games (except DEEPHOME and DRAGON), but fails to memorize the good experience and reach high episodic scores, which suggests the importance of identifying different observations using language details.", "Most surprisingly, HASH has a lower final score than DRRN on only one game (ZORK I), while on PENTARI it almost doubles the DRRN final score.", "It is also the model with the best average normalized final score across games, which indicates that the DRRN model can perform as well without leveraging any language semantics, but instead simply by identifying different observations and actions with random vectors and memorizing the Q-values.", "Lastly, we observe that on some games (DRAGON, OMNIQUEST, ZORK I) INV-DY can explore high scores that other models cannot.", "Most notably, on ZORK I the maximum score seen is 87 (average of 54, 94, 113), while no run of the other models explores a score of more than 55.", "This might indicate a potential benefit of developing RL agents with more semantic representations.", "Transfer: We also investigate whether representations of different models 
can transfer to a new language environment, which is a potential benefit of learning natural language semantics.", "So we consider the two most similar games in Jericho, ZORK I and ZORK III, fix the language encoders of the different ZORK I models, and re-train the Q-network on ZORK III for 10,000 steps.", "As shown in Figure 2, INV-DY representations can achieve a score around 1, which surpasses the best result of models trained from scratch on ZORK III for 100,000 steps (around 0.4), showing great promise for better gameplay by leveraging language understanding from other games.", "HASH transfer is equivalent to training from scratch, as the representations are not learnt, and a score around 0.3 is achieved.", "Finally, DRRN representations transfer worse than HASH, possibly due to overfitting to the TD loss (2).", "Visualizations: Finally, we use t-SNE (Maaten and Hinton, 2008) to visualize representations of some ZORK I walkthrough states in Figure 3.", "The first 30 walkthrough states (red, score 0-45) are well experienced by the models during exploration, whereas the last 170 states (blue, score 157-350) are unseen (the remaining 150 states in the middle, score 45-157, are omitted as they might be seen by some models but not others).", "We also encircle the subset of states at the location 'living room' for their shared semantics.", "First, we note that the HASH representations for living room states are scattered randomly, unlike those of the other two models with GRU language encoders.", "Further, the base DRRN overfits to the TD loss (2), representing unseen states (blue) in a different subspace from seen states (red) without regard to their semantic similarity.", "INV-DY is able to extrapolate to unseen states and represent them similarly to seen states with shared semantics, which may explain its better performance on this game.", "Game stochasticity: All the above experiments were performed using a fixed game random seed for each game, following prior work (Hausknecht et al., 2020).", "To investigate whether randomness in games affects our conclusions, we run one trial of each game with episode-varying random seeds (randomness includes transition uncertainty, e.g. the thief showing up randomly in ZORK I, and occasional paraphrasing of text observations).", "We find the average normalized scores for the base DRRN, HASH, and INV-DY to be all around 17%, with a performance drop mainly on three stochastic games (DRAGON, ZORK I, ZORK III).", "Notably, the core finding that the base DRRN and HASH perform similarly still holds.", "Intuitively, even though the Q-values would be lower overall with unexpected transitions, RL would still memorize observations and actions that lead to high Q-values.", "At a high level, RL agents for text-based games succeed by (1) exploring trajectories that lead to high scores, and (2) learning representations to stably reach high scores.", "Our experiments show that a semantics-regularized INV-DY model manages to explore higher scores on some games (DRAGON, OMNIQUEST, ZORK I), while the HASH model manages to memorize scores better on other games (LIBRARY, LUDICORP, PENTARI) using just a fixed, random, non-semantic representation.", "This leads us to hypothesize two things.", "First, fixed, stable representations might make Q-learning easier.", "Second, it might be desirable to represent similar texts very differently for better gameplay, e.g. 
the Q-value can be much higher when a key object is mentioned, even if it only adds a few words to a long observation text.", "This motivates future thought into the structural vs. functional use of language semantics in these games.", "Our findings also urge a re-thinking of the popular 'RL + valid action handicap' setup for these games.", "On one hand, RL sets training and evaluation in the same environment, with limited text corpora and sparse, mostly deterministic rewards as the only optimization objective.", "Such a combination easily results in overfitting to the reward system of a specific game (Figure 2), or even just a specific stage of the game (Figure 3).", "On the other hand, the valid action handicap reduces the action set to a small size tractable for memorization, and reduces the language understanding challenge for the RL agent.", "Thus for future research on text-based games, we advocate for more attention towards alternative setups without RL or handicaps (Hausknecht et al., 2019; Yao et al., 2020; Yu et al., 2020).", "Particularly, in an 'RL + no valid action handicap' setting, generating action candidates rather than simply choosing from a set entails more opportunities and challenges with respect to learning grounded language semantics (Yao et al., 2020).", "Additionally, training agents on a distribution of games and evaluating them on a separate set of unseen games would require more general semantic understanding.", "Semantic evaluation of these proposed paradigms is outside the scope of this paper, but we hope it will spark a productive discussion on the next steps toward building agents with stronger semantic understanding.", "Autonomous decision-making agents are potentially impactful in our society, and it is of great ethical consideration to make sure their understanding of the world and their objectives align with humans.", "Humans use natural language to convey and understand concepts as well as inform decisions, and in this work we investigate whether autonomous agents leverage language semantics similarly to humans in the environment of text-based games.", "Our findings suggest that the current generation of agents optimized for reinforcement learning objectives might not exhibit human-like language understanding, a phenomenon we should pay attention to and further study.", "We appreciate helpful suggestions from anonymous reviewers as well as members of the Princeton NLP Group and MSR RL Group." ]
[ "abstain", "abstain", "abstain", "method", "result", "objective", "method", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Identifying sections is one of the critical components of understanding medical information from unstructured clinical notes and developing assistive technologies for clinical note-writing tasks.", "Most state-of-the-art text classification systems require thousands of in-domain text samples to achieve high performance.", "However, collecting in-domain and recent clinical note data with section labels is challenging given the high level of privacy and sensitivity.", "This paper proposes an algorithmic way to improve the task transferability of meta-learning-based text classification in order to address the issue of low-resource target data.", "Specifically, we explore how to make the best use of the source dataset and propose a unique task transferability measure named Normalized Negative Conditional Entropy (NNCE).", "Leveraging the NNCE, we develop strategies for selecting clinical categories and sections from source task data to boost cross-domain meta-learning accuracy.", "Experimental results show that our task selection strategies improve section classification accuracy significantly compared to meta-learning algorithms.", "An important part of Electronic Health Records (EHRs) is the digitized clinical notes that contain the medical and treatment histories of patients.", "The section of a clinical note can be defined as a text segment that clusters consecutive sentences with relevant content of one dimension of a patient's health encounter (Pomares-Quimbaya et al., 2019).", "Clinical note sections, labeled with either headings or subheadings, make the notes well organized and offer improved clinical information extraction (Wang et al., 2018b).", "However, many clinical notes contain narratives that are in an unstructured free-text format (e.g., History of Present Illness described in paragraph form), which makes it challenging to retrieve and utilize this information.", "In the United States, physicians generally spend an excessive amount of time interfacing with EHRs and computerized physician order entry (CPOE) workflows in their aftercare work, resulting in burnout, low job satisfaction, and system-wide inefficiencies (Patel et al., 2018).", "An automated section classifier can play a key role in mitigating this problem.", "In some cases, section classification serves as an end task of automatic report segmentation.", "For example, according to an internal survey we conducted with Amazon Care providers, we found evidence that classifying sentences related to the History of Present Illness from medical encounters can greatly assist providers with their documentation.", "For computer-assisted report generation, understanding clinical notes in an unstructured format is an important data pre-processing step (Gopinath et al., 2020).", "There are some challenges for clinical note section classification in practice.", "First, it is difficult to collect and access a large amount of in-domain data.", "Second, section types and medical contents within a section vary substantially depending on care providers, which makes it hard to utilize open-source datasets.", "Even though some sections exist in multiple different sources, their contents vary across clinical categories.", "For example, the Diagnosis sections for the Nutrition specialty and the Rehabilitation Service specialty vary in the types of content they contain.", "Recently developed neural 
network language technologies capture rich contextual information in sentences.", "Among them, Bidirectional Encoder Representations from Transformers (BERT) achieved significant improvements in multiple Natural Language Processing (NLP) tasks, establishing strong baselines in low-resource scenarios (Devlin et al., 2019).", "However, there remains room for performance improvement because BERT uses source data, i.e., data outside of the in-domain or target-domain data, in an unsupervised training fashion only.", "Another approach for low-resource in-domain NLP tasks is Multi-Task Learning (MTL).", "MTL adopts shared text encoding layers across all tasks, while the top layers are task-specific for each dataset (Liu et al., 2015, 2019).", "The target task with limited data benefits from the knowledge learned from source tasks.", "Instead of MTL, which minimizes the loss of the source tasks, Dou et al. (2019) proposed a model-agnostic meta-learning algorithm that finds optimal model parameters for better adaptation capability to new tasks.", "In classification tasks, Nichol et al. (2018) proposed Reptile, an optimization-based meta-learning algorithm, and achieved comparable accuracy on well-established benchmarks on low-resource image datasets.", "In the present paper, we adopted these methods as strong baselines in our experiments and computed the relative performance improvement of our method.", "Task transferability denotes how easy it is to transfer the representation learned from one task to another task (Tran et al., 2019; Nguyen et al., 2020b).", "It helps discover the relationship between two types of tasks and provides supporting evidence for developing transfer learning strategies.", "Task transferability becomes more useful in realistic situations where the assumption of meta-learning, which is that data of the target task can be drawn from the distribution of the source tasks, does not hold.", "One common example is that there are 'outlier tasks' among the training (source) tasks, which are dissimilar from the testing (target) ones (Venkitaraman et al., 2020).", "For this problem, good selection of relevant source tasks can benefit knowledge transfer to unseen tasks (Zamir et al., 2018; Achille et al., 2019; Nguyen et al., 2020a).", "In clinical section classification, we suppose that how close a source task is to the target task is determined by its specialty and the section types it includes.", "However, few studies of task transferability estimation have discussed the function of each label.", "Thus we propose an information-theoretic metric for task transferability, namely Normalized Negative Conditional Entropy (NNCE).", "The NNCE score is calculated from the classifier of a source task and labeled target samples, without training on the target task, thus saving the expensive computation of model optimization.", "We hypothesize that this score correlates with how well the source data labels (sections) distinguish the target labels.", "Leveraging the NNCE, we explore strategies of source task selection to improve the performance of meta-learning.", "The goal is to make the best use of available data from various clinical specialties for any target task.", "Specifically, we explore two strategies:", "(1) category selection: we select a subset of clinical categories that are relevant to the target task;", "(2) section selection: for a clinical category, we filter out the samples of certain section types that are not relevant to the target task and merge similar 
sections by assigning the same label.", "The category selection is informed directly by the best NNCE scores.", "For section selection, however, there are too many combinations, and it is time-consuming to train models for every possible task and find optimal ones.", "To handle that, we apply a backward selection method for heuristic search.", "The experiment results show that our task selection strategies improve the meta-transfer learning of section classification in low-resource scenarios.", "Our work has the following contributions: We apply meta-learning for clinical section classification at the sentence level in low-resource scenarios, utilizing out-of-domain datasets.", "We propose a task transferability metric for selecting the source tasks relevant to the target tasks by category and section selection, which improves meta-learning performance.", "We evaluate a computationally efficient backward selection method for section selection and show that it leads to better knowledge transfer.", "To the best of our knowledge, this is the first attempt to apply class subset selection to improve task transferability in the NLP field.", "The goal of this paper is to address the automated clinical section classification task in low-resource scenarios.", "Notable early work focused on the extraction of frequency-based features and classified the sections of clinical narratives with traditional machine learning approaches, including Support Vector Machines (Apostolova et al., 2009), Maximum Entropy (MaxEnt) models (Tepper et al., 2012), and Bayesian models (Ganesan and Subotin, 2014).", "Li et al. (2010) framed section mapping as a sequence-labeling problem and adopted a Hidden Markov Model (HMM).", "Dai et al. (2015) formulated the task as token-based classification using the conditional random fields (CRF) model.", "Ni et al. (2015) applied active learning and distant supervision to section classification.", "In the study of Tran et al. (2015), the tasks were performed by an object-based section annotator using an ontology to describe the relationships among the section concepts.", "However, most of the studies above investigate the section classification task for a single domain without exploring how to transfer knowledge from the source dataset to an unseen target domain with limited data.", "Recently, Rosenthal et al. (2019) leveraged data from the medical literature and performed section classification at the sentence level via transfer learning, recurrent neural networks (RNNs), and BERT in scenarios where a limited amount of in-domain training data was available.", "This work performs simple transfer learning and only predicts the sections shared across different clinical categories, whereas in practice most section labels are domain-specific.", "This paper applies meta-learning and task transferability to transfer information learned from the source category to the target category with a new section classification task.", "Meta-learning aims at fast adaptation to new tasks with small amounts of data through learning knowledge from multiple source tasks.", "Among different approaches to meta-learning, one proposal is learning the initialization of a network that is good at adapting to new tasks.", "Dou et al. 
(2019) applied this proposal to the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018a) and explored model-agnostic meta-learning (MAML) (Finn et al., 2017) and its variants, first-order MAML (FO-MAML) and Reptile.", "In this paper, we adopted the Reptile algorithm, which achieved the best performance in (Dou et al., 2019).", "Previous work explores the relationship between classification tasks in terms of task similarity using traditional machine learning algorithms (Thrun and O'Sullivan, 1996; Bakker and Heskes, 2003; Xue et al., 2007; Zhang and Yeung, 2010).", "Other recent work mapped tasks into a vector space (Achille et al., 2019, 2021) to estimate transferability using a non-symmetric distance.", "Vu et al. (2020) further developed the task embeddings approach and applied it to the NLP field to predict the most transferable source tasks.", "Zamir et al. (2018) modeled the underlying structure among different tasks to reduce the amount of labeled training data.", "However, the common theme in all these approaches is that they require fine-tuning the target task and exhaustive optimization of parameters.", "The transferability estimation, unfortunately, is not robust if there are insufficient training samples.", "Moreover, none of these algorithms have discussed label selection, which is crucial for task selection in clinical section classification.", "Tran et al. (2019) investigated the correlation of the label distributions between tasks and proposed a negative conditional entropy (NCE) measure to estimate task transferability.", "This algorithm only requires the source model and the labeled target samples, without fine-tuning on the in-domain data.", "Nguyen et al. (2020b) developed a variant of the NCE measure called the Log Expected Empirical Prediction (LEEP) that denotes the average log-likelihood of the expected empirical predictor.", "Our proposed NNCE is similar in concept to NCE and LEEP.", "However, we apply class subset selection to improve the knowledge transfer.", "Unlike previous work (Manjunatha et al., 2018), which does not use knowledge about the target task while finding the subset, our approach incorporates how the decision boundary of each source label distinguishes the labels of the target task.", "We conduct experiments on the Medical Information Mart for Intensive Care III (MIMIC-III) database (Johnson et al., 2016), a large open-access dataset of de-identified patient records.", "We collected data from 9 different clinical categories of MIMIC-III and randomly picked 200 clinical notes for each.", "There are nearly 1,000 section labels across these categories, and most of them contain very few sentence instances.", "To handle the sparsity, we only keep the section types of each category satisfying the following conditions: the section is among the ten most frequent ones.", "Table 1 shows the number of sentence instances and the lists of selected section types.", "The section list varies across categories, with only a few section labels appearing in more than one domain.", "However, some sections in different categories are still related to each other.", "For example, sentences in the social history section of the 'Discharge Summary Reports' category are similar to the instances in the employment status and previous living situation section of 'Social Work'.", "We adopt Reptile, an optimization-based meta-learning algorithm, as our baseline approach.", "Assume we have a set of source tasks $\{T_1, T_2, \ldots, T_N\}$ from multiple open-source clinical datasets.", "We perform Reptile with these source tasks to learn BERT model parameters that provide a good initialization for fine-tuning on the target task.", "For sampling batches of tasks, we use the same strategy proposed in Dou et al. (2019): the probability of selecting a task is proportional to the size of its dataset.", "The training procedure of Reptile is described in Algorithm 1, where $\alpha$ denotes the learning rate."
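Algorithm 1 itself is not reproduced in the extracted text, so the following is a hedged PyTorch sketch of one Reptile meta-step consistent with the description above: sample a task (with probability proportional to dataset size), take k inner gradient steps, then move the initialization toward the adapted weights, theta <- theta + alpha * (theta_k - theta). The `sample_task` helper and the default hyperparameters are assumptions for illustration.

```python
import copy
import torch

def reptile_step(model, sample_task, inner_steps: int = 5,
                 inner_lr: float = 5e-5, meta_lr: float = 5e-5):
    """One Reptile meta-update. `sample_task` is assumed to draw a source task
    (size-proportional sampling) and return a batch iterator plus a loss function."""
    fast = copy.deepcopy(model)                           # task-specific copy theta_k
    opt = torch.optim.Adam(fast.parameters(), lr=inner_lr)
    batches, loss_fn = sample_task()
    for _, batch in zip(range(inner_steps), batches):     # k inner gradient steps
        opt.zero_grad()
        loss_fn(fast, batch).backward()
        opt.step()
    with torch.no_grad():                                 # theta += alpha * (theta_k - theta)
        for p, p_fast in zip(model.parameters(), fast.parameters()):
            p.add_(meta_lr * (p_fast - p))
```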
"In the baseline meta-learning approach, we train the model with all the available datasets, without data selection, which might suffer from 'outlier' tasks.", "In the next step, we leverage task transferability estimation to select the source tasks that better transfer knowledge to the target task.", "Figure 1: NNCE measure.", "Fig. 1 shows the general framework of NNCE.", "The motivation of the NNCE for estimating task transferability is the idea of evaluating how well the decision boundaries of source labels distinguish the target labels.", "Consider a source task defined on $X \times Y$ and a target task on $X \times Z$.", "We denote the target samples as $D = \{(x_1, z_1), (x_2, z_2), \ldots, (x_n, z_n)\}$ and use $y \in Y = \{1, 2, \ldots, L_S\}$ and $z \in Z = \{1, 2, \ldots, L_T\}$ to represent the label variables of the source and target data, respectively.", "We train a classifier $f$ on the source task which maps the space $X$ to $Y$.", "By feeding the target samples into the source model $f$, we assign predicted source labels to the target samples, so that $\hat{Y} = \{\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n\}$.", "Thus, every target sample is attached with a true label from $Z$ and a predicted label from $Y$, which can be denoted as $(x_i, \hat{y}_i, z_i)$.", "We compute the empirical joint distribution and the empirical marginal distributions by $P(y) = \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\{\hat{y}_i = y\}$, $P(z) = \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\{z_i = z\}$, and $P(y, z) = \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\{\hat{y}_i = y, z_i = z\}$ (1).", "To measure how the source and target labels are related, we handle the class imbalance of the target dataset by normalizing the target class frequency: $\tilde{P}(y, z) = P(y \mid z) = \frac{P(y, z)}{P(z)}$ (2).", "The value of $\tilde{P}(y, z)$ represents the ratio of the target samples in class $z$ that are assigned the predicted label $y$.", "Then we compute $\tilde{P}(z \mid y) = \frac{\tilde{P}(y, z)}{\sum_{z'=1}^{L_T} \tilde{P}(y, z')}$ (3), so that $\sum_{z} \tilde{P}(z \mid y) = 1$.", "We suppose that a good source label $y = l$ that distinguishes the target labels well should have large values of $\tilde{P}(z \mid y = l)$ for some target classes as well as small values for other target classes.", "On the contrary, if the values of $\tilde{P}(z \mid y = l)$ for different target classes $z$ are approximately equal, this label is useless for classifying the target labels.", "Based on that, we define the NNCE to estimate task transferability by $\mathrm{NNCE} = \sum_{y \in Y} P(y) \sum_{z \in Z} \tilde{P}(z \mid y) \log \tilde{P}(z \mid y) = \sum_{y \in Y} P(y) E(y)$ (4), where we use $E(y) = \sum_{z \in Z} \tilde{P}(z \mid y) \log \tilde{P}(z \mid y)$ to estimate how well the decision boundary of a source label classifies the target classes, and NNCE is the overall measurement weighted by the prior $P(y)$.", "The NNCE score is always negative.", "For a determined target task, a larger score indicates better transferability between the source and target tasks."
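The computation in Eqs. (1)-(4) is a few lines of array arithmetic once the source classifier's predictions on the target samples are in hand; below is a hedged NumPy sketch, with function and argument names as illustrative assumptions.

```python
import numpy as np

def nnce(y_pred: np.ndarray, z_true: np.ndarray, num_src: int, num_tgt: int) -> float:
    """Sketch of Eqs. (1)-(4): y_pred holds the source model's predicted labels
    on the n target samples; z_true holds their true target labels."""
    n = len(z_true)
    P_yz = np.zeros((num_src, num_tgt))
    for y, z in zip(y_pred, z_true):
        P_yz[y, z] += 1.0 / n                                          # joint P(y, z), Eq. (1)
    P_y, P_z = P_yz.sum(axis=1), P_yz.sum(axis=0)                      # marginals P(y), P(z)
    Pt_yz = np.divide(P_yz, P_z, out=np.zeros_like(P_yz), where=P_z > 0)       # Eq. (2)
    row = Pt_yz.sum(axis=1, keepdims=True)
    Pt_z_y = np.divide(Pt_yz, row, out=np.zeros_like(Pt_yz), where=row > 0)    # Eq. (3)
    logs = np.zeros_like(Pt_z_y)
    np.log(Pt_z_y, out=logs, where=Pt_z_y > 0)                         # 0 log 0 treated as 0
    E_y = (Pt_z_y * logs).sum(axis=1)                                  # E(y) per source label
    return float((P_y * E_y).sum())                                    # Eq. (4), always <= 0
```

Category selection then reduces to computing this score once per candidate source category and keeping the N categories with the highest scores.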
", "The NNCE is related to the NCE proposed by Tran et al. (2019), and it is equal to NCE if we do not normalize the target class frequency in Equation (2).", "The proof is in Appendix A.", "We suppose that selecting source tasks with good task transferability can benefit the meta-learning of the low-resource target task.", "In clinical section classification tasks, the pattern of the data and the section types vary across categories.", "We therefore propose two approaches for choosing the source tasks: category selection and section selection.", "The procedure of category selection is straightforward.", "Fig. 2 shows a simple example of category selection.", "We compute the NNCE score for each of the source tasks from the different clinical categories.", "Then we pick the N 'best' categories whose tasks achieve the highest NNCE scores.", "This approach helps filter out 'outlier' tasks by removing the clinical categories irrelevant to the target task.", "Section selection is a process of searching for the optimal task for each of the clinical categories.", "It aims to make the best use of the section labels to benefit transferring knowledge to the target task.", "We modify the list of section classes by deleting the instances from useless sections and merging similar ones.", "However, there are too many possible partitions, which leads to high computational costs.", "To reduce the computational complexity, we propose a backward selection method with three operations for heuristic search.", "First, we delete the section $l$ of the source dataset with the smallest value of the empirical marginal distribution $P(y)$: $l = \arg\min_{y \in \mathcal{Y}} P(y)$ (5).", "The motivation behind this operation is that the fewest target samples are tagged with source label $l$, indicating that this section is unrelated to the target category.", "Second, we delete the section $l$ satisfying $l = \arg\min_{y \in \mathcal{Y}} E(y)$ (6).", "From the discussion in Section 4.2 we can conclude that $l$ has the smallest value of $E(y)$, which indicates that the source section $l$ is worst at distinguishing the target sections.", "The third operation aims to find the 'closest' pair of source sections and merge them into one.", "To find such sections $i$, $j$, we adopt the following equation: $i^*, j^* = \arg\min_{i, j,\, i \neq j} \mathrm{JSD}(\widetilde{P}(z \mid y = i) \,\|\, \widetilde{P}(z \mid y = j))$ (7), where $\mathrm{JSD}(\cdot \,\|\, \cdot)$ denotes the Jensen-Shannon divergence (Lin, 1991).", "A small value of $\mathrm{JSD}(\cdot \,\|\, \cdot)$ indicates that $\widetilde{P}(z \mid y = i)$ and $\widetilde{P}(z \mid y = j)$ are distributed closely and the source sections $i$ and $j$ are similar.", "In this case, the decision boundary between the source labels $i$ and $j$ is only trivially helpful for discriminating the target labels.", "We initialize the source task by including all the samples and section labels, and perform a backward selection algorithm to reduce the number of sections iteratively.", "Fig. 3 shows a single step of this process.
", "We apply the NNCE measure with the three operations introduced above to generate NNCE scores and produce no more than three new tasks (different operations may result in the same task).", "Then we compute the NNCE score for each of the new tasks.", "The final picked task at this step is the one that achieves the highest score among the original task and the newly generated ones.", "We keep performing this process until none of the produced tasks improves the NNCE score anymore.", "We carry out experiments with four target tasks of different clinical categories, 'Discharge Summary Report', 'Nursing Progress', 'Rehab Service Progress' and 'Social Work', presented in Table 1.", "For the target task of 'Social Work', we utilize all the other eight categories for pre-training.", "For 'Discharge Summary Report', 'Nursing Progress' and 'Rehab Service Progress', we remove their close categories 'Discharge Summary Addendum', 'Nursing Generic' and 'Rehab Service Evaluation', respectively, and the pre-training is performed with the remaining seven categories.", "For each target category, we split the samples into training and testing sets with a roughly 3:1 ratio by 'SUBJECT_ID', which refers to a unique patient.", "We randomly pick 200/500/1000 samples from each target dataset to simulate low-resource scenarios and perform BERT, MTL, and Reptile for clinical section classification.", "We adopt the PyTorch (version 1.3.0) implementation of BERT (https://github.com/huggingface/pytorch-pretrained-BERT) for our tasks, and the model is initialized with BERT-base.", "The settings of MTL and Reptile are the same as those described in (Dou et al., 2019).", "We cap the word sequence length at 80, which covers more than 99% of the sentences.", "We use Adam (Kingma and Ba, 2015) for optimization and a batch size of 32 for all the experiments.", "For both MTL and Reptile, the learning rate is 5e-5, and the number of pre-training epochs is 5.", "For Reptile, we set the inner update step $k$ to 5, the inner learning rate to 5e-5, and the number of sampled tasks in each step to 8.", "For BERT fine-tuning, we train the model with a learning rate of 2e-5 for 25 epochs.", "The classification accuracy results of BERT, MTL, and meta-learning for the different tasks are shown in Table 2.", "From the table, we find that both MTL and Reptile improve the performance on the low-resource target task, while Reptile outperforms multi-task learning and achieves the best results.", "The comparison between BERT and Reptile demonstrates that the meta-learning approach can benefit the fine-tuning of the target task.", "The improvement is more significant when we perform the classification task with fewer target samples.", "Fig. 4 shows the convergence of accuracy of BERT fine-tuning with and without Reptile pre-training.
", "The curves in these figures suggest that meta-learning has the advantage of fast convergence and adapts to the new task more quickly.", "We also discover that after 15 epochs of fine-tuning, the performance is not sensitive to the epoch number.", "For any selected target category, we pre-train the model on each of the remaining categories and fine-tune with 200 target samples to obtain the transfer learning accuracies.", "We compute the NNCE scores for the different source tasks and evaluate the NNCE by the Pearson correlation coefficients between these scores and the corresponding adaptation accuracies.", "We also report the correlations using the NCE scores for comparison.", "Table 3 compares the Pearson correlation coefficients of NCE (Tran et al., 2019) and NNCE: 'Discharge Summary Report' 0.671 vs. 0.676, 'Nursing Progress' 0.772 vs. 0.807, 'Rehab Service Progress' 0.918 vs. 0.922, and 'Social Work' 0.479 vs. 0.703; the correlations between the NNCE scores and transfer learning accuracy are statistically significant with p < 0.05.", "By comparing the correlation coefficients presented in Table 3, we find that NNCE achieves higher correlations than NCE for all the tasks and is better at task transferability estimation.", "We set the target sample size to 200 and explore how the task selection strategies (category selection and section selection) benefit meta-learning.", "Table 4 shows the results of the meta-learning approach with category selection.", "We report the classification accuracies of picking the N = 2/4/6 categories with the highest NNCE scores and compare with including all the source categories.", "The results reveal that category selection improves the meta-learning performance, and there is an optimal value of N for each task.", "If N is too large, it might include 'outlier' tasks that degrade the performance.", "If N is too small, it loses the benefit of utilizing large amounts of source data.", "We also perform category selection with NCE to compare it with NNCE.", "The underlined tasks in Table 4 indicate that different subsets of categories are selected if we replace NNCE with NCE.", "For all these tasks, NNCE achieves higher accuracies.", "Please see Appendix C for detailed results for the different target categories.", "We discuss whether section selection benefits meta-learning in two scenarios.", "First, we compare the performance of Reptile with and without section selection using all the source categories.", "In the second scenario, we repeat the comparison but use only the best subset of source categories determined in Table 4.", "The comparisons presented in Table 5 indicate that adopting section selection improves the performance.", "However, the improvement is not statistically significant for most tasks.", "The average relative gains over Reptile brought by category selection and section selection are 1.5% and 0.8%, respectively, which indicates that category selection contributes more to improving the meta-learning.", "We also find that combining both category and section selection results in better performance than using each of them independently for most tasks.", "We show an example in Table 6 to further illustrate section selection.", "The source and target categories are 'Rehab Service Progress' and 'Nursing Progress', and the original section types are presented.", "The labels in blue are the selected sections, and the merged ones are displayed inside brackets.
", "We observe that the common section types 'plan' and 'assessment' are kept.", "Although the content of the same section type differs across categories, there are similarities between their utterance patterns.", "The source sections in black are irrelevant to any of the target sections, so they are removed.", "The merged sections 'balance' and 'gait' are closely related concepts, both of which describe the patient's progress in mobility.", "This example shows that the selection procedure extracts the information of the source sections related to the target sections, which benefits the knowledge transfer.", "In this paper, we explored clinical section classification with limited in-domain data.", "We applied a meta-learning algorithm utilizing multiple out-of-domain clinical datasets, improving the classification accuracy and adaptation speed.", "We proposed a Normalized Negative Conditional Entropy measure to estimate task transferability and leveraged it to select the clinical categories and sections related to the target task that best improve knowledge transfer.", "In addition, we examined a backward selection method to reduce the computational complexity of section selection.", "Our study suggests that both category selection and section selection outperform the baseline meta-learning approach, and combining the two strategies results in better performance than adopting each of them independently.", "Future work will look to develop a joint optimization of category selection and section selection.", "We also plan to apply our approach to other styles of text data.", "For example, section classification on spoken utterances of doctor-patient conversations is an exciting extension of the present work, which we plan to explore (Krishna et al., 2021).", "Finally, we will continue to apply the proposed method to other text processing applications, e.g., medical information retrieval (Goeuriot et al., 2016)." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "result", "objective", "objective", "result", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "method", "result", "abstain", "method", "abstain", "abstain" ]
[ "Weakly supervised text classification based on a few user-provided seed words has recently attracted much attention from researchers.", "Existing methods mainly generate pseudo-labels in a context-free manner (e.g., string matching), therefore, the ambiguous, context-dependent nature of human language has been long overlooked.", "In this paper, we propose a novel framework ConWea, providing contextualized weak supervision for text classification.", "Specifically, we leverage contextualized representations of word occurrences and seed word information to automatically differentiate multiple interpretations of the same word, and thus create a contextualized corpus.", "This contextualized corpus is further utilized to train the classifier and expand seed words in an iterative manner.", "This process not only adds new contextualized, highly label-indicative keywords but also disambiguates initial seed words, making our weak supervision fully contextualized.", "Extensive experiments and case studies on real-world datasets demonstrate the necessity and significant advantages of using contextualized weak supervision, especially when the class labels are fine-grained.", "Weak supervision in text classification has recently attracted much attention from researchers, because it alleviates the burden of human experts on annotating massive documents, especially in specific domains.", "One of the popular forms of weak supervision is a small set of user-provided seed words for each class.", "Typical seed-driven methods follow an iterative framework generate pseudo-labels using some heuristics, learn the mapping between documents and classes, and expand the seed set (Agichtein and Gravano, 2000; Riloff et al., 2003; Kuipers et al., 2006; Tao et al., 2015; Meng et al., 2018).", "Most of, if not all, existing methods generate pseudo-labels in a context-free manner, therefore, the ambiguous, context-dependent nature of human languages has been long overlooked.", "Suppose the user gives penalty as a seed word for the sports class, as shown in Figure 1. The word penalty has at least two different meanings: the penalty in sports -related documents and the fine or death penalty in law -related documents.", "If the pseudo-label of a document is decided based only on the frequency of seed words, some documents about law may be mislabelled as sports .", "More importantly, such errors will further introduce wrong seed words, thus being propagated and amplified over the iterations.", "In this paper, we introduce contextualized weak supervision to train a text classifier based on user-provided seed words.", "The contextualized here is reflected in two places: the corpus and seed words.", "Every word occurrence in the corpus may be interpreted differently according to its context; Every seed word, if ambiguous, must be resolved according to its user-specified class.", "In this way, we aim to improve the accuracy of the final text classifier.", "We propose a novel framework ConWea, as illustrated in Figure 1. 
 It leverages contextualized representation learning techniques, such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), together with user-provided seed information to first create a contextualized corpus.", "This contextualized corpus is further utilized to train the classifier and expand seed words in an iterative manner.", "During this process, contextualized seed words are introduced by expanding and disambiguating the initial seed words.", "Specifically, for each word, we develop an unsupervised method to adaptively decide its number of interpretations, and accordingly, group all its occurrences based on their contextualized representations.", "We design a principled comparative ranking method to select highly label-indicative keywords from the contextualized corpus, leading to contextualized seed words.", "We repeat the iterative classification and seed word expansion process until convergence.", "To the best of our knowledge, this is the first work on contextualized weak supervision for text classification.", "It is also worth mentioning that our proposed framework is compatible with almost any contextualized representation learning model and text classification model.", "Our contributions are summarized as follows: We propose a novel framework enabling contextualized weak supervision for text classification.", "We develop an unsupervised method to automatically group word occurrences of the same word into an adaptive number of interpretations based on contextualized representations and user-provided seed information.", "We design a principled ranking mechanism to identify words that are discriminative and highly label-indicative.", "We have performed experiments on real-world datasets for both coarse- and fine-grained text classification tasks.", "The results demonstrate the superiority of using contextualized weak supervision, especially when the labels are fine-grained.", "Our code is made publicly available at GitHub (https://github.com/dheeraj7596/ConWea).", "Problem Formulation.", "The input of our problem contains (1) a collection of n text documents $D = \{D_1, D_2, \ldots, D_n\}$ and (2) m target classes $C = \{C_1, C_2, \ldots, C_m\}$ and their seed words $S = \{S_1, S_2, \ldots, S_m\}$.
, S m } .", "We aim to build a high-quality 1 https://github.com/dheeraj7596/ConWea document classifier from these inputs, assigning class label C j C to each document D i D .", "Note that, all these words could be upgraded to phrases if phrase mining techniques (Liu et al., 2015; Shang et al., 2018) were applied as preprocessing.", "In this paper, we stick to the words.", "Framework Overview.", "We propose a framework, ConWea, enabling contextualized weak supervision.", "Here, contextualized is reflected in two places: the corpus and seed words.", "Therefore, we have developed two novel techniques accordingly to make both contextualizations happen.", "First, we leverage contextualized representation learning techniques (Peters et al., 2018; Devlin et al., 2019) to create a contextualized corpus.", "We choose BERT (Devlin et al., 2019) as an example in our implementation to generate a contextualized vector of every word occurrence.", "We assume the user-provided seed words are of reasonable quality the majority of the seed words are not ambiguous, and the majority of the occurrences of the seed words are about the semantics of the user-specified class.", "Based on these two assumptions, we are able to develop an unsupervised method to automatically group word occurrences of the same word into an adaptive number of interpretations, harvesting the contextualized corpus.", "Second, we design a principled comparative ranking method to select highly label-indicative keywords from the contextualized corpus, leading to contextualized seed words.", "Specifically, we start with all possible interpretations of seed words and train a neural classifier.", "Based on the predictions, we compare and contrast the documents belonging to different classes, and rank contextualized words based on how label-indicative, frequent, and", "unusual these words are.", "During this process, we eliminate the wrong interpretations of initial seed words and also add more highly label-indicative contextualized words.", "This entire process is visualized in Figure 1. We denote the number of iterations between classifier training and seed word expansion as T , which is the only hyper-parameter in our framework.", "We discuss these two novel techniques in detail in the following sections.", "To make our paper self-contained, we will also brief the pseudo-label generation and document classifiers.", "We leverage contextualized representation techniques to create a contextualized corpus.", "The key objective of this contextualization is to disambiguate different occurrences of the same word into several interpretations.", "We treat every word separately, so in the rest of this section, we focus on a given word w .", "Specifically, given a word w , we denote all its occurrences as w 1 , . . . 
", "Contextualized Representation.", "First, we obtain a contextualized vector representation $b_{w_i}$ for each $w_i$.", "Our proposed method is compatible with almost any contextualized representation learning model.", "We choose BERT (Devlin et al., 2019) as an example in our implementation to generate a contextualized vector for each word occurrence.", "In this contextualized vector space, we use the cosine similarity to measure the similarity between two vectors.", "Two word occurrences $w_i$ and $w_j$ of the same interpretation are expected to have a high cosine similarity between their vectors $b_{w_i}$ and $b_{w_j}$.", "For ease of computation, we normalize all contextualized representations into unit vectors.", "Choice of Clustering Methods.", "We model the word occurrence disambiguation problem as a clustering problem.", "Specifically, we propose to use the K-Means algorithm (Jain and Dubes, 1988) to cluster all contextualized representations $b_{w_i}$ into K clusters, where K is the number of interpretations.", "We prefer K-Means because (1) the cosine similarity and Euclidean distance are equivalent for unit vectors and (2) it is fast, and we are clustering a significant number of times.", "Automated Parameter Setting.", "We choose the value of K purely based on a similarity threshold $\tau$.", "$\tau$ is introduced to decide whether two clusters belong to the same interpretation, by checking whether the cosine similarity between their cluster center vectors is greater than $\tau$.", "Intuitively, we should keep increasing K until there exist no two clusters with the same interpretation.", "Therefore, we choose K to be the largest number such that the similarity between any two cluster centers is no more than $\tau$: $K = \max \{k \mid \cos(c_i, c_j) \leq \tau, \ \forall\, 1 \leq i < j \leq k\}$, where $c_i$ refers to the i-th cluster center vector after clustering all contextualized representations into k clusters.", "In practice, K is usually no more than 10.", "So we increase K gradually until the constraint is violated (a code sketch of this procedure follows this record).", "We pick $\tau$ based on user-provided seed information instead of hand-tuning.", "As mentioned, we make two majority assumptions: (1) for any seed word, the majority of its occurrences follow the interpretation intended by the user; and (2) the majority of the seed words are not ambiguous, i.e., they only have one interpretation.", "Therefore, for each seed word s, we take the median of the pairwise cosine similarities between its occurrences, denoted $m(s)$.", "Then, we take the median of these medians over all seed words as $\tau$.", "Mathematically, $\tau = \mathrm{median}(\{m(s) \mid s \in S\})$ (3).", "The nested median makes the choice of $\tau$ safe and robust to outliers.", "For example, consider the word 'windows' in the 20Newsgroup corpus.", "In fact, the word 'windows' has two interpretations in the 20Newsgroup corpus: one represents an opening in the wall and the other is an operating system.", "We first compute the pairwise similarities between all its occurrences and plot the histogram as shown in Figure 2(a).", "From this plot, we can see that its median value is about 0.7.", "We apply the same procedure for all seed words and obtain $\tau$ following Equation 3.
 $\tau$ is calculated to be 0.82.", "Based on this value, we gradually increase K for 'windows' and end up with K = 2.", "We visualize its K-Means clustering results using t-SNE (Maaten and Hinton, 2008) in Figure 2(b).", "Similar results can be observed for the word 'penalty', as shown in Figure 2(c).", "These examples demonstrate how our document contextualization works for each word.", "In practice, to make it more efficient, one can subsample the occurrences instead of enumerating all pairs in a brute-force manner.", "Contextualized Corpus.", "The interpretation of each occurrence of w is decided by the cluster-ID to which its contextualized representation belongs.", "Specifically, given each occurrence $w_i$, the word w is replaced in the corpus as follows: $w_i = w$ if $K = 1$; otherwise $w_i = w\$j$, where $j = \arg\max_{j \in \{1, \ldots, K\}} \cos(b_{w_i}, c_j)$ (4).", "By applying this to all words and their occurrences, the corpus is contextualized.", "The pseudo-code for corpus contextualization is shown in Algorithm 1.", "We generate pseudo-labels for the unlabeled contextualized documents and train a classifier based on these pseudo-labels, similar to many other weakly supervised methods (Agichtein and Gravano, 2000; Riloff et al., 2003; Kuipers et al., 2006; Tao et al., 2015; Meng et al., 2018).", "These two parts are not the focus of this paper.", "We briefly introduce them to make the paper self-contained.", "Pseudo-Label Generation.", "There are several ways to generate pseudo-labels from seed words.", "As proof of concept, we employ a simple but effective method based on counting.", "Each document is assigned the label whose aggregated term frequency of seed words is maximum.", "Let $\mathrm{tf}(w, d)$ denote the term frequency of a contextualized word w in the contextualized document d, and let $S_c$ represent the set of seed words of class c; the document d is assigned a label $l(d)$ as follows: $l(d) = \arg\max_{l} \sum_{s_i \in S_l} \mathrm{tf}(s_i, d)$ (5).", "Document Classifier.", "Our framework is compatible with any text classification model.", "We use Hierarchical Attention Networks (HAN) (Yang et al., 2016) as an example in our implementation.", "HAN considers the hierarchical structure of documents (document → sentences → words) and includes an attention mechanism that finds the most important words and sentences in a document while taking the context into consideration.", "There are two levels of attention: word-level attention identifies the important words in a sentence, and sentence-level attention identifies the important sentences in a document.", "The overall architecture of HAN is shown in Figure 3.
 We train a HAN model on the contextualized corpus with the generated pseudo-labels.", "The predicted labels are used in seed expansion and disambiguation.", "Seed Expansion.", "Given contextualized documents and their predicted class labels, we propose to rank contextualized words and add the top few words into the seed word sets.", "The core element of this process is the ranking function.", "An ideal seed word s of label l is an unusual word that appears only in the documents belonging to label l, with significant frequency.", "Hence, for a given class $C_j$ and a word w, we measure its ranking score based on the following three aspects.", "Label-Indicative: since our pseudo-label generation follows the presence of seed words in a document, ideally, the posterior probability of a document belonging to class $C_j$ after observing the presence of word w (i.e., $P(C_j \mid w)$) should be very close to 100%.", "Therefore, we use $P(C_j \mid w)$ as our label-indicative measure: $\mathrm{LI}(C_j, w) = P(C_j \mid w) = \frac{f_{C_j, w}}{f_{C_j}}$, where $f_{C_j}$ refers to the total number of documents that are predicted as class $C_j$, and among them, $f_{C_j, w}$ documents contain the word w.", "All these counts are based on the prediction results on the input unlabeled documents.", "Frequent: ideally, a seed word s of label l appears in the documents belonging to label l with significant frequency.", "To measure the frequency score, we first compute the average frequency of the word w in all the documents belonging to label l.", "Since the average frequency is unbounded, we apply the tanh function to scale it, resulting in the frequency score $F(C_j, w) = \tanh\!\left(\frac{f_{C_j}(w)}{f_{C_j}}\right)$.", "Here, different from $f_{C_j, w}$ defined earlier, $f_{C_j}(w)$ is the frequency of word w in documents that are predicted as class $C_j$.", "Unusual: we want our highly label-indicative and frequent words to be unusual.", "To incorporate this, we consider the inverse document frequency (IDF).", "Let n be the number of documents in the corpus D and $f_{D, w}$ the document frequency of word w; the IDF of a word w is computed as $\mathrm{IDF}(w) = \log\!\left(\frac{n}{f_{D, w}}\right)$.", "Similar to previous work (Tao et al., 2015), we combine these three measures using the geometric mean, resulting in the ranking score $R(C_j, w)$ of a word w for a class $C_j$ (a sketch of this ranking is included in the code following this record).", "Seed Disambiguation.", "While the majority of user-provided seed words are nice and clean, some of them may have multiple interpretations in the given corpus.", "We propose to disambiguate them based on the ranking.", "We first consider all possible interpretations of an initial seed word, generate the pseudo-labels, and train a classifier.", "Using the classified documents and the ranking function, we rank all possible interpretations of the same initial seed word.", "Because the majority of the occurrences of a seed word are assumed to belong to the user-specified class, the intended interpretation shall be ranked the highest.", "Therefore, we retain only the top-ranked interpretation of each seed word.", "After this step, we have fully contextualized our weak supervision, including the initial user-provided seeds.", "In this section, we evaluate our framework and the compared methods on coarse- and fine-grained text classification tasks under the weakly supervised setting.", "Following previous work (Tao et al., 2015; Meng et al., 2018), we use two news datasets in our experiments.", "The dataset statistics are provided in Table 1.
 Here are some details.", "The New York Times (NYT): the NYT dataset contains news articles written and published by The New York Times.", "These articles are classified into 5 wide genres (e.g., arts, sports) and 25 fine-grained categories (e.g., dance, music, hockey, basketball).", "The 20 Newsgroups (20News): the 20News dataset (http://qwone.com/jason/20Newsgroups/) is a collection of newsgroup documents partitioned widely into 6 groups (e.g., recreation, computers) and 20 fine-grained classes (e.g., graphics, windows, baseball, hockey).", "We perform coarse- and fine-grained classifications on the NYT and 20News datasets.", "The NYT dataset is imbalanced in both fine-grained and coarse-grained classifications.", "20News is nearly balanced in fine-grained classification but imbalanced in coarse-grained classification.", "Being aware of these facts, we adopt micro- and macro-F1 scores as evaluation metrics.", "We compare our framework with a wide range of methods described below.", "IR-TF-IDF treats the seed word set for each class as a query.", "The relevance of a document to a label is computed by the aggregated TF-IDF values of its respective seed words.", "The label with the highest relevance is assigned to each document.", "Dataless (Chang et al., 2008) uses only label surface names as supervision and leverages Wikipedia to derive vector representations of labels and documents.", "Each document is labeled based on the document-label similarity.", "Word2Vec first learns word vector representations (Mikolov et al., 2013) for all terms in the corpus and derives label representations by aggregating the vectors of the respective seed words.", "Finally, each document is labeled with the most similar label based on cosine similarity.", "Doc2Cube (Tao et al., 2015) considers label surface names as the seed set and performs multidimensional document classification by learning dimension-aware embeddings.", "WeSTClass (Meng et al., 2018) leverages seed information to generate pseudo documents and refines the model through a self-training module that bootstraps on real unlabeled documents.", "We denote our framework as ConWea, which includes contextualizing the corpus, disambiguating the seed words, and iterative classification & keyword expansion.", "Besides, we have three ablated versions.", "ConWea-NoCon refers to the variant of ConWea trained without the contextualization of the corpus.", "ConWea-NoSeedExp is the variant of ConWea without the seed expansion module.", "ConWea-WSD refers to the variant of ConWea with the contextualization module replaced by the Lesk algorithm (Lesk, 1986), a classic word sense disambiguation (WSD) algorithm.", "We also present the results of HAN-Supervised under the supervised setting for reference.", "We use an 80-10-10 train-validation-test split and report the test set results for it.", "All weakly supervised methods are evaluated on the entire datasets.", "We use pre-trained BERT-base-uncased to obtain contextualized word representations.", "We follow Devlin et al. (2019) and concatenate the averaged word-piece vectors of the last four layers.
", "The seed words are obtained as follows: we asked 5 human experts to nominate 5 seed words per class, and then considered the majority words (i.e., > 3 nominations) as our final set of seed words.", "For every class, we mainly use the label surface name as seed words.", "For some multi-word class labels (e.g., international business), we have multiple seed words, but never more than four per class.", "The same seed words are utilized for all compared methods for fair comparison.", "For ConWea, we set T = 10.", "For any method using word embeddings, we set their dimension to 100.", "We use the public implementations of WeSTClass (https://github.com/yumeng5/WeSTClass) and Dataless (https://cogcomp.org/page/software_view/Descartes) with the hyper-parameters mentioned in their original papers.", "We summarize the evaluation results of all methods in Table 2.", "As one can observe, our proposed framework achieves the best performance among all the compared weakly supervised methods.", "We discuss the effectiveness of ConWea as follows.", "Our proposed framework ConWea outperforms all the other methods with significant margins.", "By contextualizing the corpus and resolving the interpretation of seed words, ConWea achieves inspiring performance, demonstrating the necessity and the importance of using contextualized weak supervision.", "We observe that in fine-grained classification, the advantages of ConWea over other methods are even more significant.", "This can be attributed to the contextualization of the corpus and seed words.", "Once the corpus is contextualized properly, the subtle ambiguity between words is a drawback for other methods, whereas ConWea can distinguish them and predict them correctly.", "The comparison between ConWea and the ablation method ConWea-NoSeedExp demonstrates the effectiveness of our seed expansion.", "For example, for fine-grained labels on the 20News dataset, the seed expansion improves the micro-F1 score from 0.58 to 0.65.", "The comparison between ConWea and the two ablation methods ConWea-NoCon and ConWea-WSD demonstrates the effectiveness of our contextualization.", "Our contextualization, building upon (Devlin et al., 2019), is adaptive to the input corpus, without requiring any additional human annotations.", "However, WSD methods (e.g., Lesk, 1986) are typically trained for a general domain.", "If one wants to apply WSD to some specific corpus, additional annotated training data might be required to reach performance similar to ours, which defeats the purpose of a weakly supervised setting.", "Therefore, we believe that our contextualization module has its unique advantages.", "Our experimental results further confirm the above reasoning empirically.", "For example, for coarse-grained labels on the 20News dataset, the contextualization improves the micro-F1 score from 0.53 to 0.62.", "We observe that ConWea performs quite close to supervised methods, for example, on the NYT dataset.", "This demonstrates that ConWea is quite effective in closing the performance gap between the weakly supervised and supervised settings.", "The only hyper-parameter in our algorithm is T, the number of iterations of iterative expansion & classification.", "We conduct experiments to study the effect of the number of iterations on performance.", "The plot of performance w.r.t. the number of iterations is shown in Figure 4.
 We observe that the performance increases initially and gradually converges after 4 or 5 iterations.", "We observe that after the convergence point, the expanded seed words remain almost unchanged.", "While there is some fluctuation, a reasonably large T, such as T = 10, is a good choice.", "We vary the number of seed words per class and plot the F1 score in Figure 5. One can observe that, in general, the performance increases as the number of seed words increases.", "There is a slightly different pattern on the 20News dataset when the labels are fine-grained.", "We conjecture that it is caused by the subtlety of seed words in fine-grained cases: additional seed words may bring some noise.", "Overall, three seed words per class are enough for reasonable performance.", "We present a case study to showcase the power of contextualized weak supervision.", "Specifically, we investigate the differences between the expanded seed words in the plain corpus and the contextualized corpus over iterations.", "Table 3 shows a column-by-column comparison for the class For Sale on the 20News dataset.", "The class For Sale refers to documents advertising goods for sale.", "Starting with the same seed sets in both types of corpora, we observe in Table 3 that, in the second iteration, 'space' becomes part of the expanded seed set in the plain corpus.", "Here, 'space' has two interpretations: one stands for the physical universe beyond the Earth and the other is an area of land.", "This error gets propagated and amplified over the iterations, further introducing wrong seed words like 'nasa', 'shuttle' and 'moon', related to its first interpretation.", "The seed set for the contextualized corpus addresses this problem and adds only the words with appropriate interpretations.", "Also, one can see that the initial seed word 'offer' has been disambiguated as 'offer$0'.", "We review the literature about (1) weak supervision for text classification, (2) contextualized representation learning techniques, (3) document classifiers, and (4) word sense disambiguation.", "Weak supervision has been studied for building document classifiers in various forms, including hundreds of labeled training documents (Tang et al., 2015; Miyato et al., 2016; Xu et al., 2017), class/category names (Song and Roth, 2014; Tao et al., 2015; Li et al., 2018), and user-provided seed words (Meng et al., 2018; Tao et al., 2015).", "In this paper, we focus on user-provided seed words as the source of weak supervision.", "Along this line, Doc2Cube (Tao et al., 2015) expands label keywords from label surface names and performs multidimensional document classification by learning dimension-aware embeddings; PTE (Tang et al., 2015) utilizes both labeled and unlabeled documents to learn text embeddings specifically for a task, which are later fed to logistic regression classifiers for classification; Meng et al. (2018) leverage seed information to generate pseudo documents and introduce a self-training module that bootstraps on real unlabeled data for model refinement.
", "This method is later extended to handle hierarchical classification based on a pre-defined label taxonomy (Meng et al., 2019).", "However, all these weak supervision methods operate in a context-free manner.", "Here, we propose to use contextualized weak supervision.", "Contextualized word representations originated from machine translation (MT).", "CoVe (McCann et al., 2017) generates contextualized representations for a word based on pre-trained MT models.", "More recently, ELMo (Peters et al., 2018) leverages neural language models to replace MT models, which removes the dependency on massive parallel texts and takes advantage of nearly unlimited raw corpora.", "Many models leveraging language modeling to build sentence representations (Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019) emerged almost at the same time.", "Language models have also been extended to the character level (Liu et al., 2018; Akbik et al., 2018), which can generate contextualized representations for character spans.", "Our proposed framework is compatible with all the above contextualized representation techniques.", "In our implementation, we choose BERT to demonstrate the power of using contextualized supervision.", "Word sense disambiguation (WSD) is one of the challenging problems in natural language processing.", "Typical WSD models (Lesk, 1986; Zhong and Ng, 2010; Yuan et al., 2016; Raganato et al., 2017; Le et al., 2018; Tripodi and Navigli, 2019) are trained for a general domain.", "Recent works (Li and Jurafsky, 2015; Mekala et al., 2016; Gupta et al., 2019) also showed that machine-interpretable representations of words considering their senses improve document classification.", "However, if one wants to apply WSD to some specific corpus, additional annotated training data might be required to reach performance similar to ours, which defeats the purpose of a weakly supervised setting.", "In contrast, our contextualization, building upon (Devlin et al., 2019), is adaptive to the input corpus, without requiring any additional human annotations.", "Therefore, our framework is more suitable than WSD under the weakly supervised setting.", "Our experimental results have verified this reasoning and showed the superiority of our contextualization module over WSD in weakly supervised document classification tasks.", "The document classification problem has long been studied.", "In our implementation of the proposed ConWea framework, we used HAN (Yang et al., 2016), which considers the hierarchical structure of documents and includes attention mechanisms to find the most important words and sentences in a document.", "CNN-based text classifiers (Kim, 2014; Zhang et al., 2015; Lai et al., 2015) are also popular and can achieve inspiring performance.", "In this paper, we proposed ConWea, a novel contextualized weakly supervised classification framework.", "Our method leverages contextualized representation techniques and initial user-provided seed words to contextualize the corpus.", "This contextualized corpus is further used to resolve the interpretation of seed words through iterative seed word expansion and document classifier training.", "Experimental results demonstrate that our model outperforms previous methods significantly, thereby signifying the superiority of contextualized weak supervision, especially when labels are fine-grained.
", "In the future, we are interested in generalizing contextualized weak supervision to hierarchical text classification problems.", "Currently, we perform coarse- and fine-grained classifications separately.", "There should be more useful information embedded in the tree structure of the label hierarchy.", "Also, extending our method to other types of textual data, such as short texts, multi-lingual data, and code-switched data, is a potential direction.", "We thank Palash Chauhan and Harsh Jhamtani for valuable discussions." ]
[ "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "method", "method", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "other", "objective", "other", "other", "other", "other", "other", "objective", "objective", "other", "other", "other", "other", "abstain", "abstain", "other", "abstain", "other", "abstain", "other", "objective", "method", "abstain", "objective", "abstain", "method", "abstain", "objective", "other" ]
[ "Laws and their interpretations, legal arguments and agreements are typically expressed in writing, leading to the production of vast corpora of legal text.", "Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size.", "Natural language understanding (NLU) technologies can be a valuable tool to support legal practitioners in these endeavors.", "Their usefulness, however, largely depends on whether current state-of-the-art models can generalize across various tasks in the legal domain.", "To answer this currently open question, we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way.", "We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently o er performance improvements across multiple tasks.", "Law is a field of human endeavor dominated by the use of language.", "As part of their professional training, law students consume large bodies of text as they seek to tune their understanding of the law and its application to help manage human behavior.", "Virtually every modern legal system produces massive volumes of textual data (Katz et al., 2020).", "Lawyers, judges, and regulators continuously author legal documents such as briefs, memos, statutes, regulations, contracts, patents and judicial decisions (Coupette et al., 2021).", "Beyond the consumption and production of language, law and the art of lawyering is also an exercise centered around the analysis and interpretation of text.", "et al., 2019, 2020; Zhong et al., 2020b; Bommarito et al., 2021), from judgment prediction (Aletras et al., 2016; Sim et al., 2016; Katz et al., 2017; Zhong et al., 2018; Chalkidis et al., 2019a; Malik et al., 2021), information extraction from legal documents (Chalkidis et al., 2018, 2019c; Chen et al., 2020; Hendrycks et al., 2021) and case summarization (Bhattacharya et al., 2019) to legal question answering (Ravichander et al., 2019; Kien et al., 2020; Zhong et al., 2020a,c) and text classification (Nal-lapati and Manning, 2008; Chalkidis et al., 2019b, 2020a).", "Transformer models (Vaswani et al., 2017) pre-trained on legal, rather than generic, corpora have also been studied (Chalkidis et al., 2020b; Zheng et al., 2021; Xiao et al., 2021).", "Pre-trained Transformers, including BERT (De-vlin et al., 2019), GPT-3 (Brown et al., 2020), T5 (Ra el et al., 2020), BART (Lewis et al., 2020), DeBERTa (He et al., 2021) and numerous variants, are currently the state of the art in most natural language processing (NLP) tasks.", "Rapid performance improvements have been witnessed, to the extent that ambitious multi-task benchmarks (Wang et al., 2018, 2019b) are considered almost solved' a few years after their release and need to be made more challenging (Wang et al., 2019a).", "Recently, Bommasani et al. (2021) named these pre-trained models (e.g., BERT, DALL-E, GPT-3) foundation models .", "The term may be controversial, but it emphasizes the paradigm shift these models have caused and their interdisciplinary potential.", "Studying the latter includes the question of how to adapt these models to legal text (Bommarito et al., 2021).", "As discussed by Zhong et al. (2020b) and Chalkidis et al. 
", "Furthermore, legal documents are often much longer than the maximum length state-of-the-art deep learning models can handle, including those designed to handle long text (Beltagy et al., 2020; Zaheer et al., 2020; Yang et al., 2020).", "Inspired by the recent widespread use of the GLUE multi-task benchmark NLP dataset (Wang et al., 2018, 2019b), the subsequent more difficult SuperGLUE (Wang et al., 2019a), other previous multi-task NLP benchmarks (Conneau and Kiela, 2018; McCann et al., 2018), and similar initiatives in other domains (Peng et al., 2019), we introduce LexGLUE, a benchmark dataset to evaluate the performance of NLP methods on legal tasks.", "LexGLUE is based on seven existing English legal NLP datasets, selected using criteria largely from SuperGLUE (discussed in Section 3.1).", "We anticipate that more datasets, tasks, and languages will be added in later versions of LexGLUE.", "As more legal NLP datasets become available, we also plan to favor datasets checked thoroughly for validity (scores reflecting real-life performance), annotation quality, statistical power, and social bias (Bowman and Dahl, 2021).", "As in GLUE and SuperGLUE (Wang et al., 2019b,a), one of our goals is to push towards generic (or 'foundation') models that can cope with multiple NLP tasks, in our case legal NLP tasks, possibly with limited task-specific fine-tuning.", "Another goal is to provide a convenient and informative entry point for NLP researchers and practitioners wishing to explore or develop methods for legal NLP.", "Having these goals in mind, the datasets we include in LexGLUE and the tasks they address have been simplified in several ways, discussed below, to make it easier for newcomers and generic models to address all tasks.", "We provide Python APIs integrated with Hugging Face (Wolf et al., 2020; Lhoest et al., 2021) to easily import all the datasets we experiment with and evaluate the performance of different models (Section 4.4; a usage sketch follows this record).", "By unifying and facilitating access to a set of law-related datasets and tasks, we hope to attract not only more NLP experts, but also more interdisciplinary researchers (e.g., law doctoral students willing to take NLP courses).", "More broadly, we hope LexGLUE will speed up the adoption and transparent evaluation of new legal NLP methods and approaches in the commercial sector, too.", "Indeed, there have been many commercial press releases in the legal tech industry on high-performing systems, but almost no independent evaluation of the performance of machine learning and NLP-based tools.", "A standard publicly available benchmark would also allay concerns of undue influence in predictive models, including the use of metadata which the relevant law expressly disregards.", "The rapid growth of the legal text processing field is demonstrated by numerous papers presented in top-tier conferences in NLP and artificial intelligence (Luo et al., 2017; Zhong et al., 2018; Chalkidis et al., 2019a; Valvoda et al., 2021), as well as surveys (Chalkidis and Kampas, 2018; Zhong et al., 2020b; Bommarito et al., 2021).
", "Moreover, specialized workshops on NLP for legal text (Aletras et al., 2019; Di Fatta et al., 2020; Aletras et al., 2020) are regularly organized.", "A core task in this area has been legal judgment prediction (forecasting), where the goal is to predict the outcome (verdict) of a court case.", "In this direction, there have been at least three lines of work.", "The first one (Aletras et al., 2016; Chalkidis et al., 2019a; Medvedeva et al., 2020, 2021) predicts violations of human rights in cases of the European Court of Human Rights (ECtHR).", "The second line of work (Luo et al., 2017; Zhong et al., 2018; Yang et al., 2019) considers Chinese criminal cases where the goal is to predict relevant law articles, criminal charges, and the term of the penalty.", "The third line of work (Ruger et al., 2004; Katz et al., 2017; Kaufman et al., 2019) includes methods for predicting the outcomes of cases of the Supreme Court of the United States (SCOTUS).", "The same or similar tasks have also been studied with court cases in many other jurisdictions, including France (Sulea et al., 2017), the Philippines (Virtucio et al., 2018), Turkey (Mumcuoglu et al., 2021), Thailand (Kowsrihawat et al., 2018), the United Kingdom (Strickson and De La Iglesia, 2020), Germany (Urchs et al., 2021), and Switzerland (Niklaus et al., 2021).", "Apart from predicting court decisions, there is also work aiming to interpret (explain) the decisions of particular courts (Ye et al., 2018; Chalkidis et al., 2021c; Branting et al., 2021).", "Another popular task is legal topic classification.", "Nallapati and Manning (2008) highlighted the challenges of legal document classification compared to more generic text classification by using a dataset including docket entries of US court cases.", "Chalkidis et al. (2020a) classify EU laws into EuroVoc concepts, a task earlier introduced by Mencia and Furnkranz (2007), with a special interest in few- and zero-shot learning.", "Luz de Araujo et al. (2020) also studied topic classification using a dataset of Brazilian Supreme Court cases.
", "There are similar interesting applications in contract law (Lippi et al., 2019; Tuggener et al., 2020).", "Several studies (Chalkidis et al., 2018, 2019c; Hendrycks et al., 2021) explored information extraction from contracts, to extract important information such as the contracting parties, the agreed payment amount, start and end dates, the applicable law, etc.", "Other studies focus on extracting information from legislation (Cardellino et al., 2017; Angelidis et al., 2018) or court cases (Leitner et al., 2019).", "Legal question answering (QA) is another task of interest in legal NLP, where the goal is to train models for answering legal questions (Kim et al., 2015; Ravichander et al., 2019; Kien et al., 2020; Zhong et al., 2020a,c; Louis and Spanakis, 2022).", "Not only is this task interesting for researchers, but it could also support efforts to help laypeople better understand their legal rights.", "In the general task setting, this requires identifying relevant legislation, case law, or other legal documents, and extracting elements of those documents that answer a particular question.", "A notable venue for legal QA has been the Competition on Legal Information Extraction and Entailment (COLIEE) (Kim et al., 2016; Kano et al., 2017, 2018).", "More recently, there have also been efforts to pre-train Transformer-based language models on legal corpora (Chalkidis et al., 2020b; Zheng et al., 2021; Xiao et al., 2021), leading to state-of-the-art results in several legal NLP tasks, compared to models pre-trained on generic corpora.", "Overall, the legal NLP literature is overwhelming, and the resources are scattered.", "Documentation is often not available, and evaluation measures vary across articles studying the same task.", "Our goal is to create the first unified benchmark to assess the performance of NLP models on legal NLU.", "As a first step, we selected a representative group of tasks, using datasets in English that are also publicly available, adequately documented, and have an appropriate size for developing modern NLP methods.", "We also introduce several simplifications to make the new benchmark more standardized and easily accessible, as already noted.", "We present the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks.", "Language: in this first version of LexGLUE, we only consider English datasets, which also makes experimentation easier for researchers across the globe.", "We hope to include other languages in future versions of LexGLUE.", "Substance: the datasets should check the ability of systems to understand and reason about legal text to a certain extent in order to perform tasks that are meaningful for legal practitioners.", "Difficulty: the tasks should remain challenging for current models (unlike GLUE, where top-ranked models now achieve average scores higher than 90%).", "Unlike SuperGLUE (Wang et al., 2019a), we did not rule out, but rather favored, datasets requiring domain (in our case legal) expertise.", "Availability & Size: we consider only publicly available datasets, documented by published articles, avoiding proprietary, untested, poorly documented datasets.", "We also excluded very small datasets, e.g., with fewer than 5K documents.", "Although large pre-trained models often perform well with relatively few task-specific training instances, newcomers may wish to experiment with simpler models that may perform disappointingly with small training sets.
small training sets.", "Small test sets may also lead to unstable and unreliable results.", "LexGLUE comprises seven datasets.", "Table 1 shows core information for each of the LexGLUE datasets and tasks, described in detail below.", "4 ECtHR Tasks A & B The European Court of Human Rights (ECtHR) hears allegations that a state has breached human rights provisions of the European Convention of Human Rights (ECHR).", "We use the dataset of Chalkidis et al. (2019a, 2021c), which contains approx.", "11K cases from the ECtHR public database.", "The cases are chronologically split into training (9k, 20012016), development (1k, 20162017), and test (1k, 20172019).", "For each case, the dataset provides a list of factual paragraphs (facts) from the case description.", "Each case is mapped to articles of the ECHR that were violated (if any).", "In Task A, the input to a model is the list of facts of a case, and the output is the set of violated articles.", "In the most recent version of the dataset (Chalkidis et al., 2021c), each case is also mapped to articles of ECHR that were allegedly violated (considered by the court).", "In Task B, the input is again the list of facts of a case, but the output is the set of allegedly violated articles.", "The total number of ECHR articles is currently 66.", "Several articles, however, cannot be violated, are rarely (or never) discussed in practice, or do not depend on the facts of a case and concern procedural technicalities.", "Thus, we use a simplified version of the label set (ECHR articles) in both Task A and B, including only 10 ECHR articles that can be violated and depend on the case's facts.", "SCOTUS The US Supreme Court (SCOTUS) 5 is the highest federal court in the United States of America and generally hears only the most controversial or otherwise complex cases which have not been su ciently well solved by lower courts.", "We release a new dataset combining information from SCOTUS opinions 6 with the Supreme Court DataBase (SCDB) 7 (Spaeth et al., 2020).", "SCDB provides metadata (e.g., decisions, issues, decision directions) for all cases (from 1946 up to 2020).", "We opted to use SCDB to classify the court opinions in the available 14 issue areas (e.g., Criminal Procedure, Civil Rights, Economic Activity, etc.).", "This is a single-label multi-class classification task (Ta-ble 1).", "The 14 issue areas cluster 278 issues whose focus is on the subject matter of the controversy (dispute).", "The SCOTUS cases are chronologically split into training (5k, 19461982), development (1.4k, 19821991), test (1.4k, 19912016) sets.", "EUR-LEX European Union (EU) legislation is published in the EUR-Lex portal.", "8 All EU laws are annotated by EU's Publications O ce with multiple concepts from EuroVoc, a multilingual thesaurus maintained by the Publications O ce.", "9 The current version of EuroVoc contains more than 7k concepts referring to various activities of the EU and its Member States (e.g., economics, healthcare, trade).", "We use the English part of the dataset of Chalkidis et al. 
(2021a), which comprises 65k EU laws (documents) from EUR-Lex.", "Given a 5 https://www.supremecourt.gov 6 https://www.courtlistener.com 7 http://scdb.wustl.edu 8 http://eur-lex.europa.eu/ 9 http://eurovoc.europa.eu/ 4313 Method Source # Params Vocab.", "CaseLaw-BERT (Zheng et al., 2021) 110M 32K 512 2M 256 (37GB) US Court Cases Table 2: Key specifications of the examined models.", "We report the number of parameters, the size of vocabulary, the maximum sequence length, the core pre-training specifications (training steps and batch size), and the training corpora (OWT = OpenWebText, BC = BookCorpus).", "Starred models have been warm-started from RoBERTa.", "document, the task is to predict its EuroVoc labels (concepts).", "The dataset is chronologically split in training (55k, 19582010), development (5k, 2010 2012), test (5k, 20122016) subsets.", "It supports four di erent label granularities, comprising 21, 127, 567, 7390 EuroVoc concepts, respectively.", "We use the 100 most frequent concepts from level 2, which has a highly skewed label distribution and temporal concept drift (Chalkidis et al., 2021a), making it su ciently di cult for an entry point.", "LEDGAR Tuggener et al. (2020) introduced LEDGAR (Labeled EDGAR), a dataset for contract provision (paragraph) classification.", "The contract provisions come from contracts obtained from the US Securities and Exchange Commission (SEC) fil-ings, which are publicly available from EDGAR 10 (Electronic Data Gathering, Analysis, and Retrieval system).", "The original dataset includes approx.", "850k contract provisions labeled with 12.5k categories.", "Each label represents the single main topic (theme) of the corresponding contract provision, i.e., this is a single-label multi-class classification task.", "In LexGLUE, we use a subset of the original dataset with 80k contract provisions, considering only the 100 most frequent categories as a simplification.", "We split the new dataset chronologically into training (60k, 20162017), development (10k, 2018), and test (10k, 2019) sets.", "UNFAIR-ToS The UNFAIR-ToS dataset (Lippi et al., 2019) contains 50 Terms of Service (ToS) from on-line platforms (e.g., YouTube, Ebay, Face-book, etc.).", "The dataset has been annotated on the sentence-level with 8 types of unfair contractual terms , meaning terms (sentences) that potentially violate user rights according to EU consumer law.", "11 The input to a model is a sentence, the output is the set of unfair types (if any).", "We split the dataset chronologically into training (5.5k, 20062016), development (2.3k, 2017), test (1.6k, 2017) sets.", "10 https://www.sec.gov/edgar/ 11 Art. 3 of Direct.", "93 / 13, Unfair Terms in Consumer Contracts ( http://data.europa.eu/eli/dir/1993/13/oj ).", "CaseHOLD The CaseHOLD (Case Holdings on Legal Decisions) dataset (Zheng et al., 2021) contains approx.", "53k multiple choice questions about holdings of US court cases from the Harvard Law Library case law corpus.", "Holdings are short summaries of legal rulings that accompany referenced decisions relevant for the present case, e.g.: . . . 
to act pursuant to City policy, re d 503, 506-07 (3d Cir.l985)( holding that for purposes of a class certification motion the court must accept as true all factual allegations in the complaint and may draw reasonable inferences therefrom ).", "The input consists of an excerpt (or prompt) from a court decision , containing a reference to a particular case, where the holding statement (in boldface) is masked out.", "The model must identify the correct (masked) holding statement from a selection of five choices.", "We split the dataset in training (45k), development (3.9k), test (3.9k) sets, excluding samples that are shorter than 256 tokens.", "Chronological information is missing from CaseHOLD, thus we cannot perform a chronological re-split.", "Our first baseline model is a linear Support Vector Machine (SVM) (Cortes and Vapnik, 1995) with TF-IDF features for the topK frequent n -grams of the training set, where n [1 , 2 , 3].", "We experiment with Transformer-based (Vaswani et al., 2017) pre-trained language models, which achieve state of the art performance in most NLP tasks (Bommasani et al., 2021) and NLU benchmarks (Wang et al., 2019a).", "These models are pre-trained on very large unlabeled corpora to predict masked tokens (masked language modeling) and typically also to perform other pre-training tasks that still do not require any manual annotation (e.g., predicting if two sentences were adjacent in the corpus or not, dubbed next sentence prediction).", "The pre-trained models are then fine-tuned (further trained) on task-specific (typically much smaller) annotated datasets, after adding task-specific layers.", "We fine-tune and evaluate the performance of the following publicly available models (Table 2).", "BERT (Devlin et al., 2019) is the best-known pre-trained Transformer-based language model.", "It is pre-trained to perform masked language modeling and next sentence prediction.", "RoBERTa (Liu et al., 2019) is also a pre-trained Transformer-based language model.", "Unlike BERT, RoBERTa uses dynamic masking, it eliminates the next sentence prediction pre-training task, uses a larger vocabulary, and has been pre-trained on much larger corpora.", "Liu et al. 
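The linear SVM baseline described above can be sketched with scikit-learn; the grid values below are illustrative assumptions (the text only states that the number of features, C, and the loss function were tuned), not the authors' exact search space.

# Sketch of the TF-IDF + linear SVM baseline; hyperparameter grids are assumed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

pipeline = Pipeline([
    # TF-IDF features over word uni-, bi-, and tri-grams (n in {1, 2, 3}),
    # keeping only the top-K most frequent n-grams of the training set.
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),
    ("svm", LinearSVC()),
])
param_grid = {
    "tfidf__max_features": [10000, 20000, 40000],  # "top-K" features; illustrative
    "svm__C": [0.1, 1.0, 10.0],                    # illustrative grid
    "svm__loss": ["hinge", "squared_hinge"],
}
search = GridSearchCV(pipeline, param_grid, scoring="f1_micro", cv=3)
# train_texts: list of document strings; train_labels: list of class ids
# search.fit(train_texts, train_labels)

Note that LinearSVC handles single-label multi-class tasks one-vs-rest out of the box; the multi-label tasks (e.g., ECtHR, EUR-LEX) would additionally need a OneVsRestClassifier wrapper and binarized labels.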
"DeBERTa (He et al., 2021) is another improved BERT model that uses disentangled attention, i.e., four separate attention mechanisms considering the content and the relative position of each token, and an enhanced mask decoder, which explicitly considers the absolute position of the tokens.",
"DeBERTa has been reported to outperform BERT and RoBERTa in several NLP tasks (He et al., 2021).",
"Longformer (Beltagy et al., 2020) extends Transformer-based models to support longer sequences, using sparse attention.",
"The latter is a combination of local (window-based) attention and global (dilated) attention that reduces the computational complexity of the model and can thus be applied to longer documents (up to 4,096 tokens).",
"Longformer outperforms RoBERTa on long-document tasks and QA benchmarks.",
"BigBird (Zaheer et al., 2020) is another sparse-attention-based Transformer that uses a combination of local (window-based), global (dilated), and random attention, i.e., all tokens also attend to a number of random tokens, on top of those in the same neighborhood (window) and the global ones.",
"BigBird has been reported to outperform Longformer on QA and summarization tasks.",
"Legal-BERT (Chalkidis et al., 2020b) is a BERT model pre-trained on English legal corpora, consisting of legislation, contracts, and court cases.",
"It uses the original BERT pre-training configuration.",
"The sub-word vocabulary of Legal-BERT is built from scratch, to better support legal terminology.",
"CaseLaw-BERT (Zheng et al., 2021) is another law-specific BERT model.",
"It also uses the original BERT pre-training configuration and has been pre-trained from scratch on the Harvard Law case corpus (https://case.law/), which comprises 3.4M legal decisions from US federal and state courts.",
"This model is called Custom Legal-BERT by Zheng et al. (2021).",
"We call it CaseLaw-BERT to distinguish it from the previously published Legal-BERT of Chalkidis et al. (2020b) and to better signal that it is trained exclusively on case law (court opinions).",
"Hierarchical Variants: Legal documents are usually much longer (i.e., thousands of words) than the other text types (e.g., tweets, customer reviews, news articles) often considered in various NLP tasks.",
"Thus, standard Transformer-based models, which can typically process up to 512 sub-word units, cannot be directly applied across all LexGLUE datasets, unless documents are severely truncated to the model's limit.",
"Figure 2 shows the distribution of text input lengths across all LexGLUE datasets.",
"Even for Transformer-based models specifically designed to handle long text (e.g., Longformer, BigBird), handling longer legal documents remains a challenge.",
"Given the length of the text input in three of the seven LexGLUE tasks, i.e., ECtHR (A and B) and SCOTUS, we employ a hierarchical variant of each pre-trained Transformer-based model that has not been designed for longer text (BERT, RoBERTa, DeBERTa, Legal-BERT, CaseLaw-BERT) during fine-tuning and inference.",
"The hierarchical variants are similar to those of Chalkidis et al. (2021c).",
"They use the corresponding pre-trained Transformer-based model to encode each paragraph of the input text independently and obtain the top-level representation h_[CLS] of each paragraph.",
"A second-level shallow (2-layer) Transformer encoder, with the same specifications (e.g., hidden units, number of attention heads) across BERT, RoBERTa, DeBERTa, etc., is fed with the paragraph representations to make them context-aware (aware of the surrounding paragraphs).",
"We then max-pool over the context-aware paragraph representations to obtain a document representation, which is fed to a classification layer.",
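A minimal PyTorch sketch of the hierarchical variant just described; the class name, the 768/12 dimensions (matching the *-base configuration mentioned later), and the second-level encoder's remaining hyperparameters are assumptions, not the released implementation.

# Hierarchical variant sketch: a pre-trained encoder embeds each paragraph
# independently; a shallow 2-layer Transformer makes the paragraph [CLS]
# vectors context-aware; max-pooling yields the document representation.
import torch.nn as nn
from transformers import AutoModel

class HierarchicalClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=10, hidden=768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=12, batch_first=True)
        self.seg_encoder = nn.TransformerEncoder(layer, num_layers=2)  # shallow, 2-layer
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # input_ids / attention_mask: (batch, n_paragraphs, par_len), e.g. 64 x 128
        b, p, l = input_ids.shape
        out = self.encoder(input_ids.view(b * p, l),
                           attention_mask=attention_mask.view(b * p, l))
        cls = out.last_hidden_state[:, 0].view(b, p, -1)  # h_[CLS] per paragraph
        cls = self.seg_encoder(cls)       # context-aware paragraph representations
        doc = cls.max(dim=1).values       # max-pool over paragraphs
        return self.classifier(doc)       # logits; sigmoid/softmax applied in the loss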
"4.3 Task-Specific Fine-Tuning: Text Classification Tasks: For the EUR-LEX, LEDGAR, and UNFAIR-ToS tasks, we feed each document to the pre-trained model (e.g., BERT) and obtain the top-level representation h_[CLS] of the special [CLS] token as the document representation, following Devlin et al. (2019).",
"The latter goes through a dense layer of L output units, one per label, followed by a sigmoid (in EUR-LEX and UNFAIR-ToS) or softmax (in LEDGAR) activation, respectively.",
"For the two ECtHR tasks (A and B) and SCOTUS, where the hierarchical variants are employed, we feed the max-pooled (over paragraphs) document representation to a linear classification layer.",
"The linear layer is again followed by a sigmoid (ECtHR) or softmax (SCOTUS) activation.",
"Multiple-Choice QA Task: For CaseHOLD, we convert each training (or test) instance (the prompt and the five candidate answers) into five input pairs, following Zheng et al. (2021).",
"Each pair consists of the prompt and one of the five candidate answers, separated by the special delimiter token [SEP].",
"The top-level representation h_[CLS] of each pair is fed to a linear layer to obtain a logit, and the five logits are then passed through a softmax, yielding a probability distribution over the five candidate answers (a minimal sketch of this conversion follows below).",
"For reproducibility purposes and to facilitate future experimentation with other models, we pre-process",
"and release all datasets on Hugging Face Datasets (Lhoest et al., 2021; https://huggingface.co/datasets/lex_glue).",
"We also release the code of our experiments (https://github.com/coastalcph/lex-glue), which relies on the Hugging Face Transformers library (Wolf et al., 2020; https://huggingface.co/transformers).",
"Appendix A explains how to load the datasets and run experiments with our code.",
"5.1 Experimental Set-Up: For the TF-IDF-based linear SVM models, we use the implementation of Scikit-learn (Pedregosa et al., 2011) and grid-search for hyperparameters (the number of features, C, and the loss function).",
"For all the pre-trained models, we use publicly available Hugging Face checkpoints (http://huggingface.co/models).",
"We use the *-base configuration of each pre-trained model, i.e., 12 Transformer blocks, 768 hidden units, and 12 attention heads.",
"We train models with the Adam optimizer (Kingma and Ba, 2015) and an initial learning rate of 3e-5 for up to 20 epochs, using early stopping on development data.",
"We use mixed precision (fp16) to decrease the memory footprint during training, and gradient accumulation for all hierarchical models.",
"The hierarchical models can read up to 64 paragraphs of 128 tokens each.",
"We use Longformer and BigBird in their default settings, i.e., Longformer uses windows of 512 tokens and a single global token ([CLS]), while BigBird uses blocks of 64 tokens (windows: 3 blocks, random: 3 blocks, global: 2 initial blocks; each token attends to 512 tokens in total).",
"The batch size is 8 in all experiments.",
"We run five repetitions with different random seeds and report the test scores based on the seed with the best scores on development data.",
"We evaluate performance using micro-F1 (μ-F1) and macro-F1 (m-F1) across all datasets, to take class imbalance into account.",
"For completeness, we also report the arithmetic, harmonic, and geometric mean across tasks, following Shavrina and Malykh (2021).",
"We acknowledge that the use of scores aggregated over tasks has been criticized in general NLU benchmarks (e.g., GLUE), as models are trained with different numbers of samples, task complexity, and evaluation metrics per task.",
"We believe that the use of a standard common metric (F1) across tasks and averaging with the harmonic mean alleviate this issue.",
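A sketch of the CaseHOLD conversion just described: each instance yields five prompt-candidate pairs, the h_[CLS] of each pair is mapped to a logit, and the five logits are softmax-normalized. Model names and the helper function are illustrative, not the released code.

# Multiple-choice scoring sketch for CaseHOLD-style instances.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
scorer = nn.Linear(encoder.config.hidden_size, 1)

def choice_distribution(prompt, candidates):
    # One (prompt, candidate) pair per choice; the tokenizer inserts [CLS]/[SEP].
    enc = tokenizer([prompt] * len(candidates), candidates,
                    padding=True, truncation=True, return_tensors="pt")
    h_cls = encoder(**enc).last_hidden_state[:, 0]  # (5, hidden): one h_[CLS] per pair
    logits = scorer(h_cls).squeeze(-1)              # one logit per candidate
    return torch.softmax(logits, dim=-1)            # distribution over the 5 holdings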
"5.2 Experimental Results: Main Results: Table 3 presents the test results for all models across all LexGLUE tasks, while Table 4 presents the aggregated (averaged) results.",
"We observe that the two legal-oriented pre-trained models (Legal-BERT, CaseLaw-BERT) perform better overall, especially considering m-F1, which accounts for class imbalance (it considers all classes equally important).",
"Their in-domain (legal) knowledge seems to be more critical in the two datasets relying on US case law data (SCOTUS, CaseHOLD), with an improvement of approx.",
"+2–4 percentage points (m-F1) over equally sized Transformer-based models pre-trained on generic corpora.",
"These results are explained by the fact that these tasks are more domain-specific in terms of language, compared to the rest.",
"No single model performs best in all tasks, and the results of Table 3 show that there is still large scope for improvement (Section 6).",
"An exception to the dominance of the pre-trained Transformer models is the SCOTUS dataset, where the TF-IDF-based linear SVM performs best.",
"We suspect the large size of the SCOTUS opinions (Figure 2) to be the main reason, i.e., in many cases full paragraphs, or parts of them, are not considered by the hierarchical models (limited to 64 paragraphs of 128 tokens each).",
"Legal-oriented Models: Interestingly, the performance of Legal-BERT and CaseLaw-BERT, the two legal-oriented pre-trained models, is almost identical on CaseHOLD, despite the fact that CaseLaw-BERT is trained solely on US case law.",
"On the other hand, Legal-BERT has been exposed to a wider variety of legal corpora, including EU and UK legislation, ECtHR, ECJ and US court cases, and US contracts.",
"Legal-BERT performs as well as or better than CaseLaw-BERT on all datasets.",
"These results suggest that domain-specific pre-training (and learning a domain-specific sub-word vocabulary) is beneficial, but over-fitting to a specific (niche) sub-domain (e.g., US case law), similarly to Zheng et al. (2021), has no benefits.",
"Beyond the scope of this work and the examined baseline models, we identify four major factors that could potentially advance the state of the art in LexGLUE and legal NLP more generally:",
"Long Documents: Several Transformer-based models (Beltagy et al., 2020; Zaheer et al., 2020; Liu et al., 2022) have been proposed to handle long documents by exploring sparse attention mechanisms.",
"These models can handle sequences of up to 4,096 sub-words, a limit that is largely exceeded in three out of the seven LexGLUE tasks (Figure 2).",
"In contrast, the hierarchical model of Section 4.2 can handle sequences of up to 8,192 sub-words in our experiments, but part of the model (the additional Transformer blocks that make the paragraph embeddings aware of the other paragraphs) is not pre-trained, which possibly affects performance negatively.",
"Structured Text: Current models for long documents, like Longformer and BigBird, do not consider the document structure (e.g., sentences, paragraphs, sections).",
"For example, window-based attention may consider a sequence of sentences across paragraph boundaries or even consider truncated sentences.",
"To exploit the document structure, Yang et al. (2020) proposed SMITH, a hierarchical Transformer model that hierarchically encodes increasingly larger blocks (e.g., words, sentences, documents).",
"SMITH is very similar to the hierarchical model of Section 4.2, but it is pre-trained end-to-end with two objectives: token-level masked language modeling and sentence-block language modeling.",
"Large-scale Legal Pre-training: Recent studies (Chalkidis et al., 2020b; Zheng et al., 2021; Bambroo and Awasthi, 2021; Xiao et al., 2021) introduced language models pre-trained on legal corpora, but of relatively small sizes, i.e., 12–36 GB.",
"In the work of Zheng et al.", "(2021), the pre-training corpus covered only a narrowly defined area of legal documents, US court opinions.",
"The same applies to Lawformer (Xiao et al., 2021), which was pre-trained on Chinese court opinions.",
"Future work could curate and release a legal version of the C4 corpus (Raffel et al., 2020), containing multi-jurisdictional legislation, court decisions, contracts, and legal literature, at a size of hundreds of GBs.",
"Given such a corpus, a large language model capable of processing long structured text could be pre-trained, and it might excel in LexGLUE.",
"Even Larger Language Models: Scaling up the capacity of pre-trained models has led to increasingly better results in general NLU benchmarks (Kaplan et al., 2020), and models have been scaled up to billions of parameters (Brown et al., 2020; Raffel et al., 2020; He et al., 2021).",
"In Appendix E, we observe that using the large version of RoBERTa leads to substantial performance improvements compared to the base version.",
"The results are comparable to, or in some cases better than, those of the legal-oriented language models (Legal-BERT, CaseLaw-BERT).",
"Considering that the two legal-oriented models are much smaller and have been pre-trained with (5–10x) less data (Section 2), we have a strong indication for performance gains by pre-training larger legal-oriented models on larger legal corpora.",
"Although our benchmark inevitably cannot cover everything in the whole wide (legal) world (Raji et al., 2021), we include a representative collection of English datasets that are also grounded, to a certain degree, in practically interesting applications.",
"In its current version, LexGLUE can only be used to evaluate English models.",
"As legal documents are typically written in the official language of the particular country of origin, there is an increasing need to develop models for other languages.",
"The current scarcity of datasets in other languages (with the exception of Chinese) makes a multilingual extension of LexGLUE challenging, but an interesting avenue for future research.",
"Beyond language barriers, legal restrictions currently inhibit the creation of more datasets.",
"Important document types, such as contracts and scholarly publications, are protected by copyright or considered trade secrets.",
"As a result, their owners are concerned about data leakage when they are used for model training and evaluation.",
"Providing both legal and technical solutions, e.g., using privacy-aware infrastructure and models (Downie, 2004; Feyisetan et al., 2020), is a challenge to be addressed.",
"Access to court decisions can also be hindered by bureaucratic inertia, outdated technology, and data protection concerns, which collectively result in these otherwise public decisions not being publicly available (Pah et al., 2020).",
"While the anonymization of personal data provides a solution to this problem, it is itself an open challenge for legal NLP (Jana and Biemann, 2021).",
"Lacking suitable datasets and benchmarks, we have refrained from including anonymization in this version of LexGLUE, but plan to do so at a later stage.",
"Another limitation of the current version of LexGLUE is that human evaluation is missing.",
"All datasets rely on ground-truth labels automatically extracted from data (e.g., court decisions) produced as part of official judicial or archival procedures.",
"These resources should be highly reliable (valid), but we cannot statistically assess their quality.",
"In the future, re-annotating part of the datasets with multiple legal experts would provide an estimate of human-level performance and inter-annotator agreement, though the cost would be high, because of the required legal expertise.",
"While LexGLUE offers a much-needed unified testbed for legal NLU, there are several other critical aspects that need to be studied carefully.",
"These include multi-disciplinary research to better understand the limitations and challenges of applying NLP to law (Binns, 2020), while also considering fairness and robustness (Angwin et al., 2016; Dressel and Farid, 2018; Baker Gillis, 2021; Wang et al., 2021; Chalkidis et al., 2022), and broader legal considerations of AI technologies in general (Schwemer et al., 2021; Tsarapatsanis and Aletras, 2021; Delacroix, 2022).",
"This work was partly funded by the Innovation Fund Denmark (IFD) under File No. 0175-00011A and by the German Federal Ministry of Education and Research (BMBF) kmu-innovativ program under funding code 01IS18085.",
"We would like to thank Desmond Elliott for providing valuable feedback (baselines for truncated documents presented in Appendix D), and Xiang Dai and Joel Niklaus for reviewing and pointing out issues in the new resources (code, datasets).",
"All datasets included in LexGLUE, except SCOTUS, are publicly available and have been previously published.",
"If datasets or the papers that introduced them were not compiled or written by ourselves, we referenced the original work and encourage LexGLUE users to do so as well.",
"In fact, we believe this work should only be referenced, in addition to citing the original work, when experimenting with multiple LexGLUE datasets and using the LexGLUE evaluation infrastructure.",
"Otherwise, only the original work should be cited.",
"We believe that this work does not contain any grounds for ethical concerns.",
"A transparent and rigorous benchmark for NLP in the legal domain can serve as an orientation for scholars and industry researchers.",
"As a result, the capabilities of tools that are trained using natural language data from the legal domain will become clearer, thereby helping their users to better understand them.",
"This increased certainty would also raise awareness within research and industry communities of the potential risks associated with the use of these tools.",
"We regard this contribution to a more realistic, better-informed discussion as an important use case of the work presented.",
"Ideally, it could help both beginners and seasoned professionals to understand the limitations of using NLP tools in the legal domain, and thereby prevent exaggerated expectations and potential applications that might risk endangering fundamental rights or the rule of law.",
"We currently cannot imagine use cases of this particular work that would lead to ethical concerns or potential harm (Tsarapatsanis and Aletras, 2021).",
"LexGLUE comprises seven datasets: ECtHR Tasks A and B, SCOTUS, EUR-LEX, LEDGAR, UNFAIR-ToS, and CaseHOLD, all available for re-use and re-sharing with appropriate attribution.",
"The data is, in general, partially anonymized in accordance with the applicable national law.",
"The data is considered to be in the public sphere from a privacy perspective.",
"This is a very sensitive matter, as the courts try to keep a balance between transparency (the public's right to know) and privacy (respect for private and family life).",
"ECtHR contains personal data of the parties and other people involved in the legal proceedings.",
"Its data is processed and made public in accordance with the European data protection laws.",
"This includes either implied consent or legitimate interest to process the data for research purposes.",
"As a result, their processing by us or other future users of the benchmark is not likely to raise ethical concerns.",
"SCOTUS contains personal data of a similar nature.",
"Again, the data is processed and made available by the US Supreme Court, whose proceedings are public.",
"While this ensures compliance with US law, it is very likely that, similarly to the ECtHR, any processing could be justified by either implied consent or legitimate interest under European law.",
"EUR-LEX, by contrast, is merely a collection of legislative material and is therefore not likely to contain personal data, except for signatory information (e.g., the president of the EC).",
"It is openly published by the European Union and processed by the EU's Publications Office.",
"In addition, since our work qualifies as research, it is privileged pursuant to Art. 6(1)(f) GDPR.",
"LEDGAR contains publicly available contract provisions published in the EDGAR database of the US Securities and Exchange Commission (SEC).",
"As far as personal information might be contained, it should equally fall into the public sphere and be covered by the research privilege.",
"Our processing does not focus on personal information at all; rather, it attributes content labels to provisions.",
"UNFAIR-ToS contains Terms of Service from business entities such as YouTube, eBay, Facebook, etc., which makes it unlikely for the data to include personal information.",
"These companies keep user data separate from contractual provisions, so to the best of our knowledge it is not contained in this dataset.",
"CaseHOLD contains excerpts of US court decisions, obtained from the Harvard library case law corpus.",
"All of the decisions were previously published in compliance with US law.",
"In addition, most instances (case snippets) are too short to contain identifiable information.",
"Should such data be contained, its processing would equally be covered either by implicit consent or a public-interest exception.",
"We use all datasets in accordance with copyright terms and under the licenses set forth by their creators.",
"We have not employed any crowd-workers or annotators for this work.",
"The paper outlines the main limitations with regard to the speaker population (English) and generalizability in a dedicated section (Section 7).",
"As a benchmark paper, our claims naturally match the results of the experiments, which, given the current detail of instructions, should be easily reproduced.",
"We provide several ways of accessing the datasets and running the experiments, both with and without the Hugging Face infrastructure.",
"We do not currently foresee any potential harms for vulnerable or marginalized populations, and we do not use, to the best of our knowledge, any identifying characteristics for populations of these kinds." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "objective", "method", "method", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method" ]
[ "QA models based on pretrained language models have achieved remarkable performance on various benchmark datasets.", "However, QA models do not generalize well to unseen data that falls outside the training distribution, due to distributional shifts.", "Data augmentation (DA) techniques which drop/replace words have shown to be effective in regularizing the model from overfitting to the training data.", "Yet, they may adversely affect the QA tasks since they incur semantic changes that may lead to wrong answers for the QA task.", "To tackle this problem, we propose a simple yet effective DA method based on a stochastic noise generator, which learns to perturb the word embedding of the input questions and context without changing their semantics.", "We validate the performance of the QA models trained with our word embedding perturbation on a single source dataset, on five different target domains.", "The results show that our method significantly outperforms the baseline DA methods.", "Notably, the model trained with ours outperforms the model trained with more than 240K artificially generated QA pairs.", "Deep learning models have achieved impressive performances on a variety of real-world natural language understanding tasks such as text classification, machine translation, question answering, and text generation to name a few (Vaswani et al., 2017; Seo et al., 2017).", "Recently, language models that are pretrained with a large amount of unlabeled data have achieved breakthrough in the performance on these downstream tasks (Devlin et al., 2019), even surpassing human performance on some of them.", "The success of such data-driven language model pretraining heavily depends on the amount and diversity of training data available, since when * Equal contribution trained with a small amount of highly-biased data, the pretrained models can overfit and may not generalize well to out-of-distribution data.", "Data augmentation (DA) techniques (Krizhevsky et al., 2012; Verma et al., 2019a; Yun et al., 2019; Sen-nrich et al., 2016) can prevent this to a certain extent, but most of them are developed for image domains and are not directly applicable to augmenting words and texts.", "Perhaps the most important desiderata for an augmentation method in supervised learning, is that it should not change the label of an example.", "For image domains, there exist several well-defined data augmentation techniques that can produce diverse augmented images without changing the semantics.", "In contrast, for Natural Language Processing (NLP), it is not straightforward to augment the input texts without changing their semantics.", "A simple augmentation technique that preserves semantics is replacing words with synonyms or using back translation (Sennrich et al., 2016).", "However, they do not effectively improve the generalization performance because the diversity of viable transformations with such techniques is highly limited (Pham et al., 2021).", "Some recent works (Wei and Zou, 2019; Ng et al., 2020) propose data augmentation methods tailored for NLP tasks based on dropping or replacing words and show that such augmentation techniques improve the performance on the out-of-domain as well as the in-domain tasks.", "As shown in Fig. 
1, however, we have observed that most existing data augmentation methods for NLP change the semantics of original inputs.", "While such change in the semantics may not be a serious problem for certain tasks, it could be critical for Question Answering (QA) task since its sensitivity to the semantic of inputs.", "For instance, replacing a single word with a synonym (Hesburgh Vanroth in Fig. 1) might cause the drastic semantic drift of the answer (Jia and Liang, 2017).", "Thus, word-based Q: In what year was the Theodore m.", "( ; ) Figure 1: Concept.", "Our model SWEP perturbs word embedding and feeds the perturbed embedding to the QA model.", "While the input-level perturbation method (SSMBA) changes the words a lot, our method preserves the original words if we project perturbed embedding back to the words.", "augmentations are ineffective for QA tasks, and most existing works on data augmentation for QA tasks resort to question or QA-pair generation.", "Yet, this approach requires a large amount of training time, since we have to train a separate generator, generate QA pairs from them, and then use the generated pairs to train the QA model.", "Also, QA-pair generation methods are not sample-efficient since they usually require a large amount of generated pairs to achieve meaningful performance gains.", "To address such limitations of the existing data augmentation techniques for QA, we propose a novel DA method based on learnable word-level perturbation, which effectively regularizes the model to improve its generalization to unseen questions and contexts with distributional shifts.", "Specifically, we train a stochastic perturbation function to learn how to perturb each word embedding of the input without changing its semantic, and augment the training data with the perturbed samples.", "We refer to this data augmentation method as Stochastic Word Embedding Perturbation (SWEP).", "The objective of the noise generator is to maximize the log-likelihood of the answer of the input with perturbation, while minimizing the Kullback-Leibler (KL) divergence between prior noise distribution and conditional noise distribution of given input.", "Since the perturbation function maximizes the likelihood of the answer of the perturbed input, it learns how to add noise without changing the semantics of the original input.", "Furthermore, minimizing the KL divergence prevents generating identical noise as the variance of the prior distribution is non-zero, i.e. 
we can sample diverse noise for the same input.", "We train the QA model on the SQuAD dataset (Rajpurkar et al., 2016) with our learned perturbations, and evaluate the trained model on the five different domains BioASQ (Tsatsaronis et al., 2012), New York Times, Reddit post, Amazon review, and Wikipedia (Miller et al., 2020) as well as SQuAD to measure the generalization performance on out-of-domain and in-domain data.", "The experimental results show that our method improves the in-domain performance as well as out-of-domain robustness of the model with this simple yet effective approach, while existing baseline methods often degrade the performance of the QA model, due to semantics changes in the words.", "Notably, our model trained only with the SQuAD dataset shows even better performance than the model trained with 240,422 synthetic QA pairs generated from a question generation model.", "Our contribution in this work is threefold.", "We propose a simple yet effective data augmentation method to improve the generalization performance of pretrained language models for QA tasks.", "We show that our learned input-dependent perturbation function transforms the original input without changing its semantics, which is crucial to the success of DA for question answering.", "We extensively validate our method for domain generalization tasks on diverse datasets, on which it largely outperforms strong baselines, including a QA-pair generation method.", "2018; Yun et al., 2019), data augmentation methods are known to be an effective regularizer in text domain (Sennrich et al., 2016).", "However, unlike the image transformations that do not change their semantics, transforming raw texts without changing their semantics is difficult since they are composed of discrete tokens.", "The most common approach for data augmentation in NLP is applying simple perturbations to raw words, by either deleting a word or replacing it with synonyms (Wei and Zou, 2019).", "In addition, back-translation with neural machine translation has also been shown to be effective, as it paraphrases the original sentence with a different set and ordering of words while preserving the semantics to some extent (Xie et al., 2020).", "Beyond such simple heuristics, Ng et al. (2020) propose to mask the tokens and reconstruct them with pretrained language model to augment training data for text classification and machine translation.", "For QA tasks, question or QA-pair generation (Zhang and Bansal, 2019; Lee et al., 2020) are also popular augmentation techniques, which generate questions or question-answer pairs from an unlabeled paragraph, thus they can be utilized as additional data to train the model.", "Domain Generalization Unlike domain adaptation in which the target domains are fixed and we can access unlabeled data from them, domain generalization aims to generalize to unseen target domains without access to data from the target distribution.", "Several prior works (Li et al., 2018; Bal-aji et al., 2018; Tseng et al., 2020) propose meta-learning frameworks to tackle domain generalization, focusing on image domains.", "For extractive QA, Lee et al. (2019) leverage adversarial training to learn a domain-invariant representation of question and context.", "However, they require multiple heterogeneous source datasets to train the model to be robust to Out-of-Domain data.", "In contrast, Volpi et al. 
(2018) leverage adversarial perturbation to generate fictitious examples from a single source dataset, that can generalize to unseen domains.", "The goal of extractive Question Answering (QA) is to point out the start and end position of the answer span y = ( y start , y end ) from a paragraph (context) c = ( c 1 , . . . , c L ) with length L for a question x = ( x 1 , . . . , x M ) .", "For generative QA, it aims to generate answer y = ( y 1 , . . . , y K ) instead of predicting the position of answer spans from the context.", "A typical approach to the QA is to train a neural networks to model the conditional distribution p ( y | x , c ) , where are composed of f and g denoted for the parameters of the encoder f ( ; f ) and classifier or decoder g ( ; g ) on top of the encoder.", "We estimate the parameter to maximize the log likelihood with N observations { x ( i ) , y ( i ) , c ( i ) } Ni =1 , which are drawn from some unknown distribution p train , as follows: LMLE ( ) := N (cid:88) i =1 log p ( y ( i ) | x ( i ) , c ( i ) ) (1) For convenience, we set the length T := L + M +3 and abuse notations to define the concatenated sequence of the question x and context c as x := ( x 0 , . . . , x L , c 0 , . . . , c M +1 ) where x 0 , c 0 , c M +1 denote start, separation, and end symbol, respectively.", "However, the model trained to maximize the likelihood in Eq.", "(1) is prone to overfitting and brittle to distributional shifts where target distribution p test is different from p train .", "In order to tackle this problem, we train the model with additional data drawn from different generative process to increase the support of training distribution, to achieve better generalization on novel data with distributional shifts.", "We will describe it in the next section.", "Several methods for data augmentation have been proposed in text domain, however, unlike in image domains (Verma et al., 2019a,b; Yun et al., 2019), there does not exist a set of well-defined data augmentation methods which transform the input without changing its semantics.", "We propose a new data augmentation scheme where we sample a noise z = ( z 1 , . . . , z T ) from a distribution q ( z | x ) and perturb the input x with the sampled noise without altering its semantics.", "To this end, the likelihood p ( y | x , z ) should be kept high even after the perturbation, while the perturbed instance should not collapse to the original input.", "We estimate such parameters and by maximizing the following objective: L noise ( , ) := N (cid:88) i =1 E q ( z | x ( i ) ) [log p ( y ( i ) | x ( i ) , z )] T (cid:88) t =1 DKL ( q ( z t | x ( i ) ) (cid:107) p ( z t )) (2) where 0 is a hyper-parameter which controls the effect of KL-term.", "We assume that z t and z t (cid:48) are conditionally independent given x if t (cid:54) = t (cid:48) , i.e., q ( z | x ) = (cid:81) Tt =1 q ( z t | x ) .", "The parameter of prior is a hyper-parameter to be specified.", "When = 1 , the objective corresponds to the Evidence Lower BOund (ELBO) of the marginal likelihood.", "Maximizing the expected log-likelihood term in Eq.", "(2) increases the likelihoods evaluated with the perturbed embeddings, and therefore the semantics of the inputs after perturbations are likely to be preserved.", "The KL divergence term in Eq.", "(2) penalizes the perturbation distribution q ( z | x ) deviating too much from the prior distribution p ( z ) .", "We assume that the prior distribution is fully factorized, i.e. p ( z 1 , . . . 
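The KL term of Eq. (2) has a closed form once the posterior q_φ(z_t|x) = N(μ_t, diag(σ_t²)) and the Gaussian prior N(1, σ²I_d) (defined in the next paragraph) are fixed. A sketch follows, assuming tensor shapes (batch, T, d) for the per-token posterior parameters:

# Closed-form KL( N(mu, sigma^2) || N(1, sigma_prior^2) ) per coordinate,
# summed over the embedding dimension d and the T token positions.
import torch

def kl_to_prior(mu, sigma, sigma_prior=1.0):
    var, pvar = sigma ** 2, sigma_prior ** 2
    kl = torch.log(sigma_prior / sigma) + (var + (mu - 1.0) ** 2) / (2.0 * pvar) - 0.5
    return kl.sum(dim=(-1, -2))  # shape (batch,)

def noise_objective(log_likelihood, mu, sigma, beta=1.0):
    # Single-sample estimate of Eq. (2): E_q[log p(y|x,z)] - beta * sum_t KL(...)
    return log_likelihood - beta * kl_to_prior(mu, sigma)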
"We assume that the prior distribution is fully factorized, i.e., $p(z_1, \dots, z_T) = \prod_{t=1}^{T} p(z_t)$.",
"Furthermore, we set each distribution $p(z_t)$ to be a multivariate Gaussian $\mathcal{N}(\mathbf{1}, \sigma^2 I_d)$, where $\mathbf{1} = (1, \dots, 1) \in \mathbb{R}^d$, $I_d$, and $\sigma$ denote a vector of ones, the identity matrix, and a positive real number, respectively.",
"Hence, we expect the inputs perturbed with the multiplicative noise to remain close to the original inputs on average.",
"Note that the choice of the prior is closely related to Gaussian dropout (Srivastava et al., 2014); we will elaborate on this connection later.",
"The parameterization of the perturbation function $q_\phi$ heavily affects the success of learning with the objective (2).",
"The function needs to control the intensity of the perturbation for each token of $x$ without changing the semantics.",
"Since the meaning of each word varies across linguistic contexts, the function should be expressive enough to encode the sentence $x$ into a meaningful latent embedding that contextualizes the subtle meaning of each word in the sentence.",
"To this end, we share the encoder function $f(\cdot; \theta_f)$ to contextualize the input $x$ into hidden representations $(h_1, \dots, h_T)$ and feed them into the perturbation function, as shown on the left side of Fig.",
"2. However, we stop the gradient with respect to $\mathcal{L}(\theta, \phi)$ from propagating to the encoder $f(\cdot; \theta_f)$ through the perturbation function.",
"Intuitively, this prevents noisy gradients from flowing to the encoder in the early stage of training.",
"On top of the encoder, we stack a two-layer feed-forward neural network with ReLU activation, which outputs a mean $\mu_t \in \mathbb{R}^d$ and variance $\sigma_t^2 \in \mathbb{R}^d$ for each token, following Kingma and Welling (2014).",
"We leverage the reparameterization trick (Kingma and Welling, 2014) to sample $z_t \in \mathbb{R}^d$.",
"Figure 2: Architecture.", "Overview of how the input is perturbed with SWEP.",
"It encodes the input to a hidden representation with Transformers and outputs a desirable noise for each word embedding.",
"The noise is multiplied with the word embedding.",
"Since $x$ is a sequence of discrete tokens, we map each token $x_t$ to its corresponding word embedding $e_t$ and multiply it with the noise $z_t$ in an element-wise manner, as follows: $e_t = \mathrm{WordEmbedding}(x_t)$, $(h_1, \dots, h_T) = f(e_1, \dots, e_T; \theta_f)$, $(\mu_t, \sigma_t^2) = \mathrm{MLP}(h_t)$, $z_t = \mu_t + \sigma_t \odot \epsilon$ with $\epsilon \sim \mathcal{N}(\mathbf{0}, I_d)$, and $\tilde{e}_t = e_t \odot z_t$ (3), where $\odot$ denotes element-wise multiplication.",
"We feed $(\tilde{e}_1, \dots, \tilde{e}_T)$ to $g \circ f$ to compute the likelihood $p_\theta(y \mid x, z)$, as shown in Fig.", "2.",
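A sketch of the perturbation function of Eq. (3); the layer sizes and the softplus positivity map for σ_t are assumptions, since the text only specifies a two-layer ReLU MLP that outputs μ_t and σ_t².

# Perturbation function sketch: a 2-layer MLP over the (gradient-stopped)
# encoder states emits mu_t and sigma_t per token; z_t is sampled with the
# reparameterization trick and multiplied elementwise into the word embedding.
import torch
from torch import nn

class NoiseGenerator(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * hidden))
        self.positive = nn.Softplus()  # assumed map to keep sigma_t positive

    def forward(self, word_emb, enc_states):
        # detach(): keep the noise-loss gradient from reaching the encoder f.
        mu, sigma_raw = self.mlp(enc_states.detach()).chunk(2, dim=-1)
        sigma = self.positive(sigma_raw)
        eps = torch.randn_like(mu)      # eps ~ N(0, I_d)
        z = mu + sigma * eps            # reparameterization trick, Eq. (3)
        return word_emb * z, mu, sigma  # perturbed embedding e_t * z_t (elementwise)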
"3.3 Learning Objective: As described in Section 3.2, we can jointly optimize the parameters $\theta, \phi$ with gradient ascent.",
"However, we want to train the QA model with additional data drawn from the different generative process, as well as with the given training data, to increase the support of the training distribution, which leads to better regularization and robustness to distributional shift.",
"Therefore, our final learning objective is a convex combination of $\mathcal{L}_{MLE}(\theta)$ and $\mathcal{L}_{noise}(\theta, \phi)$, as follows: $\mathcal{L}(\theta, \phi) = \lambda\,\mathcal{L}_{MLE}(\theta) + (1 - \lambda)\,\mathcal{L}_{noise}(\theta, \phi)$ (4), where $0 < \lambda < 1$ is a hyperparameter which controls the importance of each objective.",
"For all the experiments, we set $\lambda$ to 0.5.",
"In other words, we train the QA model to maximize the conditional log-likelihood of the original input and of the perturbed one, with stochastic gradient ascent (a training-step sketch follows at the end of this section).",
"To see the connection to Gaussian dropout, consider the $i$-th coordinate.",
"With the reparameterization trick, we can write $z_{t,i} = \mu_{t,i} + \sigma_{t,i} \cdot \epsilon_i$, where each $\epsilon_i \sim \mathcal{N}(0, 1)$ i.i.d., and $\mu_{t,i}, \sigma_{t,i}$ are the $i$-th components of $\mu_t, \sigma_t$, which are outputs of the neural network as described in Eq.", "(3).",
"Simply put, each noise element $z_{t,i}$ is sampled from $\mathcal{N}(\mu_{t,i}, \sigma_{t,i}^2)$.",
"Assume that $\bar{z}$ is a noise sampled from the prior distribution $\mathcal{N}(1, \sigma^2)$, i.e., $\bar{z} = 1 + \sigma\epsilon$ where $\epsilon \sim \mathcal{N}(0, 1)$.",
"Then, $z_{t,i}$ can be expressed in terms of $\bar{z}$ as follows: $z_{t,i} = \frac{\sigma_{t,i}}{\sigma}\,\bar{z} + \big(\mu_{t,i} - \frac{\sigma_{t,i}}{\sigma}\big)$ (5).",
"If we set $\sigma^2 = (1-p)/p$, where $p$ is the retention probability, we can consider $\bar{z}$ as a Gaussian dropout mask sampled from $\mathcal{N}(1, \frac{1-p}{p})$, which shows comparable performance to a dropout mask sampled from a Bernoulli distribution with probability $p$ (Srivastava et al., 2014).",
"Then, we can interpret our perturbation function as an input-dependent dropout which scales and translates the Gaussian dropout mask, and thus flexibly controls the intensity of the perturbation, adaptively for each word embedding of the input $x$.",
"Our goal is to regularize the QA model to generalize to unseen domains, such that it is able to answer questions from new domains.",
"We consider a more challenging setting, where the model is trained with a single source dataset and evaluated on datasets from unseen domains, as well as on unseen examples from the source domain.",
"Specifically, we train the QA model with the SQuAD dataset (Rajpurkar et al., 2016) as the source domain, and test the model on several different target-domain QA datasets: BioASQ (Tsatsaronis et al., 2012), New Wikipedia (Wiki), New York Times (NYT), Reddit posts, and Amazon Reviews (Miller et al., 2020).",
"We evaluate the QA model with F1 and Exact Match (EM) scores, following the convention for extractive QA tasks.",
"For the BioASQ dataset, we use the version provided in the MRQA shared task (Fisch et al., 2019).",
"We downloaded the other datasets from the official website of Miller et al.", "(2020).",
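Putting Eqs. (2) and (4) together, one training step might look as follows; `model.log_likelihood` and `model.embed_and_encode` are hypothetical helpers standing in for the QA model's forward pass, and `kl_to_prior` is the sketch from Section 3.2 above:

# One SWEP training step, Eq. (4), with lambda = 0.5 as in the paper.
# The model.* helpers are hypothetical stand-ins, not a real library API.
def swep_step(model, noise_gen, batch, beta=1.0, lam=0.5):
    clean_ll = model.log_likelihood(batch)              # log p(y | x, c)
    emb, states = model.embed_and_encode(batch)         # e_t and h_t per token
    pert_emb, mu, sigma = noise_gen(emb, states)        # NoiseGenerator sketch above
    pert_ll = model.log_likelihood(batch, inputs_embeds=pert_emb)
    l_noise = pert_ll - beta * kl_to_prior(mu, sigma)   # Eq. (2), single sample
    loss = -(lam * clean_ll + (1.0 - lam) * l_noise)    # ascend Eq. (4) = descend loss
    return loss.mean()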
"Implementation Details: As the encoder $f$, we use the pretrained language models BERT-base (Devlin et al., 2019) and ELECTRA-small (Clark et al., 2020) for extractive QA, and randomly initialize an affine transformation layer for $g$.",
"For the generative QA task, we use T5-small (Raffel et al., 2020) for $f$ and $g$, as an encoder-decoder model.",
"For the perturbation function $q_\phi$, we stack two feed-forward layers with ReLU on top of the encoder, as described in Section 3.2.",
"For the extractive QA task, we train the model for 2 epochs with batch size 8 and use the AdamW optimizer (Loshchilov and Hutter, 2019) with learning rate $3 \times 10^{-5}$.",
"For the T5 model, we train it for 4 epochs with batch size 64 and use the Adafactor optimizer (Shazeer and Stern, 2018) with learning rate $10^{-4}$.",
"We use beam search with width 4 to generate answers for generative question answering.",
"Baselines: We experiment with our model SWEP and its variant against several baselines.",
"1. MLE: The base QA model, fine-tuned to maximize $\mathcal{L}_{MLE}(\theta)$.",
"2. Adv-Aug: Following Volpi et al. (2018), we perturb the word embeddings of the input $x$ with an adversarial objective and use them as additional training data to maximize $\mathcal{L}_{MLE}(\theta)$.",
"We assume that the answer for each question and context remains the same after the adversarial perturbation.",
"3. Gaussian-Dropout: The model whose word embeddings are perturbed with a dropout mask sampled from a Gaussian distribution $\mathcal{N}(1, \frac{1-p}{p})$, where $p$ is the dropout probability, set to 0.1 (Srivastava et al., 2014).",
"4. Bernoulli-Dropout: The model whose word embeddings are perturbed with a dropout mask sampled from a Bernoulli distribution $\mathrm{Ber}(1-p)$, where $p$ is the dropout probability, set to 0.1 (Srivastava et al., 2014).",
"5. Word-Dropout: The model trained to maximize $\mathcal{L}_{MLE}(\theta)$ with word dropout (Sennrich et al., 2016), where tokens of $x$ are randomly set to the zero embedding.",
"6. SSMBA: The QA model trained to maximize $\mathcal{L}_{MLE}(\theta)$ with additional examples generated by the technique proposed by Ng et al. (2020): corrupting the target sequences and reconstructing them using a masked language model, BERT.",
"7. Prior-Aug: A variant of SWEP trained with additional perturbed data, where the noise is drawn from the prior distribution $p(z)$ rather than from $q_\phi(z \mid x)$.",
"8.", "SWEP: Our full model, which maximizes the objective function in Eq.", "(4).",
"We compare SWEP and its variant Prior-Aug with the baselines described in Section 4.1.",
"As shown in Table 1, our model outperforms all the baselines, whose backbone networks are BERT or ELECTRA, on most of the datasets.",
"The data augmentation with SSMBA improves the performance of ELECTRA on the in-domain datasets SQuAD and Wiki.",
"However, it significantly underperforms ours on out-of-domain datasets, even though the data augmentation with SSMBA uses 4.8 times more data than ours.",
"Similarly, Table 2 shows that the T5 model trained with our method consistently improves over the model trained with MLE on most of the datasets.",
"Contrary to ours, SSMBA significantly degrades the performance of the BERT and T5 models, both on in-domain and out-of-domain datasets.",
"Since masking and reconstructing some of the tokens of a sentence with a masked language model may cause a semantic drift, those transformations make some questions unanswerable.",
"As a result, the data augmentation with SSMBA often hurts the performance of the QA model.",
"Similarly, Word-Dropout randomly zeroes out the word embeddings of tokens, but some of the zeroed-out words are critical for answering questions.",
"Adv-Aug marginally improves the performance, but it requires an additional backward pass to compute the gradient for the adversarial perturbation, which slows down the training procedure.",
"We empirically show that our data augmentation SWEP is an effective regularizer in settings where there are only a few annotated training examples.",
"To simulate such a scenario, we reduce the amount of labeled SQuAD data to 80%, 50%, 30%, and 10%, and train the model with the same experimental setup as described in Section 4.2.",
"Fig. 3 shows the accuracy as a function of the percentage of QA pairs.",
"Ours consistently improves the performance of the QA model at any ratio of labeled data.",
"Even with 10% of the labeled data, it increases the EM and F1 scores by 1%.",
"We also compare SWEP with the QA model trained with additional synthetic data generated from a question-answer generation model (QG).",
"We use Info-HCVAE (Lee et al., 2020) to generate QA pairs from unlabeled paragraphs and train the BERT model with both human-annotated and synthetic QA pairs, while varying the number of generated pairs.",
"As shown in Fig. 4, SWEP trained only with SQuAD already outperforms the model trained with 240,422 synthetic QA pairs generated with Info-HCVAE.",
"Moreover, when combining the two methods, we achieve even larger performance gains than when using either SWEP or Info-HCVAE alone, as the two approaches are orthogonal.",
"We further perform an ablation study to verify the effectiveness of each component of SWEP.",
"In Table 3, we present the experimental results when removing various parts of our model.",
"First of all, we replace the element-wise multiplicative noise with element-wise additive noise and set the prior distribution to $\mathcal{N}(\mathbf{0}, \sigma^2 I_d)$.",
"We observe that the noise generator does not learn meaningful perturbations in this case, which leads to performance degradation.",
"Moreover, instead of learning $\mu_t$ or $\sigma_t$ from the data, we fix either of them and perform experiments, which we denote w/ fixed $\mu$ and w/ fixed $\sigma$.",
"(Table fragment, ELECTRA-small EM/F1: Prior-Aug: BioASQ 38.96/54.19, NYT 74.38/83.47, Amazon 59.01/73.11; SWEP: BioASQ 40.35/55.72, NYT 75.18/84.18, Amazon 60.89/74.97.)",
"For all time steps $t$, we set $\mu_t$ to $(1, \dots, 1) \in \mathbb{R}^d$ for w/ fixed $\mu$.",
"For w/ fixed $\sigma$, we set $\sigma_t^2$ to $(1, \dots, 1) \in \mathbb{R}^d$, i.e., we use the identity matrix $I_d$ as the covariance of $q_\phi(z \mid x)$.",
"As shown in Table 3, fixing $\mu_t$ or $\sigma_t^2$ to predefined values achieves slightly better performance than Prior-Aug, but it degrades the performance relative to the full model.",
"Based on these experimental results, we verify that learning $\mu_t$ or $\sigma_t^2$ for each word embedding $e_t$ is crucial to the success of the perturbation function, as it can delicately perturb each word with more flexibility.",
"Furthermore, we convert the stochastic perturbation into a deterministic one, which we denote as w/o $\epsilon \sim \mathcal{N}(\mathbf{0}, I_d)$.",
"To be specific, the MLP$(h_t)$ in Eq.",
"(3) outputs $\mu_t$ alone, and we multiply it with $e_t$ without any sampling, i.e., $\tilde{e}_t = e_t \odot \mu_t$.",
"As shown in Table 3, the deterministic perturbation largely underperforms the full model.",
"In terms of the objective function, we observe that removing $\mathcal{L}_{MLE}(\theta)$ results in larger performance drops, suggesting that using both augmented and original instances in a single batch is crucial for the performance improvement.",
"In addition, the experiment without $D_{KL}$ shows the importance of imposing a constraint on the distribution of the perturbation with the KL term.",
"We quantitatively analyze the intensity of the perturbations applied to the input during training.",
"Figure 5: Visualization of the perturbation.",
"Figure 6: Quantitative analysis; the extent to which words are changed by the perturbation during training.",
"To quantitatively measure the semantic drift, we measure how many words are replaced with another word during training for each data augmentation method, and plot this in Fig.", "6.",
"Unlike SSMBA, which replaces a predefined percentage of words with others, the adversarial augmentation (Adv-Aug) and SWEP perturb the word embeddings in the latent space.",
"We project the perturbed embeddings back to the input space to count how many words are changed.",
"Specifically, each word $w_t \in \mathbb{R}^{|\mathcal{V}|}$ is represented as a one-hot vector and mapped to a word vector as $e_t = W_e w_t$, where $\mathcal{V}$ denotes the vocabulary of the training data and $W_e \in \mathbb{R}^{d \times |\mathcal{V}|}$ is the word embedding matrix.",
"Then, the perturbed word embedding $\tilde{e}_t$ is projected back to a one-hot vector $\hat{w}_t$ as follows: $(v_1, \dots, v_{|\mathcal{V}|})^\top = W_e^\top \tilde{e}_t$, $j = \arg\max_i v_i$, $\hat{w}_t = \text{one-hot}(j, |\mathcal{V}|)$ (6), where one-hot$(j, |\mathcal{V}|)$ produces a one-hot vector of length $|\mathcal{V}|$ whose $j$-th component is one (see the sketch below).",
"In Fig. 6, we plot the ratio of words replaced with others in the raw data, before and after each perturbation, for each batch as training progresses.",
"In Fig. 1, for example, SSMBA changes about 11 raw words, while SWEP does not change any words.",
"We observe that around 20% of the perturbed words are not projected back to their original words if we apply the adversarial augmentation.",
"Also, we see that the adversarial augmentation largely changes the semantics of the words, although the perturbation at the final layer is within the epsilon neighborhood of its latent embedding.",
"In contrast, the perturbation by SWEP rarely changes the original words, except in the very early stage of training.",
"This observation implies that SWEP learns a range of perturbations that preserves the semantics of the original input, which is important when augmenting data for QA tasks, and verifies our concept described in Fig.", "1.",
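Eq. (6) amounts to a nearest-word lookup by inner product; a sketch of the projection and of the changed-word ratio plotted in Fig. 6, with W_e of shape (d, |V|) as in the text:

# Project perturbed embeddings back to the vocabulary (Eq. 6) and count changes.
import torch

def changed_word_ratio(pert_emb, token_ids, W_e):
    # pert_emb: (T, d) perturbed embeddings; token_ids: (T,) original token ids;
    # W_e: (d, |V|) word embedding matrix.
    scores = pert_emb @ W_e              # (T, |V|): v = W_e^T e~_t for every token
    nearest = scores.argmax(dim=-1)      # j = argmax_i v_i (nearest word id)
    return (nearest != token_ids).float().mean().item()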
5, we visualize the value of the l 2 distance between the original word and one with the perturbation after the training.", "We observe that the perturbation function q learns to generate adaptive perturbations for each word (i.e. the lowest intensity of perturbation on answer-like words pro-fessor jerome green).", "However, it is still unknown why the intensity of certain word is higher than the others and how much difference affects the dynamics of training.", "We have included more observation such as embedding space visualization in Figure", "7. 6 Conclusion We proposed a simple yet effective data augmentation method based on a stochastic word embedding perturbation for out-of-distribution QA tasks.", "Specifically, our stochastic noise generator learns to generate the adaptive noise depending on the contextualized embedding of each word.", "It maximizes the likelihood of input with perturbation, such that it learns to modulate the intensity of perturbation for each word embedding without changing the semantic of the given question and paragraph.", "We augmented the training data with the perturbed samples using our method, and trained the model with only a single source dataset and evaluate it on datasets from five different domains as well as the in-domain dataset.", "Based on the experimental results, we verified that our method improves both the performance of in-domain generalization and robustness to distributional shifts, outperforming the baseline data augmentation methods.", "Further quantitative and qualitative analysis suggest that our method learns to generate adaptive perturbation without a semantic drift.", "Our data augmentation method SWEP efficiently improves the robustness of the QA model to unseen out-of-domain data with a few additional computational cost.", "This robustness is crucial to the success of the real-world QA models, since they frequently encounter questions for unseen domains, from the end-users.", "While previous works such as (Lee et al., 2019) require a set of several heterogeneous datasets to learn domain-invariant representations, such is not a sample-efficient method, while our method is simple yet effective and can improve the robustness of the QA model only when trained on a single source dataset.", "This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program(KAIST)), Samsung Electronics Co., Ltd, 42Maru, and the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF-2018R1A5A1059921)." ]
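The stochastic perturbation described above lends itself to a compact implementation. The following is a minimal sketch, assuming the noise generator is an MLP over contextualized hidden states and assuming a Gaussian prior N(1, I_d) for the multiplicative noise (the text above only states the prior N(0, I_d) used for the additive variant); all module and tensor names are illustrative, not the released implementation.

```python
# Minimal sketch of a SWEP-style stochastic word embedding perturbation.
# Hypothetical module; shapes and the N(1, I_d) prior are assumptions.
import torch
import torch.nn as nn

class NoiseGenerator(nn.Module):
    def __init__(self, hidden_dim: int, embed_dim: int):
        super().__init__()
        # One MLP reads the contextualized hidden state h_t and outputs the
        # per-dimension mean and log-variance of the noise (cf. Eq. (2)).
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2 * embed_dim),
        )

    def forward(self, h, e):
        # h: (batch, seq, hidden_dim); e: (batch, seq, embed_dim)
        mu, log_var = self.mlp(h).chunk(2, dim=-1)
        sigma = (0.5 * log_var).exp()
        eps = torch.randn_like(sigma)          # eps ~ N(0, I_d)
        e_tilde = e * (mu + sigma * eps)       # elementwise multiplicative noise
        # KL(q(z|x) || N(1, I_d)), summed over dimensions, averaged over tokens.
        kl = 0.5 * (sigma.pow(2) + (mu - 1.0).pow(2) - 1.0 - log_var).sum(-1).mean()
        return e_tilde, kl
```

The perturbed embeddings would then be fed through the QA model alongside the originals, with the MLE terms on both plus this KL term forming the objective that the ablations above remove piece by piece.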
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "objective", "objective", "method", "abstain", "abstain", "method", "method", "result", "result", "objective", "objective", "result", "result", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "objective", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "method", "abstain", "abstain", "method", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "other" ]
[ "User intent classification plays a vital role in dialogue systems.", "Since user intent may frequently change over time in many realistic scenarios, unknown (new) intent detection has become an essential problem, where the study has just begun.", "This paper proposes a semantic-enhanced Gaussian mixture model (SEG) for unknown intent detection.", "In particular, we model utterance embeddings with a Gaussian mixture distribution and inject dynamic class semantic information into Gaussian means, which enables learning more class-concentrated embeddings that help to facilitate downstream outlier detection.", "Coupled with a density-based outlier detection algorithm, SEG achieves competitive results on three real task-oriented dialogue datasets in two languages for unknown intent detection.", "On top of that, we propose to integrate SEG as an unknown intent identifier into existing generalized zero-shot intent classification models to improve their performance.", "A case study on a state-of-the-art method, ReCapsNet, shows that SEG can push the classification performance to a significantly higher level.", "Understanding user intent is crucial for developing conversational and dialogue systems.", "It is essential to accurately identify the intent behind a user utterance to better guide downstream decisions and policies.", "With the advent of conversational AI, dialogue systems are becoming central tools in many applications such as mobile apps, companion bots, virtual assistants and so on.", "Since user interests may change frequently over time, the AI agents may continuously see unknown (new) user intents.", "Manual annotation can hardly catch up with such rapid development, which motivates the problem Equal contribution.", "of unknown intent detection that has recently attracted increasing interest from both academia and industry.", "While there have been some pioneering works studying the open-world classification problem in natural language processing (Fei and Liu, 2016; Shu et al., 2017), very few methods are designed for unknown intent detection.", "To our knowledge, the first work is by Lin and Xu (2019), in which the authors use large margin cosine loss (LMCL) to learn deep discriminative features and then feed them to a density-based outlier detection algorithm to identify unknown intents.", "Although this method performs well on some benchmark datasets, it has two limitations.", "(1) In training, LMCL ignores the prior knowledge of class labels, while it has been shown that label correlations captured in the embedding space can improve prediction performance, especially in the zero-shot learning scenarios (Palatucci et al., 2009; Ma et al., 2016).", "(2) LMCL computes the cosine distance between embeddings in the feature space and trains with a softmax cross-entropy loss, making the embedding distribution of each class long and narrow (Wan et al., 2018), which may be less suitable for applying density-based anomaly detection algorithms to detect unknown intents.", "In this paper, we aim to address these limitations and propose a novel semantic-enhanced Gaussian mixture model (SEG) for unknown intent detection.", "In contrast to the softmax function, the Gaussian mixture model enforces embeddings to form ball-like dense clusters in the feature space, which may be more desirable for outlier detection, especially when using density-based outlier detection algorithms.", "Furthermore, we propose to inject the semantic information of class labels into the Gaussian mixture distribution by 
assigning the embeddings of class labels or descriptions to be the means of the Gaussians.", "This enables SEG to learn more class-concentrated embeddings that can benefit downstream outlier detection.", "We further use a large margin loss to make SEG learn more discriminative features and employ a density-based outlier detection algorithm LOF (Breunig et al., 2000) to detect unknown intents.", "Identifying unknown intents is not enough for some application scenarios where it is important to know what exactly the new intents are, e.g., zero-shot intent classification.", "Current generalized zero-shot intent classification methods (Chen et al., 2016; Kumar et al., 2017; Xia et al., 2018; Liu et al., 2019) attempt to classify test instances directly by making predictions in the pool of all the seen and unseen intents.", "However, their prediction performances are quite low, and they are still far from practical use.", "In this work, we propose to integrate SEG as an unknown intent identifier into the generalized zero-shot intent classification pipeline.", "The basic idea is that correctly identifying if the intent of an utterance is known or unknown will make the subsequent intent classification task much easier.", "We conduct a case study on a state-of-the-art zero-shot intent classification method ReCapsNet (Liu et al., 2019).", "The results show that incorporating SEG successfully improves the performance of ReCapsNet by a large margin.", "It even pushes the performance to a practical level on the SNIPS dataset (Coucke et al., 2018).", "The main contributions of this paper are summarized as follows.", "We propose a semantic-enhanced Gaussian mixture model (SEG) for unknown intent detection by incorporating class semantic information into a Gaussian mixture distribution.", "We explore to improve existing generalized zero-shot intent classification systems with an unknown intent identifier.", "To the best of our knowledge, this is the first attempt to apply unknown intent detection in this task.", "We conduct extensive experiments on three real-world datasets to validate the effectiveness of the proposed SEG model for unknown intent detection and its application in generalized zero-shot intent classification.", "The rest of the paper is organized as follows.", "In Section 2, we review related works on intent classification and open-world classification.", "In Section 3, we discuss the proposed SEG model in details.", "In Section 4, we present experimental results on unknown intent detection.", "In Section 5, we apply SEG to improve generalized zero-shot intent classification and conduct a case study.", "Finally, Section 6 concludes the paper.", "User intent classification is an important component of dialogue systems.", "Great effort has been made to understand user intent across various domains, ranging from search engine questions (Hu et al., 2009) to medical queries (Zhang et al., 2016).", "Deep learning models including convolutional neural networks (CNN) (Xu and Sarikaya, 2013) and attention-based recurrent neural networks (RNN) (Ravuri and Stolcke, 2015; Liu and Lane, 2016) are commonly used for intent classification.", "CNN based methods build sentence embeddings by aggregating embeddings of adjacent words, while RNN based methods extract sentence embeddings via encoding word embeddings sequentially.", "Both types of methods have shown promising results in practice (Yin et al., 2017).", "Traditional intent classification methods require considerable amount of labeled data for each 
class to train a discriminative classifier, while zero-shot intent classification (Sappadla et al., 2016; Zhang et al., 2019) addresses the problem that not all intent categories are seen during the training phase, which is an important task in natural language understanding as novel intents may continuously emerge in dialogue systems (Liu and Lane, 2016; Nam et al., 2016; Xu and Sarikaya, 2013).", "Zero-shot intent classification aims to generalize knowledge and concepts learned from seen intents to recognize unseen intents.", "Early methods (Ferreira et al., 2015a,b; Yazdani and Henderson, 2015) explore the relationship between seen and unseen intents by introducing external resources such as manually defined attributes or label ontologies, but they are usually expensive to obtain.", "To deal with this, some methods (Chen et al., 2016; Kumar et al., 2017) map the utterances and intent labels to an embedding space and then model their relations in the space.", "Recently, IntentCapsNet-ZS (Xia et al., 2018) extends capsule networks (Sabour et al., 2017) for zero-shot intent classification by transferring the prediction vectors from seen classes to unseen classes.", "ReCapsNet (Liu et al., 2019) shows that IntentCapsNet-ZS hardly recognizes Figure 1: Illustration of the proposed framework for unknown intent classification.", "utterances from unseen intents in the generalized zero-shot classification scenario, and proposes to solve this issue by transferring the transformation matrices from seen intents to unseen intents.", "In this paper, we use ReCapsNet as an example to show that incorporating an unknown intent identifier in the generalized zero-shot classification pipeline can significantly improve the prediction performance on unseen intents and the overall performance.", "Most of existing classification methods make the closed-world assumption, that is, no new classes can appear in testing.", "However, the real world is open and dynamic, and in many applications, the AI agent cannot expect it sees everything in training, which makes open-world learning or classification an important problem.", "There are two major approaches to tackle open-world classification.", "One is to use the classifier to output an additional confidence score to measure the probability that a test sample is seen or unseen.", "cbsSVM (Fei and Liu, 2016) proposes a center-based similarity (CBS) learning strategy and employs SVM to build 1-vs-rest CBS classifiers.", "MSP (Hendrycks and Gimpel, 2017) proposes to use the maximum softmax probability as the confidence score.", "Instead of using Softmax as the final output layer, DOC (Shu et al., 2017) builds a multi-class classifier with a 1-vs-rest final layer which contains a sigmoid function for each seen class to reduce the open space risk.", "exploiting anomaly detection methods such as robust covariance estimators (Rousseeuw and Driessen, 1999), one-class SVM (Scholkopf et al., 2001), isolation forest (Liu et al., 2008) and local outlier factor (Breunig et al., 2000).", "Robust covariance estimators assume data follows a Gaussian mixture distribution.", "Based on this, it tries to fit an elliptic envelope, and outliers can be defined as points standing far enough from the fit shape.", "One-class SVM finds a hyperplane that circles the positive samples as the decision boundary.", "Isolation forest uses a binary search tree (isolated tree) to isolate samples.", "Due to the small number of outliers and their alienation from most samples, outliers will be isolated 
earlier and be closer to the root node of the isolated tree.", "Local outlier factor (LOF) is a density-based algorithm, which compares the density of a point with that of its neighbors to determine whether it is an abnormal point.", "A lower density means the point is more likely to be identified as abnormal.", "In addition, to facilitate anomaly detection, some methods (Lin and Xu, 2019; Wan et al., 2018) use large margin loss functions to learn more discriminative feature representations.", "Consider an utterance x = {w_1, w_2, . . . , w_T} with T words, where w_t ∈ R^{d_w} is the embedding of the t-th word.", "Each word can be further encoded sequentially using a bidirectional LSTM (BiLSTM), i.e.,
→h_t = LSTM_fw(w_t, →h_{t−1}),
←h_t = LSTM_bw(w_t, ←h_{t+1}),   (1)
where →h_t, ←h_t ∈ R^{d_h} are the hidden states of the word w_t produced by the forward LSTM_fw and the backward LSTM_bw, respectively.", "The word w_t is encoded as the entire hidden state, represented by concatenating →h_t and ←h_t, i.e., h_t = [→h_t; ←h_t], and the hidden state matrix of the utterance can be represented as H = [h_1, h_2, . . . , h_T] ∈ R^{2d_h×T}.", "Furthermore, we use the self-attention mechanism to obtain the sentence embedding.", "Specifically,
a = softmax(W_{s2} tanh(W_{s1} H)),
z = W H a,   (2)
where a ∈ R^T is the self-attention weight vector, W_{s1} ∈ R^{d_a×2d_h} and W_{s2} ∈ R^{1×d_a} are trainable parameters, W ∈ R^{d_z×2d_h} is also a trainable feed-forward weight parameter, and z ∈ R^{d_z} is the final representation of the utterance x.", "The softmax cross-entropy loss is widely used in many machine learning problems.", "However, the embedding distribution of each class learned by the softmax cross-entropy loss tends to be long, narrow, and radiating from the center, with different classes distributed closely next to each other (Wan et al., 2018).", "Such an embedding distribution may not be ideal for detecting new intent classes, as there might not be much space left for new classes.", "In contrast, the Gaussian mixture loss can enforce each class to gather into a dense and small cluster, which may be more desirable for detecting new intents.", "Here, we design a semantic-enhanced large margin Gaussian mixture loss for embedding learning.", "Large-Margin Cross-Entropy Loss. Given a K-way classification task, we assume the extracted feature vector (embedding) z of the training samples follows a Gaussian mixture distribution, where μ_k and Σ_k are the mean and covariance of class k in the embedding space, respectively, and p(k) is the prior probability of class k.", "The probability density function of z is given by
p(z) = ∑_k N(z; μ_k, Σ_k) p(k),   (3)
where N(z; μ_k, Σ_k) is the Gaussian distribution.", "For the embedding z_i of any training sample x_i, the posterior probability that z_i belongs to its class y_i can be expressed as
p(y_i | z_i) = N(z_i; μ_{y_i}, Σ_{y_i}) p(y_i) / ∑_k N(z_i; μ_k, Σ_k) p(k).   (4)
", "The cross-entropy loss of z_i between the true class label y_i and the inference p(y_i | z_i) can then be computed as
L_{ce,i} = −log p(y_i | z_i),   (5)
and the total loss over N training samples is
L_ce = (1/N) ∑_{i=1}^{N} L_{ce,i}.   (6)
", "Let d_k be the (halved) Mahalanobis distance between z_i and μ_k, i.e.,
d_k = (z_i − μ_k)^⊤ Σ_k^{−1} (z_i − μ_k) / 2.   (7)
", "Then L_{ce,i} can be expressed as
L_{ce,i} = −log [ p(y_i) |Σ_{y_i}|^{−1/2} e^{−d_{y_i}} / ∑_k p(k) |Σ_k|^{−1/2} e^{−d_k} ].   (8)
", "Consider a simplified case where p(k) and Σ_k are identical for all classes.", "In this case, the model will give a correct prediction of z_i if the distance of z_i to its class mean μ_{y_i} is less than or equal to its distance to any other class mean.", "In general, a large margin loss helps to improve classification performance.", "Here, we also introduce a classification margin m ∈ [1, +∞) into the cross-entropy loss, which then becomes
L_mce = (1/N) ∑_{i=1}^{N} L_{mce,i},
L_{mce,i} = −log [ p(y_i) |Σ_{y_i}|^{−1/2} e^{−m·d_{y_i}} / ∑_k p(k) |Σ_k|^{−1/2} e^{−d_k} ].   (9)
", "With the large margin loss, z_i is correctly classified only when its distance to the class mean μ_{y_i} is significantly less than (no more than 1/m of) its distance to any other class mean.", "Semantic Enhancement via Class Description. This is one of the key features of our proposed method.", "We inject the semantic information of each class into the Gaussian mixture model by assigning the embedding learned from the text description d_k of class k to be the class centroid μ_k.", "The text description d_k can either be a single-word class name or a sentence or paragraph that describes the class.", "That is,
μ_k = feature_extract(d_k),   (10)
where feature_extract(·) indicates the feature extraction module in Section 3.1.", "Generation Loss. In addition to the cross-entropy loss, we want to maximize the observed likelihood of the embeddings under the Gaussian mixture distribution.", "Specifically, we minimize the following negative log-likelihood,
L_g = −∑_{i=1}^{N} log N(z_i; μ_{y_i}, Σ_{y_i}) = (1/2) ∑_{i=1}^{N} (z_i − μ_{y_i})^⊤ Σ_{y_i}^{−1} (z_i − μ_{y_i}) + const,   (11)
where const denotes a constant.", "As shown in Eq. (11), the generation loss L_g encourages the embedding z_i to be close to its class centroid μ_{y_i}, which facilitates learning a more class-concentrated embedding distribution that may benefit the downstream outlier detection task.", "The overall loss is then
L = L_mce + λ L_g,   (12)
where λ is a trade-off parameter (a minimal code sketch of this loss is given at the end of this section).", "By the above feature learning procedure, each utterance x can be encoded as an embedding z.", "Then, the embedding z is fed to a well-known outlier detection algorithm, LOF (Breunig et al., 2000), to detect new or unknown intents (outliers).", "LOF is an unsupervised density-based anomaly detection method based on the following intuition.", "By comparing the local density of an object to those of its neighbors, it can identify regions of similar density.", "Objects with a substantially lower density than their neighbors' are considered to be outliers.", "The LOF score of an embedding z is defined as
LOF_k(z) = [ ∑_{o ∈ N_k(z)} lrd_k(o) / lrd_k(z) ] / |N_k(z)|,   (13)
where N_k(z) denotes the set of k-nearest neighbors of z, and lrd denotes the local reachability density, which measures the local density around an object.", "The local reachability density is defined as the inverse of the average reachability distance between z and its neighbors, i.e.,
lrd_k(z) = |N_k(z)| / ∑_{o ∈ N_k(z)} reach-dist_k(z, o).   (14)
", "Here, the reachability distance reach-dist_k(z, o) is defined as
reach-dist_k(z, o) = max{k-dist(o), d(z, o)},   (15)
where k-dist(o) denotes the distance of the object o to its k-th nearest neighbor, and d(z, o) is the distance between z and o.", "If the LOF score of an utterance is much larger than 1, it has a substantially lower local density than its neighbors', which means the utterance embedding is relatively distant from its neighbors.", "Hence, it can be inferred that the utterance is likely to belong to an unknown intent class (a usage sketch of the detector is also given after this section).", "Figure 1 illustrates the overall training and testing procedures of the proposed framework for unknown intent detection.", "The backbone network is a self-attention BiLSTM encoder.",
"In the training phase, the encoder is trained by minimizing the semantic-enhanced large margin Gaussian mixture loss (SEG classifier), as in Eq. (12), on the training samples (seen intent class instances).", "In the testing phase, user utterances may come from both seen and unseen intent classes.", "Given an utterance, we first obtain its feature representation z with the trained encoder; then we use LOF to decide whether z is an outlier or not.", "If z is an outlier, we take it as an instance of some new intent class.", "Otherwise, we classify z into one of the seen intent classes using the SEG classifier.", "In this section, we present experimental results on unknown intent detection.", "Formally, we train an unknown intent detection system with training data D_tr = (X_tr, Y_tr), where Y_tr ⊆ {l_1, . . . , l_K} = C_seen (the set of seen intent classes).", "For test utterances of seen intents, the unknown intent detection system aims to assign correct intent labels to them.", "For test utterances of unseen intents, the system is expected to identify them as outliers.", "We evaluate our method SEG for unknown intent detection on three real task-oriented dialogue datasets: SNIPS (Coucke et al., 2018), ATIS (Hemphill et al., 1990), and SMP-2018 (Zhang et al., 2017).", "SNIPS is an open-source single-turn English corpus, which contains 7 types of user intents across different domains.", "ATIS is also an English dataset, which contains 18 types of user intents in the airline travel domain.", "SMP-2018 is a Chinese dialogue corpus for user intent recognition, which contains 30 different types of user intents.", "The statistics of the datasets are summarized in Table 1.", "We compare SEG with the following unknown intent detection methods.", "Maximum Softmax Probability (MSP) (Hendrycks and Gimpel, 2017) considers the maximum softmax probability of a sample as the confidence score measuring the probability that it belongs to a seen intent.", "The smaller the confidence score is, the more likely the sample belongs to an unknown intent.", "DOC (Shu et al., 2017) builds m 1-vs-rest sigmoid classifiers for the m seen classes, respectively.", "The maximum probability is considered as the confidence of whether the sample belongs to a seen intent.", "Softmax can be considered as an ablation of our method SEG, which uses softmax instead of the Gaussian mixture distribution to learn discriminative features.", "LMCL (Lin and Xu, 2019) uses a large margin cosine loss instead of the Gaussian mixture distribution to learn discriminative embeddings.", "We follow the setting in LMCL (Lin and Xu, 2019) for unknown intent detection.", "Considering that some datasets may be unbalanced, we randomly select seen intents by weighted random sampling over the entire intent set.", "The rest of the intents are regarded as unknown.", "We randomly select 30% of the samples of each intent to form the test set.", "The rest of each seen intent is added to the training set.

Table 2: Macro F1-score of unknown intent detection with different proportions of seen classes.
          SNIPS                     ATIS                      SMP-2018
Method    25%    50%    75%        25%    50%    75%        25%    50%    75%
MSP       0.5543 0.8060 0.8585     0.6848 0.5158 0.3853     0.6132 0.7089 0.7716
DOC       0.5462 0.7962 0.8564     0.7007 0.5073 0.3659     0.6095 0.7197 0.7642
Softmax   0.5508 0.8036 0.8393     0.6597 0.6310 0.5732     0.5818 0.6860 0.7351
LMCL      0.5489 0.8041 0.8458     0.6763 0.6778 0.6110     0.6059 0.7094 0.7580
SEG/o     0.5440 0.8067 0.8474     0.6768 0.6699 0.5918     0.6734 0.7676 0.8128
SEG       0.5599 0.8193 0.8612     0.6410 0.6700 0.6466     0.6966 0.7895 0.8205

", "We also follow LMCL to
use macro f1-score as the evaluation metric, which makes sense because the ATIS dataset is extremely unbalanced.", "For SNIPS, ATIS and SMP-2018, we use 300-dim embeddings pre-trained on Fasttext, Glove, and Chinese-Word-Vectors respectively.", "For BiL-STM, we set the number of layers as 2 and the output dimension as 128.", "In the self-attention layer, we set the attention dimension d a =10.", "After the self-attention layer, we project the feature vector to a d z -dimension vector via a linear layer.", "We set d z =12 for SNIPS and SMP-2018, and d z =4 for ATIS.", "We report the average results over 10 runs.", "For the loss function, we set the margin m = 1 and the trade-off parameter = 0 .", "5 .", "For MSP, we set the threshold as 0.5 following Lin and Xu (2019).", "For DOC, we set the threshold as 0.5 as used in the original paper.", "During training of MSP and DOC, we clip the gradient norm to avoid gradient exploding.", "For LMCL, we follow the original paper to set the scaling factor s = 30 and the cosine margin m = 0 .", "35 .", "Softmax, LMCL, SEG/o and SEG all use LOF as the outlier detector, and we use the same set of parameters for LOF.", "From Table 2, it can be seen that our method SEG outperforms the baselines in most cases.", "Especially, on the most challenging dataset SMP-2018, SEG and SEG/o outperfom others by a large margin, demonstrating its high effectiveness.", "Moreover, we can make the following observations: (1) SEG consistently outperforms SEG/o in most cases, which proves the effectiveness of the proposed semantic enhancement mechanism.", "(2) SEG/o generally has higher scores than Softmax and LMCL, especially on the more complex dataset SMP-2018, where significant gaps can be observed.", "The results indicate the advantage of Gaussian mixture model over Softmax and the variant LMCL for learning class-concentrated embeddings, which are more suitable to be coupled with the outlier detector LOF.", "(3) All the methods work well on SNIPS, which is a simple dataset.", "MSP and DOC outperform other methods on ATIS with only 25% seen classes.", "However, as the proportion of seen class increases, we can see a significant decline in their performance.", "This is because ATIS is severely imbal-anced where one intent accounts for 96% of the entire data.", "When there are many seen classes, DOC and MSP cannot learn an effective supervised classifier due to the dominance of one class.", "In this section, we apply our method SEG to an extended application of unknown intent classification zero-shot intent classification.", "It aims to discriminate unseen intents, which is beyond only detecting their existence.", "Specifically, given the training data D tr = ( X tr , Y tr ) where Y tr C seen , a zero-shot classification system is trained to predict the label y te of any test sample which may belong to an unseen class, using the knowledge transferred from the seen data.", "There are two common settings for zero-shot learning, generalized zero-shot classification, where y te { C seen , C unseen } , and standard zero-shot classification, where y te C unseen .", "Here, C unseen is the set of unseen intent classes.", "Previous attempts try to tackle the challenge of Figure 2: A typical generalized zero-shot intent classification pipeline.", "zero-shot intent classification from three directions.", "(1) What prior knowledge is more supportive, such as morphology (character-level embedding), class descriptions, and knowledge-based entity attributes (Ferreira et al., 2015a,b; Chen 
et al., 2016; Kumar et al., 2017).", "(2) How to better utilize these prior knowledge to extract more informative semantic representations, such as data augmentation and hierarchical representations learned by capsule networks (Xia et al., 2018).", "(3) With the extracted semantic features, how to design a better zero-shot learning strategy, such as reconstructing weight matrix for unseen intents through relation learning (Liu et al., 2019).", "In this work, we improve generalized zero-shot intent classification by integrating the proposed SEG model as a binary unknown intent identifier into the original pipeline.", "We explore multiple ways of integration and conduct a case study based on a state-of-the-art method ReCapsNet (Liu et al., 2019).", "As shown in Figure 2, a typical generalized zero-shot classification framework can be abstracted into two layers, the encoder layer and the zero-shot classifier layer.", "In the encoder layer, a user utterance x in the text format needs to be first mapped to the semantic representation z ZSx .", "In addition, it is common to encode class information as S for better semantic learning or knowledge transfer.", "In order to learn better semantic representation, prior knowledge is usually incorporated at this stage.", "Then, the learned representation will be fed to the zero-shot classifier layer.", "Various zero-shot classification strategies have been proposed to transfer knowledge to new categories.", "Finally, the system outputs the prediction y te { C seen , C unseen } for the utterance x .", "We integrate SEG into the pipeline between the encoder layer and the classifier layer as shown in Figure", "3. With the semantic feature z x , we predict if the utterance x is an outlier via: p ( g | z x ) , g { seen , unseen } .", "For the case g = seen, the intent of the utterance is considered to be a seen one.", "We then predict the intent by p ( y | z x , y C seen , X tr , ) where denotes the parameters of the original framework.", "Otherwise, the intent of the utterance is considered to be unseen, and we predict it via p ( y | z x , y C unseen , X tr , ) .", "Feature Assemble We adopt two ways Sepa-rate and Combine to assemble features for the following outlier detection task.", "Combine.", "To take advantage of the original model, we first obtain the original semantic feature representation z ZSx and define a transform function f .", "Then, f ( z ZSx ) is concatenated with the pre-trained features by SEG, z SEGx , to make a combined feature representation: z x = [ z SEGx || f ( z ZSx )] .", "ReCapsNet Recently, ReCapsNet-ZS (Liu et al., 2019) demonstrates state-of-the-art performance in generalized zero-shot intent classification.", "In this section, we conduct a case study on integrating the new intent identifier into ReCapsNet.", "The framework of ReCapsNet is illustrated in Figure", "4. 
In the encoder layer, each utterance x is encoded with R semantic capsules [ m 1 , m 2 , ..., m R ] as the representations in R different semantic spaces.", "In addition, the training set D tr and class labels L are encoded as S tr and Figure 4: The framework of ReCapsNet.", "SC , respectively.", "In the zero-shot classifier layer, z ZS x is fed to a capsule network to make prediction.", "Each seen class k has R transformation matrices { W kr } Rr =1 .", "In the testing phase, ReCapsNet reconstructs the r -th transformation matrix for each unseen class l as W lr = (cid:80) k q lk W kr , where q lk is the relation between unseen class l and seen class k learned from ( S tr , Y tr ) and SC by metric learning.", "For the variant Combine, to exploit the property that each utterance is variously represented in different semantic spaces as discussed in Liu et al. (2019), we define the semantic feature representation of ReCapsNet as f ( z ZSx ) = [ (cid:107) m 1 (cid:107) 2 , (cid:107) m 2 (cid:107) 2 , , (cid:107) m R (cid:107) 2 ] .", "Experimental Setup We integrate SEG into the ReCapsNet pipeline with both Sep and Combine variants and test the performance of generalized zero-shot classification.", "Following the settings of generalized zero-shot classification in Liu et al. (2019), we test our methods on two datasets SNIPS (Coucke et al., 2018) and SMP-2018 (Zhang et al., 2017) and report the micro-averaged recall (accuracy) and F1 scores.", "The baselines include DeVISE (Frome et al., 2013), CMT (Socher et al., 2013), CDSSM (Chen et al., 2016), Zero-shot DNN (Kumar et al., 2017), Intent-Method SNIPS SMP-2018 Seen Unseen Overall Seen Unseen Overall Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 DeViSE 0.9481 0.6536 0.0211 0.0398 0.4215 0.3049 0.8040 0.6740 0.0270 0.0310 0.5030 0.4250 CMT 0.9755 0.6648 0.0397 0.0704 0.4438 0.3271 0.8314 0.7221 0.0798 0.1069 0.5398 0.4834 CDSSM 0.9549 0.7033 0.0111 0.0218 0.4234 0.3194 0.6653 0.5540 0.1436 0.1200 0.4864 0.4052 Zero-shot DNN 0.9432 0.6679 0.0682 0.1041 0.4488 0.3493 0.7323 0.6116 0.0590 0.0869 0.5013 0.4316 IntentCapsNet 0.9741 0.6517 0.0000 0.0000 0.4200 0.2810 0.8850 0.7281 0.0000 0.0000 0.5375 0.4423 ReCapsNet 0.9511 0.6777 0.0994 0.1594 0.4705 0.3826 0.8107 0.7417 0.1959 0.1727 0.5692 0.5182 SEG (Sep / o) 0.9308 0.7501 0.3523 0.4514 0.6014 0.5800 0.7066 0.7391 0.3848 0.3038 0.5802 0.5681 SEG (Combine / o) 0.9217 0.7924 0.4642 0.5321 0.6612 0.6441 0.7054 0.7326 0.3888 0.3116 0.5811 0.5672 SEG (Sep / w) 0.7898 0.8335 0.6728 0.6420 0.7232 0.7245 0.6624 0.7243 0.4779 0.3627 0.5899 0.5823 SEG (Combine / w) 0.8644 0.8658 0.6961 0.6931 0.7685 0.7674 0.6821 0.7359 0.4848 0.3806 0.6046 0.5963 Table 3: Results of generalized zero-shot intent classification equipped with our new intent identifier SEG.", "CapsNet (Xia et al., 2018), and ReCapsNet (Liu et al., 2019).", "The average results over 10 runs of our methods and ReCapsNet are reported in Table 3, where the results of other baselines are taken from Liu et al. 
(2019).", "We use the same setting and hyper-parameters as in ReCapsNet (Liu et al., 2019).", "We set d z =4 for SNIPS and d z =12 for SMP-2018.", "The rest of the parameters of SEG are the same as those used in Section 4.2.", "In addition, we also conduct an ablation study to demonstrate the effectiveness of the proposed semantic enhancement mechanism by testing two variants of our integration (Sep / o and Combine / o) without using it.", "(1) All variants of our integration achieve a significant boost in the overall accuracy and F1 scores on the two datasets, especially on SNIPS, where the performance increase is huge.", "Each variant leads to a qualitative leap in the performance on unseen intents.", "The prediction accuracy (micro-averaged recall) on seen intents may be reduced compared to ReCapsNet and other baselines, since some utterances of seen intents are classified to unseen intents.", "However, the F1 score on seen intents increases significantly, indicating that it has much higher precision than that of the baselines.", "(2) The variants of our integration with semantic enhancement significantly outperform those without using it on predicting unseen intents by very large margins.", "Although their accuracy scores on seen intents are lower, their overall accuracy and F1 scores are consistently better, which confirms the effectiveness of semantic enhancement.", "(3) It can be seen that the Combine variants generally perform much better than the Sep variants, especially the one with semantic enhancement (Combine / w), which performs outstandingly.", "It surpasses the performance of Sep / w in every metric, demonstrating the usefulness of the simple feature assemble strategy of concatenating the feature representations of ReCapsNet and SEG.", "In this paper, we have proposed SEG, a semantic-enhanced Gaussian mixture model coupled with a LOF outlier detector, for unknown (new) intent detection.", "We empirically verified the effectiveness of SEG for unknown intent detection on real dialogue datasets in English and Chinese.", "Furthermore, we successfully applied SEG to improve generalized zero-shot intent classification and achieved remarkable performance gain over a most recent competitive method ReCapsNet.", "In future work, we plan to conduct more empirical studies on SEG and further improve its performance on new intent identification.", "We also plan to conduct more case studies in applying SEG to boost the performance of current zero-shot intent classification methods.", "We would like to thank the anonymous reviewers for their helpful comments.", "This research was supported by the grant HK ITF UIM/377." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "objective", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "objective", "method", "other", "other" ]
[ "Opinion target extraction and opinion term extraction are two fundamental tasks in Aspect Based Sentiment Analysis (ABSA).", "Many recent works on ABSA focus on Target-oriented Opinion Words (or Terms) Extraction (TOWE), which aims at extracting the corresponding opinion words for a given opinion target.", "TOWE can be further applied to Aspect-Opinion Pair Extraction (AOPE) which aims at extracting aspects (i.e., opinion targets) and opinion terms in pairs.", "In this paper, we propose T argetS pecified sequence labeling with M ulti-head S elfA ttention (TSMSA) for TOWE, in which any pre-trained language model with multi-head self-attention can be integrated conveniently.", "As a case study, we also develop a M ultiT ask structure named MT-TSMSA for AOPE by combining our TSMSA with an aspect and opinion term extraction module.", "Experimental results indicate that TSMSA outperforms the benchmark methods on TOWE significantly; meanwhile, the performance of MT-TSMSA is similar or even better than state-of-the-art AOPE baseline models.", "Aspect-Based Sentiment Analysis (ABSA) (Pon-tiki et al., 2014) has attracted much attention of researchers in recent years.", "In ABSA, aspect (or called opinion target) extraction and opinion term extraction are two fundamental tasks.", "Aspect is the word or phrase in the reviews referring to the object towards which users show attitudes, while opinion terms are those words or phrases representing users' attitudes (Wu et al., 2020).", "For example, in the sentence The dim sum is delicious. , the phrase dim sum is an aspect and the word delicious is an opinion term.", "See the upper part of Table 1 for more examples.", "Plenty of works based on neural networks have been done in both aspect The corresponding author.", "Reviews: Soooo great! The food is delicious and inexpensive, and the environment is in a nice. The only problem is that the soup and dessert are ordinary.\"", "Aspect-Opinion Pairs: food : [delicious, inexpensive] ( one-to-many ) environment : [nice] ( one-to-one ) soup, dessert : [ordinary] ( many-to-one ) Table 1: The upper part is a restaurant review and the lower part shows the corresponding aspect-opinion pairs. Extracted aspects and opinion terms are marked in red and blue, respectively. and opinion term extraction (Liu et al., 2015; Poria et al., 2016; Xu et al., 2018); moreover, some studies combine these two tasks into a multi-task structure to extract aspects and opinion terms simultaneously (Wang et al., 2016, 2017; Li and Lam, 2017; Dai and Song, 2019). However, one critical deficiency in the researches mentioned above is that they ignore the relation of aspects and opinion terms, which leads to the birth of Target-oriented Opinion Words (or Terms) Extraction (TOWE) (Fan et al., 2019) for extracting the corresponding opinion terms of a given opinion target. Subsequently, Aspect-Opinion Pair Extraction (AOPE) (Chen et al., 2020) and Pair-wise Aspect and Opinion Terms Extraction (PAOTE) (Zhao et al., 2020) have emerged, which both aim at extracting aspects and opinion terms in pairs. AOPE and PAOTE are exactly the same task, only named differently. In the following, we use AOPE to denote this task for simplicity. It can be considered that AOPE contains aspect and opinion word extraction and TOWE. Since aspect extraction has been fully studied and satisfactory results have been obtained, TOWE, which aims at mining the relation between aspects and opinion terms, is the key to the AOPE task. 
As shown in the lower part of Table 1, the relational structure of the aspect-opinion pairs within a sentence can be complicated, including one-to-one, one-to-many, and many-to-one. The challenge of TOWE is the learning of representations of the given opinion target accurately and a few works focus on this task. For instance, Fan et al. (2019) propose an Inward-Outward LSTM to pass target information to the left context and the right context of the target respectively, and then they combine the left, right, and global context to encode the sentence. Recently, SDRN (Chen et al., 2020) and SpanMlt (Zhao et al., 2020) both adopt a pre-trained language model to learn contextual representations for AOPE. In SDRN, a double-channel recurrent network and a synchronization unit are applied to extract aspects, opinion terms and their relevancy. In SpanMlt, the terms are extracted under annotated span boundaries with contextual representations, and then the relations between every two span combinations are iden-tified. However, apart from hyper-parameters in the pre-trained language model, these two methods introduce many other hyper-parameters (e.g., the hidden size, thresholds and recurrent steps in SDRN, and the span length, top k spans and the balanced factor of different tasks in SpanMlt). Some of these hyper-parameters have a significant impact on the model performance. Motivated by the previous work and to address the challenges mentioned above, we propose a T argetS pecified sequence labeling method based on M ulti-head S elfA ttention (Vaswani et al., 2017) (TSMSA). The sentence is first processed in the format [SEP] Aspect [SEP] (e.g., The [SEP] food [SEP] is delicious. ), which is inspired by Soares et al. (2019) who utilized a special symbol [SEP] to label all entities and output their corresponding representations.", "Then we develop a sequence labeling model based on multi-head self-attention to identify the corresponding opinion terms.", "By using the special symbol and self-attention mechanism, TSMSA is capable of capturing the information of the specific aspect.", "To improve the performance of our model, we apply pre-trained language models like BERT (Devlin et al., 2019) which contain a multi-head self-attention module as the encoder.", "As a case study, we integrate aspect and opinion term extraction, and TOWE into a M ultiT ask architecture named MT-TSMSA to validate the effectiveness of our method on the AOPE task.", "In addition, apart from hyper-parameters in the pre-trained language model, we only need to adjust the balanced factor of different tasks in MT-TSMSA.", "In summary, our main contributions are as follows: We propose a target-specified sequence labeling method with multi-head self-attention mechanism to perform TOWE, which generates target-specific context representations for different targets in the same review with the special symbol and multi-head self-attention.", "Pre-trained language models can be conveniently applied to improve the performance.", "For our TSMSA and MT-TSMSA, only a small amount of hyper-parameters need to be adjusted when using pre-trained language models.", "Compared to the existing models for TOWE and AOPE, we alleviate the tradeoff issue between a model's complexity and performance.", "Extensive experiments validate that our TSMSA can achieve the best performance on TOWE, and MT-TSMSA performs quite competitive on AOPE.", "The rest of this paper is organized as follows.", "Section 2 introduces the existing studies on TOWE and AOPE, 
respectively.", "Section 3 details the proposed TSMSA and MT-TSMSA.", "Section 4 presents our experimental results and discussions.", "Finally, we draw conclusions in Section 5.", "Plenty of works have been carried out for aspect extraction and opinion term extraction.", "Early researches can be divided into unsupervised/semi-supervised methods (Hu and Liu, 2004; Zhuang et al., 2006; Qiu et al., 2011) and supervised methods (Jakob and Gurevych, 2010; Shu et al., 2017).", "With the development of neural networks, deep learning methods (Liu et al., 2015; Yin et al., 2016; Poria et al., 2016; Xu et al., 2018) have made impressive progress in recent years.", "Several works integrate aspect extraction and opinion term extraction into a co-extraction process.", "Qiu et al. (2011) expand the list of aspects and opinion terms in a bootstrapping method by double propagation.", "Some other works adopt the co-extraction structure in neural networks with multi-task learning (Wang et al., 2016, 2017; Li and Lam, 2017).", "works focus on this field.", "Rule-based methods (Hu and Liu, 2004; Zhuang et al., 2006) are proposed to select corresponding opinion terms with distance rule and syntactic rule templates based on dependency parsing trees.", "However, the performance of these methods heavily relies on expert knowledge and these rules usually cover only a small amount of cases.", "Fan et al. (2019) carry out TOWE by extracting the corresponding opinion terms for a given aspect, and then utilize Inward-Outward LSTM to generate implicit representations of aspects.", "Nevertheless, this approach is not capable of applying powerful pre-trained language models like BERT as the encoder to perform better.", "Our model aims to extract corresponding opinion terms of the given aspect with explicit representations, in addition to boost performance by employing BERT as the encoder.", "Aspect-Opinion Pair Extraction (AOPE) (Chen et al., 2020) and Pair-wise Aspect and Opinion Terms Extraction (PAOTE) (Zhao et al., 2020) both aim at extracting aspects and opinion terms in pairs.", "AOPE and PAOTE are essentially the same task with different names, and they can be split into aspect extraction and TOWE.", "Chen et al. (2020) propose a Synchronous Double-channel Recurrent Network (SDRN) which consists of an opinion entity extraction unit, a relation detection unit, and a synchronization unit for pair extraction.", "Zhao et al. 
(2020) develop a span-based multi-task learning framework (SpanMlt) where the terms are extracted under annotated span boundaries, so as to identify the relations between every two span combinations.", "However, SDRN contains a lot of hyper-parameters, and SpanMlt generates a great many candidate spans if the maximal span length is large or the sentence is too long.", "The advantage of our methods is that only a small amount of hyper-parameter adjustment is required while similar or even better performance can be achieved.", "Given a sentence s = {w_1, w_2, . . . , w_n} consisting of n words, an aspect (opinion target) a = {w_i, w_{i+1}, . . . , w_{i+k}}, and an opinion term o = {w_j, w_{j+1}, . . . , w_{j+m}} (a and o are substrings of s), the probabilities of target-oriented opinion terms are defined as p(o | s, a) in the TOWE task, and the probabilities of aspect-opinion pairs are defined as p(⟨a, o⟩ | s) = p(a | s) p(o | s, a) in the AOPE task.", "The BIO tagging scheme (Ramshaw and Marcus, 1995) and a special symbol [SEP] are applied to this task, where each word w_i in the sentence s is tagged as y_i ∈ {B, I, O, [SEP]} (B: Beginning, I: Inside, O: Others, [SEP]: the tag of an aspect).", "The structures of our Target-Specified sequence labeling method based on Multi-head Self-Attention (TSMSA) and the Multi-Task version (MT-TSMSA) are shown in Figure 1 (c) and (d).", "As aforementioned, we first use the special symbol [SEP] to label each aspect.", "Next, the multi-head self-attention method is applied to capture the context representations of the specific aspect explicitly; then they are passed to a projection layer and a Conditional Random Field (CRF) (Lafferty et al., 2001) layer for sequence labeling.", "Furthermore, the aspect and opinion word extraction (task 0) as well as the target-oriented opinion word extraction (task 1) are combined for multi-task learning.", "These two tasks share the parameters of the encoder but differ in the projection and CRF layers.", "We describe the multi-head self-attention approach according to Vaswani et al. (2017), with the details shown in Figure 1 (a) and (b).", "For each attention head in the above approach, we first compute the scaled dot-product attention.", "Particularly, the input consists of a set of queries, keys, and values, where d_k stands for the dimension of queries and keys, and d_v represents the dimension of values.", "They are packed together into matrices Q, K, and V, respectively.", "The scaled dot-product attention is calculated as follows:
Attention(Q, K, V) = softmax(Q K^⊤ / √d_k) V.   (1)
", "Next, given the number of attention heads h, we get the output dimension d_model = h · d_v.", "Finally, the multi-head attention is described as follows:
MH(I, h) = Concat(head_1, . . . , head_h) W^O,   (2)
head_i = Attention(I W_i^Q, I W_i^K, I W_i^V),   (3)
where I = {i_1, i_2, . . . , i_n} (the dimension of each i_j is d_model) indicates the input and n is the sequence length.", "The parameter matrices of the projections are W_i^Q ∈ R^{d_model×d_k}, W_i^K ∈ R^{d_model×d_k}, W_i^V ∈ R^{d_model×d_v}, and W^O ∈ R^{d_model×d_model}.

[Figure 1 (a)/(b): scaled dot-product attention (MatMul, Scale, optional Mask, Softmax, MatMul) and multi-head attention.]
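Eqs. (1)-(3) can be implemented compactly; the following is a minimal PyTorch sketch with d_v = d_k = d_model/h, using fused projection matrices for the h heads (illustrative, not the authors' code):

```python
# Minimal sketch of scaled dot-product and multi-head self-attention,
# Eqs. (1)-(3). Names are illustrative.
import math
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, d_model: int, h: int):
        super().__init__()
        assert d_model % h == 0
        self.h, self.d_k = h, d_model // h      # d_model = h * d_v, with d_v = d_k
        self.W_q = nn.Linear(d_model, d_model)  # stacks the h matrices W_i^Q
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)  # the output projection W^O

    def forward(self, I):                       # I: (batch, n, d_model)
        b, n, _ = I.shape
        split = lambda x: x.view(b, n, self.h, self.d_k).transpose(1, 2)
        Q, K, V = split(self.W_q(I)), split(self.W_k(I)), split(self.W_v(I))
        scores = Q @ K.transpose(-2, -1) / math.sqrt(self.d_k)   # QK^T / sqrt(d_k)
        heads = scores.softmax(dim=-1) @ V                       # Eq. (1) per head
        out = heads.transpose(1, 2).reshape(b, n, self.h * self.d_k)
        return self.W_o(out)                                     # Eq. (2): concat + W^O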
"To start with, the input vector of each word is generated by utilizing a word embedding lookup table L_w ∈ R^{r×d_w} and a positional embedding lookup table L_p ∈ R^{n×d_p}, where d_w is the dimension of word embeddings, r is the vocabulary size, and d_p is the dimension of positional embeddings.", "These embedding lookup tables map s = {w_1, . . . , w_n} to {e_w^1, . . . , e_w^n} and {e_p^1, . . . , e_p^n}, respectively.", "For our base models (not using a pre-trained language model), e_w^i is projected to a low-dimensional vector e_low^i, calculated as e_low^i = σ(W_e e_w^i), where W_e ∈ R^{d_low×d_w} (d_low < d_w) denotes the projection matrix and σ(·) is the activation function.", "In this case, t_i in the input T = {t_1, . . . , t_n} is represented by [e_low^i; e_p^i] and d_model = d_low + d_p.", "For a pre-trained language model like BERT (Devlin et al., 2019), t_i equals the sum of e_w^i, e_p^i, and e_s^i, where e_s = {e_s^1, . . . , e_s^n} (the dimension of e_s^i is d_p) represents the segment embeddings, and d_model = d_p = d_w.", "Then, the input vector T is passed to the multi-head self-attention modules, where a feed-forward network and an add-norm network are combined in sequence to generate the context representation of each layer H = {H_1, . . . , H_l}, where l is the number of multi-head attention layers and H_i = {H_i^1, . . . , H_i^n}.", "H_i can be calculated as follows:
O_i = MH(H_{i−1}, h),   (4)
FFN_i = max(0, O_i W_1^i + b_1^i) W_2^i + b_2^i,   (5)
H_i = LN(H_{i−1} + FFN_i),   (6)
where h is the number of attention heads, H_0 = T, and the matrices W_1^i ∈ R^{d_model×d_ff} and W_2^i ∈ R^{d_ff×d_model} represent mappings from d_model to d_ff and back to d_model.", "LN(·) is a layer normalization method applied to sequential data (Ba et al., 2016).", "Finally, the output of the encoder is H_l, i.e., the last layer of H.", "Given a sequential representation H_l and a sequential label Y = {y_1, . . . , y_n} (y_i ∈ {B, I, O, [SEP]}, or y_i ∈ {B-ASP, I-ASP, B-OP, I-OP, O}, where B-ASP/I-ASP denote the beginning/inside of an aspect, B-OP/I-OP the beginning/inside of an opinion term, and O others), we can use H_l to compute p(Y | H_l).", "Greedy decoding or CRF can be adopted in the decoding process.", "CRF is chosen as our decoding strategy because it can capture the correlations between tokens and labels and the correlations between adjacent labels simultaneously.", "Given a new sentence, we use the Viterbi algorithm (Viterbi, 1967) to predict the label sequence by maximizing the conditional probability p(Y | H_l) in the decoding process.", "The single-task version of our approaches is TSMSA.",
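The Viterbi decoding mentioned above maximizes the CRF score over all label sequences with a single dynamic-programming pass. A minimal sketch, assuming per-position emission scores and a label-transition matrix like the P and Q defined in Eqs. (7)-(8) just below:

```python
# Sketch of Viterbi decoding over a linear-chain CRF score: emissions
# P (n, k) and transitions Q (k, k); returns the highest-scoring label path.
import torch

def viterbi_decode(P: torch.Tensor, Q: torch.Tensor):
    # P[i, c]: score of label c at position i; Q[p, c]: transition p -> c.
    n, k = P.shape
    score = P[0].clone()                    # best path score ending in each label
    back = []
    for i in range(1, n):
        total = score.unsqueeze(1) + Q + P[i].unsqueeze(0)  # (prev, cur)
        score, idx = total.max(dim=0)       # best previous label for each current
        back.append(idx)
    path = [int(score.argmax())]
    for idx in reversed(back):              # follow back-pointers
        path.append(int(idx[path[-1]]))
    return path[::-1]                       # label indices, start to end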
dimension of the label space.", "Then, the linear-chain CRF is exploited to calculate the conditional probability of the predicted sequence Y as follows: p ( Y | H l ) = exp( S ( H l , Y )) (cid:80) Y Y all exp( S ( H l , Y )) , (9) where Y all denotes the set of all possible sequential labels.", "So the loss of a sentence can be calculated by the negative log likelihood as follows: L ( s ) = log p ( Y | H l ) .", "By integrating aspect and opinion term extraction (task 0) and TOWE (task 1) into a multi-task architecture, we propose a MT-TSMSA method for AOPE.", "MT-TSMSA can be defined as using a sentence H l and a task id { 0 , 1 } to calculate the conditional probability p ( Y | H l , id ) .", "When the task id equals 0, it means aspect and opinion term extraction.", "For TOWE, the task id is 1. Some examples are shown in Figure 1", "(d).", "Aiming at handling different tasks, different score functions S 0 ( H l , Y 0 ) and S 1 ( H l , Y 1 ) are defined, where S 0 ( ) and S 1 ( ) have different parameter matrices, Y 0 ( Y 0 i {B-ASP, I-ASP, B-OP, I-OP O}) and Y 1 ( Y 1 i {B, I, O, [SEP]}) represent the sequential labels of aspect and opinion term extraction, and TOWE, respectively.", "So the conditional probabilities of the predicted sequences Y 0 and Y 1 can be calculated as follows: p ( Y 0 | H l , id = 0) = exp( S 0 ( H l , Y 0 )) (cid:80) Y Y 0 all exp( S 0 ( H l , Y )) , (11) p ( Y 1 | H l , id = 1) = exp( S 1 ( H l , Y 1 )) (cid:80) Y Y 1 all exp( S 1 ( H l , Y )) , (12) where Y 0 all denotes the set of all possible sequential labels of task 0 and Y 1 all represents the set of all possible sequential labels of task 1. The loss of a sentence is also calculated by the negative log likelihood as follows: L ( s, id ) = log p ( Y | H l , id ) .", "Given M sentences S = { s 1 , s 2 , ..., s M } with id = { id 1 , ..., id M } , we can minimize the loss for training: J ( ) = M (cid:88) k =1 ((1 id k ) + id k ) L ( s k , id k ) , (14) where is the hyper-parameter used to balance these two tasks.", "For TOWE, a sentence with a given aspect (i.e., target) is first processed into target-specified mode ([SEP] Aspect [SEP]) with the special symbol [SEP] and then passed into TSMSA, the outputs of which are the target-oriented opinion terms.", "For AOPE, MT-TSMSA generates aspect-opinion pairs by a two-stage inference process.", "Firstly, a sentence is passed into MT-TSMSA, where aspects are extracted in task 0. Secondly, given extracted aspects, repeating the inference process of TOWE, MT-TSMSA outputs the target-oriented opinion terms from task 1. Accordingly, the combinations of aspects from task 0 and target-orient opinion terms from task 1 are aspect-opinion pairs.", "To evaluate the performance of our model 2 , we conduct experiments on two public datasets from laptop and restaurant domains.", "These two datasets were respectively built by Fan et al. (2019) for TOWE and Chen et al. 
"These two datasets were respectively built by Fan et al. (2019) for TOWE and Chen et al. (2020) for AOPE, based on SemEval Challenge 2014 Task 4, SemEval Challenge 2015 Task 12, and SemEval Challenge 2016 Task 5 (Pontiki et al., 2014, 2015, 2016).", "For the first dataset, every sentence was annotated by two people, and the conflicts were checked and eliminated manually.", "The second dataset was developed by extending the first one.", "The statistics of these benchmark datasets are shown in Table 2, from which we can observe that the second dataset includes many negative samples for AOPE (i.e., sentences that only contain aspects and opinion terms, without any aspect-opinion pairs).", "Note that these negative samples are also considered when testing our model on AOPE.", "Fan et al. (2019) have employed various baselines for TOWE, including Distance-rule (Hu and Liu, 2004), Dependency-rule (Zhuang et al., 2006), BiLSTM + Distance-rule, and TC-BiLSTM, but no BERT-based methods.", "To enable a comprehensive comparative analysis, we develop the baselines BERT + Distance-rule and Target-fused BERT (TF-BERT) for this task.", "The former trains a sentence-level opinion term extraction model with BERT, and the target-oriented opinion term is taken to be the one nearest to each aspect.", "The latter utilizes the average pooling of the target word embeddings to represent the target information.", "The word representation at each position is the addition of the word embedding and the target information, which is fed into BERT to extract target-oriented opinion terms.", "Zhao et al. (2020) have applied some baselines to AOPE, including HAST (Li et al., 2018) + IOG and JERE-MHS (Bekoulis et al., 2018).", "Besides the above methods, we also employ the following baselines: IOG (Fan et al., 2019) utilizes an Inward-Outward LSTM and a Global LSTM to capture the information of aspects and global information respectively, and then combines this information for sequence labeling.", "SpanMlt (Zhao et al., 2020) is a span-based multi-task learning framework where terms are extracted with annotated span boundaries and the relations between combinations of every two spans are then identified.", "SDRN (Chen et al., 2020) utilizes BERT as the encoder and consists of an opinion entity extraction unit, a relation detection unit, and a synchronization unit for the AOPE task.", "In the case of TOWE, this model extracts the target-oriented opinion terms with the given correct aspects.",
"4.3 Hyper-parameter Settings. For the TOWE task, Fan et al. (2019) utilize 300-dimension GloVe (Pennington et al., 2014) vectors, pre-trained on 840 billion tokens of unlabeled data, to initialize the word embedding vectors in IOG.", "The word embeddings are fixed at the training stage.", "For a fair comparison, we use the same fixed word embeddings in TSMSA(Base).", "We randomly select 20% of the training set as the development set for adjusting all hyper-parameters.", "The value of $d_{model}$ is 128, and the numbers of attention heads and layers are 4 and 6, respectively.", "In addition, the dropout rate, learning rate, and maximal sequence length are set to 0.5, 0.001, and 100, respectively.", "The Adam optimizer (Kingma and Ba, 2015) is adopted to optimize our model.", "Pre-trained language models like BERT (Devlin et al., 2019) can be applied to our methods, and we adopt the BERT-base model (https://github.com/google-research/bert), where $d_{model}$ is 768 and the numbers of attention heads and layers are both 12.", "Other hyper-parameters include the learning rates of BERT and CRF, the maximal sequence length, and the number of epochs.", "Based on the development set, these hyper-parameters are set to 5e-5, 2e-4, 100, and 8, respectively.", "Unless otherwise mentioned, $\lambda$ is set to 1. To be consistent with various baselines (Fan et al., 2019; Chen et al., 2020; Zhao et al., 2020), the term-level F1 score is used as the evaluation metric for both the TOWE and AOPE tasks.", "Term-level means that the boundaries of the predicted span are the same as the ground-truth.", "For the AOPE task, a prediction is correct only if the predicted aspect-opinion pair is consistent with the labeled pair.", "Table 3 presents the performance of different models on TOWE.", "Table 3: Experimental results (F1 score, %) of different models on TOWE; the first four columns are the datasets of (Fan et al., 2019) (14lap / 14res / 15res / 16res) and the last three are the datasets of (Chen et al., 2020) (14lap / 14res / 15res). Distance-rule: 40.42 / 49.92 / 45.97 / 51.83; 47.68* / 56.24* / 51.15*. Dependency-rule: 37.14 / 58.04 / 55.98 / 64.62; 43.23* / 62.17* / 59.76*. BiLSTM + Distance-rule: 63.38 / 69.18 / 66.97 / 74.01; 66.77* / 72.54* / 69.42*. TC-BiLSTM: 61.21 / 67.61 / 62.94 / 73.10; 65.83* / 71.23* / 65.55*. IOG: 71.35 / 80.02 / 73.25 / 81.69; 76.43* / 83.24* / 76.63*. TSMSA(Base): 71.10 / 80.31 / 75.38 / 80.68; 77.66 / 82.35 / 77.52. BERT + Distance-rule: 70.54* / 76.23* / 71.26* / 79.53*; 73.84* / 78.92* / 76.57*. TF-BERT: 72.26* / 78.23* / 71.58* / 79.23*; 74.32* / 79.28* / 76.94*. SDRN: 80.24* / 83.53* / 80.18* / 86.72*; 87.54* / 86.72* / 85.17*. TSMSA(BERT): 82.18 / 86.37 / 81.64 / 89.20; 88.63 / 90.03 / 87.30.", "Firstly, the F1 scores of the rule-based methods are poor because the rules only cover a small number of cases.", "By utilizing BiLSTM or BERT as the encoder to extract opinion terms, BiLSTM/BERT + Distance-rule perform much better than the other rule-based methods.", "However, these methods cannot deal with the one-to-many case.", "Secondly, TC-BiLSTM and TF-BERT extract static word embeddings for aspects and then incorporate them into the sentence representation by concatenation or addition.", "Nevertheless, the results of TC-BiLSTM and TF-BERT are still over 10% lower than IOG/TSMSA(Base) and SDRN/TSMSA(BERT), respectively.", "This reveals that a static word embedding is not a good representation of the aspect and that the concatenation/addition operation is not good enough to represent the specific aspect.", "Finally, IOG is a state-of-the-art baseline method for TOWE, and the performance of TSMSA(Base) trained with the same word embeddings is similar to IOG, which indicates its effectiveness in capturing the representation of a specific aspect with the symbol [SEP].", "Furthermore, the pre-trained language model BERT can be applied to our basic method.",
"The F1 score of TSMSA(BERT) is on average 8% higher than TSMSA(Base) and IOG.", "SDRN, which also exploits BERT as the encoder, passes the information of the aspect through a synchronization unit and utilizes supervised self-attention to capture this information.", "Nevertheless, it represents the specific aspect implicitly, which might have a negative impact on capturing the information of targets.", "On average, the performance of SDRN is 2% lower than TSMSA(BERT).", "The overall results reveal that our proposed method achieves state-of-the-art performance on TOWE.", "As mentioned above, our method can be applied to AOPE by combining TOWE with aspect and opinion term extraction.", "We here compare the performance of our multi-task model (i.e., MT-TSMSA) with the following competitive models: HAST + IOG, JERE-MHS, SpanMlt, and SDRN.", "The results are shown in Table 4.", "Note that the overlapping ratios of pairs in 14lap, 14res, and 15res are 78.8%, 92%, and 99.8% for (Fan et al., 2019), and 87.1%, 86.2%, and 86.4% for (Chen et al., 2020), respectively.", "Thus, there is a difference (mostly within 2%) between the results on these two datasets.", "The performance of JERE-MHS is better than HAST + IOG, which indicates that the degree of error propagation in the jointly trained model might be smaller than that in the separately trained model.", "Moreover, SpanMlt, SDRN, and MT-TSMSA(BERT) use powerful pre-trained language models, which bring significant performance improvements on AOPE.", "We observe that SDRN and MT-TSMSA(BERT) perform better than SpanMlt, showing that selecting the top-k spans from candidate spans as pairs might miss some correct pairs.", "Compared to SDRN, MT-TSMSA(BERT) performs better on three datasets and nearly the same on the other four.", "Overall, MT-TSMSA achieves quite competitive performance on AOPE by simply incorporating our TSMSA into a multi-task structure.", "To evaluate the impacts of different word embeddings and training strategies on our models, we conduct ablation experiments by varying the above factors.", "The results shown in Table 5 indicate that a suitable word embedding is capable of improving the performance of our models.", "Firstly, the BERT embedding shows poor performance when compared to GloVe.", "We conjecture that the BERT embedding needs to cooperate with the pre-trained encoder of BERT to perform better on TOWE.", "Secondly, applying the word embedding and the encoder of BERT without fine-tuning also fails to work on TOWE.", "The reason may be that the encoder of BERT without fine-tuning cannot capture the information of the specific aspect marked by the symbol [SEP].", "Furthermore, opinion terms extracted in task 0 help to identify the corresponding opinion terms in task 1, which means that the multi-task structure is able to achieve better results than the single-task structure on TOWE.", "Although the improvement is not significant on average, we observe that the former structure achieves more stable performance than the latter.", "The results of the convergence and sensitivity studies are shown in Figure 2.", "Figure 2(a) reveals that our model gradually converges as the number of epochs increases.", "Even though the dropout rate is set to 0.5, the model still converges smoothly.", "Figure 2(b) shows the effect of the number of attention heads.", "When the number of attention heads is 4, TSMSA(Base) achieves stable and good performance, and as the value increases, the performance might improve further.",
"Figure 2(c) shows that the best performance is achieved when the number of multi-head self-attention layers is 6, and that as the number increases, the model might suffer from overfitting.", "Figure 2(d) indicates the impact of $\lambda$, which influences the learning of the different tasks, on our model.", "Stable and good results can be obtained when $\lambda = 1$, and better performance can be achieved when the value is set to 0.5 or 2.", "Compared with the other hyper-parameters, the results also indicate that $\lambda$ has a relatively small impact on the model performance.", "In this part, we apply an open-source tool (https://github.com/jessevig/bertviz) to visualize the attention scores of TSMSA(BERT) and describe two attention heads on the tenth layer in Figure 3(a) and (b), where attention scores less than 0.1 and unimportant words are not displayed.", "As we can see, the words nice and great are both close to the aspect food, but nice does not pay attention to this aspect.", "In addition, great and reasonable focus on the special symbol [SEP] and the specific aspect food, as shown in Figure 3(a).", "At the same time, food gives attention to great and reasonable on different attention heads, as described in Figure 3(b).", "All these instances reveal that the multi-head self-attention mechanism is capable of capturing the representation of a specific aspect.", "(Figure 3: Visualization of the multi-head self-attention mechanism.)", "To further compare our MT-TSMSA(BERT) with the best-performing baseline SDRN, we here conduct a case study following Chen et al. (2020); the cases are listed in Table 6 (e.g., Case 1: The receiver was full of superlatives for the quality and performance).", "As shown in Table 6, both SDRN and MT-TSMSA(BERT) perform well in extracting aspect-opinion pairs from complicated relations.", "But in some cases like Case 4, SDRN misses the pair (watching videos, hot).", "The reason may be that the many hyper-parameters in SDRN have a great impact on its results.", "For example, the threshold in the relation synchronization mechanism of SDRN largely affects the results of the model.", "On the other hand, our method can extract all the pairs because it introduces fewer hyper-parameters, which leads to stable results.", "However, in Case 5, our method cannot extract the pair.", "The reason is that task 0 of MT-TSMSA(BERT) fails to extract the aspect term log into the system.", "Moreover, the underlying reason is that for the aspect term extraction task, the performance of SDRN (i.e., 83.67%, 89.49%, and 74.05%) is better than that of MT-TSMSA(BERT), i.e., 83.11%, 84.85%, and 72.69%, on the datasets from (Chen et al., 2020).", "In this paper, we propose a target-specified sequence labeling method based on multi-head self-attention (TSMSA) and a multi-task version (MT-TSMSA) to deal with TOWE and AOPE, respectively.", "In our methods, the encoder is capable of capturing the information of the specific aspect, which is labeled by the special symbol [SEP].", "Experimental results demonstrate that TSMSA and MT-TSMSA achieve quite competitive performance in most cases.", "When combining aspect and opinion term extraction with TOWE, our MT-TSMSA can slightly improve the performance as compared with TSMSA.",
"In the future, we plan to extend our approaches to the sentiment classification of pairs and to explore an efficient model with a one-stage inference process to reduce the time complexity of AOPE.", "We are grateful to the reviewers for their constructive comments and suggestions on this study.", "This work has been supported by the National Natural Science Foundation of China (61972426) and the Guangdong Basic and Applied Basic Research Foundation (2020A1515010536)." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "objective", "abstain", "result", "abstain", "method", "objective", "objective", "method", "method", "result", "method", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "objective", "other", "other" ]
[ "Automatic abstractive summaries are found to often distort or fabricate facts in the article.", "This inconsistency between the summary and the original text has seriously impacted its applicability.", "We propose a fact-aware summarization model, FASUM, to extract and integrate factual relations into the summary generation process via graph attention.", "We then design a factual corrector model, FC, to automatically correct factual errors in summaries generated by existing systems.", "Empirical results show that the fact-aware summarization can produce abstractive summaries with higher factual consistency compared with existing systems, and that the correction model improves the factual consistency of given summaries by modifying only a few keywords.", "Text summarization models aim to produce an abridged version of long text while preserving salient information.", "Abstractive summarization is a type of such models that can freely generate summaries, with no constraint on the words or phrases used.", "This format is closer to human-edited summaries and is both flexible and informative.", "Thus, there are numerous approaches to produce abstractive summaries (See et al., 2017; Paulus et al., 2017; Dong et al., 2019; Gehrmann et al., 2018).", "However, one prominent issue with abstractive summarization is factual inconsistency.", "It refers to the hallucination phenomenon that the summary sometimes distorts or fabricates the facts in the article.", "Recent studies show that up to 30% of the summaries generated by abstractive models contain such factual inconsistencies (Kryscinski et al., 2019b; Falke et al., 2019), raising concerns about the credibility and usability of these systems.", "Table 1 demonstrates an example article and excerpts of generated summaries.", "As shown, the article mentions that Real Madrid ace Gareth Bale scored twice and Cristiano Ronaldo scored five goals.", "However, both BOTTOMUP (Gehrmann et al., 2018) and SEQ2SEQ wrongly state that Bale scored five goals.", "Comparatively, our model FASUM generates a summary that correctly exhibits the fact in the article.", "And as shown in Section 4.6.1, our model achieves higher factual consistency not just by making more copies from the article.", "On the other hand, most existing abstractive summarization models apply a conditional language model to focus on the token-level accuracy of summaries, while neglecting semantic-level consistency between the summary and the article.", "Therefore, the generated summaries are often high in token-level metrics like ROUGE (Lin, 2004) but lack factual consistency.", "In view of this, we argue that a robust abstractive summarization system must be equipped with factual knowledge to accurately summarize the article.", "Despite efforts in building commonly applicable knowledge graphs such as ConceptNet (Speer et al., 2017), we find that these tools are more useful in conferring commonsense knowledge.", "In abstractive summarization for contents like news articles, many entities and relations are previously unseen.", "Plus, our goal is to produce summaries that do not conflict with the facts in the article.", "Thus, we propose to extract factual knowledge from the article itself.", "We employ the information extraction (IE) tool OpenIE (Angeli et al., 2015) to extract facts from the article in the form of relational tuples: (subject, relation, object).", "These tuples form a knowledge graph that contains the facts in the article and is integrated into the summary generation process.", "Then, we use a graph attention network (Velickovic
et al., 2017) to obtain the representation of each node, and fuse that into a transformer-based encoder-decoder architecture via attention.", "We denote this model as the Fact-Aware Summarization model, FASUM.", "In addition, to be generally applicable to all existing summarization systems, we propose a Factual Corrector model, FC, to help improve the factual consistency of any given summary.", "We frame the correction process as a seq2seq problem: the input is the original summary and the article, and the output is the corrected summary.", "FC has the same architecture as UniLM (Dong et al., 2019) and is initialized with weights from RoBERTa-Large (Liu et al., 2019).", "We finetune it as a denoising autoencoder.", "The training data is synthetically generated by randomly replacing entities in the ground-truth summary with wrong ones from the article.", "As shown in Table 2, FC makes three corrections, replacing the original wrong entities, which appear elsewhere in the article, with the right ones.", "We evaluate factual consistency with an independently trained BERT-based (Devlin et al., 2018) factual consistency evaluator (Kryscinski et al., 2019b).", "Results show that on CNN/DailyMail, FASUM obtains 0.6% higher fact consistency scores than UNILM (Dong et al., 2019) and 3.9% higher than BOTTOMUP (Gehrmann et al., 2018).", "Moreover, after correction by FC, the factual score of summaries from BOTTOMUP increases by 1.4% on CNN/DailyMail and 0.9% on XSum, and the score of summaries from TCONVS2S increases by 3.1% on XSum.", "We also conduct a human evaluation to verify the effectiveness of our models.", "We further propose an easy-to-compute, model-free metric, relation matching rate (RMR), to evaluate factual consistency given a summary and the article.", "This metric employs the extracted relations and does not require human-labelled summaries.", "Under this metric, we show that our models can help enhance the factual consistency of summaries.", "Abstractive text summarization has been intensively studied in the recent literature.", "Rush et al. (2015) introduces an attention-based seq2seq model for abstractive sentence summarization.", "See et al. (2017) uses a copy-generate mechanism that can both produce words from the vocabulary via a generator and copy words from the article via a pointer.", "Paulus et al. (2017) leverages reinforcement learning to improve summarization quality.", "Gehrmann et al. (2018) uses a content selector to over-determine phrases in source documents, which helps constrain the model to likely phrases.", "Zhu et al. (2019) defines a pretraining scheme for summarization and produces a zero-shot abstractive summarization model.", "Dong et al. (2019) employs different masking techniques for both NLU and NLG tasks, resulting in the UNILM model.", "Lewis et al. (2019) employs denoising techniques to help generation tasks including summarization.", "Entailment models have been used to evaluate and enhance the factual consistency of summarization.", "Li et al. (2018) co-trains summarization and entailment and employs an entailment-aware decoder.", "Falke et al. (2019) proposes using off-the-shelf entailment models to rerank candidate summary sentences to boost factual consistency.", "Zhang et al. (2019b) employs descriptor vectors to improve factual consistency in medical summarization.", "Cao et al. (2018) extracts relational information from the article and maps it to a sequence as an additional input to the encoder.", "Gunel et al.
(2019) employs an entity-aware transformer structure for knowledge integration, and Matsumaru et al. (2020) improves the factual consistency of generated headlines by filtering out training data with more factual errors.", "In comparison, our model utilizes the knowledge graph extracted from the article and fuses it into the generated text via neural graph computation.", "To correct factual errors, Dong et al. (2020) uses pre-trained NLU models to rectify one or more wrong entities in the summary.", "Concurrent to our work, Cao et al. (2020) employs the generation model BART (Lewis et al., 2019) to produce corrected summaries.", "Several approaches have been proposed to evaluate a summary's factual consistency (Kryscinski et al., 2019a; Goodrich et al., 2019; Maynez et al., 2020).", "Zhang et al. (2019a) employs BERT to compute the similarity between pairs of words in the summary and the article.", "Wang et al. (2020) and Durmus et al. (2020) use question answering accuracy to measure factual consistency.", "Kryscinski et al. (2019b) applies various transformations to the summary to produce training data for a BERT-based classification model, FactCC, which shows a high correlation with human metrics.", "Therefore, we use FactCC as the factual evaluator in this paper.", "We formalize abstractive summarization as a supervised seq2seq problem.", "The input consists of pairs of articles and summaries: $\{(X_1, Y_1), (X_2, Y_2), ..., (X_a, Y_a)\}$.", "Each article is tokenized into $X_i = (x_1, ..., x_{L_i})$ and each summary is tokenized into $Y_i = (y_1, ..., y_{N_i})$.", "In abstractive summarization, the model-generated summary can contain tokens, phrases and sentences not present in the article.", "For simplicity, in the following we drop the data index subscript.", "Therefore, each training pair becomes $X = (x_1, ..., x_m)$, $Y = (y_1, ..., y_n)$, and the model needs to generate an abstractive summary $Y' = (y'_1, ..., y'_{n'})$.", "We propose the Fact-Aware abstractive Summarizer, FASUM.", "It utilizes the seq2seq architecture built upon transformers (Vaswani et al., 2017).", "In detail, the encoder produces contextualized embeddings of the article and the decoder attends to the encoder's output to generate the summary.", "To make the summarization model fact-aware, we extract, represent and integrate knowledge from the source article into the summary generation process, as described in the following.", "The overall architecture of FASUM is shown in Figure 1.", "3.2.1 Knowledge Extraction. To extract important entity-relation information from the article, we employ the Stanford OpenIE tool (Angeli et al., 2015).", "The extracted knowledge is a list of tuples.", "Each tuple contains a subject (S), a relation (R) and an object (O), each being a segment of text from the article.", "In the experiments, there are on average 165.4 tuples extracted per article in CNN/DailyMail (Hermann et al., 2015) and 84.5 tuples in XSum (Narayan et al., 2018).", "We construct a knowledge graph to represent the information extracted by OpenIE.", "We apply the Levi transformation (Levi, 1942) to treat each entity and relation equally.", "In detail, suppose a tuple is $(s, r, o)$; we create nodes $s$, $r$ and $o$, and add the edges $s$--$r$ and $r$--$o$.", "In this way, we obtain an undirected knowledge graph $G = (V, E)$, where each node $v \in V$ is associated with text $t(v)$.", "During training, this graph $G$ is constructed for each batch individually, i.e., there is no shared global graph.",
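As an illustration of the Levi-transformed graph construction described above, here is a minimal sketch; the use of networkx and the per-tuple relation-node naming are assumptions for illustration, not the authors' code.

```python
import networkx as nx

def build_levi_graph(tuples):
    """Each (subject, relation, object) tuple becomes nodes s, r, o
    with undirected edges s--r and r--o (Levi transformation)."""
    G = nx.Graph()
    for i, (s, r, o) in enumerate(tuples):
        r_node = f"rel_{i}:{r}"       # relations get their own nodes
        G.add_node(s, text=s)         # repeated entities share one node
        G.add_node(r_node, text=r)
        G.add_node(o, text=o)
        G.add_edge(s, r_node)
        G.add_edge(r_node, o)
    return G

g = build_levi_graph([("Gareth Bale", "scored", "twice")])
```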
"One benefit is that the model can handle unseen entities and relations during inference.", "We then employ a graph attention network (Velickovic et al., 2017) to obtain an embedding $e_j$ for each node $v_j$.", "The initial embedding of $v_j$ is given by the last hidden state of a bidirectional LSTM over the node text $t(v_j)$.", "(Figure 1: The model architecture of FASUM.)", "The knowledge graph embedding is obtained in parallel with the encoder.", "Then, apart from the canonical cross-attention over the encoder's outputs, each decoder block also computes cross-attention over the knowledge graph nodes' embeddings (see the sketch below): $\alpha_{ij} = \mathrm{softmax}_j(\beta_{ij}) = \frac{\exp(\beta_{ij})}{\sum_{j' \in V} \exp(\beta_{ij'})}$ (1), $\beta_{ij} = s_i^T e_j$ (2), $u_i = \sum_{j \in V} \alpha_{ij} e_j$ (3), where $\{e_j\}_{j=1}^{|V|}$ are the final embeddings of the graph nodes, and $\{s_i\}_{i=1}^{t}$ are the decoder block's representations of the first $t$ generated tokens.", "We denote the final output of the decoder as $z_1, ..., z_t$.", "To produce the next token $y_{t+1}$, we employ a linear layer $W$ to project $z_t$ to a vector of the same size as the dictionary.", "The predicted distribution of $y_{t+1}$ is obtained by $p_{t+1} = \mathrm{softmax}(W z_t)$ (4).", "During training, we use cross entropy as the loss function, $L(\phi) = -\sum_{t=1}^{n} y_t^T \log(p_t)$, where $y_t$ is the one-hot vector for the $t$-th token and $\phi$ represents the parameters of the network.",
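The following is a minimal sketch of the decoder's cross-attention over graph node embeddings in Eqs. (1)-(3); the tensor shapes are illustrative assumptions.

```python
import torch

def graph_cross_attention(S, E):
    """S: (t, d) decoder representations; E: (|V|, d) final node embeddings.
    Returns U: (t, d), where u_i = sum_j alpha_ij * e_j."""
    beta = S @ E.T                        # beta_ij = s_i^T e_j        -- Eq. (2)
    alpha = torch.softmax(beta, dim=-1)   # alpha_ij over graph nodes  -- Eq. (1)
    return alpha @ E                      # u_i                        -- Eq. (3)
```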
"To better utilize existing summarization systems, we propose a Factual Corrector model, FC, to improve the factual consistency of any summary generated by abstractive systems.", "FC frames the correction process as a seq2seq problem: given an article and a candidate summary, the model generates a corrected summary with minimal changes to be more factually consistent with the article.", "While FASUM has a graph attention module in the transformer, preventing direct adaptation from pre-trained models, the FC model architecture adopts the design of the pre-trained model UniLM (Dong et al., 2019).", "We initialize the model weights from RoBERTa-Large (Liu et al., 2019).", "The finetuning process is similar to training a denoising autoencoder.", "We use back-translation and entity swap for synthetic data generation (a sketch of the entity-swap scheme is given below).", "For example, an entity in the ground-truth summary is randomly replaced with another entity of the same type from the article.", "This modified summary and the article are sent to the corrector to recover the original summary.", "In the experiments, we generated 3.0M seq2seq data samples in CNN/DailyMail and 551.0K samples in XSum for finetuning.", "We take 10K samples in each dataset for validation and use the rest for training.", "During inference, the candidate summary from any abstractive summarization system is concatenated with the article and sent to FC, which produces the corrected summary.",
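A minimal sketch of the entity-swap synthetic data generation used to finetune FC follows; the (text, type) entity representation and the sampling details are illustrative assumptions, not the authors' pipeline.

```python
import random

def entity_swap(summary_ents, article_ents, summary_text):
    """Replace one summary entity with a same-type entity from the article.

    summary_ents / article_ents: lists of (text, type) pairs, e.g. from an
    off-the-shelf NER tagger. Returns a corrupted summary that FC learns to
    map back to the original.
    """
    ents = list(summary_ents)
    random.shuffle(ents)
    for ent_text, ent_type in ents:
        candidates = [t for t, ty in article_ents if ty == ent_type and t != ent_text]
        if candidates:
            return summary_text.replace(ent_text, random.choice(candidates), 1)
    return summary_text  # no same-type entity available; leave unchanged
```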
"We evaluate our models on the benchmark summarization datasets CNN/DailyMail (Hermann et al., 2015) and XSum (Narayan et al., 2018).", "They contain 312K and 227K news articles with human-edited summaries respectively, covering different topics and various summarization styles.", "We use Huggingface's (Wolf et al., 2019) transformer implementation of BART (Lewis et al., 2019).", "We also inherit their provided hyper-parameters for beam search on CNN/DailyMail and XSum.", "The minimum summary length is 56 and 11 for CNN/DailyMail and XSum, respectively.", "The number of beams is 4 for CNN/DailyMail and 6 for XSum.", "In FASUM, both the encoder and the decoder have 10 attention layers with 10 heads each.", "Teacher forcing is used in training.", "We use Adam (Kingma and Ba, 2014) as the optimizer with a learning rate of 2e-4.", "The bi-LSTM that produces the initial embeddings of graph nodes has a hidden state of size 64, and the graph attention network (GAT) has 8 heads and a hidden state of size 50.", "The dropout rate is 0.6 in the GAT and 0.1 elsewhere.", "We use the subword tokenizer SentencePiece (Kudo and Richardson, 2018).", "The dictionary is shared across all the datasets.", "The vocabulary has a size of 32K and a dimension of 720.", "The correction model FC follows the UniLM (Dong et al., 2019) architecture, initialized with weights from RoBERTa-Large (Liu et al., 2019).", "We fine-tune the model for 5 epochs with a learning rate of 1e-5, linear warmup over one-fifth of the total steps, and linear decay.", "During decoding, it uses beam search with a width of 2 and blocks trigram duplicates.", "The batch size during finetuning is 24.", "More details are presented in the Appendix.", "To evaluate factual consistency, we re-implemented and trained the FactCC model (Kryscinski et al., 2019b).", "The model outputs a score between 0 and 1, where a higher score indicates better consistency between the input article and summary.", "The training of FactCC is independent of our summarizer, so no parameters are shared.", "More details are in the Appendix.", "We also employ the standard ROUGE-1, ROUGE-2 and ROUGE-L metrics (Lin, 2004) to measure summary quality.", "These three metrics evaluate the accuracy of unigrams, bigrams and the longest common subsequence.", "We report F1 ROUGE scores in all experiments.", "The ROUGE-L score on the validation set is used to pick the best model for both FASUM and FC.", "The following abstractive summarization models are selected as baseline systems.", "TCONVS2S (Narayan et al., 2018) is based on topic modeling and convolutional neural networks.", "BOTTOMUP (Gehrmann et al., 2018) uses a bottom-up approach to generate summaries.", "UNILM (Dong et al., 2019) utilizes large-scale pretraining to produce state-of-the-art abstractive summaries.", "We train the baseline models when their predictions are not available in their open-source repositories.", "As shown in Table 3, our model FASUM outperforms all baseline systems in factual consistency scores on CNN/DailyMail and is only behind UNILM on XSum (we have put the code and all the generated summaries of all models in the supplementary materials).", "On CNN/DailyMail, FASUM is 0.6% higher than UNILM and 3.9% higher than BOTTOMUP in factual score.", "A statistical test shows that the lead is statistically significant with a p-value smaller than 0.05.", "The higher factual score of UNILM among the baselines corroborates the findings of Maynez et al. (2020) that pre-trained models exhibit better factuality.", "But our proposed knowledge graph component helps the trained-from-scratch FASUM model to excel in factual consistency.", "We conduct an ablation study that removes the knowledge graph component from FASUM, resulting in the SEQ2SEQ model.", "As shown, there is a clear drop in factual score: 2.8% on CNN/DailyMail and 0.9% on XSum.", "This proves that the constructed knowledge graph can help increase the factual correctness of the generated summaries.", "It is worth noticing that the ROUGE metric does not always reflect factual consistency, sometimes even showing an inverse relationship, a phenomenon observed in multiple studies (Kryscinski et al., 2019a; Maynez et al., 2020).", "For instance, although BOTTOMUP scores 0.69 ROUGE-1 points higher than FASUM on CNN/DailyMail, there are many factual errors in its summaries, as shown in the human evaluation.", "On the other hand, to make sure the improved factual correctness of our models is not achieved by simply copying insignificant information from the article, we conduct an analysis of abstractiveness in Section 4.6.1 and a human evaluation in Section 4.6.3.", "Furthermore, the correction model FC can effectively enhance the factual consistency of summaries generated by various baseline models, especially when the original summary has relatively low factual consistency.", "For instance, on CNN/DM, the factual score of BOTTOMUP increases by 1.4% after correction.", "On XSum, after correction, the factual scores increase by 0.2% to 3.1% for all baseline models.", "Interestingly, FC can also boost the factual consistency of our FASUM model.", "Furthermore, the correction has a rather small impact on the ROUGE score, and it can even improve the ROUGE scores of most models on the XSum dataset.", "We check and find that FC makes only the modest modifications necessary to the original summaries.", "For instance, FC modifies 48.3% of the summaries generated by BOTTOMUP on CNN/DailyMail.", "These modified summaries contain very few changed tokens: 94.4% of them contain 3 or fewer new tokens, while the summaries have on average 48.3 tokens.", "In the appendix of the supplementary materials, we show several examples of summaries generated by FASUM and corrected by FC to demonstrate the improved factual consistency.", "It has been shown in Durmus et al. (2020) that less abstractive summaries are more factually consistent with the article.", "Therefore, we inspect whether our models boost factual consistency simply by copying more portions of the article.", "We compute the ratio of novel n-grams in the summaries, i.e., those that do not appear in the article.", "Figure 2 shows that FASUM achieves the ratio of novel n-grams closest to that of the reference summaries, and higher than BOTTOMUP and UNILM.", "This demonstrates that FASUM can produce highly abstractive summaries while ensuring factual consistency.",
"While the factual consistency evaluator FactCC (Kryscinski et al., 2019b) is based on pre-trained models, it requires finetuning on articles and labelled summaries.", "Furthermore, we empirically find that the performance of FactCC degrades when it is finetuned on one summary dataset and used to evaluate models on another dataset.", "Therefore, in this subsection, we design an easy-to-compute, model-free factual consistency metric, which can be used when ground-truth summaries are not available.", "As the relational tuples in the knowledge graph capture the factual information in the text, we compute the precision of the extracted tuples in the summary.", "In detail, suppose the set of relational tuples in the summary is $R_s = \{(s_i, r_i, o_i)\}$, and the set of relational tuples in the article is $R_a$.", "Then, each tuple in $R_s$ falls into one of the following three categories: a correct hit if it also appears in $R_a$, a wrong hit if it conflicts with a tuple in $R_a$, or a miss otherwise.", "We define the relation matching rate (RMR) to measure the ratio of correct hits (see the sketch below): $RMR_1 = 100 \cdot \frac{C}{C+W}$ (5), $RMR_2 = 100 \cdot \frac{C}{C+W+M}$ (6), where $C$, $W$ and $M$ denote the numbers of correct hits, wrong hits and misses, respectively.", "Note that this metric is different from the ratio of overlapping tuples proposed in Goodrich et al. (2019), where the ratio is computed between the ground-truth and the candidate summary.", "Since even the ground-truth summary may not cover all the salient information in the article, we choose to compare the knowledge tuples in the candidate summary directly against those in the article.", "An additional advantage of our metric is that it does not require ground-truth summaries to be available.", "Table 4 displays the results of this metric on the CNN/DailyMail test set.", "As shown, FASUM achieves the highest precision of correct hits under both measures.", "And there is a considerable boost from the knowledge graph (FASUM vs. SEQ2SEQ): 11.2% in $RMR_1$ and 13.8% in $RMR_2$.", "And the correction from the FC model can further improve the metric for both FASUM and UNILM.", "We also compute factual consistency via natural language inference models, following Maynez et al. (2020).", "We use the BERT-Large model finetuned on the MNLI dataset (Williams et al., 2018) provided by fairseq (Ott et al., 2019).", "The model predicts the relationship between the article and the summary to be one of the following: entailment, neutral or contradiction.", "We report the ratio of contradiction as predicted by the model in Table 4.",
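Here is a minimal sketch of the relation matching rate in Eqs. (5)-(6), treating tuples as exact-match strings; the conflict test (same subject and relation, different object) is an illustrative assumption, not the exact rule in the paper.

```python
def rmr(summary_tuples, article_tuples):
    """Compute RMR_1 and RMR_2 from (subject, relation, object) tuples."""
    article = set(article_tuples)
    sub_rel = {(s, r) for s, r, _ in article_tuples}
    c = w = m = 0
    for s, r, o in summary_tuples:
        if (s, r, o) in article:
            c += 1        # correct hit: tuple grounded in the article
        elif (s, r) in sub_rel:
            w += 1        # wrong hit: conflicts with an article tuple
        else:
            m += 1        # miss: no matching relation in the article
    rmr1 = 100 * c / max(c + w, 1)       # Eq. (5)
    rmr2 = 100 * c / max(c + w + m, 1)   # Eq. (6)
    return rmr1, rmr2
```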
"As shown, FASUM achieves the lowest contradiction ratio, and FC helps further reduce conflicting facts in generated summaries.", "We conduct a human evaluation on the factual consistency and informativeness of summaries.", "We randomly sample 100 articles from the test set of CNN/DailyMail.", "Then, each article-summary pair is labelled by 3 people from Amazon Mechanical Turk (AMT) to evaluate the factual consistency and informativeness.", "Each labeller gives a score in each category between 1 and 3 (3 being perfect).", "The kappa-ratio between reviewer scores is 0.32.", "Table 5: Human evaluation results of summaries for 100 randomly sampled articles in the CNN/DailyMail test set (factual score / informativeness): BOTTOMUP 2.32 / 2.23; UNILM 2.65 / 2.45; SEQ2SEQ 2.59 / 2.30; FASUM 2.74 / 2.42.", "Here, factual consistency indicates whether the summary's content is faithful with respect to the article; informativeness indicates how well the summary covers the salient information in the article.", "As shown in Table 5, our model FASUM achieves the highest factual consistency score, higher than UNILM and considerably outperforming BOTTOMUP.", "We conduct a statistical test and find that, compared with UNILM, our model's advantage is statistically significant with a p-value smaller than 0.05 under a paired t-test.", "In terms of informativeness, our model is comparable with UNILM and outperforms BOTTOMUP.", "Finally, without the knowledge graph component, the SEQ2SEQ model generates summaries with both lower factual consistency and lower informativeness.", "To assess the effectiveness of the correction model FC, we conduct a human evaluation of side-by-side summaries.", "In CNN/DailyMail, we randomly sample 100 articles where the summaries generated by BOTTOMUP are modified by FC.", "Three labelers are asked whether the original or the corrected version is factually more correct.", "We collect all the feedback and compute the ratio of judgements for each case.", "To reduce bias, we randomly shuffle the two versions of the summaries.", "We conduct a similar evaluation on UNILM.", "As shown in Table 6, the corrected summaries are significantly more likely to be judged as more factually correct for both baseline models.", "For example, 42.3% of the judgements consider the corrected summaries factually more correct, 42.7% consider that the corrected version neither improves nor worsens the factual consistency, while only 15.0% consider that the corrected version becomes worse than the original BOTTOMUP summary.", "Therefore, FC can help boost the factual consistency of summaries from given systems.", "Finally, to evaluate the quality of the relation matching rate (RMR), we compute the correlation coefficient between the factual scores given by human labelers and the RMR values.", "The result is a correlation coefficient of 0.43, indicating an observable relationship between RMR and the human evaluation results.", "In this paper, we extract factual information from the article and represent it as a knowledge graph.", "We then integrate this factual knowledge into the process of producing summaries.", "The resulting model FASUM enhances the ability to preserve facts during summarization, as demonstrated by both automatic and human evaluation.", "We also present a correction model, FC, to rectify factual errors in candidate summaries.", "Furthermore, we propose an easy-to-compute, model-free metric, relation matching rate, to measure factual consistency based on the overlapping ratio of relational tuples.", "For future work, we plan to integrate knowledge graphs into 
pre-training for more accurate and factually consistent summarization.", "Moreover, we will combine the internally extracted knowledge graph with an external knowledge graph (e.g. ConceptNet) to enhance the commonsense capability of summarization models." ]
[ "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "objective", "result", "abstain", "objective", "objective", "method", "abstain", "result", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "objective", "abstain", "method" ]
[ "Many studies have applied reinforcement learning to train a dialog policy and have shown great promise in recent years.", "One common approach is to employ a user simulator to obtain a large number of simulated user experiences for reinforcement learning algorithms.", "However, modeling a realistic user simulator is challenging.", "A rule-based simulator requires heavy domain expertise for complex tasks, while a data-driven simulator requires considerable data, and it is even unclear how to evaluate a simulator.", "To avoid explicitly building a user simulator beforehand, we propose Multi-Agent Dialog Policy Learning, which regards both the system and the user as dialog agents.", "The two agents interact with each other and are jointly learned simultaneously.", "The method uses the actor-critic framework to facilitate pretraining and improve scalability.", "We also propose Hybrid Value Network for role-aware reward decomposition to integrate the role-specific domain knowledge of each agent in task-oriented dialog.", "Results show that our method can successfully build a system policy and a user policy simultaneously, and that the two agents can achieve a high task success rate through conversational interaction.", "Dialog policy, which decides the next action that the dialog agent should take, plays a vital role in a task-oriented dialog system.", "More recently, dialog policy learning has been widely formulated as a Reinforcement Learning (RL) problem (Su et al., 2016; Peng et al., 2017; He et al., 2018; Zhao et al., 2019; Zhang et al., 2019; Takanobu et al., 2019), which models users as the interactive environment.", "Since RL requires much interaction for training, it is too time-consuming and costly to interact with real users directly.", "The most common way is to first develop a dialog agent with a user simulator that mimics human behaviors in an offline scenario.", "Designing a reliable user simulator, however, is not trivial and often challenging, as it is equivalent to building a good dialog agent.", "With the growing need for dialog systems to handle more complex tasks, it becomes increasingly challenging and laborious to build a fully rule-based user simulator, which requires heavy domain expertise.", "Data-driven user simulators have been proposed in recent studies (Kreyssig et al., 2018; Shi et al., 2019), but they require a considerable quantity of manually labeled data, and most of them regard the simulator as a stationary environment.", "Furthermore, there is no standard automatic metric for evaluating these user simulators, as it is unclear how to define how closely the simulator resembles real user behaviors.", "In this paper, we propose Multi-Agent Dialog Policy Learning (MADPL), where the user is regarded as another dialog agent rather than a user simulator.", "The conversation between the user and the system is modeled as a cooperative interactive process where the system agent and the user agent are trained simultaneously.", "The two dialog agents interact with each other and collaborate to achieve the goal, so they require no explicit domain expertise, which helps develop a dialog system without the need for a well-built user simulator.", "Different from existing methods (Georgila et al., 2014; Papangelis et al., 2019), our approach is based on the actor-critic framework (Barto et al., 1983) in order to facilitate pretraining and bootstrap the RL training.", "Following the paradigm of centralized training with decentralized execution (CTDE) (Bernstein et al., 2002) in multi-agent 
RL (MARL), the actor selects its action conditioned only on its local state-action history, while the critic is trained with the actions of all agents.", "In this cooperative setting, as shown in Fig. 1, only the user agent knows the user goal, while only the system agent can access the backend database.", "The user agent should express its requirements completely in an organized way, and the system should respond with useful information accurately and immediately.", "So it is inappropriate to apply simple self-play RL (Silver et al., 2017; Lewis et al., 2017), which views the two agents as the same agent, in this task.", "To address this issue, the system and the user are viewed as two asymmetric agents in MADPL.", "We introduce Hybrid Value Network (HVN) for role-aware reward decomposition.", "It decomposes the reward into two parts: one is the role-specific reward that focuses on each agent's local target, and the other is the global reward that represents the shared goal.", "To evaluate the proposed approach, we conduct our experiments on a multi-domain, multi-intent task-oriented dialog corpus, MultiWOZ (Budzianowski et al., 2018).", "The corpus involves high-dimensional state and action spaces and multiple decisions in one turn, which makes it more difficult to obtain a good system policy as well as a good user policy.", "The experiments demonstrate that MADPL can successfully build a system policy as well as a user policy with the aid of HVN, and that the two agents can achieve a high task success rate in complex tasks by interacting with each other as well as with benchmark policies.", "In summary, our contributions are as follows: we learn the system policy and the user policy jointly via multi-agent reinforcement learning to avoid explicitly building a user simulator.", "We propose Hybrid Value Network for reward decomposition to deal with the asymmetric role issue between the system agent and the user agent in task-oriented dialog.", "We conduct in-depth experiments on the multi-domain, multi-intent task-oriented dialog corpus to show the effectiveness, reasonableness and scalability of our algorithm.", "The goal of RL is to discover the optimal strategy $\pi^*(a|s)$ of the Markov Decision Process, which can be extended to the $N$-agent setting, where each agent has its own set of states $S_i$ and actions $A_i$.", "In MARL, the state transition $s = (s_1, ..., s_N) \to s' = (s'_1, ..., s'_N)$ depends on the actions $(a_1, ..., a_N)$ taken by all agents according to each agent's policy $\pi_i(a_i|s_i)$, where $s_i \in S_i$ and $a_i \in A_i$; similar to single-agent RL, each agent aims to maximize its local total discounted return $R_i = \sum_t \gamma^t r_{i,t}$.", "Since two or more agents learn simultaneously, the agents continuously change as training proceeds, and therefore the environment is no longer stationary.", "Many MARL algorithms (Lowe et al., 2017; Foerster et al., 2018; Rashid et al., 2018) have been proposed to solve challenging problems.", "Most of them use the CTDE framework to address the non-stationarity of co-adapting agents.", "It allows the policies to use extra information to ease training, but the learned policies can only use local information (i.e., their own observations) at execution time.",
"Several studies have demonstrated that applying MARL delivers promising results in NLP tasks in recent years.", "While some methods use identical rewards for all agents (Das et al., 2017; Kottur et al., 2017; Feng et al., 2018), other studies use completely separate rewards (Georgila et al., 2014; Papangelis et al., 2019).", "MADPL integrates the two types of rewards via role-aware reward decomposition to train a better dialog policy in task-oriented dialog.", "User modeling is essential for training RL-based dialog models, because a large amount of dialog samples are required for RL policy learning, making it impractical to learn with real users directly from the beginning.", "There are three main approaches to user modeling.", "The first approach is to build a rule-based user simulator.", "Among these methods, the most popular one is the agenda-based simulator (Schatzmann et al., 2007; Shah et al., 2018), which is built on hand-crafted rules with a stack-like agenda based on the user goal.", "The second approach is to build a user simulator from dialog data (Keizer et al., 2010; El Asri et al., 2016; Kreyssig et al., 2018).", "Recently, Gur et al. (2018) uses a variational hierarchical seq2seq framework to encode the user goal and system turns, and then generates the user response.", "Shi et al. (2019) uses two decoders with a copy and attention mechanism to first predict a belief span and then decode the user utterance.", "The third approach is to use model-based policy optimization that incorporates a differentiable model of the world dynamics and assumptions about the interactions between users and systems (Su et al., 2018; Zhang et al., 2019), but this approach still requires real users or a user simulator for world model learning.", "Instead of employing a user simulator, a few methods jointly learn two agents directly from the corpus.", "Liu and Lane (2017) models the system and the user by iteratively training two policies.", "Papangelis et al. (2019) make the first attempt to apply MARL to task-oriented dialog policy, with an algorithm based on Q-learning for mixed policies.", "However, it does not scale well to complex tasks such as multi-domain dialog.", "Therefore, MADPL instead uses the actor-critic framework to deal with the large discrete action space in dialog.", "We first formally describe the task, and then present an overview of our proposed model.", "Specifically, given a user goal $G = (C, R)$ composed of the user constraints $C$ (e.g., a Japanese restaurant in the center of the city) and requests $R$ (e.g., inquiry for the address or phone number of a hotel), and given an external database $DB$ containing all candidate entities and corresponding information, the user agent and the system agent interact with each other in a dialog session to fulfill the user goal.", "There can be multiple domains in $G$, and the two agents have to accomplish all the subtasks in each domain.", "Both agents can partially observe the environment, i.e., only the user agent knows $G$, while only the system agent can access $DB$, and the only way to know each other's information is through conversational interaction.",
"(Figure 2: Architecture of MADPL. HVN consists of three critics. Each critic estimates its return based on role-aware reward decomposition, and each actor uses the estimated value to optimize itself.)", "Different from the ordinary multi-agent task setting, the two agents in dialog are executed asynchronously.", "In a single dialog turn, the user agent posts an inquiry first, then the system agent returns a response, and the two communicate alternately.", "Therefore, each dialog session can be seen as a trajectory of state-action pairs $\{(s^U_0, a^U_0, s^S_0, a^S_0); (s^U_1, a^U_1, s^S_1, a^S_1); ...\}$, where the user agent and the system agent make decisions according to their respective dialog policies $\mu(a^U|s^U)$ and $\pi(a^S|s^S)$.", "Here we present a novel algorithm, Multi-Agent Dialog Policy Learning (MADPL), as shown in Fig. 2, which can be naturally formulated as a MARL problem.", "The two agents interact through dialog acts, following (Georgila et al., 2014).", "We choose the actor-critic framework in order to learn an explicitly stochastic dialog policy (actor) with high scalability, along with an estimated value function (critic) to bootstrap RL training.", "Besides, this facilitates imitation learning to pretrain the dialog policy using human-human dialogs.", "Since the two agents cooperate to reach success, yet their roles are asymmetric in the dialog, we propose Hybrid Value Network (HVN) to decompose the task reward into different parts for better policy learning.", "Note that our approach is fully data-driven, without building a user simulator beforehand, and does not need any other human supervision during training.", "In the subsequent subsections, we first explain the state and action used in the two dialog policies.", "Then we describe how we decompose the reward, and the proposed HVN.", "At last, we present model optimization.", "System Policy: The system policy decides the system action $a^S$ according to the system dialog state $s^S$ to give the appropriate response to the user agent.", "Each system action $a^S$ is a subset of the dialog act set $A$, as there may be multiple intents in one dialog turn.", "A dialog act is an abstract representation of an intention (Stolcke et al., 2000), which can be represented as a quadruple composed of domain, intent, slot type and slot value (e.g., [restaurant, inform, food, Italian]).",
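As a small illustration of the dialog-act quadruple and its delexicalized form, consider the following sketch; the placeholder convention is an assumption, not the exact schema used in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DialogAct:
    domain: str
    intent: str
    slot: str
    value: str  # the policy operates on a placeholder here

act = DialogAct("restaurant", "inform", "food", "Italian")
# Delexicalized form used by the policy; the placeholder is refilled later
# from the database entity (system) or the user goal (user).
delex = DialogAct(act.domain, act.intent, act.slot, "<value_0>")
```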
"In practice, dialog acts are delexicalized in the dialog policy.", "We replace the slot value with a count placeholder and refill it with the true value according to the entity selected from the external database $DB$, which allows the system to operate on unseen values.", "The system dialog state $s^S_t$ at dialog turn $t$ is the concatenation of (I) the user action at the current turn $a^U_t$; (II) the system action at the last turn $a^S_{t-1}$; (III) the belief state $b_t$ (Williams et al., 2016) that keeps track of the constraint slots and request slots supplied by the user agent; and (IV) embedding vectors of the number of query results $q_t$ from $DB$.", "User Policy: The user policy decides the user action $a^U$ according to the user dialog state $s^U$ to express its constraints and requests to the system agent.", "Similar to the system policy, the user policy uses delexicalized dialog acts as actions, and the value is refilled according to the user goal $G$.", "The user dialog state $s^U_t$ is the concatenation of (I) the last system action $a^S_{t-1}$; (II) the last user action $a^U_{t-1}$; (III) the goal state $g_t$ that represents the remaining constraints and requests that need to be sent; and (IV) the inconsistency vector $c_t$ (Kreyssig et al., 2018) that indicates the inconsistency between the system's response and the user constraints $C$.", "In addition to predicting dialog acts, the user policy outputs the terminal signal $T$ at the same time, i.e., $\mu = \mu(a^U, T|s^U)$.", "On the one hand, the roles of the user agent and the system agent are different.", "The user agent actively initiates a task and may change it during the conversation, but the system agent passively responds to the user agent and returns the proper information, so the reward should be considered separately for each agent.", "On the other hand, the two agents communicate and collaborate to accomplish the same task cooperatively, so the reward also involves a global target for both agents.", "Therefore, we decompose the mixed reward into three parts according to the characteristics of each component (a sketch follows below).", "The reward of each part is explained as follows: the System Reward $r^S_t$ consists of (I) an empty dialog act penalty ($a^S_t = \varnothing$); (II) a late answer penalty if a request slot is triggered but the system agent does not reply with the information immediately; and (III) a task success reward based on the user agent's description.", "The User Reward $r^U_t$ consists of (I) an empty dialog act penalty ($a^U_t = \varnothing$); (II) an early request penalty if the user agent requests information when there is still a constraint slot remaining to inform; and (III) a user goal reward for whether the user agent has expressed all the constraints $C$ and requests $R$.", "The Global Reward $r^G_t$ consists of (I) an efficiency penalty, i.e., a small negative value given at each dialog turn; (II) a sub-goal completion reward once the subtask of $G$ in a particular domain is accomplished; and (III) a task success reward based on the user goal $G$.", "Obviously, each agent should obtain its local reward, and both agents should receive the global reward during the training process.", "Note that the task success and the user goal reward are only computed at the end of the dialog, and the task success computed in the system reward differs from the one in the global reward.",
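A minimal sketch of the role-aware reward decomposition follows; the penalty and reward magnitudes are illustrative assumptions, not the paper's values.

```python
def step_rewards(sys_act, user_act, late_answer, early_request, subgoal_done):
    """Per-turn system / user / global rewards before end-of-dialog terms."""
    r_sys, r_user, r_global = 0.0, 0.0, -1.0  # efficiency penalty each turn
    if not sys_act:
        r_sys -= 5.0        # empty system dialog act
    if late_answer:
        r_sys -= 5.0        # requested slot not answered immediately
    if not user_act:
        r_user -= 5.0       # empty user dialog act
    if early_request:
        r_user -= 5.0       # request issued while constraints remain
    if subgoal_done:
        r_global += 5.0     # sub-goal completion in one domain
    return r_sys, r_user, r_global  # task-success rewards added at dialog end
```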
trajectories to obtain rewards, which may cause high variance.", "Another advantage of applying actor-critic approaches in MARL is that they can integrate with the CTDE framework: the actor of each agent benefits from a critic that is augmented with additional information about the policies of other agents during training.", "However, a simple centralized critic conditioned on the global state and joint actions cannot well exploit the domain knowledge mentioned above, since each part of the overall reward only depends on a subset of features, e.g. the system reward only depends on the system agent's behaviors.", "Inspired by the Hybrid Reward Architecture (Van Seijen et al., 2017) that learns a separate Q function, we propose the Hybrid Value Network to improve the estimate of the optimal role-aware value function.", "It first encodes the dialog state of each agent to learn a state representation $h^S_s = \tanh(f^S_s(s^S))$, $h^U_s = \tanh(f^U_s(s^U))$, where $f(\cdot)$ can be any neural network unit.", "The value network $V$ is separated into three branches, $V^S$, $V^U$ and $V^G$, for the value of the system rewards, user rewards and global rewards, respectively: $V^S(s^S) = f^S(h^S_s)$, $V^U(s^U) = f^U(h^U_s)$, $V^G(s) = f^G([h^S_s; h^U_s])$.", "The action space for the policies can be very large since we deal with multi-domain, complex dialog tasks, which makes it almost impossible for the RL policies to explore and learn from scratch.", "So the training process can be split into two stages (Fatemi et al., 2016; Takanobu et al., 2019): pretraining the dialog policy with the conversational corpus first, and then using RL to improve the pretrained policies.", "We use $\beta$-weighted logistic regression for policy pretraining here to alleviate data bias, because each agent only generates a few dialog acts in one dialog turn: $L(X, Y; \beta) = -[\beta \cdot Y^T \log \sigma(X) + (I - Y)^T \log(I - \sigma(X))]$, (1) where $X$ is the state and $Y$ is the action from the corpus in this task.", "As for critic optimization, it aims to minimize the squared error between the temporal difference (TD) target $r_t + \gamma V(s_{t+1})$ and the estimated value $V(s_t) = \mathbb{E}[r_t + \gamma V(s_{t+1})]$.", "Actor-critic algorithms can suffer from high variance when the critic is updated too frequently, which causes severe changes in the estimated value, particularly in multi-agent tasks.", "So we introduce a target network (Mnih et al., 2015) to make the training process more stable.", "In the context of HVN, it aims to minimize the following loss functions: $L^S_V(\theta) = (r^S + \gamma V^S_{\theta^-}(s'^S) - V^S_\theta(s^S))^2$, $L^U_V(\theta) = (r^U + \gamma V^U_{\theta^-}(s'^U) - V^U_\theta(s^U))^2$, $L^G_V(\theta) = (r^G + \gamma V^G_{\theta^-}(s') - V^G_\theta(s))^2$, $L_V = L^S_V + L^U_V + L^G_V$, (2) where the HVN $V$ is parameterized by $\theta$, $\theta^-$ is the weight of the target network, and the overall loss $L_V$ is the sum of the value estimation losses on each component reward.", "Algorithm 1 (Multi-Agent Dialog Policy Learning) requires a dialog corpus D with annotations of dialog acts {a}: initialize the weights $\theta$, $\phi$ for the system policy and the user policy respectively; pretrain the policies on the human conversational data D using Eq. 1; then repeat: initialize a user goal and the dialog states $s^U$, $s^S$, sample actions $a^U$, $a^S$ and the terminal signal T using the current policies, execute the actions and observe the rewards $r^U$, $r^S$, $r^G$ and the new states $s'^U$, $s'^S$, and update the hybrid value network (critic) using Eq. 2.", "Each dialog policy aims to maximize all the related returns, e.g.
the system policy aims to maximize the cumulative system rewards and global rewards $\mathbb{E}[\sum_t \gamma^t (r^S_t + r^G_t)]$.", "The advantage $A(s) = r + \gamma V(s') - V(s)$ estimated by the critic can evaluate the new state $s'$ and the current state $s$ to determine whether the dialog has become better or worse than expected.", "With the aid of HVN, the sum of the related component advantages can be used to update different agents.", "By using the log-likelihood ratio trick, the gradients for the system policy and the user policy yield: $\nabla J(\theta) = \nabla \log \pi(a^S \mid s^S)[A^S(s^S) + A^G(s)]$, (3) $\nabla J(\phi) = \nabla \log \mu(a^U \mid s^U)[A^U(s^U) + A^G(s)]$, where the system policy $\pi$ is parameterized by $\theta$ and the user policy $\mu$ by $\phi$.", "MultiWOZ (Budzianowski et al., 2018) is a multi-domain, multi-intent task-oriented dialog corpus that contains 7 domains, 13 intents, 25 slot types, 10,483 dialog sessions, and 71,544 dialog turns.", "During the data collection process, a user is asked to follow a pre-specified user goal, and is allowed to change the goal during the session if necessary, so the collected dialogs are much closer to real-world conversations.", "The corpus also provides the domain knowledge that defines all the entities and attributes as the external database.", "Evaluation of a task-oriented dialog system mainly consists of the cost and the task success.", "We count the number of dialog turns to reflect the dialog cost.", "A user utterance and a subsequent system utterance are regarded as one dialog turn.", "We utilize two other metrics, inform F1 and match rate, to estimate the task success.", "Both metrics are calculated at the dialog act level.", "Inform F1 evaluates whether all the requested information has been informed, and match rate checks whether the booked entities match all the indicated constraints given by the user.", "The overall task success is reached if and only if both inform recall and match rate are 1.", "4.3 Baselines We compare MADPL with a series of baselines that involve both system policy learning and user policy learning.", "Note that we do not consider any other approaches that use a user simulator for policy training, because our motivation is to avoid explicitly modeling a simulator.", "SL Supervised Imitation Learning directly uses the dialog act annotations and trains the agents simply by behavior cloning using Eq. 1, which is the same as the pretraining phase in MADPL.", "The following three baselines are all RL algorithms that start from the pretrained policy: RL Independent Reinforcement Learning learns only one dialog policy by fixing the other agent, following the single RL setting, and the reward for the agent is the sum of the role-specific reward and the global reward.", "Table 1: Number of user goals per domain class: Attraction 320, Hospital 22, Hotel 389, Police 22, Restaurant 457, Taxi 164, Train 421.", "For example, the user policy uses the reward $r = r^U + r^G$ at each dialog turn.", "CRL Centralized Reinforcement Learning is a MARL approach that uses a single centralized critic on the sum of rewards $r = r^U + r^S + r^G$ to train the two agents simultaneously, which also serves as the ablation test of MADPL.", "IterDPL Iterative Dialog Policy Learning (Liu and Lane, 2017) updates the two agents iteratively using single RL training to reduce the risk of non-stationarity when jointly training the two agents.", "A set of 1,000 user goals is used for automatic evaluation, as shown in Table 1.",
"When the dialog is launched, the two agents interact with each other around a given user goal.", "The performance of the interaction between the two trained policies is shown in Table 2.", "MADPL reaches the highest match rate and task success among all the methods.", "It manages to improve the success rate of the pretrained policies from 49.7% to 70.1%.", "Single RL policies (rows 2 to 4) show limited improvement, and even decline in match rate, since they assume a stationary environment.", "The comparison between CRL and IterDPL indicates the effectiveness of CTDE in the multi-agent task.", "The superiority of MADPL over CRL shows that the two agents benefit from the role-aware reward decomposition in HVN.", "The learning curves in Fig. 3 illustrate that the success rate grows rapidly with MADPL, and that it keeps improving the success rate as the training proceeds.", "The average of each component reward is shown in Fig. 4.", "We run 10 different instances of MADPL with different random seeds.", "The solid curves correspond to the mean, and the shaded region to the standard deviation, of rewards over the 10 trials.", "Table 2: Performance of the interaction between the user agent and the system agent (System/User: Turns, Inform, Match, Success): SL/SL: 6.34, 73.08, 82.58, 49.7; SL/RL: 8.75, 76.86, 76.28, 60.2; RL/SL: 6.20, 72.84, 79.15, 51.1; RL/RL: 7.92, 75.96, 70.37, 58.7; CRL: 8.13, 68.29, 89.71, 66.6; IterDPL: 8.79, 74.01, 81.04, 64.6; MADPL: 8.96, 76.26, 90.98, 70.1.", "We can observe that all the rewards increase steadily during the training process, which implies that HVN has estimated a proper return for policy training.", "It is essential to evaluate whether all the agents in a multi-agent dialog system understand the semantic interaction, rather than inventing an uninterpretable language (Kottur et al., 2017; Lee et al., 2019a).", "To this end, we use two benchmark policies in the standardized task-oriented dialog system platform Convlab (Lee et al., 2019b) to examine all the methods.", "Each benchmark is a strong rule-based system policy or user policy at the dialog act level, which is used as the simulated evaluation in the DSTC-8 Track 1 competition and shows a high correlation with real user interaction (Li et al., 2020).", "The trained system/user policy in each method is directly deployed to interact with the benchmark user/system policy during the test without any other fine-tuning, which can be regarded as a weakly zero-shot experiment.", "The same goal set as in Table 1 is used here.", "Figure 4: Learning curves of MADPL on system reward (top), user reward (middle) and global reward (bottom).", "Table 3 and Fig. 5 show the results of the interaction between the benchmark user policy and the system agent of each model.", "The SOTA performance of GDPL (Takanobu et al., 2019), which directly trains with the benchmark user policy, is also presented as a soft performance upper bound.", "Among all the methods, MADPL has achieved the highest task success and the second-highest match rate.", "All the methods experience a decline in inform F1 after the RL training.", "Fig. 5 also shows that the success rate is unstable during training.", "This is because the action space of the system policy is much larger, and thus more challenging to learn.", "In spite of that, the success rate of MADPL shows a rising trend.", "Table 4 and Fig.
6 show the results of the interaction between the user agent of each method and the benchmark system policy.", "Among all the methods, MADPL has achieved the highest inform F1 and task success.", "Though CRL improves the performance at the beginning, the success rate fails to increase further afterwards, while MADPL continues to improve all the time.", "This also indirectly indicates the advantage of using role-aware reward decomposition in HVN.", "We also investigate the domains in the user goals to observe the scalability of each method in complex tasks.", "200 goals are randomly sampled under each setting.", "Fig. 7 presents the results of the interaction between the two agents for different numbers or classes of domains.", "The success rate decreases substantially as the number of domains in the goal increases.", "When there are 3 domains in the goal, RL/RL gets a high inform F1 but a low match rate, IterDPL gets a high match rate but a low inform F1, while MADPL can still keep a high inform F1 and match rate, and obtains the highest task success.", "In terms of the class of domains, there are 7/10/6 informable slots that need to be tracked in the Restaurant / Hotel / Train domain, respectively.", "Among these, MADPL outperforms the other baselines in the Restaurant and Hotel domains, and performs comparably in the Train domain.", "In brief, all the results indicate that MADPL has good scalability in multi-domain dialog.", "For human evaluation, we hire Amazon Mechanical Turkers to conduct pairwise comparisons between MADPL and the baselines.", "Since all the policies work at the dialog act level, we generate the texts from dialog acts using hand-crafted templates to make the dialogs readable.", "Each Turker is asked to read a user goal first; then we show 2 dialog sessions around this user goal, one from MADPL and the other from another baseline.", "We randomly sample 100 goals for each baseline.", "Figure 7: Performance of dialog agents according to the different number (left) or class (right) of domains in the dialog.", "For each goal, 5 Turkers are asked to judge which dialog is better (win, draw or lose) according to different subjective assessments independently: (I) system quality, (II) user quality, and (III) task success.", "The system quality metric evaluates whether the system policy provides the user with the required information efficiently, and the user quality metric evaluates whether the user policy expresses the constraints completely in an organized way.", "Note that we do not evaluate the quality of language generation here.", "Table 5 shows the results of human preference by majority voting.", "We can observe that the high win rate of MADPL on the task success is consistent with the results of the automatic evaluation, and MADPL outperforms the three baselines significantly in all aspects (sign test, p-value < 0.01) except for the system quality against the RL/RL policies.", "The proportion of the pairwise annotations in which at least 3 of 5 annotators assign the same label to a task is 78.7%/77.3%/83.3% for system quality/user quality/task success, respectively.", "This indicates that annotators have moderate agreement.", "The human judgements align well with the results of the automatic evaluation, which also indicates the reliability of the metrics used in task-oriented dialog.", "In this paper, we have proposed Multi-Agent Dialog Policy Learning (MADPL), which trains the user policy and the system policy simultaneously.", "It uses the actor-critic framework to facilitate pretraining and bootstrap RL training in multi-domain task-oriented dialog.", "We also introduce role-aware reward decomposition
to integrate the task knowledge into the algorithm.", "MADPL enables developers to set up a dialog system rapidly from scratch.", "It only requires the annotation of dialog acts in the corpus for pretraining and does not need to build a user simulator explicitly beforehand.", "Extensive experiments demonstrate the effectiveness, reasonableness and scalability of MADPL.", "As future work, we will apply MADPL to more complex dialogs and verify the role-aware reward decomposition in other dialog scenarios.", "This work was jointly supported by the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096), and the National Key R&D Program of China (Grant No. 2018YFC0830200).", "We would like to thank THUNUS NExT Joint-Lab for the support.", "The code is available at https://github.com/truthless11/MADPL ." ]
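To make the Hybrid Value Network described above more concrete, here is a minimal PyTorch sketch of its structure: two role-specific state encoders followed by three value branches, matching $h^S_s = \tanh(f^S_s(s^S))$, $h^U_s = \tanh(f^U_s(s^U))$ and the branches $V^S$, $V^U$, $V^G$. All layer sizes, module names and tensor shapes are illustrative assumptions, not the authors' released implementation (their code is linked above).

```python
import torch
import torch.nn as nn

class HybridValueNetwork(nn.Module):
    """Sketch of HVN: role-aware critics for system, user, and global rewards."""
    def __init__(self, sys_state_dim, user_state_dim, hidden_dim=128):
        super().__init__()
        # State encoders: h^S_s = tanh(f^S_s(s^S)), h^U_s = tanh(f^U_s(s^U)).
        self.enc_sys = nn.Linear(sys_state_dim, hidden_dim)
        self.enc_user = nn.Linear(user_state_dim, hidden_dim)
        # Three value branches: V^S, V^U, and V^G on the concatenated states.
        self.v_sys = nn.Linear(hidden_dim, 1)
        self.v_user = nn.Linear(hidden_dim, 1)
        self.v_global = nn.Linear(2 * hidden_dim, 1)

    def forward(self, s_sys, s_user):
        h_sys = torch.tanh(self.enc_sys(s_sys))
        h_user = torch.tanh(self.enc_user(s_user))
        v_s = self.v_sys(h_sys)                                  # V^S(s^S)
        v_u = self.v_user(h_user)                                # V^U(s^U)
        v_g = self.v_global(torch.cat([h_sys, h_user], dim=-1))  # V^G(s)
        return v_s, v_u, v_g
```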
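A companion sketch of one training step under the same assumptions: the critic minimizes the summed TD losses of Eq. 2 against a frozen target copy of the HVN, and each actor is updated with the sum of its role-specific and global advantages as in Eq. 3. The batch layout (`batch["s_sys"]`, `batch["r_user"]`, etc.), the discount value, and the function names are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def critic_loss(hvn, hvn_target, batch, gamma=0.99):
    """Hybrid value loss L_V = L_V^S + L_V^U + L_V^G (Eq. 2 in the text)."""
    v_s, v_u, v_g = hvn(batch["s_sys"], batch["s_user"])
    with torch.no_grad():  # TD targets come from the target network (Mnih et al., 2015)
        nv_s, nv_u, nv_g = hvn_target(batch["next_s_sys"], batch["next_s_user"])
    # One squared TD error per reward component, then summed.
    return (F.mse_loss(v_s, batch["r_sys"] + gamma * nv_s)
            + F.mse_loss(v_u, batch["r_user"] + gamma * nv_u)
            + F.mse_loss(v_g, batch["r_global"] + gamma * nv_g))

def actor_losses(log_prob_sys, log_prob_user, adv):
    """Policy-gradient losses: each agent uses its role-specific plus global
    advantage (Eq. 3); advantages are treated as constants via detach()."""
    j_sys = -(log_prob_sys * (adv["sys"] + adv["global"]).detach()).mean()
    j_user = -(log_prob_user * (adv["user"] + adv["global"]).detach()).mean()
    return j_sys, j_user
```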
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "other", "other" ]
[ "This paper presents an efficient graph-enhanced approach to multi-document summarization (MDS) with an encoder-decoder Transformer model.", "This model is based on recent advances in pre-training both encoder and decoder on very large text data (Lewis et al., 2019), and it incorporates an efficient encoding mechanism (Beltagy et al., 2020) that avoids the quadratic memory growth typical for traditional Transformers.", "We show that this powerful combination not only scales to large input documents commonly found when summarizing news clusters; it also enables us to process additional input in the form of auxiliary graph representations, which we derive from the multi-document clusters.", "We present a mechanism to incorporate such graph information into the encoder-decoder model that was pre-trained on text only.", "Our approach leads to significant improvements on the Multi-News dataset, overall leading to an average 1 .", "8 ROUGE score improvement over previous work (Li et al., 2020).", "We also show improvements in a transfer-only setup on the DUC-2004 dataset.", "The graph encodings lead to summaries that are more abstractive.", "Human evaluation shows that they are also more informative and factually more consistent with their input documents.", "1 1 Introduction Abstractive Multi-Document Summarization (MDS), the task of writing a consolidated summary of the main information from multiple documents, has seen advancements with the introduction of large-scale datasets and powerful Transformer-based models (Liu et al., 2018; Liu and Lapata, 2019; Fabbri et al., 2019).", "However, some of the key challenges of MDS include lack of proper inter-document context-aware information, improper logical flow of information, and need 1 All our code publicly available at: https://github.", "for external deep context representations.", "Liu and Lapata (2019) and Li et al. (2020) have addressed the inter-document context modeling to some extent with local and global attention, and document-level similarity graphs.", "Further, Li et al. (2020) have addressed the later part of using external contextual information (large pre-trained language models, e.g., RoBERTa (Liu et al., 2019)) to improve the performance of MDS models.", "However, these pre-trained language models are (1) not scalable for long documents because of their encoding length limit and quadratic memory growth; and (2) they do not jointly explore alternate auxiliary information, e.g., semantic graphs derived from multi-document clusters.", "Addressing these issues, we present an efficient graph-enhanced approach to multi-document summarization using a pre-trained encoder-decoder Transformer model (Lewis et al., 2019), depicted in Fig. 1, along with an efficient encoding mechanism to encode longer input texts.", "To this end, we first provide a strong baseline for MDS on the Multi-News dataset (Fabbri et al., 2019) using a pre-trained encoder-decoder model, called BART (Lewis et al., 2019).", "Next, we incorporate a Longformer-based approach (Beltagy et al., 2020) into the pre-trained BART model, replacing the quadratic memory growth of the full self-attention mechanism with an efficient context window-based attention mechanism that scales the memory linearly w.r.t. 
the input length.", "This enables us to encode longer documents than previous work.", "This efficient encoding mechanism comprises local and global attention mechanisms that address the challenge of modeling inter-document context.", "Further, we build consolidated semantic graph representations of the multiple input documents and explore ways to incorporate them into the encoder-decoder model.", "The semantic graph for a given multi-document cluster is a compact representation of subject-predicate-object triplets (Stanovsky et al., 2018) extracted from the text of the documents; see Fig. 3 for an example.", "We propose a dual encoding mechanism that separately encodes the regular text of a multi-document cluster and a text representation of its graph.", "The regular text is encoded by the pre-trained BART encoder, while the graph text is encoded by a Transformer encoder that is not pre-trained.", "Empirically, we show that our approach (including the ability to use longer parts of the input documents and add auxiliary graph encodings) leads to significant improvements on the Multi-News dataset (achieving state-of-the-art), overall leading to an average 1.8 ROUGE score improvement over previous work (Li et al., 2020).", "Based on various automatic evaluation metrics, we show that adding graph encodings can help the model abstract away from the specific lexical content of the input and generate summaries that are more abstractive.", "Further human evaluation shows that they are also more informative and factually more consistent with their input documents.", "We also test our model with auxiliary graph encodings on the DUC-2004 dataset (Over and Yen, 2004) in a test-only transfer setup, and show that it improves generalization performance over a non-graph baseline model.", "Finally, we present ablations, such as analyzing the effect of input document length on the performance, a qualitative analysis of the output summaries, and the effect of various graph encoding approaches on the performance of the MDS system.", "Researchers have been interested in automatically summarizing multiple documents since the late 1990s.", "Early works (Mani and Bloedorn, 1997; Radev and McKeown, 1998) cited the growing popularity of the World Wide Web (WWW) as a motivation for the task.", "They modeled multi-document collections as graph structures, perhaps influenced by the link structure of the WWW itself.", "Mani and Bloedorn (1997) summarized pairs of documents by building a graph representation of each and performing graph matching to find salient regions across both documents.", "Radev and McKeown (1998) summarized multiple documents by mapping them to abstract template representations, then generating text from the templates.", "In the early 2000s, datasets from the Document Understanding Conference (DUC), which included human-written summaries for multi-document clusters, sparked increased research interest.", "In LexRank, Erkan and Radev (2004) extracted the most salient sentences from a multi-document cluster by constructing a graph representing pairwise sentence similarities and running a PageRank algorithm on the graph.", "Subsequent approaches followed the same paradigm while improving the diversity of the extracted sentences (Wan and Yang, 2006) or adding document-level information into the graph (Wan, 2008).", "Dasgupta et al. (2013) incorporated dependency graph features into their sentence relation graphs.", "Baralis et al.
(2013) built graphs over sets of terms, rather than sentences.", "Li et al. (2016) built a graph over event mentions and their relationships, in order to summarize news events using sentence extraction techniques.", "Liu et al. (2015) and Liao et al. (2018) leveraged the AMR formalism to convert source text into AMR graphs and then generated a summary using these graphs.", "More recently, the introduction of larger datasets for MDS has enabled researchers to train neural models for multi-document summarization.", "Liu et al. (2018) introduced a large-scale dataset for MDS called WikiSum, based on Wikipedia articles.", "Liu and Lapata (2019) introduced a hierarchical Transformer model to better encode global and local aspects in multiple documents and showed improvements on WikiSum.", "Fabbri et al. (2019) introduced an MDS dataset of human-written abstracts from the newser.com website, along with the source articles that are cited from these abstracts.", "Further, they also proposed a hierarchical neural model for MDS with an additional Maximal Marginal Relevance (MMR) module that calculates sentence ranking scores based on relevancy and redundancy.", "Li et al. (2020) further showed the usefulness of pre-trained language models for improving the performance on MDS.", "However, this approach lacks a pre-trained decoder, and it also limits the document length that can be encoded by the pre-trained language models.", "In contrast, our work utilizes the pre-trained seq2seq BART (Lewis et al., 2019) model to improve the performance on MDS.", "We have also incorporated the Longformer-based attention mechanism (Beltagy et al., 2020) into the BART model to encode long documents.", "To encode graphs into an MDS neural model, Fan et al. (2019) constructed a semantic graph representing key phrases and entities from the documents, as well as their expressed relationships; they used linearized forms of these graphs as inputs to their Transformer model.", "In contrast, we use dual encoders for encoding both the documents' text and the linearized graph text.", "Recently, Li et al. (2020) constructed a similarity graph, topic graph, and discourse graph between input documents and encoded this information directly, rather than in linearized form, into a Transformer.", "In our work, we build semantic graphs at the sentence level and create a consolidated graph representation by efficiently removing less useful information.", "In this section, we first discuss our baseline MDS model utilizing the pre-trained BART sequence-to-sequence model (Lewis et al., 2019).", "Next, we integrate a Longformer approach (Beltagy et al., 2020) into the BART model for encoding long documents.", "Finally, we discuss our integration of graph encodings into the BART model.", "Bidirectional Auto-Regressive Transformer (BART) (Lewis et al., 2019) is a sequence-to-sequence Transformer-based model where the encoder is bi-directional and the decoder is uni-directional.", "The objective of this model is to reconstruct the actual input from a given noisy text input.", "Input noising strategies include token masking, sentence permutation, document rotation, token deletion, and text infilling.", "Figure 2: Pictorial overview of various attention mechanisms (full self-attention, local self-attention, and local + global self-attention).", "To perform multi-document summarization (MDS), we use the pre-trained BART model (trained as described above) and fine-tune it on the MDS datasets.", "Following Fabbri et al.
(2019), we feed the cluster documents as a single string, joined by a special marker, to the BART encoder.", "Recently, the Longformer model (Beltagy et al., 2020) was introduced to allow the pre-trained RoBERTa model (Liu et al., 2019) to encode longer documents than its fixed 512-token limit.", "This is achieved by replacing the traditional full self-attention mechanism (top diagram in Fig. 2) in Transformers ($O(n^2)$ memory complexity) with a sparse context window-based attention mechanism, which has linear memory complexity w.r.t. the document length.", "Further, a small number of tokens are selected to attend over all other tokens, thus creating global attention along with the local context window-based attention (bottom diagram in Fig. 2).", "Previously, Longformer has only been explored for pre-trained encoder-only models, e.g., RoBERTa.", "In our work, we extend this approach to the pre-trained sequence-to-sequence BART model.", "We integrate the Longformer, including both local and global attention mechanisms, into the BART model (named BART-Long) to encode documents much longer than its maximum token limit of 1024.", "In order to better encode the information from multiple documents, we incorporate global attention after every sentence and explore various context window sizes for local attention.", "Recently, Fan et al. (2019) converted each multi-document input of the MDS into a graph and then passed the linearized form of this graph as input to a non-pre-trained sequence-to-sequence model, replacing the original text input.", "In contrast, our work explores the integration of graph encodings into a pre-trained BART model with a separate graph encoder.", "It is important and also challenging to encode graph representations into the pre-trained model while leveraging the pre-existing knowledge from pre-trained models.", "Moreover, we utilize the BART-Long model described in Sec. 3.2 to avoid the limitation on the input length when encoding both graph and textual information.", "Next, we describe how we convert multiple input documents into a consolidated graph representation, and later describe how we encode this information into an extended BART architecture.", "Graph Construction.", "Following Fan et al.
(2019), we perform three steps to construct a consolidated graph from multiple input documents.", "First, we do co-reference resolution within each document and extract open information extraction (OIE) triplets at the sentence level from all input documents.", "Each OIE triplet consists of a subject, a predicate, and an object.", "Once we have all the triplets, in the second step, we build a graph with the subjects and objects as nodes and the predicates as the edge relationships between the nodes.", "We also calculate the TF-IDF scores for each word in a document.", "This is useful in identifying similar phrases and merging their corresponding nodes in the graph.", "We use the AllenNLP ( https://allennlp.org/ ) library for co-reference resolution and for extracting OIE triplets.", "We define one representative unique string (as a node) from the pool of all matched strings, and we manually set the TF-IDF matching threshold to 0.5 based on graph size.", "Once we build the graph, we remove the clusters (sub-graphs) with only two nodes, thereby creating a consolidated graph.", "In the third step, we convert the graph into a linearized form.", "For this, we traverse the sub-graphs in the order of their size, and within each sub-graph we simply start from the node with the highest centrality and move down the sub-graph in a breadth-first search to generate linearized text.", "We concatenate these texts together to form the linearized graph text.", "Fig. 3 gives an overview of our graph construction approach with examples of the linearized graph.", "Here, we use special tokens like <sub> for subject, <pred> for predicate, <obj> for object, and <cat> for concatenating multiple predicates between a pair of a subject and an object.", "Linear Graph Model.", "Our initial experiments combining both the documents' text and the linearized graph text into one single input for the BART model gave a slight improvement.", "To further enable better encoding, we used two encoders: (1) encoding the documents' original text via the pre-trained BART encoder; and (2) encoding the linearized graph text via a new graph encoder, as shown in Fig. 4.", "Figure 4: Overview of our approach with the BART encoder and a graph encoder: the linearized graph text passes through M graph Transformer layers, and its outputs are concatenated with the outputs of the first K BART encoder layers over the concatenated truncated documents, before the remaining N-K BART layers.", "Let $x_i$ and $g_i$ represent the tokens at position $i$ of the documents' text and the linearized graph text, respectively.", "Also, let the corresponding token embeddings be $e_{x_i}$ and $e_{g_i}$, and the positional embeddings be $p_{x_i}$ and $p_{g_i}$.", "Then, the input to the BART encoder ($x^0_i$) and to the graph encoder ($g^0_i$) is the sum of the corresponding token and positional embeddings.", "Let the final outputs of the graph encoder with M Transformer layers be $g_M$.", "Let the outputs of the BART encoder after K Transformer layers be $x_K$.", "Now, we combine these outputs and give them as a single input to the $(K+1)$-th layer of the BART encoder (as shown in Fig.
4).", "The combined input to the $(K+1)$-th Transformer layer is defined as: $\tilde{x}_K = [x_K; g_M]$, (2) where $[\cdot;\cdot]$ represents concatenation and $\tilde{x}_K$ represents the input to the $(K+1)$-th layer (the total number of inputs at this layer is equal to the sum of the documents' text and graph text tokens).", "We set K = 1 and M = 1 in all our experiments.", "Our approach of having separate encoders for the graph information could bring the linearized graph text representations closer to the pre-trained BART representations.", "Multi-News Dataset.", "The Multi-News dataset (Fabbri et al., 2019) consists of English news articles and the corresponding summaries written by professionals on the newser.com website.", "The articles in this dataset are curated from a diverse set of news sources (over 1,500 sites).", "In this work, we use the same splits provided by Fabbri et al. (2019), i.e., 44,972/5,622/5,622 examples for training/validation/test, respectively.", "Following Fabbri et al. (2019), we truncate the N input documents to a total length of L tokens, such that each document contributes an equal share of the L tokens.", "DUC-2004 Dataset.", "The DUC-2004 dataset (Over and Yen, 2004; https://duc.nist.gov/duc2004/ ) consists of 50 topics with 10 English documents per topic.", "Each topic has 4 human-written summaries.", "In our work, we use this dataset as a test-only setup to analyze the transfer skills of our models.", "Evaluation Metrics.", "We evaluate our models via automatic evaluation metrics using ROUGE (Lin, 2004; https://pypi.org/project/pyrouge/ ), as well as human evaluations of informativeness, coherence, and factual consistency.", "Following previous work (Fabbri et al., 2019), we report the F1 scores of ROUGE-1, ROUGE-2, and ROUGE-L on the Multi-News dataset, and the F1 scores of ROUGE-1, ROUGE-2, and ROUGE-SU on the DUC-2004 dataset with a 100-word limit.", "In order to have a fair comparison with previous work, we report summary-level ROUGE-L scores.", "Training Details.", "We tune all our models based on the validation performance.", "We start with the pre-trained BART-Large model, which has 12 Transformer encoder layers and 12 decoder layers, and fine-tune it on the Multi-News dataset.", "All our new methods are implemented on top of the fairseq library ( https://github.com/pytorch/fairseq ).", "We train each model on 4 Nvidia V100 GPUs.", "By default, we use the Adam optimizer with a learning rate of $2 \times 10^{-5}$, manually tuned in the range $[1 \times 10^{-5}, 4 \times 10^{-5}]$, with 500 warm-up steps.", "We apply a dropout of 0.1 and a label smoothing of 0.1.", "We perform standard tokenization following previous work (Fabbri et al., 2019) and lowercase both source and target.", "During inference, we use a minimum decoding length of 50 and a maximum decoding length of 500.", "For our BART model with Longformer attention, we use a default attention context window size of 128.", "We train our BART-Long model for 5 epochs, which takes approximately 6 hours.", "For the BART-Long-Graph model, we train for 8 epochs, which takes approximately 8 hours.", "In terms of the total number of trainable parameters, BART-Long has 447 million parameters and BART-Long-Graph has 463 million parameters.", "Note that models based on pre-trained RoBERTa also have a similar number of parameters, given that RoBERTa has 24 Transformer layers with 1024 hidden size, whereas BART-based models have 12 Transformer layers each on the encoder and decoder sides with 1024 hidden size.", "In this section, we discuss the performance of various previous works on the Multi-News and DUC-2004 datasets, and compare them with our proposed models.", "Baseline Results.", "Table 1 presents the performance of various previous works.", "First, we report the results of PG-BRNN, HiMAP, and Flat Transformer following Fabbri et al. (2019).", "Next, we report the results of Hierarchical Transformers, RoBERTa+Transformer Decoder, and the variants of GraphSum models following Li et al.
(2020).", "Some of these previous works use RoBERTa-based encoder representations while training MDS models, hence achieving strong results.", "Note that all these models use 500 tokens for the source input.", "BART-Long Results.", "Table 1 also presents the results of our BART-Long model as described in Sec. 3.2.", "Our BART-Long model is better than all previous works by a large margin, achieving a new state-of-the-art.", "This is for two reasons: (1) the BART model has pre-trained encoder and decoder representations, whereas the previous works have pre-trained encoder-only models such as RoBERTa+Transformer Decoder and GraphSum + RoBERTa; and (2) the BART model has a larger number of parameters.", "Note that RoBERTa+Transformer Decoder (Li et al., 2020) also has a similar number of parameters (see training details in Sec. 4).", "Apart from the performance, our BART-Long model has the advantage of encoding longer parts of the input documents more efficiently than traditional Transformer models or RoBERTa-style pre-trained models (more results on this in Sec. 5.4; Table 4).", "This is because the BART-Long model has linear memory complexity via its local and global attention mechanism.", "BART-Long-Graph Results.", "The results of our novel graph-based encodings in the BART model are shown in the last two rows of Table 1.", "Both of these models perform statistically significantly better than our strong BART-Long baseline; the main difference between the two models is the number of tokens used in the graph encoder.", "Our BART-Long-Graph with 500 tokens of graph text is significantly better than the BART-Long baseline on all ROUGE metrics with p < 0.05, based on the ROUGE script's 95% confidence interval, whereas our BART-Long-Graph with 1,000 tokens of graph text is statistically significantly better on the ROUGE-1/2 metrics with p < 0.05, and it achieves the best average ROUGE score.", "Note that we construct our graph using 2,000 tokens of the input documents, and use 500 or 1,000 tokens of linearized graph text as input along with 500 tokens of the input documents' text.", "We further calculated BERTScore (Zhang et al., 2020) for our models, and the F1 scores are 44.06, 44.52, and 44.64 for BART-Long, BART-Long-Graph with 500 tokens of graph, and BART-Long-Graph with 1,000 tokens of graph, respectively.", "We have also tried pre-training the BART-Long-Graph with the criterion of decoding the original documents' text from noisy input, i.e., by randomly removing some sentences from the linearized graph text and the documents' text.", "However, we do not see any significant improvement with this approach.", "BART-Long with longer inputs performs on par with
BART-Long-Graph (see Table 4); however, it suffers from generating more extractive summaries, whereas our graph methods generate more abstractive summaries (see Table 8).", "We also evaluate our proposed models in a test-only transfer setup using the DUC-2004 multi-document summarization dataset.", "Table 2 presents the results on this dataset, comparing our models with previous works.", "Our models perform better than some of the extractive summarization methods (TextRank and MMR).", "However, some of the previous works perform better than our models, but we cannot strictly compare with them since they are trained on the CNN/Daily Mail dataset.", "If we compare these models (e.g., Hi-MAP) on the Multi-News dataset, our models perform much better (see Table 1).", "Comparing our baseline model and our model with graph encodings, we observe that graph encodings help improve the performance by 0.9 on ROUGE-1 and 0.5 on ROUGE-SU.", "This suggests that graph information is useful in transfer setups as well.", "We conduct a human evaluation on Amazon MTurk to analyze the effect of adding the graph input to the BART-Long model (setup details in Appendix B).", "Informativeness and coherence: To evaluate how graph encodings impact the informativeness and coherence of the generated summaries, we show human annotators pairs of summaries from the BART-Long model and the BART-Long-Graph model and ask them to indicate which one is more informative and which one is more coherent; definitions are listed in Appendix B.", "There is also an option for choosing None.", "The summaries are labeled as A and B using random permutation; we also show the target summary from the test set for reference.", "We obtain judgments from two annotators on 200 examples from the Multi-News test set.", "Table 3 shows the results; None represents all cases where either both annotators picked None or the two annotators did not give the same answer.", "We observe that BART-Long-Graph summaries were picked as more informative by both judges 25.5% of the time, compared to 17% for the BART-Long model.", "The results are closer for coherence, with a slight disadvantage for the BART-Long-Graph model.", "We hypothesize that using graph information, which has a different structure than natural text, makes the summary less coherent.", "Factual consistency: We evaluate factual consistency by highlighting single summary sentences and asking the annotators whether each is consistent with the input articles.", "We ask three annotators to judge the factual consistency of the highlighted summary w.r.t. the articles on 200 outputs per model.", "For the BART-Long-Graph model, 72% of the summaries are judged as factually consistent by two or more annotators, compared to 68% for the BART-Long model.", "Frequently, news sources are hallucinated, e.g., 'as reported by TMZ'.", "This error accounts for 18 of the 136 errors of the BART-Long-Graph model and 19 of the 144 errors of the BART-Long model.", "More details are given in Appendix B.",
"5.4 Ablations and Analyses What is the effect of input document length on the performance?", "Table 4 presents the performance comparison of BART with Longformer (BART-Long) over different input lengths.", "At the same input length, BART-Long's performance is slightly lower than that of the BART model without Longformer attention, i.e., using full self-attention.", "This is expected, as we replace the full self-attention with local and global attention with a lower memory footprint.", "More importantly, BART-Long can encode longer documents and can achieve better results, which is evident from the results in Table 4.", "Overall, we observe that the best results are achieved at a document length of 1,000 tokens, with no further improvement for any input length greater than that.", "What is the effect of the attention context window size on the performance?", "We also compare the effect of various attention context window sizes in the local attention mechanism of the BART-Long model on the summarization performance.", "Table 5 presents this ablation with attention context window sizes of 32, 64, 128, 256, and 512, on the Multi-News dataset with 1,000-token inputs.", "Here, we observe that performance improves with the context window size up to a certain point and then stays more or less the same.", "Note that in Table 1 we use an attention context window size of 128 to trade off between memory and performance.", "Different approaches to graph encoding.", "Table 6 presents the results of various graph encoding methods.", "First, we replace the original input with the linearized graph text, and we observe a significant drop in performance ('BL-Graph-Only'; second row in Table 6).", "This suggests that the documents' text as input is very important for achieving good results.", "Next, we concatenate the documents' text with the linearized graph text and give it as input to the BART model ('BL-Graph-Concat'), which achieves slightly better results than the baseline.", "However, when we add the linearized graph text via a separate graph encoder ('BL-Separate-Graph'; same as our 'BART-Long-Graph' model in Table 1), we achieve the best results.", "How abstractive are the summaries?", "Abstractive summarizers generate surprisingly extractive summaries, copying large fragments unmodified from the input documents into the summaries (Weber et al., 2018; Pilault et al., 2020).", "We hypothesize that providing graph representations of the input can help the model abstract away from the specific lexical content of the input and generate summaries that are more abstractive.", "Table 8 shows the lexical overlap between the summaries and their inputs when truncating the input documents to different numbers of words, and when adding a graph representation of the input (truncated to 1k graph tokens).", "Density measures the expected length of the extractive fragment that any randomly chosen summary word belongs to (Grusky et al., 2018); LCS(%) is the length of the longest common subsequence divided by the length of the summary; and 4-gr(%) is the proportion of 4-grams in the summaries that are extracted from the input.", "Table 9 sample output (BART-Long): Microsoft's acquisition of Nokia's cell phone business is a big step in the company's evolution into a devices and services company, but it's not going to be the Apple of the mobile world. Nokia, which had a 35% market share of the cell phone market in 2003, made an operating profit of 5.48 billion Euros that year, according to the Wall Street Journal, but today's sale
price for the company, which includes 1.65 billion Euros in patents, is just 5.44 billion Euros.", "We observe that longer text inputs make summaries more extractive, while adding a graph makes summaries more abstractive.", "We observe similar trends when we add the graph to longer or shorter text inputs.", "Would a better graph lead to further improvements?", "In order to assess how good our graph construction approach is, we convert the target summary into a graph and use its linearized text as input to the model, along with the original input documents' text and its linearized graph text.", "Table 7 presents this ablation, where we linearly increase the amount of target graph information given as input, and we observe that using more target graph information leads to better performance.", "This suggests that a better way of including more salient information in the graph construction process could lead to a better summarization model.", "Qualitative analysis of output summaries.", "Table 9 presents generated summaries from two models, BART-Long and BART-Long-Graph.", "Both examples exhibit the misattribution-of-source error mentioned in Sec. 5.3, motivating the need to improve the factual consistency of abstractive summaries.", "The overlapping n-grams between the summary and the original source articles are highlighted in color.", "Yellow and red stand for shorter and longer n-gram overlap, respectively.", "The visualizations show that BART-Long-Graph produces more abstractive summaries, as shown in Table 8, due to the fact that it incorporates triplet-based information that abstracts away from the surface of the source articles.", "We presented an efficient graph-enhanced approach to MDS that achieves state-of-the-art results on the Multi-News dataset using a pre-trained encoder-decoder Transformer model along with an efficient encoding mechanism.", "We also show improvements in a transfer-only setup on the DUC-2004 dataset.", "The graph encodings lead to summaries that are more abstractive.", "Human evaluation shows that they are also more informative and factually more consistent with their input documents.", "Finally, we present extensive ablations to better understand the usefulness of our method.", "We thank the anonymous reviewers for their helpful comments.", "This work was partially supported by NSF-CAREER Award 1846185, an Amazon ML Research Award, and a Microsoft PhD Fellowship." ]
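As an illustration of the graph construction and linearization steps described above, the following Python sketch builds an undirected graph from OIE triplets, drops two-node sub-graphs, and emits <sub>/<pred>/<obj>/<cat>-tagged text via a breadth-first traversal from the most central node of each remaining sub-graph. It uses networkx for the graph operations; the triplet format, the largest-first traversal order, and the omission of the TF-IDF node-merging step are simplifying assumptions, not the paper's exact procedure.

```python
import networkx as nx

def linearize(triplets):
    """triplets: list of (subject, predicate, object) strings from an OIE system."""
    g = nx.Graph()
    for subj, pred, obj in triplets:
        if g.has_edge(subj, obj):
            # Multiple predicates between the same pair are joined with <cat>.
            g[subj][obj]["pred"] += " <cat> " + pred
        else:
            g.add_edge(subj, obj, pred=pred)
    parts = []
    comps = [g.subgraph(c) for c in nx.connected_components(g)]
    for sub in sorted(comps, key=len, reverse=True):  # traverse sub-graphs by size
        if len(sub) <= 2:                             # drop two-node clusters
            continue
        centrality = nx.degree_centrality(sub)
        root = max(centrality, key=centrality.get)    # start at most central node
        for u, v in nx.bfs_edges(sub, root):          # breadth-first traversal
            parts.append(f"<sub> {u} <pred> {sub[u][v]['pred']} <obj> {v}")
    return " ".join(parts)

# Example: linearize([("Nokia", "sold", "its phone business"),
#                     ("Microsoft", "acquired", "its phone business"),
#                     ("Microsoft", "paid", "5.44 billion Euros")])
```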
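And a minimal sketch of the dual-encoder fusion of Eq. (2): the linearized graph text passes through M graph Transformer layers, the document text through the first K BART encoder layers, and the two output sequences are concatenated along the sequence dimension before the remaining N-K BART layers. The module names, and the assumption that each layer maps a (batch, sequence, hidden) tensor to the same shape, are illustrative; this is not the released fairseq implementation.

```python
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    def __init__(self, bart_layers, graph_layers, k=1):
        super().__init__()
        self.bart_layers = nn.ModuleList(bart_layers)    # N pre-trained layers
        self.graph_layers = nn.ModuleList(graph_layers)  # M randomly initialized layers
        self.k = k                                       # the paper sets K = 1

    def forward(self, x, g):
        # x: embedded document tokens; g: embedded linearized-graph tokens.
        for layer in self.graph_layers:                  # graph encoder -> g_M
            g = layer(g)
        for layer in self.bart_layers[: self.k]:         # first K BART layers -> x_K
            x = layer(x)
        x = torch.cat([x, g], dim=1)                     # x~_K = [x_K ; g_M]
        for layer in self.bart_layers[self.k :]:         # layers K+1 .. N on joint input
            x = layer(x)
        return x
```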
[ "method", "abstain", "objective", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "objective", "abstain", "objective", "abstain", "result", "result", "abstain", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "objective", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "method", "other", "other" ]
[ "Lately proposed Word Sense Disambiguation (WSD) systems have approached the estimated upper bound of the task on standard evaluation benchmarks.", "However, these systems typically implement the disambiguation of words in a document almost independently, underutilizing sense and word dependency in context.", "In this paper, we convert the nearly isolated decisions into interrelated ones by exposing senses in context when learning sense embeddings in a similarity-based Sense Aware Context Exploitation (SACE) architecture.", "Meanwhile, we enhance the context embedding learning with selected sentences from the same document, rather than utilizing only the sentence where each ambiguous word appears.", "Experiments on both English and multilingual WSD datasets have shown the effectiveness of our approach, surpassing previous state-of-the-art by large margins (3.7% and 1.2% respectively), especially on few-shot (14.3%) and zero-shot (35.9%) scenarios.", "Word Sense Disambiguation (WSD) is the task of determining a word's sense given its context.", "Recently, contextualized representation learning (Devlin et al., 2019; Liu et al., 2019) have accelerated the advancement of WSD, raising the performance on a standard evaluation framework (Raganato et al., 2017a) from slightly higher than 70% (Raganato et al., 2017b; Luo et al., 2018; Kumar et al., 2019) to about 80% (Vial et al., 2019; Blevins and Zettlemoyer, 2020; Bevilacqua and * corresponding author Navigli, 2020).", "This is an estimated upper bound of the task, which is from the inter-annotator agreement: the percentage of words that are annotated with the same meaning by two or more annotators (Navigli, 2009).", "There is a clear trend that supervised systems tend to incorporate sense knowledge into their architecture, ranging from sense definition, usage examples to sense relation.", "However, the disambiguation of words in a document is almost independent of each other, especially from the perspective of senses in context.", "The connection of each word's disambiguation is limited to the utilization of a sentence (Loureiro and Jorge, 2019; Huang et al., 2019; Hadiwinoto et al., 2019; Scarlini et al., 2020a) or a small window of text (Bevilacqua and Navigli, 2020) because of computation cost or model restriction.", "More severely, the interaction of senses in context is barely explored.", "Similar to word cooccurrence, the appearance of one sense can sometimes dominate the choice of another sense in the same sentence (Agirre et al., 2014; Maru et al., 2019).", "In this paper, we introduce SACE, a similarity-based WSD approach.", "Precisely, we transform the previously almost isolated disambiguation of words in a document into interrelated ones to maximize the contribution of context from both word and sense perspectives.", "We summarize our contributions as follows:", "1. We devise an interactive sense embedding learning technique that takes into account senses in context via a selective attention layer in a neural architecture.", "It connects senses via their appearance in a piece of text rather than using manually constructed sense relations, being less costly.", "2. 
We introduce a method to better exploit the context sentences of an ambiguous word in the neural architecture by selecting important sentences from the same document according to sentence relatedness.", "3. With experiments on the corresponding datasets, the proposed architecture is shown to have an overwhelming advantage in few-shot and zero-shot WSD learning ability compared with other strong baselines.", "4. We show that the proposed architecture is portable to multilingual scenarios when trained merely on an English dataset with a multilingual pre-trained model, achieving new state-of-the-art results on most tested benchmarks and on the combined one.", "There are mainly two alternatives for solving WSD, namely knowledge-based and supervised approaches.", "While the former mainly relies on a sense inventory for disambiguation, the latter is dependent on sense-annotated corpora to train a sense classifier, either for each word or for the whole vocabulary.", "However, many recently proposed systems combine the above two strategies, injecting sense knowledge into their supervised models while somewhat inadequately modeling the provided context in a document from both word and sense perspectives.", "Early supervised approaches model the relational pattern between an ambiguous word's local features and its gold sense from sense-annotated data.", "IMS (Zhong and Ng, 2010) was one of the most prevalent systems that trained a sense classifier for each lemma in the training data.", "In comparison, Raganato et al.
(2017b) unified the disambiguation of words into a single sequence labeling architecture, relieving the efficiency issue.", "Many following systems improved this architecture by incorporating sense knowledge.", "For unseen lemmas, these systems require a most frequent sense (MFS) fallback (selecting the most frequent candidate sense in the training data).", "To tackle this problem, LMMS (Loureiro and Jorge, 2019) implements the disambiguation in a similarity-based manner.", "It learns a sense embedding for each labeled sense in SemCor (Miller et al., 1994) and maps them to the full coverage of WordNet (Miller, 1995) senses using sense relations.", "BERT (Devlin et al., 2019) is used as a feature-extraction module for both gloss and context encoding.", "Further, BEM (Blevins and Zettlemoyer, 2020) utilizes two encoders for the above approach in a fine-tuning manner.", "Although the model is more effective even without exploiting sense knowledge other than glosses, it takes around 2.5 days to train.", "The employment of sense relations in previous supervised systems is mostly limited to explicitly defined sense relations, including the hypernymy and hyponymy relations, severely neglecting how senses in context contribute to the selection of a word's sense.", "For supervised WSD approaches, it is typical to use a small fraction of the whole context to carry out disambiguation, such as a sentence or a sliding window of text.", "In contrast, knowledge-based WSD approaches tend to exploit a word's context more sufficiently, ranging from a sentence (Lesk, 1986; Wang and Wang, 2020) and a few sentences (Agirre et al., 2018; Wang et al., 2020) to even the whole document (Chaplot and Salakhutdinov, 2018).", "Some studies draw in out-of-dataset context (Ponzetto and Navigli, 2010; Scarlini et al., 2020a) for disambiguation, including Wikipedia documents.", "Therefore, it is worth exploring whether the disambiguations of words within the same document can benefit from each other in a supervised system.", "The utilization of senses in context is far less investigated compared with words in context.", "UKB (Agirre et al., 2014), a knowledge-based system, is one of the related systems that model sense relations in context.", "It first connects senses in context via WordNet sense relations and runs personalized PageRank on the constructed sense graph to decide sense importance.", "For each word, the most important potential sense is considered the correct sense.", "SyntagNet (Maru et al., 2019) improves the idea by introducing manually disambiguated sense pairs in context during sense graph construction.", "Although the system was able to challenge supervised systems at the time, it relied on human labor to obtain sense pairs in context.", "There was no attempt at integrating the utilization of senses in context into a supervised architecture.", "WSD is the task of selecting the correct sense of a word given its context.", "$w_i$ is the $i$-th word in the sentence $s_j = \{w_1, w_2, \dots, w_i, \dots\}$ of a document $d = \{s_1, s_2, \dots, s_j, \dots\}$.", "The candidate senses $C = \{c_1, c_2, \dots, c_k, \dots\}$ are from a sense inventory such as WordNet.", "Here, $j$, $i$, and $k$ denote the index of the sentence, word, and sense, respectively.", "In a similarity-based WSD approach, the disambiguation of a word is determined by the similarity between its context representation $e_{w_i}$ and each candidate sense representation $e_{c_k}$.", "In many cases, both representations are vectors and the similarity is measured by their dot product after normalization.", "Then, the sense with the highest similarity is selected as
, "Typically, a word's context representation is learned using the sentence where the word appears (Loureiro and Jorge, 2019; Scarlini et al., 2020a; Scarlini et al., 2020b).", "The representation of a candidate sense is obtained using its gloss/definition defined in WordNet (Blevins and Zettlemoyer, 2020).", "A common approach to encoding these two sequences in recent research is to utilize pre-trained models such as BERT, RoBERTa (Liu et al., 2019), and so on, taking the sum of the outputs of the last four layers as the encoded features (Loureiro and Jorge, 2019; Scarlini et al., 2020a), as in (1) and (2).", "Before feeding $s_i$ and the gloss $g_{c_k}$ to the models, a special token [CLS]/[SEP] is added to the beginning/end of each sequence, modifying them into $\hat{s}_i$ and $\hat{g}_{c_k}$, respectively.", "For each word $w_{i,j}$'s context representation, a normal choice is to utilize the model's outputs at the position of the word, with $\hat{s}_i$ as input: $v_{i,j} = \sum_{l=L-3}^{L} h_l(w_{i,j}; \hat{s}_i)$ (1), where $h_l(\cdot)$ denotes the output of the $l$-th of the model's $L$ layers at the given position.", "If the word is tokenized into several pieces, their mean is taken.", "In contrast, for each sense representation, when fine-tuning a pre-trained model, the sense embedding is taken as the output at the position of [CLS] (Blevins and Zettlemoyer, 2020), with the modified gloss as input: $e_{c_k} = \sum_{l=L-3}^{L} h_l(\mathrm{[CLS]}; \hat{g}_{c_k})$ (2).", "To utilize the supervision from a training corpus, a cross-entropy loss is implemented between the similarity distribution over the candidate senses, $p(c_k \mid w_{i,j}) = \mathrm{softmax}(E\, v_{i,j})_k$ (3), and the one-hot ground-truth distribution: $\mathcal{L} = -\sum_{k=1}^{K} y_k \log p(c_k \mid w_{i,j})$ (4).", "$E \in \mathbb{R}^{K \times d}$ is a matrix of the concatenated sense embeddings arranged in rows.", "$d$ is the dimension of the pre-trained model's hidden states (768 or 1024 for BERT).", "$y_k$ equals 1 when $c_k$ is the correct sense of $w_{i,j}$ and 0 otherwise, representing each element of the ground-truth one-hot vector.", "For prediction, the model selects the sense with the largest dot product for each word.", "In the above approach (from BEM; Blevins and Zettlemoyer, 2020), the embedding learning processes of different senses are independent of each other, relying merely on the sense glosses.", "Figure 1: SACE Framework.", "Besides, the interaction between different words' disambiguation is limited to the utilization of a single sentence, leading to inadequate exploitation of the words in context.", "Therefore, we transform the above almost isolated decisions into interrelated ones by learning the sense and context embeddings interactively."
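The following PyTorch sketch illustrates equations (1)-(4) above; the random tensors stand in for the layer outputs of a pre-trained model (in practice obtained by running BERT/RoBERTa with hidden states exposed), and all shapes and names are assumptions for illustration rather than the authors' code.

```python
import torch
import torch.nn.functional as F

num_layers, seq_len, gloss_len, hidden, K = 12, 32, 16, 768, 5
layers = torch.randn(num_layers, seq_len, hidden)             # context sentence outputs
gloss_layers = torch.randn(K, num_layers, gloss_len, hidden)  # one gloss per candidate sense

# Eq. (1): context embedding = sum of the last four layers at the word position.
word_pos = 7
v = layers[-4:, word_pos, :].sum(dim=0)                       # (hidden,)

# Eq. (2): sense embedding = sum of the last four layers at [CLS] (position 0).
e = gloss_layers[:, -4:, 0, :].sum(dim=1)                     # (K, hidden)

# Eq. (3): similarity distribution over the K candidate senses.
logits = e @ v                                                # (K,) dot products
p = F.softmax(logits, dim=0)

# Eq. (4): cross-entropy against the one-hot gold sense (index 2 here).
gold = torch.tensor([2])
loss = F.cross_entropy(logits.unsqueeze(0), gold)
```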
, "The interactive sense embedding learning mainly involves a selective attention layer on top of the original sense embeddings from the pre-trained model.", "The goal of this interaction is to make the learning of one sense's embedding aware of the other senses in the same context.", "It is supported by the fact that many sense pairs are more commonly used together than others.", "In practice, each of the ambiguous words in the document has several candidate senses, which poses the question of which senses should be attended to in the selective attention layer.", "To address this problem, we make use of the iterative nature of model training.", "In other words, the system's predicted sense of each word within a particular context from the previous iteration is attended to.", "For the first iteration, the first sense of each word in context is attended to.", "Under this strategy, the senses of monosemous words (words with a single sense) can be exploited at all iterations.", "For convenience of demonstration, we use the embeddings of the predicted senses $\hat{c}_{j'}$ of the context words in $s_i$ to enhance that of each candidate sense $c_k$ of word $w_{i,j}$: $\tilde{e}_{c_k} = e_{c_k} + \frac{1}{n} \sum_{j'=1}^{n} \alpha(c_k, \hat{c}_{j'})\, e_{\hat{c}_{j'}}$ (5).", "We note that $s_i$ can be replaced by a larger context.", "In equation (5), $n$ is the number of words in $s_i$.", "The attention score is computed as $\alpha(c_k, \hat{c}_{j'}) = e_{c_k}^\top W\, e_{\hat{c}_{j'}}$ (6), where $W$ is a learnable weight matrix.", "The attention score in (6) only takes into consideration the representation at the [CLS] position (a sentence-level representation) of each gloss, neglecting the relatedness between the gloss words of two senses.", "To tackle this, we devise a combined attention score that considers both [CLS] and gloss word relevance: $\alpha'(c_k, \hat{c}_{j'}) = e_{c_k}^\top W\, e_{\hat{c}_{j'}} + \frac{1}{T} \sum_{t=1}^{T} (e_{c_k}^{t})^\top W\, e_{\hat{c}_{j'}}^{t}$ (7).", "$T$ is a predefined gloss length shared by all senses for normalization.", "$e_{c_k}^{t}$ is obtained with equation (2) by changing the output position to the $t$-th gloss token.", "If the length $T_k$ of a sense gloss is smaller than $T$, $e_{c_k}^{t}$ is a zero vector wherever $t$ is larger than $T_k$.", "In many previous supervised systems, the disambiguation of one word in a sentence is isolated from the words in the other sentences of the same document.", "We convert these isolated disambiguation decisions into interactive ones by utilizing several highly related sentences within the same document for context embedding learning.", "For each sentence $s_i$, we select its related sentences under two criteria: the distance to $s_i$ and the semantic relatedness to $s_i$.", "The first criterion can be regarded as providing local features, while the second aims to inject global features while maintaining a low noise level.", "From the perspective of local features, the directly surrounding sentences within a window are used as related sentences.", "For global features, we score the context sentences and utilize the top related sentences for context embedding learning.", "Precisely, in a document $d$, we regard each sentence as a document and calculate the TF-IDF score of each word in the vocabulary of $d$ for all sentences.", "The intuition behind modeling sentences with TF-IDF is that the average length of SemCor sentences is 22, which is reasonably long.", "This represents the original document as a matrix $M \in \mathbb{R}^{m \times |V|}$, where the rows and columns correspond to the sentence and word dimensions, respectively.", "For instance, $M(i, w)$ is the TF-IDF score of word $w$ in sentence $s_i$.", "The relatedness score of sentence $s_j$ with respect to $s_i$ is then computed as the dot product of their TF-IDF rows: $r(s_i, s_j) = M(i)\, M(j)^\top$ (8).", "After scoring all context sentences for each sentence $s_i$, we concatenate the related sentences with $s_i$ and utilize them as an input to BERT for context embedding learning.", "As an example, $\{s_{i-1}, s_{i+1}\}$ are related sentences from local features, and if $\{s_{i-12}, s_{i+7}\}$ are the top-scored sentences from global features, we use $\hat{s}_i = \{s_{i-12}, s_{i-1}, s_i, s_{i+1}, s_{i+7}\}$ as the input to equation (1) and retrieve the enhanced context embedding of each word in $s_i$.", "In this way, a different $\hat{s}_i$ is retrieved for each sentence in the document.", "We note that when the total sequence length exceeds 512, we remove the sentences furthest away from $s_i$.", "For instance, $s_{i-12}$, $s_{i+7}$, and so on in the above example will be removed in that order.", "Finally, the context and sense embeddings in equation (4) are replaced with their enhanced counterparts, respectively, to calculate the loss, with which we update the weights of the pre-trained model and the selective attention layer."
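The sentence-selection procedure of WlC can be sketched as follows with scikit-learn's TF-IDF; the window and top-k defaults follow the reported hyperparameters (2 each), while the function and variable names are illustrative, not from the released code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

def related_sentences(sentences, i, window=2, top_k=2):
    """Select context for sentences[i]: surrounding sentences within a window
    (local features) plus the top-k sentences by TF-IDF dot product, Eq. (8)
    (global features). Returns indices kept in document order."""
    M = TfidfVectorizer().fit_transform(sentences)    # one TF-IDF row per sentence
    scores = (M @ M[i].T).toarray().ravel()           # Eq. (8): dot products of rows
    scores[i] = -np.inf                               # exclude the sentence itself
    local = {j for j in range(max(0, i - window), min(len(sentences), i + window + 1))
             if j != i}
    global_top = set(np.argsort(scores)[::-1][:top_k].tolist())
    return sorted(local | global_top | {i})

doc = ["The church stood on the hill.", "Bells rang at noon.",
       "The congregation gathered inside.", "A market opened nearby.",
       "Mass was held in the church every Sunday."]
print(related_sentences(doc, 0))
```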
, "In a previous similarity-based WSD approach, Wang and Wang (2020) proposed a Try-again Mechanism (TaM) that takes into account not only the similarity of a word's context embedding to the sense embedding of a candidate sense $c_k$, but also its similarity to the sense embedding of a related sense $c'$ during evaluation.", "Here, $c_k$ and $c'$ are connected by either WordNet relations or the super-sense relation (i.e., senses that belong to the same super-sense category in WordNet).", "This mechanism, $\mathrm{score}'(w_{i,j}, c_k) = v_{i,j}^\top e_{c_k} + \max_{c' \in R(c_k)} v_{i,j}^\top e_{c'}$ (9), where $R(c_k)$ denotes the senses related to $c_k$, manages to boost the performance of its knowledge-based system by a relatively large margin.", "In this subsection, we reconstruct TaM so that it becomes effective in our model.", "This process helps the disambiguation of words to be even more interactive, since it considers an increased number of senses by utilizing sense relation knowledge.", "In our implementation, we replace the above relations with only those derived from the Coarse Sense Inventory (CSI; Lacerra et al., 2020).", "Similar to the utilization of super-sense categories, we connect senses that belong to the same label in CSI as related senses.", "Also, we change the direct sum of the above two similarities into a weighted sum using a hyperparameter $\lambda$: $\mathrm{score}'(w_{i,j}, c_k) = v_{i,j}^\top e_{c_k} + \lambda \max_{c' \in R(c_k)} v_{i,j}^\top e_{c'}$ (10).", "In addition, our approach only learns a sense embedding for the candidate senses whose lemma is annotated in the training data.", "Therefore, in TaM, we save the sense embeddings $E$ from training at each epoch and use them to implement TaM during evaluation.", "It is worth mentioning that for senses that do not have a sense embedding in $E$, we skip their calculation in equation (10)."
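A minimal sketch of the reconstructed TaM rescoring in equation (10); the dictionaries and sense keys are illustrative stand-ins.

```python
def try_again(scores, related, lam=0.1):
    """Rescore each candidate sense as in Eq. (10): its own similarity plus
    lam times the best similarity among senses sharing its CSI label.
    Senses without a saved embedding (hence without a score) are skipped."""
    rescored = {}
    for sense, sim in scores.items():
        peers = [scores[r] for r in related.get(sense, []) if r in scores]
        rescored[sense] = sim + lam * max(peers) if peers else sim
    return rescored

# Toy usage with made-up sense keys and similarities.
scores = {"bank%1": 0.62, "bank%2": 0.58, "bank%3": 0.20}
related = {"bank%1": ["bank%3"], "bank%2": [], "bank%3": ["bank%1"]}
rescored = try_again(scores, related)
best_sense = max(rescored, key=rescored.get)
```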
, "To validate the effectiveness of our approach, we use SemCor and a standard evaluation framework (http://lcl.uniroma1.it/wsdeval/home) to train and evaluate our model, SACE base, respectively.", "The evaluation framework contains 5 English all-words WSD benchmarks.", "We report the experimental results on each dataset, including SensEval-2 (SE2; Palmer et al., 2001), SensEval-3 (SE3; Snyder and Palmer, 2004), SemEval-2007 Task-17 (SE07; Pradhan et al., 2007), SemEval-2013 (SE13; Navigli et al., 2013) and SemEval-2015 (SE15; Moro and Navigli, 2015).", "Also, results from Part-Of-Speech (POS) perspectives on the combined dataset (ALL) are reported.", "Following previous works, we train large models, SACE large on SemCor and SACE large+ on SemCor, WordNet Gloss Tagged (WNGT), and WordNet examples (WNE), for fair comparisons.", "Here, WNE is regarded as an extra sense gloss and is concatenated after the original sense gloss for sense embedding learning, similar to the implementation in SREF (Wang and Wang, 2020).", "For few-shot WSD, we partition ALL according to the gold label of each annotation into ALL WN_1st and ALL WN_others.", "Besides, according to whether the senses and lemmas of ALL instances appear in SemCor, we extract two subsets, ALLZSS and ALLZSL, to evaluate the zero-shot learning ability of our model.", "For cross-lingual datasets, we use the WordNet version of the latest evaluation framework, which contains test datasets for Spanish, Italian, French, and German.", "These datasets are preprocessed from SemEval-2013 (Navigli et al., 2013) and SemEval-2015 (Moro and Navigli, 2015).", "The former only disambiguates nouns, while the latter covers words of four POS (noun-N, verb-V, adjective-A, adverb-R).", "We note that the performance in each table is reported as F1 in percentage.", "Our base and large models utilize RoBERTa base and RoBERTa large, respectively, which perform better than the corresponding BERT models.", "For cross-lingual evaluation, we fine-tune XLM-RoBERTa-base (SACE mul; Conneau et al., 2020) with the same training data as SACE large+, following the setting in EWISER.", "In each system, two encoders are adopted, with one being a context encoder and the other a sense gloss encoder.", "This is identical to the setting in BEM.", "We note that a major difference is that the pre-trained model adopted in the above papers is BERT.", "The hyperparameters of our model are selected using SE07.", "They include the number of surrounding sentences (2) on both sides of $s_i$, the number of top related sentences (2) for $s_i$, and $\lambda$ (0.1) in TaM.", "The learning rates for SACE base, SACE large, SACE large+, and SACE mul are 1e-5, 1e-6, 1e-6, and 5e-6, respectively.", "To accelerate model training, we organize the sentences in a document into batches according to the total number of candidate senses (400 for SACE base and SACE mul, 150 for SACE large and SACE large+), i.e., if the total number of candidate senses would exceed 400 or 150 when adding a sentence, the sentence belongs to the next batch (a sketch of this batching loop is given below).", "For each batch, the gloss and context encoders are only called once.", "The context and gloss lengths are padded only to the maximal sequence length within each batch to reduce unnecessary padding and computation.", "Also, apex is employed for mixed-precision computing.", "More details are shown in Appendix A."
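The batching strategy described above can be illustrated with the following minimal sketch; `budget` corresponds to the 400/150 candidate-sense limits, and all names are illustrative rather than taken from the released code.

```python
def batch_by_candidates(sentences, num_candidates, budget=400):
    """Group document sentences into batches so the total number of candidate
    senses per batch stays within `budget`: if adding a sentence would exceed
    the budget, that sentence starts the next batch."""
    batches, current, count = [], [], 0
    for sent, n in zip(sentences, num_candidates):
        if current and count + n > budget:
            batches.append(current)
            current, count = [], 0
        current.append(sent)
        count += n
    if current:
        batches.append(current)
    return batches

# Toy usage: sentence ids with their candidate-sense counts.
print(batch_by_candidates(["s1", "s2", "s3"], [300, 250, 100], budget=400))
```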
, "We compare the proposed model with previous supervised state-of-the-art systems from different perspectives.", "These systems include Sense Vocabulary Compression (SVC; Vial et al., 2019), EWISE (Kumar et al., 2019), LMMS (Loureiro and Jorge, 2019), GLU (Hadiwinoto et al., 2019), GlossBERT (Huang et al., 2019), EWISER (Bevilacqua and Navigli, 2020), BEM (Blevins and Zettlemoyer, 2020), ARES (Scarlini et al., 2020b) and SREF (Wang and Wang, 2020).", "BEM is our direct baseline, which utilizes two encoders to learn context and sense embeddings separately and achieves state-of-the-art results with only SemCor.", "For cross-lingual evaluation, we compare our results with those reported for SyntagNet, EWISER, ARES, and MuLaN (Barba et al., 2020).", "All of these are recently proposed systems with state-of-the-art performance.", "In this subsection, we demonstrate how each component of our model benefits WSD performance.", "In Table 1, the system's performance on ALL illustrates that enhancing the interaction between different words' disambiguation in the same document (WlC) raises performance by the largest margin, 1.5 F1.", "This improvement is slightly larger than that provided by the interactive sense embedding learning (SlC), 1.2 F1.", "The gloss word attention in SlC also proves effective, increasing performance by 0.5 F1, similar to the contribution of TaM, 0.6 F1.", "Most importantly, when all components are removed, the performance on ALL decreases to 78.4 F1.", "We note that the baseline here is different from BEM, since we remove unnecessary padding and utilize RoBERTa.", "This dramatically accelerates the training process from 3.5 hours to 0.5 hours per epoch while achieving similar performance.", "We also note that the experimental results reported in this paper are obtained using the same random seed as BEM.", "With different random seeds, the performance gap on ALL between SACE base and its baseline (-w/o all) ranges from 1.7 F1 to 2.7 F1.", "Table 2 demonstrates how our systems and recently proposed baselines perform on different partitions of ALL.",

| Training data | System | SE2 | SE3 | SE07 | SE13 | SE15 | ALL | N | V | A | R |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SemCor | SVC (GWNC2019) | 77.5 | 77.4 | 69.5 | 76.0 | 78.3 | 76.7 | 79.6 | 65.9 | 79.5 | 85.5 |
| | EWISE (ACL2019) | 73.8 | 71.1 | 67.3* | 69.4 | 74.5 | 71.8* | 74.0 | 60.2 | 78.0 | 82.1 |
| | LMMS (ACL2019) | 76.3 | 75.6 | 68.1 | 75.1 | 77.0 | 75.4 | 78.0 | 64.0 | 80.5 | 83.5 |
| | GlossBERT (EMNLP2019) | 77.7 | 75.2 | 72.5* | 76.1 | 80.4 | 76.8* | - | - | - | - |
| | GLU (EMNLP2019) | 75.5 | 73.6 | 68.1* | 71.1 | 76.2 | 73.7* | - | - | - | - |
| | ARES (EMNLP2020) | 78.0 | 77.1 | 71.0 | 77.3 | 83.2 | 77.9 | 80.6 | 68.3 | 80.5 | 83.5 |
| | SREF (EMNLP2020) | 78.6 | 76.6 | 72.1 | 78.0 | 80.5 | 77.8 | 80.6 | 66.5 | 82.6 | 84.4 |
| | EWISER (ACL2020) | 78.9 | 78.4 | 71.0 | 78.9 | 79.3* | 78.3* | 81.7 | 66.3 | 81.2 | 85.8 |
| | BEM (ACL2020) | 79.4 | 77.4 | 74.5* | 79.7 | 81.7 | 79.0* | 81.4 | 68.5 | 83.0 | 87.9 |
| | SACE base | 80.9 | 79.1 | 74.7* | 82.4 | 84.6 | 80.9* | 83.2 | 71.1 | 85.4 | 87.9 |
| | SACE large | 82.4 | 81.1 | 76.3* | 82.5 | 83.7 | 81.9* | 84.1 | 72.2 | 86.4 | 89.0 |
| SemCor+WNGT+WNE | SVC (GWNC2019) | 79.7 | 77.8 | 73.4 | 78.7 | 82.6 | 79.0 | 81.4 | 68.7 | 83.7 | 85.5 |
| | EWISER (ACL2020) | 80.8 | 79.0 | 75.2 | 80.7 | 81.8* | 80.1* | 82.9 | 69.4 | 83.6 | 87.3 |
| | SACE large+ | 83.6 | 81.4 | 77.8 | 82.4 | 87.3* | 82.9* | 85.3 | 74.2 | 85.9 | 87.3 |

"Table 2: English all-words WSD performance on different partitions of ALL utilizing two sets of training data; N/V/A/R are POS breakdowns on the concatenation of all datasets (ALL).", "When trained on SemCor, SACE base already outperforms all its competitors by at least 1.9 F1 on ALL.", "This is obtained without utilizing prior sense relation knowledge.", "It is the first system that surpasses the estimated upper bound (80 F1) of the task using only SemCor.", "When we use RoBERTa large, SACE large further reaches 81.9 F1 on ALL, surpassing the previous state-of-the-art by 2.9 F1 (3.7% relative over 79.0).", "This is a large margin given that BEM and EWISER are strong baselines.", "When extra training data and WNE are employed, a similar margin, 2.8 F1, is attained on ALL.", "Our systems also obtain state-of-the-art performance on each dataset, with the margin ranging from 0.2 to 2.9 F1 for SACE base and 1.8 to 3.0 F1 for SACE large in the first category.", "As for SACE large+, the margin over the previous best system on each dataset is even larger, varying from 1.7 to 5.5 F1.", "It is noteworthy that SACE base outperforms SACE large by 0.9 F1 on SE15, and they obtain similar performance on SE13.", "These two datasets are less ambiguous, since each lemma has fewer candidate senses on average.", "This illustrates the competitive disambiguation capability of SACE base on easier instances.", "We also note that the development set in the two categories is different, the first being SE07 and the second SE15.", "This is because we follow most systems' setting in the first category and follow EWISER's setting in the second category for better comparison.", "For the performance on different POS, our systems set new records for all of them on ALL.", "The largest advance comes from the higher disambiguation ability on verbs, making our system the first to reach the 70 F1 mark.", "The systems also obtain unprecedented performance on noun disambiguation, surpassing the previous best system by 1.5, 2.4, and 2.4 F1 for SACE base, SACE large, and SACE large+, respectively.", "SACE large+ is the only system that exceeds 85 F1 on noun disambiguation.", "Rare Sense Disambiguation: Table 3 reports different systems' performance on ALL WN_1st and ALL WN_others, which have 4728 and 2525 annotations, respectively.",

| Model | ALL WN_1st (n=4728) | ALL WN_others (n=2525) |
|---|---|---|
| WordNet 1st | 100 | 0 |
| LMMS | 87.6 | 52.6 |
| SREF | 91.0 | 53.2 |
| BEM | 93.6 | 51.7 |
| SACE base | 94.2 | 56.1 |
| SACE large | 94.1 | 59.0 |
| SACE large+ | 94.7 | 60.8 |

"Table 3: Rare sense disambiguation on ALL.", "Compared with previous well-performing systems including LMMS and SREF, our systems achieve much better performance on both datasets, with the major contribution coming from WordNet 1st sense disambiguation.", "In contrast, SACE and BEM obtain similar performance on ALL WN_1st, while SACE can disambiguate rare senses with higher accuracy.", "This shows a better few-shot learning ability of SACE in comparison with BEM, because the ALL WN_others dataset only contains words whose correct sense appears infrequently in SemCor.", "Here, sense disambiguation is defined as whether a system can select a given sense as the correct sense, which is viewed from a sense perspective.", "In comparison, word or lemma disambiguation is to determine the correct sense of a word or lemma, which is viewed from a word perspective.", "Unseen Sense Disambiguation: In the second column of Table 4, different systems' performance on ALLZSS (691 polysemous instances) is provided.",

| Model | ALLZSS (n=691) | ALL ZSS* (n=1139) | ALLZSL (n=222) | ALL ZSL* (n=670) |
|---|---|---|---|---|
| WordNet 1st | 24.0 | 53.9 | 54.4 | 84.9 |
| BERT-base | 23.5 | 53.6 | 54.4 | 84.9 |
| LMMS | 36.7 | 61.6 | 74.8 | 91.7 |
| GlossBERT | 37.4 | 62.0 | 75.6 | 91.9 |
| ARES | 42.6 | 65.2 | 81.1 | 93.7 |
| SREF | 46.1 | 67.3 | 82.4 | 94.2 |
| BEM | 48.7 | 68.9 | 73.4 | 91.2 |
| SACE base | 60.4 | 76.0 | 90.0 | 96.7 |
| SACE large | 66.2 | 79.5 | 90.0 | 96.7 |

"Table 4: Zero-shot lemma and sense disambiguation.", "This dataset only contains polysemous words whose gold label is not in SemCor, which evaluates the zero-shot sense disambiguation ability of different systems.", "It shows that recently proposed systems have a clear advantage in zero-shot sense disambiguation over ordinary baselines, including WordNet S1 and BERT-base, with the margin ranging from about 12 F1 to about 42 F1.", "Specifically, although BEM outperforms these baselines by around 25 F1, our base and large systems still beat BEM by almost 12 and 18 F1, respectively.", "In the third column, we follow previous works and show how different systems perform on ALL ZSS* (1139 instances, including monosemous ones).", "The aforementioned gaps become narrower, since every system can correctly disambiguate monosemous instances.", "Unseen Lemma Disambiguation: In the last two columns of Table 4, the systems' performance on zero-shot lemmas is presented.", "The difference between these two datasets is whether monosemous lemmas are included.", "We believe it is more reasonable to focus on ALLZSL (222 polysemous instances), since monosemous lemmas do not require disambiguation and thus the statistics on ALL ZSL* cannot fully reveal the systems' zero-shot disambiguation ability on words.", "Similarly, it shows that recently proposed systems tend to outperform the baselines by large margins, varying from 19 to almost 36 F1.", "Among them, BEM performs the worst on this dataset, 2.2 F1 lower than a similar system, GlossBERT.", "In contrast, after incorporating both word- and sense-level context, our system obtains unprecedented performance on this dataset, being the first system to reach the 90 F1 mark and beating BEM by almost 16 F1.", "Also, different from SREF and ARES, our systems do not rely on WordNet or SyntagNet sense relation knowledge.", "We utilize two multilingual datasets (with French-FR, German-DE, Italian-IT, and Spanish-ES subsets) to evaluate the multilingual transferability of our method.", "Table 5 presents the performance of some recently proposed systems and ours.",

| System | SE13 DE | SE13 ES | SE13 FR | SE13 IT | SE15 ES | SE15 IT | Average |
|---|---|---|---|---|---|---|---|
| UKB +Syn | 76.4 | 74.1 | 70.3 | 72.1 | 63.4 | 69.0 | 71.1 |
| EWISER | 80.9 | 78.8 | 83.6 | 77.7 | 69.5 | 71.8 | 77.5 |
| MuLaN | 82.3 | 81.1 | 81.6 | 77.9 | 69.4 | 71.8 | 77.8 |
| ARES | 79.6 | 75.3 | 81.2 | 77.0 | 70.1 | 71.4 | 76.2 |
| Baseline | 80.5 | 74.9 | 80.7 | 73.6 | 72.7 | 74.9 | 76.3 |
| SACE mul | 82.6 | 74.6 | 83.0 | 78.1 | 75.6 | 77.3 | 78.7 |

"Table 5: Multilingual all-words WSD.", "For our system, the baseline is trained with the same training data as SACE large+ using XLM-RoBERTa-base, while removing all the proposed components, including SlC, WlC, and TaM.", "Among the systems under comparison, all but UKB +Syn utilize English training data.", "Also, EWISER and MuLaN further employ SemCor and WNGT as their training data, the same as SACE mul.", "It shows that SACE mul obtains a new state-of-the-art on both the combined dataset and most individual datasets, surpassing its direct baseline by 2.4 F1.", "In detail, the largest margin over the previous best system, about 5.5 F1, is obtained on the Spanish and Italian subsets of SE15, which covers instances of all POS.", "This reveals the clear advantage of SACE mul in disambiguating instances of POS other than nouns.", "In contrast, SACE mul performs 6.5 F1 lower than MuLaN on the Spanish subset of SE13, which only covers noun instances.", "In short, SACE mul is better suited to real cross-lingual scenarios, since it has a strong ability to disambiguate words of different POS.", "Error Analysis: By comparing the disambiguation results of SACE base and its baseline (all factors removed), it is revealed that both systems correctly disambiguate 5346 instances in ALL, while 525 and 339 instances are only correctly disambiguated by SACE base and its baseline, respectively."
, "In other words, SACE base falsely predicts 339 examples that are correctly predicted by its baseline.", "This indicates the proposed methods might have injected excessive noise into the disambiguation of these instances.", "Therefore, selective exploitation of context for different instances might be beneficial.", "The bottom half of Table 6 shows an example (country) that SACE base falsely predicts.", "It shows that WlC does not manage to retrieve valuable information for disambiguating the word, while injecting some irrelevant context.", "Case Study: Table 6 gives an example of the top related sentences (#47 and #19) of a particular sentence (#10) under disambiguation.", "Here, church is falsely predicted when WlC is disabled.", "It shows that WlC detects similar sentences in the same document and incorporates valuable context for context embedding learning.", "Table 7 provides some examples of synsets that are connected by the selective attention layer, indicating its ability to detect some syntagmatic sense relations and senses of close meaning.", "The connection is established by using the largest attention score $\alpha$ in a batch after filtering out self-connections.", "In this paper, we introduce an interactive context exploitation method from both word and sense perspectives in a supervised similarity-based WSD architecture.", "Experiments on English and cross-lingual all-words WSD datasets verify the effectiveness of our approach, which surpasses the previous state-of-the-art by large margins.", "The results also show that the proposed method has a clear advantage in few-shot and zero-shot WSD.", "For future work, we intend to utilize reinforcement learning to enhance the current interactive WSD by customizing the context exploitation for different instances.", "The source code is available at: https://github.com/lwmlyy/SACE.", "This paper does not involve the presentation of a new dataset, a new NLP application, or the utilization of demographic or identity characteristics information.", "In terms of compute time/power, the proposed system requires fewer GPUs (1 versus 2) and less time (10 versus about 70 hours) for training compared with its direct baseline (Blevins and Zettlemoyer, 2020).", "We thank the anonymous reviewers and Jianzhang Zhang for their insightful comments.", "This work was supported by the National Natural Science Foundation of China (under Project No. 61375053) and the graduate innovation fund of Shanghai University of Finance and Economics (under Project No. CXJJ-2019-395)." ]
[ "abstain", "abstain", "method", "method", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "objective", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "other", "abstain", "abstain", "other", "other" ]
[ "In this paper, we propose to study the problem of COURTVIEWGEN eration from the fact description in a criminal case.", "The task aims to improve the interpretability of charge prediction systems and help automatic legal document generation.", "We formulate this task as a text-to-text natural language generation (NLG) problem.", "Sequence-to-sequence model has achieved cutting-edge performances in many NLG tasks.", "However, due to the non-distinctions of fact descriptions, it is hard for Seq2Seq model to generate charge-discriminative court views.", "In this work, we explore charge labels to tackle this issue.", "We propose a label-conditioned Seq2Seq model with attention for this problem, to decode court views conditioned on encoded charge labels.", "Experimental results show the effectiveness of our method.", "1 1 Introduction Previous work has brought up multiple legal assistant systems with various functions, such as finding relevant cases given the query (Chen et al., 2013), providing applicable law articles for a given case (Liu and Liao, 2005) and etc., which have substantially improved the working efficiency.", "As legal assistant systems, charge prediction systems aim to determine appropriate charges such as homicide and assault for varied criminal cases by analyzing textual fact descriptions from cases (Luo et al., 2017), but ignore to give out the interpretations for the charge determination.", "Court view is the written explanation from judges to interprete the charge decision for certain criminal case and is also the core part in a legal document, which consists of rationales and a indicates equal contribution.", "charge where the charge is supported by the rationales as shown in Fig. 1. In this work, we propose to study the problem of COURTVIEWGEN eration from fact descriptions in cases, and we formulate it as a text-to-text natural language generation (NLG) problem (Gatt and Krahmer, 2017).", "The input is the fact description in a case and the output is the corresponding court view.", "We only focus on generating rationales because charges can be decided by judges or charge prediction systems by also analyzing the fact descriptions (Luo et al., 2017; Lin et al., 2012).", "COURT-VIEW-GEN has beneficial functions, in that: (1) improve the interpretability of charge prediction systems by generating rationales in court views to support the predicted charges.", "The justification for charge decision is as important as deciding the charge itself (Hendricks et al., 2016; Lei et al., 2016).", "(2) benefit the automatic legal document generation as legal assistant systems, by automatically generating court views from fact descriptions, to release much human labor especially for simple cases but in large amount, where fact descriptions can be obtained from legal professionals or techniques such as information extraction (Cowie and Lehn-ert, 1996).", "COURT-VIEW-GEN is not a trivial task.", "High-quality rationales in court views should contain the important fact details such as the degree of injury for charge of intentional injury , as they are important basis for charge determination.", "Fact details are like the summary for the fact description similar to the task of DOC ument SUM marization (Yao et al., 2017).", "However, rationales are not the simple summary with only fact details, to support charges, they should be charge-discriminative with deduced information which does not appear in fact descriptions.", "The fact descriptions for charge of negligent homicide usually only 
, "Figure 1 gives an example of the fact description and the court view of a criminal case (the Chinese text of the example is omitted here).", "However, it is hard to generate charge-discriminative rationales when input fact descriptions are not distinct from the facts of other charges.", "The fact descriptions for the charge of intentional homicide are similar to those for the charge of negligent homicide: both describe someone being killed but without a clear motive, making it hard to generate charge-discriminative court views with accurate killing motives for the two charges.", "Traditional natural language generation (NLG) methods would need much human labor to design rules and templates.", "To overcome the difficulties of COURT-VIEW-GEN mentioned above and the shortcomings of traditional NLG methods, in this work we propose a novel label-conditioned sequence-to-sequence model with attention for COURT-VIEW-GEN, aiming to directly map fact descriptions to court views.", "The architecture of our model is shown in Fig. 2."
, "Fact descriptions are encoded into context vectors by an encoder, and then a decoder generates court views from these vectors.", "To generate charge-discriminative court views from non-distinctive fact descriptions among charges with subtle differences, we encode charges as labels for the corresponding fact descriptions and decode the court views conditioned on these charge labels.", "The intuition is that charge labels provide extra information to classify the non-distinctive fact descriptions and entail the decoder to learn to select gold-charge-related words to decode.", "Recently, sequence-to-sequence (Seq2Seq) models with the encoder-decoder paradigm (Sutskever et al., 2014) have achieved cutting-edge results in many NLG tasks, such as paraphrasing (Mallinson et al., 2017), code generation (Ling et al., 2016) and question generation (Du et al., 2017).", "Seq2Seq models have also exhibited state-of-the-art performance on DOC-SUM (Chopra et al., 2016; Tan et al., 2017).", "However, the non-distinctiveness of fact descriptions makes it hard for a plain Seq2Seq model to generate charge-discriminative rationales.", "In this paper, we explore the charge labels of the corresponding fact descriptions to benefit charge-discriminative rationale generation, where charge labels can easily be decided by humans or by charge prediction systems.", "The widely used attention mechanism (Luong et al., 2015) is fused into the Seq2Seq model to learn to align target words with fact details in the fact descriptions.", "With attention, the context vector at each decoding step contains the most important information from the fact description for the decoder.", "Experimental results show that our model has strong performance on COURT-VIEW-GEN and that exploiting charge labels significantly improves the charge-discrimination of the generated court views, especially for charges with subtle differences.", "Similar to Luo et al. (2017), we evaluate our model on Chinese criminal cases by constructing a dataset from a Chinese government website."
, "Our contributions in this paper can be summarized as follows: we propose the task of court view generation, which is meaningful but has not been well studied before, and release a real-world dataset for this task.", "We formulate the task as a text-to-text NLG problem.", "We utilize charge labels to benefit charge-discriminative court view generation, and propose a label-conditioned sequence-to-sequence model with attention for this task.", "Extensive experiments are conducted on a real-world dataset.", "Experimental results demonstrate the effectiveness of our model and show that exploiting charge labels significantly improves the charge-discrimination of the generated court views.", "2 Related Work: Our work is firstly related to previous studies on legal assistant systems.", "The task of charge prediction is to determine appropriate charges such as intentional homicide or intentional injury by analyzing the contents of fact descriptions.", "Previous work considers the task of charge prediction as a text classification problem (Luo et al., 2017; Liu et al., 2004; Liu and Hsieh, 2006; Lin et al., 2012).", "Recently, Luo et al. (2017) investigate deep learning methods for this task.", "Besides, there are also works on identifying applicable articles for a given case (Liu and Liao, 2005; Liu and Hsieh, 2006; Liu et al., 2015), answering legal questions as a consulting system (Kim et al., 2014; Carvalho et al., 2015) and searching relevant cases for a given query (Raghav et al., 2016; Chen et al., 2013).", "As a legal assistant system, COURT-VIEW-GEN can benefit automatic legal document generation by generating the court view part from fact descriptions obtained in an earlier phase, through legal professionals or techniques like information extraction (Cowie and Lehnert, 1996) from the raw documents of a case, if we generate legal documents step by step.", "Our work is also related to recent studies on model interpretation (Ribeiro et al., 2016; Lipton, 2016; Ling et al., 2017).", "Recently, much work has paid attention to giving textual explanations for classifications.", "Hendricks et al. (2016) generate visual explanations for image classification."
, "Lei et al. (2016) propose to learn to select the most supportive snippets from raw texts for text classification.", "COURT-VIEW-GEN can improve the interpretability of charge prediction systems by generating textual court views when predicting charges.", "Our label-conditioned Seq2Seq model stems from the widely used encoder-decoder paradigm (Sutskever et al., 2014), which has been applied to machine translation (Bahdanau et al., 2014; Luong et al., 2015), summarization (Tan et al., 2017; Nallapati et al., 2016; Chopra et al., 2016; Cheng and Lapata, 2016), semantic parsing (Dong and Lapata, 2016) and paraphrasing (Mallinson et al., 2017), as well as other NLG problems such as product review generation (Dong et al., 2017) and code generation (Yin and Neubig, 2017; Ling et al., 2016).", "Hendricks et al. (2016) propose to encode image labels into visual-language models to generate justification texts for image classification.", "We similarly introduce charge labels into the Seq2Seq model to improve the charge-discrimination of the generated rationales.", "The widely used attention mechanism (Luong et al., 2015; Xu et al., 2015) is applied to generate fact details more accurately.", "A court view is the judicial explanation interpreting the reasons for the court making a certain charge for a case, consisting of the rationales and the charge supported by the rationales, as shown in Fig. 1.", "In this work, we only focus on generating the rationales part of court views.", "Charge prediction can be performed by humans or by charge prediction systems (Luo et al., 2017).", "Final court views can easily be constructed by combining the generated rationales and the pre-decided charges.", "The input of our model is the fact description of a case as a word sequence, and the output is the court view (rationales part) as a word sequence.", "We define the fact description as $x = (x_1, x_2, \dots, x_{|x|})$ and the corresponding rationales as $y = (y_1, y_2, \dots, y_{|y|})$.", "The charge for the case is denoted as $\mathrm{v}$ and will be exploited for COURT-VIEW-GEN.", "The task of COURT-VIEW-GEN is to find $y$ given $x$ conditioned on $\mathrm{v}$: $\bar{y} = \arg\max_{y} P(y \mid x, \mathrm{v})$ (1), where $P(y \mid x, \mathrm{v})$ is the likelihood of the predicted rationales in the court view.", "Figure 2: Label-conditioned Seq2Seq model with attention.", "Similar to Luong et al. (2015), our Seq2Seq model consists of an encoder and a decoder, as shown in Fig. 2.", "Given a pair of fact description and rationales in the court view $(x, y)$, the encoder reads the word sequence of $x$ and then the decoder learns to predict the rationales $y$.", "The probability of the predicted $y$ is given as follows: $P(y \mid x) = \prod_{i=1}^{|y|} P(y_i \mid y_{<i}, x)$ (2), where $y_{<i} = y_1, y_2, \dots, y_{i-1}$.", "We use a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) as the encoder and another LSTM as the decoder, similar to Du et al. (2017)."
, "Decoder.", "On the decoder side, at time $t$, the probability of predicting $y_t$ is computed as follows: $P(y_t \mid y_{<t}, c_t) = \mathrm{softmax}(W_1 \tanh(W_0 [s_t; c_t]))$, where $W_0$ and $W_1$ are learnable parameters, $s_t$ is the hidden state of the decoder at time $t$, and $c_t$ is the context vector generated from the encoder side containing the information of $x$ at time $t$; the bias terms are omitted for simplification.", "The hidden state $s_t$ is computed as follows: $s_t = \mathrm{LSTM}_d(y_{t-1}, s_{t-1})$, where $y_{t-1}$ is the word embedding vector of the previous target word at time $t-1$.", "The initial state of the decoder is initialized with the last state of the encoder.", "The context vector $c_t$ is computed by summing up the hidden states $\{h_k\}_{k=1}^{|x|}$ generated by the encoder with the attention mechanism, and we adopt global attention (Luong et al., 2015): $a_{t,k} = \mathrm{softmax}_k(s_t^\top h_k)$ (3) and $c_t = \sum_{k=1}^{|x|} a_{t,k}\, h_k$ (4).", "Encoder with Attention.", "We adopt a one-layer bidirectional LSTM to encode the fact descriptions.", "The hidden state $h_j$ at time $j$ is computed as $h_j = [\overrightarrow{h}_j; \overleftarrow{h}_j]$, the concatenation of the forward hidden state $\overrightarrow{h}_j$ and the backward hidden state $\overleftarrow{h}_j$; specifically, $\overrightarrow{h}_j = \overrightarrow{\mathrm{LSTM}}_e(x_j, \overrightarrow{h}_{j-1})$ and $\overleftarrow{h}_j = \overleftarrow{\mathrm{LSTM}}_e(x_j, \overleftarrow{h}_{j+1})$.", "The hidden outputs $\{h_k\}_{k=1}^{|x|}$ are used to compute the context vectors for the decoder.", "To exploit charge labels, the probability of the predicted rationales becomes $P(y \mid x, \mathrm{v}) = \prod_{i=1}^{|y|} P(y_i \mid y_{<i}, c_i, \mathrm{v})$ (5), where $c_i$ is the context vector computed from the decoder hidden state $s_i$ at time $i$.", "Compared to Eq. (2), this formulation provides extra constraints from the encoded charge labels and restricts the target-word search space from the whole space to the gold-charge-related space for rationale generation, so the model can generate more charge-distinct rationales.", "Charge labels are trainable parameters denoted by $E_v$, where every charge has a trainable vector in $E_v$ that is updated during model training.", "As shown in Fig. 2, on the decoder side, at time $t$, $y_t$ is predicted with the following probability: $P(y_t \mid y_{<t}, c_t, \mathrm{v}) = \mathrm{softmax}(W_1 \tanh(W_0 [s_t; c_t; E_v[\mathrm{v}]]))$ (6), where $E_v[\mathrm{v}]$ is the embedding vector of $\mathrm{v}$ obtained from $E_v$.", "In this formula, we connect the charge label $\mathrm{v}$ to $s_t$ and $c_t$, aiming to influence the word selection process.", "We hope that our model can learn the latent connections between the charge label $\mathrm{v}$ and the words of the rationales in court views in this way, to decode charge-discriminative words.", "As shown in Fig. 2, we further embed the charge label $\mathrm{v}$ to enhance the computation of the hidden state $s_t$ at time $t$, which is merged as follows: $s_t = \mathrm{LSTM}_d(y_{t-1}, s_{t-1}^{\mathrm{v}})$ with $s_{t-1}^{\mathrm{v}} = f_v(s_{t-1}, \mathrm{v}) = \tanh(W_v [s_{t-1}; E_v[\mathrm{v}]] + b_v)$ (7), where $W_v$ and $b_v$ are learnable parameters.", "In this way, the information of the charge label can be embedded into $s_t$.", "From Eqs. (3) and (4), the attention weights and $c_t$ are computed from $s_t$, so encoding the charge label $\mathrm{v}$ into the hidden states makes the model concentrate more on charge-related information in the fact descriptions, helping to generate more accurate fact details."
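The following PyTorch sketch illustrates one decoding step with the charge label entering both the output layer (Eq. (6)) and the recurrent state update (Eq. (7)). The sizes follow the hyperparameters reported later (512-dimensional embeddings, hidden size 1024, 50 charges plus "others"), but the module is a minimal illustration under assumptions, not the authors' released implementation; in particular, the context vector is assumed here to have the same size as the hidden state.

```python
import torch
import torch.nn as nn

class LabelConditionedStep(nn.Module):
    """One label-conditioned decoding step: Eq. (7) injects the charge
    embedding into the recurrent state, Eq. (6) into the output logits."""
    def __init__(self, vocab=50000, emb=512, hid=1024, n_charges=51):
        super().__init__()
        self.charge_emb = nn.Embedding(n_charges, emb)   # E_v
        self.cell = nn.LSTMCell(emb, hid)
        self.W_v = nn.Linear(hid + emb, hid)             # Eq. (7)
        self.W_0 = nn.Linear(hid + hid + emb, hid)       # Eq. (6), c_t assumed size hid
        self.W_1 = nn.Linear(hid, vocab)

    def forward(self, y_prev, state, c_t, v):
        s_prev, cell_prev = state
        ev = self.charge_emb(v)
        s_v = torch.tanh(self.W_v(torch.cat([s_prev, ev], dim=-1)))  # Eq. (7)
        s_t, cell_t = self.cell(y_prev, (s_v, cell_prev))
        logits = self.W_1(torch.tanh(self.W_0(torch.cat([s_t, c_t, ev], dim=-1))))
        return logits, (s_t, cell_t)  # softmax over logits gives Eq. (6)

# Toy usage with batch size 2.
step = LabelConditionedStep()
state = (torch.zeros(2, 1024), torch.zeros(2, 1024))
logits, state = step(torch.randn(2, 512), state, torch.randn(2, 1024),
                     torch.tensor([3, 17]))
```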
, "Given training data $\{x^{(i)}, y^{(i)}, \mathrm{v}^{(i)}\}_{i=1}^{N}$, we aim to maximize the log-likelihood of the generated rationales given the fact descriptions and charge labels, so the training objective is: $\mathcal{L}(\theta) = \sum_{i=1}^{N} \log P(y^{(i)} \mid x^{(i)}, \mathrm{v}^{(i)}; \theta) = \sum_{i=1}^{N} \sum_{j=1}^{|y^{(i)}|} \log P(y_j^{(i)} \mid y_{<j}^{(i)}, x^{(i)}, \mathrm{v}^{(i)}; \theta)$.", "We split the training data into batches of size 64 and adopt the Adam optimizer (Kingma and Ba, 2014) to update the parameters on every batch.", "At inference time, we encode the fact descriptions and charge labels into vectors and use the decoder to generate the rationales in court views based on Eq. (1).", "We adopt the beam search algorithm to generate rationales.", "The beam size is set to 5.", "To make the generation process stoppable, an indicator tag </s> is added to the end of the rationale sequences, and when </s> is generated the inference process terminates.", "The generated word-sequence paths are ranked, and the one with the largest score is selected as the final rationales in the court view.", "Following Luo et al. (2017), we construct our dataset from the published legal documents in China Judgements Online (http://wenshu.court.gov.cn).", "We extract the fact descriptions, the rationales in court views, and the charge labels using regular expressions.", "The paragraph starting with the phrase meaning 'our court identified that' is regarded as the fact description, and the part between the phrase meaning 'our court holds that' and the charge is regarded as the rationales.", "Nearly all the samples in the dataset match this extraction pattern.", "A length threshold of 256 is set, and fact descriptions longer than that are stripped, leaving overly long facts for future study.", "We use the tokens <name>, <num> and <date> to replace the names, numbers and dates appearing in the corpus.", "We tokenize the Chinese texts with the open-source tool HanLP (https://github.com/hankcs/HanLP).", "For charge labels, we select the top 50 charge labels ranked by occurrence and group the remaining charges as 'others'.", "Details about our dataset are shown in Table 1.",

| Table 1: Dataset statistics | |
|---|---|
| # Training set | 153706 |
| # Dev set | 9152 |
| # Test set | 9123 |
| Avg. | ... |
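A minimal sketch of the placeholder-token replacement described above; the regular expressions are illustrative English stand-ins, since the actual patterns for the Chinese corpus are not given in the paper.

```python
import re

def preprocess(fact: str) -> str:
    """Replace dates, numbers, and names in a fact description with the
    placeholder tokens <date>, <num>, and <name>. Illustrative patterns only."""
    fact = re.sub(r"\d{4}-\d{1,2}-\d{1,2}", "<date>", fact)   # dates first
    fact = re.sub(r"\d+(\.\d+)?", "<num>", fact)              # then bare numbers
    fact = re.sub(r"Mr\.\s+\w+|Ms\.\s+\w+", "<name>", fact)   # crude name stub
    return fact

print(preprocess("On 2009-7-10 Mr. Zhang stole 3 phones."))
# -> "On <date> <name> stole <num> phones."
```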
, "For cases with multiple charges and multiple defendants, we can separate the fact descriptions and the court views according to the charges or the defendants.", "In this work, we only focus on cases with one defendant and one charge, leaving the complex cases for future study; in this way we can collect enough data from the published legal documents without human annotation.", "Word embeddings are randomly initialized and updated in the training process, with a size of 512 tuned from {256, 512, 1024}.", "Charge label vectors are initialized randomly with a size of 512.", "The maximal vocabulary size is set to 100K words for the encoder and 50K for the decoder, stripping words beyond these bounds.", "The maximal source length is 256 and the maximal target length is 50.", "The hidden size of the LSTM is 1024, tuned from {256, 512, 1024}.", "We choose perplexity as the validation metric.", "An early stopping mechanism is applied to train the model.", "The initial learning rate is set to 0.0003 and the reduction factor is 0.5.", "Model performance is checked on the validation set after every 1000 training batches, keeping the parameters with the lowest perplexity.", "The training process terminates if model performance does not improve for 8 successive checks.", "Evaluation Metrics.", "We adopt both automatic evaluation and human judgement for model evaluation.", "The BLEU-4 score (Papineni et al., 2002) and variant Rouge scores (Lin, 2004), which have been widely used in many NLG tasks, are adopted for automatic evaluation.", "We set up two evaluation dimensions for human judgement: 1) how fluent the rationales in the court view are; 2) how accurate the rationales are, aiming to evaluate how many fact details are accurately expressed in the generated rationales.", "We adopt a 5-point scale for both the fluency and accuracy evaluations (5 is best).", "We ask three annotators who know our task well to conduct the human judgement.", "We randomly select 100 generated rationales in court views for every evaluated method.", "The three raters are also asked to judge whether the rationales can be adopted for use, as a comprehensive evaluation (adoptable), and we record the number of adoptable rationales for every evaluated method.", "Rand randomly selects rationales in court views from the training set (method Rand all).", "We also randomly choose rationales from pools with the same charge labels (Rand charge).", "The Rand methods indicate the lower-bound performance of COURT-VIEW-GEN.", "BM25 is a retrieval baseline that retrieves from the training set the fact description with the highest BM25 score (Robertson and Walker, 1994) with respect to the input fact description, and uses its rationales as the result (BM25 f2f).", "Table 2: Automatic evaluation results in B-4, R-1, R-2 and R-L (%)."
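A sketch of the BM25 f2f baseline, assuming the third-party rank_bm25 package and whitespace tokenization; the paper's exact retrieval implementation is not specified.

```python
from rank_bm25 import BM25Okapi  # assumes the rank_bm25 package is installed

def bm25_baseline(train_facts, train_rationales, query_fact):
    """Return the rationales of the training fact description with the
    highest BM25 score for the input fact (BM25 f2f). Restricting
    `train_facts` to those sharing the query's charge label gives the
    BM25 f2f+charge variant."""
    corpus = [f.split() for f in train_facts]
    bm25 = BM25Okapi(corpus)
    scores = bm25.get_scores(query_fact.split())
    best = max(range(len(scores)), key=scores.__getitem__)
    return train_rationales[best]
```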
(2014) for machine translation.", "We set one LSTM layer for encoding and another one LSTM layer for decoding.", "We adopt perplexity for training metric and select the model with lowest perplexity on validation set.", "RAS is an attention based abstract summarization model from Chopra et al. (2016).", "To deal with the much longer fact descriptions, we exploit the more advanced bidirectional LSTM model for the encoder instead of the simple convolutional model.", "Another LSTM model is set as the decoder coherent to Chopra et al. (2016).", "Experimental Results.", "In automatic evaluation from Table 2, the evaluation scores are relatively high even for method of Rand charge , which indicates that the expressions of the rationales with same charge labels are similar with many overlapped n-grams, such that the rationales for crime of theft usually begin with (in intention of illegal possession).", "Accurately generating fact details like degree of injury or time of theft is more difficult.", "Retrieval method by adding charge labels is the strong baseline even better than basic Seq2Seq model.", "Adding attention mechanism will improve the performance indicated by the method of RAS which is supe-rior to retrieval methods.", "By exploiting charge labels, our full model achieves the best performance.", "The performances of statistical machine translation model are really poor, for it requiring the lengths of parallel corpus to be similar.", "In human evaluation, we can see that retrieval methods can not accurately express fact details, for that it is hard to retrieve rationales containing details all matching the fact descriptions.", "However, our system can learn to generate fact details by analyzing fact descriptions.", "Dropping attention mechanism will have negative effects on model performance.", "RAS has worse performance in ACC .", "whose main reason may lie in that RAS can not generate charge-discriminative rationales with deduced information , which demonstrates that our task is not the simple DOC-SUM task.", "For the fluent evaluation, generation models are highly close to retrieval methods whose rationales are written by humans, which reflects that the generation models can generate highly natural rationales.", "Charge2Charge Analysis.", "We first analyze the effects of exploiting charge labels on model performance charge to charge, by dropping to encode charges based on our full model.", "From the results shown in Fig. 3, we can find that the results can be improved much by exploiting charge labels among nearly all charges.", "This result also indicates that the non-distinct fact descriptions are common among nearly all charges and reflects the difficulty of this task, but utilizing charge labels can release the seriousness of the problem.", "Charge-discriminations Analysis.", "We further evaluate the effects of charge labels for charge-discriminations improvement on specific charges with non-distinct fact descriptions: intentional homicide , negligent homicide , duty embezzlement and corruption .", "For every charge, two participants are asked to count the number of ra-1859 0 5 10 15 20 25 30 35 40 45 50 charge label id 0.2 0.4 0.6 BLEU4 with charge without charge Figure 3: Results of impact of exploiting charge labels evaluated charge to charge in the metric of BLEU-4 (similar results can gender in other three metrics but are omitted for space saving).", "tionales that are relevant to the charge on 20 randomly selected candidates.", "From Fig. 
, "For the crimes of homicide, the motives for killing are latent in the descriptions of the killing without a direct statement, but our system can learn to align the motives in the rationales with the charge labels, which are a strong distinguishing indicator for the two motives.", "Ablation Study.", "We also ablate our full model to reveal how the different components of charge label encoding improve performance.", "As shown in Table 3, '-softmax comp.' removes the charge label from Eq. (6) and yields worse performance than our full model, but better than '-charge comp.', which does not encode charge labels at all; the same holds for '-hidden comp.', which removes the charge label from Eq. (7).", "Table 3: Ablation study results in B-4, R-1, R-2 and R-L (%).", "Our full model is still better than the ablated models.", "This finding shows that both methods of exploiting charge labels can improve model performance, and stacking them achieves better results.", "Attention Mechanism Analysis.", "The heat map in Fig. 5 illustrates the attention mechanism.", "'Slight injury' is aligned between the source and the target.", "'Responsibility' and 'run' are well aligned to 'away', which demonstrates the efficiency of the attention mechanism in generating fact details by forcing the context vectors to focus more on them.", "Performance by Reference Size.", "We further investigate the model performance by rationale length in court views.", "As shown in Fig. 6, not surprisingly, the model performance drops as the length of the reference rationales increases.", "Within a length of 30, the BLEU-4 score stays around 0.4 and the F1 score around 0.5.", "Beyond a length of 30, model performance decreases dramatically.", "Human eval. vs. Automatic eval.", "Are BLEU and Rouge suitable for COURT-VIEW-GEN evaluation?", "Following Papineni et al. (2002) and Liu et al. (2016), for the models evaluated in human judgement, we draw linear regressions of their BLEU-4 and variant Rouge scores as functions of ACC. and ADOPT. from the human judgement, respectively, as shown in Fig. 7."
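As a sketch of how such a correlation analysis can be computed, the snippet below uses SciPy's Pearson correlation; the input values are made-up placeholders standing in for per-model scores, not the actual data behind Fig. 7.

```python
from scipy.stats import pearsonr

# Hypothetical per-model values: human accuracy judgements vs. BLEU-4,
# one point per evaluated system (placeholder numbers only).
human_acc = [1.2, 2.0, 2.8, 3.5, 4.1]
bleu4 = [6.0, 18.0, 27.0, 38.0, 45.0]

r, p_value = pearsonr(human_acc, bleu4)  # correlation coefficient and p-value
print(f"correlation coefficient: {r:.3f}")
```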
From 10 20 30 40 50 Court View Length 0 0.4 0.8 p e rf o r m ace BLEU-4 F1 of ROUGE-2 Figure 6: Model performance by rationales length with BLEU-4 and full length of F1 of Rouge-2.", "1 6 acc.", "0 50 B4 1 6 acc.", "30 80 R1 1 6 acc.", "20 60 R2 1 6 acc.", "0 90 R-L 0 1 adopt.", "0 50 B4 0 1 adopt.", "30 80 R1 0 1 adopt.", "20 60 R2 0 1 adopt.", "0 80 R-L", "coef.:0.983", "coef.:0.995", "coef.:0.993", "coef.:0.990", "coef.:0.975", "coef.:0.975", "coef.:0.952", "coef.:0.965 Figure 7: ACC .", "the results, we can find that automatic evaluations track well with the human judgement with high correlation coefficients.", "This finding demonstrates that BLEU-4 and variant Rouges are adoptable for COURT-VIEW-GEN evaluation and provides the basis for future studies on this task.", "Error Analysis.", "Our model has the drawback of generating latent fact details , which appear in rationales but are not clearly expressed in fact descriptions.", "For example, for the time of theft in charge of larceny , the term of (several times) appears in rationales but may not be expressed in fact descriptions directly, only with descriptions of larceny but without exact term for this detail, so it will be hard for attention mechanism to learn to align in rationales to latent information in fact descriptions.", "In the generated rationales on test set, we find that only 42 .", "4% samples can accurately extract out the term of .", "It may need designed rules to deal with such details, like that count the time of theft from the descriptions, and if the time exceeds 1 then the term of can be generated in rationales.", "Fake Charge Label Conditioned Study.", "What generated rationales in court views will be if they are conditioned on fake charge labels?", "We select one fact description with gold charge of intentional injury , then generate rationales conditioned on fake charges of defiance and affray crime , intentional homicide and neglectful homicide .", "From Fig. 8, the rationales conditioned on fake charges will be partly relevant to fake charge labels and also maintain fact details from the input fact description of gold charge.", "For the fake charge of intentional homicide , its fact details should be caused someone dead, but instead express causing someone slight injury which is relevant to charge of intentional injury .", "For charge prediction systems, the discriminations between fact details and charges will help to remind people that the prediction results may be unreliable.", "Case Study.", "Examples of generated rationales in court views are shown in Fig. 8. 
"Generally speaking, our full label-conditioned model generates fact details more accurately than the baseline models.", "For the charges of traffic accident crime and negligent homicide , all fact details are generated.", "The extra information from charge labels helps the model capture more of the important fact details, by forcing the model to pay more attention to charge-related information in fact descriptions.", "As for the charge-discrimination analysis, from the rationales for negligent homicide we can infer that its fact description may relate to a traffic accident, which makes it hard to distinguish from that of traffic accident crime .", "Without encoding charge labels, Ours w/o c wrongly generates rationales coherent with traffic accident crime , because traffic accidents are a strong indicator of traffic crimes; the charge label, however, provides extra bias towards the homicide crime, so our full model can generate highly discriminative rationales.", "Utilizing charge labels, the retrieval method can easily retrieve charge-related rationales, but it struggles to index rationales with accurate fact details.", "For the charge of larceny , our full model extracts nearly all fact details but misses the several times fact, reflecting the shortcoming in dealing with latent details.", "In this paper, we propose a novel task of court view generation and formulate it as a text-to-text NLG problem.", "We utilize charge labels to benefit the generation of charge-discriminative rationales in court views, and we propose a label-conditioned Seq2Seq model with attention for this task.", "Extensive experiments show the effectiveness of our model and of exploiting charge labels.", "In the future: 1) more advanced technologies like reinforcement learning (Sutton and Barto, 1998) can be introduced to generate latent fact details, such as the number of thefts, more accurately; 2) in this work we only generate the rationales in court views, omitting charge prediction, and it will be interesting to see whether jointly generating the two parts benefits both tasks; 3) studying a verification mechanism to judge whether generated court views can really be adopted is meaningful and important for COURT-VIEW-GEN in practice; 4) more complex cases with multiple charges and multiple defendants will be considered in future work.", "We also greatly appreciate the comments from the anonymous reviewers, which will help further improve our work.", "This work is supported by the National Natural Science Foundation of China (No. 61602490) and the National Key R&D Plan (No. 2017YFB1402403)."
"The work was done while Hai Ye was interning at Beihang University, from August 2017 to January 2018.", "References", "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473.", "Danilo S. Carvalho, Minh-Tien Nguyen, Chien-Xuan Tran, and Minh-Le Nguyen. 2015. Lexical-morphological modeling for legal text analysis. In New Frontiers in Artificial Intelligence: JSAI-isAI 2015 Workshops, LENLS, JURISIN, AAA, HAT-MASH, TSDAA, ASD-HR, and SKL, Kanagawa, Japan, November 16-18, 2015, Revised Selected Papers, pages 295-311.", "Yen-Liang Chen, Yi-Hung Liu, and Wu-Liang Ho. 2013. A text mining approach to assist the general public in the retrieval of legal documents. JASIST 64(2):280-290.", "Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers.", "Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93-98.", "Jim Cowie and Wendy Lehnert. 1996. Information extraction. Communications of the ACM 39(1):80-91."
"Li Dong, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, and Ke Xu. 2017. Learning to generate product reviews from attributes. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 623-632.", "Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33-43.", "Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers.", "Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, Volume 2: Short Papers, pages 690-696.", "Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor Darrell. 2016. Generating visual explanations. In Computer Vision - ECCV 2016, 14th European Conference, Proceedings, Part IV, pages 3-19.", "Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735-1780.", "Mi-Young Kim, Ying Xu, and Randy Goebel. 2014. Legal question answering using ranking SVM and syntactic/semantic similarity. In New Frontiers in Artificial Intelligence: JSAI-isAI 2014 Workshops, LENLS, JURISIN, and GABA, Kanagawa, Japan, October 27-28, 2014, Revised Selected Papers, pages 244-258.", "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980.", "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 158-167."
"Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107-117.", "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proceedings of the ACL-04 Workshop, Association for Computational Linguistics, pages 74-81.", "Wan-Chen Lin, Tsung-Ting Kuo, Tung-Jia Chang, Chueh-An Yen, Chao-Ju Chen, and Shou-de Lin. 2012. Exploiting machine learning models for Chinese legal documents labeling, case classification, and sentencing prediction. IJCLCLP 17(4).", "Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers.", "Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers.", "Zachary Chase Lipton. 2016. The mythos of model interpretability. CoRR abs/1606.03490.", "Chao-Lin Liu, Cheng-Tsung Chang, and Jim-How Ho. 2004. Case instance generation and refinement for case-based criminal summary judgments in Chinese. J. Inf. Sci. Eng. 20(4):783-800.", "Chao-Lin Liu and Chwen-Dar Hsieh. 2006. Exploring phrase-based classification of judicial documents for criminal charges in Chinese. In Foundations of Intelligent Systems, 16th International Symposium, ISMIS 2006, Bari, Italy, September 27-29, 2006, Proceedings, pages 681-690.", "Chao-Lin Liu and Ting-Ming Liao. 2005. Classifying criminal charges in Chinese for web-based legal services. In Web Technologies Research and Development - APWeb 2005, 7th Asia-Pacific Web Conference Proceedings, pages 64-75.", "Chia-Wei Liu, Ryan Lowe, Iulian Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122-2132.", "Yi-Hung Liu, Yen-Liang Chen, and Wu-Liang Ho. 2015. Predicting associated statutes for legal problems. Inf. Process. Manage. 51(1):194-211."
"Bingfeng Luo, Yansong Feng, Jianbo Xu, Xiang Zhang, and Dongyan Zhao. 2017. Learning to predict charges for criminal cases with legal basis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2717-2726.", "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.", "Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Volume 1: Long Papers, pages 881-893.", "Ramesh Nallapati, Bowen Zhou, Cícero Nogueira dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, August 11-12, 2016, pages 280-290.", "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.", "K. Raghav, P. K. Reddy, and V. B. Reddy. 2016. Analyzing the extraction of relevant legal judgments using paragraph-level and citation information. In AI4J - Artificial Intelligence for Justice.", "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. 'Why should I trust you?': Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135-1144.", "Stephen E. Robertson and Steve Walker. 1994. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In Proceedings of the 17th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval (Special Issue of the SIGIR Forum), pages 232-241.", "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 3104-3112.", "Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction, volume 1. MIT Press, Cambridge.", "Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graph-based attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers, pages 1171-1181."
"Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML, pages 2048-2057.", "Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2017. Recent advances in document summarization. Knowl. Inf. Syst. 53(2):297-336.", "Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers, pages 440-450." ]
[ "objective", "abstain", "method", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "objective", "method", "objective", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "result", "abstain", "other", "other", "other", "other", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Machine translation of user-generated codemixed inputs to English is of crucial importance in applications like web search and targeted advertising.", "We address the scarcity of parallel training data for training such models by designing a strategy of converting existing non-code-mixed parallel data sources to codemixed parallel data.", "We present an mBERT based procedure whose core learnable component is a ternary sequence labeling model, that can be trained with a limited code-mixed corpus alone.", "We show a 5.8 point increase in BLEU on heavily code-mixed sentences by training a translation model using our data augmentation strategy on an Hindi-English codemixed translation task.", "Code-mixing (CM), the phenomenon of mixing words from two languages in a sentence, is getting increasingly commonplace in several bilingual communities 1 .", "Recently, much research has focused on training language models over code-switched data for tasks like automatic speech recognition (ASR) (Winata et al., 2019; Gonen and Goldberg, 2019).", "In this paper we focus on the less explored problem of translating code-switched inputs to a high-resource language like English.", "This task is compelling in applications like Web search, targeted advertising, and recommendations, which require matching user-generated code-mixed queries to rich English content.", "A major challenge in such applications is the lack of parallel data from code-mixed input to English.", "While years of effort have made available rich parallel datasets for translation, these are mostly over formal sources like news, which tend to be less code-mixed.", "In this paper we show how to create high-quality parallel data for training a code-mixed translation model by exploiting three 1 https://github.com/gentaiscool/code-switching-papers types of resources:", "1) Parallel data from non-code-mixed sentences to English,", "2) Code-mixed sentences, and", "3) Monolingual sentences in English.", "Contributions: (1) We present an mBERT (De-vlin et al., 2019) based procedure for converting non-CM parallel data to CM parallel data.", "The core learnable component of our procedure requires fine-tuning mBERT for a three-way sequence labeling task, and can be easily trained using the limited code-switched sentences alone.", "We apply this model to convert source sentences of the parallel data to code-mixed sentences, while keeping the target English sentences in-tact.", "We also extend the existing back-translation method of using monolingual target data, with our code-switched augmentation.", "(2) We experiment on a rich public code-mixed dataset obtained from a literacy promotion project.", "We show that with our data augmentation strategy the translation BLEU improves from 43.9 to 46.4 overall.", "On sentences that are more heavily code-mixed our accuracy increases by 5.8 BLEU points, and on an adversarial test set where the baseline provides poor accuracy we show a 5.4 point BLEU increase.", "(3) We show that our data augmentation strategy improves performance for code-switched test sets while maintaining state of the art performance on non-code-switched inputs.", "Most prior work on CM has focused on training a language model (LM) in the context of automatic speech recognition.", "The main challenge addressed in these works is the limited availability of codemixed sentences.", "Gonen and Goldberg (2019) and Lee and Li (2020) propose different methods of training LMs for CM sentences without explicitly creating synthetic CM data, but another 
popular strategy is to first create synthetic CM data and train the LM with such synthetic data.", "We next summarize existing approaches to generating synthetic CM data: Chang et al. (2019) propose to learn switching patterns from code-mixed data using GAN-based adversarial training.", "Gao et al. (2019) use BERT as a generator which is fine-tuned by masking the English words in a CM corpus using GAN-based adversarial training.", "In contrast, ours is a much simpler sequence labeling formulation.", "Taneja et al. (2019) propose to splice fragments chosen from the monolingual corpora of two languages based on statistics of length distributions and phone transitions.", "Pratapa et al. (2018) use Equivalence Constraint Theory to define rules on top of the parse trees of two sentences to create grammatically valid artificial CM data.", "Samanta et al. (2019) use a variational autoencoder to generate synthetic CS data.", "Winata et al. (2019) propose a sequence-to-sequence model using a copy mechanism that learns when to switch from one language to another.", "To train their model they depend on high-resource commercial translation models to translate the code-mixed input into monolingual sentences in both English and the native language.", "Since our goal is to train such a translation model, we did not want to depend on such resources.", "One key difference is that our goal is translation from code-mixed text to English, not designing a representative LM for code-mixed data.", "Since our final target is English, when designing our data augmentation strategy we give higher priority to preserving the distribution of the target side (English) than that of the CM input.", "Our strategy is to augment the training data by converting an existing non-code-mixed parallel corpus into a parallel corpus with a code-mixed source.", "Let $\mathcal{L}$, $\mathcal{M}$, $\mathcal{E}$ denote the spaces of non-code-mixed sentences, code-mixed sentences, and English sentences, respectively.", "We have available a parallel corpus of non-code-mixed and English pairs $(L, E) \subseteq (\mathcal{L}, \mathcal{E})$, a code-mixed corpus $M \subseteq \mathcal{M}$, and a monolingual corpus $E_M \subseteq \mathcal{E}$.", "Our goal is to train a translation model from code-mixed input to English, $T : \mathcal{M} \mapsto \mathcal{E}$.", "Our focus is Indic languages such as Hindi that are often code-mixed with English.", "The code-mixed corpus in our case contains three kinds of tokens: native tokens in native script (e.g., Devanagari), English tokens in Latin script, and English tokens transliterated to native script."
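To make the three token types concrete, the following small sketch computes the per-sentence fractions of each type from word-level language-ID labels, i.e., the kind of statistics reported in Table 1; the label names follow the {En, En-Trans, Na} scheme introduced in the next subsection and are an assumption of this sketch.

```python
# Sketch: per-sentence fraction of each token type, given word-level
# language-ID labels. The label names ("Na", "En", "En-Trans") follow the
# scheme described in the text and are assumed here for illustration.
from collections import Counter

def token_type_fractions(labels):
    counts = Counter(labels)
    total = max(len(labels), 1)
    return {t: counts.get(t, 0) / total for t in ("Na", "En", "En-Trans")}

# A 6-token code-mixed sentence with two English and one transliterated token:
print(token_type_fractions(["Na", "En", "Na", "En-Trans", "Na", "En"]))
```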
"[Figure 1: an example sentence pair from the code-mixed test corpus; the source (its Hindi words are not reproduced here) contains the Latin-script tokens 'while', '$i', and '4', and its translation reads: 'Now, we have specified the condition for while loop as $i less than or equal to 4.']", "Figure 1 shows an example sentence pair from our code-mixed test corpus with these different types of tokens.", "In Table 1 we present statistics of tokens of the three types in a parallel Hindi-English corpus and in our code-mixed test corpus.", "Given the huge gap in the fraction of En tokens in the original parallel data, we propose methods for synthetic data creation that perturb the non-code-mixed source sentences in the parallel data into code-mixed sentences.", "For this we train a model $F : (\mathcal{L}, \mathcal{E}) \mapsto (\mathcal{M}, \mathcal{E})$ for converting sentences in the native language to their code-mixed forms using the parallel English sentence.", "Our model is based on mBERT and consists of two phases: the first phase predicts which words to switch in a monolingual sentence, and the second phase generates the switched words by harnessing the parallel data.", "We describe these phases next.", "3.1 Predict Code-Mixed Patterns in Monolingual Sentences.", "We train an mBERT-based sequence labeling model that takes as input a monolingual sentence and predicts the tokens that should be translated to the other language to produce a natural-sounding code-mixed sentence.", "For training data, we use the small amount of code-mixed data $M \subseteq \mathcal{M}$.", "These sentences are first labeled with word-level language IDs, using the tool of Zhang et al. (2018).", "The langID tool assigns three types of labels, {En, En-Trans, Na}, where En-Trans refers to English words written in native script and Na refers to native words in native script.", "We then generate a synthetic monolingual sentence from each code-mixed sentence $z \in M$ by replacing all words in $z$ with the label 'En' by their translations in the source language and script.", "Words that are labeled 'En-Trans' (English words written in native script) are transliterated to Latin and then translated, only a fraction $f$ of the time.", "[Footnote 2: We use the IndicTrans (Bhat et al., 2015) library for transliterating target words.]", "Since the sentences in $L$ comprising the parallel data also contain transliterated English words in our corpus, we choose $f$ so as to account for the difference in transliterated English words between native and code-mixed text.", "The resulting sentence $z'$ is treated as being from $\mathcal{L}$, and is thus compatible with the monolingual sentences in $L$ that we wish to code-mix.", "In Figure 2 we show an example of this transformation in the 'Training' box.", "Finally, we fine-tune mBERT on the sequence labeling task of predicting the language-ID tags on $z'$.", "Note that if we reapply the langID tool to $z'$, the replaced tokens will not be predicted as English.", "In contrast, mBERT can learn the code-mixing patterns, so that it can predict which tokens in a monolingual sentence are the most natural candidates for expressing in English."
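A minimal sketch of the phase-1 training-data construction just described is given below. The helpers translate_to_native() and transliterate_to_latin() are hypothetical stand-ins for a word-level translation system and the IndicTrans transliterator; the fraction f is the free parameter from the text.

```python
import random

# Sketch of phase-1 data construction: turn a code-mixed sentence (with its
# word-level langIDs) into a synthetic monolingual sentence plus the label
# sequence used to fine-tune mBERT as a ternary sequence labeler.
def make_phase1_example(tokens, langids, f,
                        translate_to_native, transliterate_to_latin):
    new_tokens, labels = [], []
    for tok, lid in zip(tokens, langids):
        if lid == "En":
            # English word in Latin script: replace with its native translation.
            new_tokens.append(translate_to_native(tok))
        elif lid == "En-Trans" and random.random() < f:
            # English word in native script: transliterate to Latin, then translate.
            new_tokens.append(translate_to_native(transliterate_to_latin(tok)))
        else:
            new_tokens.append(tok)
        labels.append(lid)  # mBERT learns to recover the original language IDs
    return new_tokens, labels
```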
"In the second phase, we use existing alignment libraries such as SimAlign (Jalili Sabet et al., 2020) to align source and target words between sentence pairs $(x, y)$ in the parallel data $(L, E)$.", "Let $p_1, \dots, p_n$ denote the predicted switches on an input non-code-mixed sentence $x = (x_1, \dots, x_n)$ produced by the mBERT model above.", "Then, for each token $x_i \in x$ that is predicted to switch to English, i.e., $p_i \in$ {En, En-Trans}, we replace the word with its aligned word(s) in $y$ if they exist.", "Additionally, if $p_i$ is En-Trans, we transliterate the aligned English word to the native script.", "The resulting code-mixed sentence $x'$ and $y$ form a parallel pair for training the translation model.", "In Figure 2 we show an example in the 'Inference' box.", "An advantage of this method of augmenting the training data is that the target sentence $y$ is not synthetically generated, which helps to preserve the language model of the target sentences.", "We apply the above transformation to the given parallel corpus $(L, E)$.", "Also, for the monolingual English sentences $E_M$ we use a back-translation model to translate $E_M$ into sentences $L_M$ in the native language $\mathcal{L}$.", "This gives us pseudo-parallel data $(L_M, E_M)$.", "We transform this corpus as well into code-mixed parallel data using the above process."
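The replacement step of this second phase could be sketched as below; the alignment is taken as a given list of (source index, target index) pairs (e.g., produced by SimAlign), and transliterate_to_native() is again a hypothetical stand-in for IndicTrans.

```python
# Sketch of the conversion F: build a code-mixed source from a non-code-mixed
# sentence x, its English translation y, word alignments, and the switch
# labels p predicted by the fine-tuned mBERT model.
def code_mix(src_tokens, tgt_tokens, alignments, switch_labels,
             transliterate_to_native):
    src_to_tgt = {}
    for i, j in alignments:                     # (source index, target index)
        src_to_tgt.setdefault(i, []).append(j)
    out = []
    for i, (tok, label) in enumerate(zip(src_tokens, switch_labels)):
        if label in ("En", "En-Trans") and i in src_to_tgt:
            english = " ".join(tgt_tokens[j] for j in sorted(src_to_tgt[i]))
            # En-Trans tokens are additionally transliterated to native script.
            out.append(transliterate_to_native(english)
                       if label == "En-Trans" else english)
        else:
            out.append(tok)                     # no switch, or no aligned word
    return out                                  # paired with the untouched y
```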
"Parallel Corpus.", "For Hi→En experiments, we use the IIT Bombay English-Hindi Parallel Corpus (Kunchukuttan et al., 2018) as the base parallel training data for our models.", "The corpus contains parallel data from a number of diverse sources and domains.", "Test and dev splits are from the WMT 2014 English-Hindi shared task (Bojar et al., 2014).", "The training set has about 1.6M sentence pairs, the dev set has 500 sentences, and the test set has 2507 sentences.", "We also move about 2,000 randomly selected sentences from the training set to the dev set.", "For Bn→En, we use 1M parallel sentences from Opus (Tiedemann, 2012) for training and 2000 randomly selected pairs each for validation and testing.", "Code-Mixed Parallel Test Dataset.", "While code-mixing is most common in social media and web search, it is difficult to get parallel data from these applications.", "One rare find was a video lectures website called the Spoken Tutorial Project.", "[Footnote 3: https://spoken-tutorial.org/]", "The project comprises transcripts of video lectures spanning technologies like operating systems, programming languages, and popular software, in heavily code-mixed Hindi and Bangla (among other Indian languages) and also in English.", "After aligning the timestamps and some cleaning, we collected 30.6K parallel sentences for code-mixed Hindi and 28.6K sentences for code-mixed Bengali.", "[Footnote 4: Our aligned data is available at https://github.com/shruikan20/Spoken-Tutorial-Dataset]", "Code-mixing statistics on this dataset are shown in Table 1.", "Non-Parallel Code-Mixed.", "We also collect all source sentences that could not be aligned and are therefore not part of the parallel test data.", "This dataset, of 26.4K sentences for code-mixed Hindi and 17.4K sentences for code-mixed Bangla, serves as our limited code-switched corpus M during training.", "Monolingual English.", "All English sentences from the Spoken Tutorial dataset for which there is no parallel code-mixed sentence in Hindi or Bangla comprise the monolingual English corpus.", "We found around 54K such sentences.", "This dataset is used to create back-translated data, and it serves to domain-adapt the translation models to the target distribution.", "For En→Hi, we use the Helsinki-NLP model from Huggingface for back-translation.", "[Footnote 5: https://huggingface.co/Helsinki-NLP/opus-mt-en-hi]", "For En→Bn back-translations, we train a model with the same parallel data from Opus that we use for the forward models.", "PHINC Dataset.", "We also evaluate the efficacy of our data augmentation methods on the recently released PHINC dataset (Srivastava and Singh, 2020).", "The dataset contains roughly 13.5K translated sentence pairs from Twitter.", "The source texts are almost exclusively written in Latin script and contain a mixture of Hindi and English words.", "Since no train-test splits are provided by the authors, we randomly split the dataset into 5000 test sentence pairs and use 500 sentence pairs for validation.", "We separate the remaining 8000 sentence pairs into a code-mixed corpus and a monolingual English corpus, to match the setup of our other experiments.", "All models that we train for this dataset involve a preliminary step of transliterating the Devanagari source (in the IITB parallel data and the back-translations) to Latin.", "Model and Experiment Setup.", "All models are trained with the fairseq toolkit (Ott et al., 2019).", "For data preparation, we first run tokenization with IndicNLP (Kunchukuttan, 2020) for source sentences and the Moses tokenizer for target sentences.", "[Footnote 6: https://github.com/moses-smt/mosesdecoder]", "For models trained on PHINC data, the Devanagari source is transliterated to Latin using IndicTrans (Bhat et al., 2015).", "Next, we apply BPE, with codes learnt jointly on the training set for source and target, for 20,000 operations.", "We train with the transformer architecture with shared source and target embeddings.", "We use the Adam optimizer with lr = 5e-4 and 4000 warmup steps, train up to 100 epochs, and select the best checkpoint based on the loss on the validation split.", "Results on the Hi→En Spoken Tutorial dataset are reported by training 3 models with different seeds and averaging the BLEU scores from the best checkpoint of each model.", "For the other datasets we train only a single model for each method.", "Baselines.", "To evaluate the importance of conditioning on the monolingual sentence, we design simpler variants that switch tokens based on content-independent code-mixing statistics from the limited code-mixed data M.", "These two methods serve as competitive baselines for our model: Unigram Random, which switches tokens to En or En-Trans based on their unigram statistics in M (a sketch of this variant follows below); and Bigram Random, which switches based on bigram statistics of the LangIDs of adjacent tokens.", "We also compare against Samanta et al. (2019), by training their model on our limited code-switched data and then sampling switching patterns to perturb data, similar to the Bigram Random method."
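A sketch of the Unigram Random baseline, under the assumption that per-token switching probabilities have been estimated from the LangID statistics of the code-mixed corpus M (the probability table below is an illustrative placeholder):

```python
import random

# Sketch of the Unigram Random baseline: switch each token to En or En-Trans
# with probabilities estimated from unigram LangID statistics on M. The
# probabilities below are illustrative placeholders, not estimated values.
SWITCH_PROB = {"the": {"En": 0.30, "En-Trans": 0.05}}   # hypothetical table
DEFAULT = {"En": 0.10, "En-Trans": 0.05}                # hypothetical fallback

def unigram_random_labels(tokens):
    labels = []
    for tok in tokens:
        probs, r = SWITCH_PROB.get(tok.lower(), DEFAULT), random.random()
        if r < probs["En"]:
            labels.append("En")
        elif r < probs["En"] + probs["En-Trans"]:
            labels.append("En-Trans")
        else:
            labels.append("Na")
    return labels
```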
"Finally, to tease apart the effect of our perturbations from domain adaptation, we also compare against the As Is baseline, where we train models with the parallel data plus back-translated in-domain monolingual English data.", "Overall Results.", "In Table 2 we present BLEU for our code-mixed Hindi translation model on four test sets: the code-mixed test set (ST-Test), the non-code-mixed test set (NewsTest), and two adversarial subsets of ST-Test that we create as follows.", "The first, ST-OOV, comprises sentence pairs in which, across source and target, at least two words were not found in the training data.", "This check is performed before sub-word tokenization.", "The second, ST-Hard, comprises the 2,000 sentence pairs on which the sentence-level BLEU of the base model was lowest (a construction sketch follows the results discussion below).", "For code-mixed Bangla, we have equivalent test sets except for NewsTest.", "Table 3 presents our results for code-mixed Bangla.", "In Table 4, we present results for the models trained on the PHINC dataset, on its code-mixed test set only.", "[Table 2: Average BLEU scores comparing models trained with different perturbation methods for code-mixed Hindi to English translation; columns are ST-Test / ST-OOV / ST-Hard / NewsTest, with ± deviations across seeds. As Is: 43.93 (±0.44) / 41.37 (±0.46) / 18.63 (±0.61) / 21.66 (±0.27); Samanta19: 45.42 (±0.37) / 43.33 (±0.43) / 21.97 (±0.78) / 21.86 (±0.11); Unigram: 45.15 (±0.18) / 43.08 (±0.31) / 21.92 (±0.33) / 21.59 (±0.28); Bigram: 45.63 (±0.1) / 43.22 (±0.03) / 22.27 (±0.2) / 21.60 (±0.26); mBERT: 46.40 (±0.38) / 44.55 (±0.25) / 23.41 (±0.25) / 21.67 (±0.19).]", "We observe that our mBERT-based method substantially beats the As Is method across all test sets for the code-mixed Hindi and Bangla Spoken Tutorial data and the PHINC data.", "The mBERT method also provides higher gains than the baselines on all three code-mixed test sets for Hi→En, while not reducing accuracy on the original NewsTest.", "For Bn→En and PHINC, we observe that the Unigram and Bigram methods perform similarly to the mBERT method, showing that these are competitive methods in themselves.", "Overall, the effectiveness of perturbing parallel data is shown clearly in these experiments.", "An interesting observation from Table 2 is that although our gain is about 2.5 BLEU points on ST-Test, on the adversarial sets we observe much higher gains: 3.2 for ST-OOV and 4.8 for ST-Hard.", "Our model also outperforms Samanta et al. (2019) on all code-mixed test sets while maintaining similar performance on NewsTest."
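As mentioned above, the ST-Hard subset is defined by sentence-level BLEU under the baseline model; a sketch of its construction (assuming the sacrebleu package and precomputed baseline hypotheses) could be:

```python
# Sketch: build the ST-Hard split by keeping the k test pairs on which the
# baseline model's sentence-level BLEU is lowest. Assumes sacrebleu and
# precomputed baseline hypotheses.
import sacrebleu

def st_hard_split(hypotheses, references, k=2000):
    scored = [(sacrebleu.sentence_bleu(hyp, [ref]).score, hyp, ref)
              for hyp, ref in zip(hypotheses, references)]
    scored.sort(key=lambda t: t[0])   # ascending BLEU: hardest pairs first
    return scored[:k]
```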
"Sensitivity to Amount of Code-Mixing.", "We investigate the gains in BLEU achieved by our method on sentences with varying levels of code-mixing, measured as the fraction of En and En-Trans words in the source sentences of the code-mixed Hindi ST-Test set.", "[Figure 3: Improvements in BLEU with the mBERT-based model versus the baseline across three splits of the test set.]", "We split the test set into three parts: Low (below 0.25), Medium (below 0.5), and High.", "Figure 3 shows the BLEU achieved by our method and the baseline.", "The biggest gains, of about 5.8 BLEU, are seen on the test sentences with high levels of code-mixing.", "This shows that our data augmentation strategy does have the desired effect of better handling heavily code-mixed inputs.", "Machine translation of code-mixed inputs to English is an important task for which parallel training data is scarce.", "We presented a simple mBERT-based method for converting existing parallel data into code-mixed parallel data.", "Augmenting existing training data with this synthetic parallel data leads to substantial gains in BLEU on heavily code-mixed inputs without worsening accuracy on non-code-mixed inputs.", "However, gains are larger for some language pairs than for others.", "Furthermore, code-mixed data from informal sources like Twitter presents additional challenges, such as noisy inputs stemming from non-canonical transliterations, informal language use, and misspellings.", "Our ongoing and future work includes evaluating the model on more languages and handling noisy inputs.", "Acknowledgements.", "The experiments reported in this paper were made possible by a Google Cloud Platform grant and free TPU access from the TensorFlow Research Cloud.", "We would like to thank Prof. Kannan M. Moudgalya from IIT Bombay, who leads the Spoken Tutorial initiative and provided us access to transcripts from the website, enabling these experiments.", "Finally, we would also like to thank Aravindan Raghuveer from Google Research for discussions which helped shape the direction of this work." ]
[ "abstain", "method", "method", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "result", "result", "result", "result", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "other", "result", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain" ]
[ "Transformer-based models generally allocate the same amount of computation for each token in a given sequence.", "We develop a simple but effective token dropping method to accelerate the pretraining of transformer models, such as BERT, without degrading its performance on downstream tasks.", "In particular, we drop unimportant tokens starting from an intermediate layer in the model to make the model focus on important tokens more effi-ciently if with limited computational resource.", "The dropped tokens are later picked up by the last layer of the model so that the model still produces full-length sequences.", "We leverage the already built-in masked language modeling (MLM) loss to identify unimportant tokens with practically no computational overhead.", "In our experiments, this simple approach reduces the pretraining cost of BERT by 25% while achieving similar overall fine-tuning performance on standard downstream tasks.", "Nowadays, the success of neural networks in a variety of NLP tasks heavily relies on BERT-type language models containing millions to billions of parameters.", "However, the pretraining process of these models is computationally expensive, generating significant carbon dioxide emission (Strubell et al., 2019; Patterson et al., 2021).", "In practice, there is the need to perform large-scale language model pretraining for diverse applications (Lee et al., 2020; Chalkidis et al., 2020; Zou et al., 2021; Rogers et al., 2020) in different languages (Antoun et al., 2020; Sun et al., 2021).", "In this paper, we develop a technique that significantly reduces the pretraining cost of BERT models (Devlin et al., 2019) without hurting their test performance on a diverse set of fine-tuning tasks.", "Recent efforts of efficient training involve mixed-precision training (Shoeybi et al., 2019), distributed training (You et al., 2020), better modeling on rare words and phrases (Wu et al., 2021), designing more effective and data-efficient pretraining objectives (Lan et al., 2020; Clark et al., 2020; Raffel et al., 2020), progressive stacking (Gong et al., 2019), and so on.", "While these approaches contribute to efficient training with reduced computational cost, most of them focus on the model architecture or the optimization process.", "In this paper, we focus on a simple but efficient BERT-pretraining strategy that has been under-explored before, i.e., token dropping, which removes the redundant tokens in each sequence that are less informative to training.", "Since not all tokens contribute equally to the output or the training objective, and the computational complexity of transformer-based models grows at least linearly with respect to the sequence length, shortening the input sequences can accelerate the training effectively.", "Among existing studies, the depth-adaptive transformer approach aims to reduce the autoregressive inference time by allocating less computation on easy-to-predict tokens (Elbayad et al., 2020).", "To improve the training efficiency, Dai et al. 
(2020) perform pooling on the embeddings of nearby tokens.", "However, directly dropping tokens during pretraining was not studied until very recently, in the faster depth-adaptive transformer (Liu et al., 2021), where important tokens are identified either through (1) mutual-information-based estimation between tokens and predefined labels, or through (2) a separate BERT model that exhaustively computes the masked language model (MLM) loss for each token.", "On the contrary, we focus on accelerating the task-agnostic pretraining phase without requiring any labels or any computation by a separate language model.", "Specifically, we identify important tokens as the ones that are hard for the model itself to predict, measured through its loss during training; this is adaptive to the training process and incurs practically no computational overhead.", "We show examples of dropped tokens in Figure 1.", "[Figure 1: example sequences with the running MLM loss of each token in parentheses, e.g., 'Does (1.1) the (0.3) experiment (3.2) look (1.3) good (2.1)?' and 'They (1.0) smell (2.1) something (1.8) horribly (5.4) wrong (2.3).'; tokens with small loss, such as 'the' and 'is', are identified as unimportant.]", "Recent approaches such as RoBERTa (Liu et al., 2019) suggest packing input sequences.", "In this way, there are no [PAD] tokens, which makes it a non-trivial task to identify unimportant tokens.", "We identify the unimportant tokens in each sequence as those with the smallest historical MLM loss (we take a running average of the MLM loss of each token throughout the pretraining process).", "By removing them from intermediate layers of a BERT model during training, we save an enormous amount of computation and memory.", "We keep them in the first several layers as well as in the last layer, so that they are still present in the model.", "Therefore, the inputs and outputs of the BERT model are kept consistent with the conventional all-token training process.", "Without modifying the original BERT architecture or training setting, this simple token-dropping strategy trains the intermediate layers mainly on a few important tokens.", "As demonstrated in our experiments, models pretrained in this way generalize well to diverse downstream tasks with full sequences.", "To summarize, our contributions are as follows.", "(1) We show that BERT models can be pretrained with a subset of the layers focusing only on important tokens.", "Even though the model is trained on sub-sequences of important tokens only, it generalizes well to full sequences during fine-tuning on downstream tasks.", "(2) We identify important tokens through the pretraining process itself by exploiting the training dynamics, with minimal computational overhead and without modifying the model architecture.", "(3) We show that our token-dropping strategy saves 25% of pretraining time while achieving similar performance on downstream tasks.", "Code is available at https://github.com/tensorflow/models/tree/master/official/projects/token_dropping .", "Recall that a sequence in BERT consists of two sentences as well as the classification token [CLS] and the separator token [SEP] .", "If the resulting number of tokens is smaller than 512, then padding tokens are added to ensure that each sequence is of length 512."
"We decide to use sequence packing (Liu et al., 2019), so that there are no [PAD] symbols, throughout the paper.", "We also remove the next-sentence prediction training criterion.", "The rationale for using sequence packing is two-fold.", "First, sequence packing provides a competitive baseline in terms of pretraining efficiency (So et al., 2019; Liu et al., 2019; Kosec et al., 2021; Zhang et al., 2021).", "Second, sequence packing stress-tests our algorithm in the absence of padding symbols: without sequence packing, our algorithm could label [PAD] as the unimportant token, which trivially improves pretraining efficiency; with sequence packing, our algorithm has to identify and drop real tokens as unimportant in order to improve efficiency.", "Define $T$ to be the input sequence length and $d_k$, $d_v$ to be the sizes of each individual key and value vector, respectively.", "The multi-head attention function with $h$ attention heads is defined as $\mathrm{MultiHeadAttention}(Q, K, V) = \mathrm{concat}(H_1, \dots, H_h)\, W^O$, where $H_i = \mathrm{Attention}(Q W_i^Q, K W_i^K, V W_i^V) = \mathrm{softmax}\left( \frac{(Q W_i^Q)(K W_i^K)^\top}{\sqrt{d_k}} \right) V W_i^V$.", "Use $d_{model}$ to denote the hidden size of the model (usually equal to $h d_k$).", "We have the following: $Q, K, V \in \mathbb{R}^{T \times d_{model}}$, $W_i^Q \in \mathbb{R}^{d_{model} \times d_k}$, $W_i^K \in \mathbb{R}^{d_{model} \times d_k}$, $W_i^V \in \mathbb{R}^{d_{model} \times d_v}$, and $W^O \in \mathbb{R}^{h d_v \times d_{model}}$.", "Besides the attention sub-layer, each BERT encoder layer also contains a feed-forward sub-layer (feed-forward network, abbreviated FFN).", "Each FFN is a position-wise function: it is applied to each position of the input identically.", "The input to the FFN is a matrix in $\mathbb{R}^{T \times d_{model}}$.", "The input is transformed by a two-layer perceptron (with a ReLU activation in between) into an output matrix in $\mathbb{R}^{T \times d_o}$.", "In Vaswani et al. (2017), $d_o = d_{model}$ and the hidden size of the intermediate layer is $d_{ff} = 4 d_{model}$, based on empirical investigation."
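For concreteness, here is a NumPy sketch of the attention computation defined above; it is a reference implementation of the formulas only, not the TensorFlow training code.

```python
import numpy as np

# Reference sketch of the formulas above. Q, K: (T, d_k); V: (T, d_v).
def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # (T, T)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                 # row-wise softmax
    return w @ V                                       # (T, d_v)

def multi_head_attention(Q, K, V, W_Q, W_K, W_V, W_O):
    # W_Q[i], W_K[i]: (d_model, d_k); W_V[i]: (d_model, d_v); W_O: (h*d_v, d_model)
    heads = [attention(Q @ W_Q[i], K @ W_K[i], V @ W_V[i]) for i in range(len(W_Q))]
    return np.concatenate(heads, axis=-1) @ W_O        # (T, d_model)
```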
"Suppose the input sequence contains 512 tokens.", "Having 512 hidden states (corresponding to 512 tokens) after each encoder layer may not be necessary, given that certain words may never heavily influence the meaning of the sentence.", "[Footnote 3: Relatedly, Zhang et al. (2019) have shown in computer vision, using fully-connected networks and convolutional neural networks, that certain layers, called ambient layers, can be reset with almost no negative consequence for performance, while other layers, called critical layers, are necessary.]", "Moreover, removing unimportant tokens in intermediate layers produces a dropout effect, so that our network learns to reconstruct the masked words from noisier hidden states.", "Therefore, we decide to allocate the full amount of computation only to important tokens.", "Figure 2 gives an illustration of where the unimportant tokens are dropped in a BERT model.", "Each row of the query $Q$, key $K$, and value $V$ in the self-attention module of each transformer encoder layer corresponds to a single token.", "Suppose $L_f = L_{full}$ is the set of layers whose input covers all the tokens, and $L_h = L_{half}$ is the set of layers whose input covers only a proper subset of the tokens.", "[Footnote 4: In this paper, 'input covers all the tokens' means that the query, key, and value matrices in the layer have $T$ rows, so no rows are discarded from the matrices.]", "Separation.", "During stage-1 pretraining, if layer $l \in L_f$ and the next layer $l+1 \in L_h$, then we remove the rows in $Q$ corresponding to the unimportant tokens for layer $l+1$, but keep $K$ and $V$ intact.", "After the removal, we have $Q \in \mathbb{R}^{M \times d_{model}}$, where $M$ is the number of important tokens.", "We also have $K, V \in \mathbb{R}^{T \times d_{model}}$, where $T$ is the input sequence length.", "Suppose $l'$ is the first layer above layer $l+1$ such that $l' \in L_f$, and suppose $l+2 \in L_h$.", "Then, for layers $l+2, \dots, l'-1$, we have $Q, K, V \in \mathbb{R}^{M \times d_{model}}$, which means that their rows correspond only to the important tokens.", "Merging.", "Given that $l'$ is the first layer above layer $l+1$ such that $l' \in L_f$, before layer $l'$ we merge the hidden states corresponding to the unimportant tokens (taken from the outputs of layer $l$) with the hidden states corresponding to the important tokens (taken from the outputs of layer $l'-1$).", "[Footnote 5: In practice, using TensorFlow, the separation step in stage-1 pretraining and the merge step can be done using the function tf.gather(). The number of important tokens has to be the same for different sequences in order to use modern accelerators like TPUs. Using sparse tensors could address the issue of having different numbers of important tokens per sequence, but sparse tensor operations are slow in practice.]"
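Following the footnote, the separation and merge steps can be expressed with tf.gather(); the sketch below is illustrative and assumes per-sequence index tensors that partition the T positions into important and unimportant tokens.

```python
import tensorflow as tf

# Sketch of separation and merging with tf.gather(), per the footnote above.
# hidden: (batch, T, d_model); important_ids / unimportant_ids: (batch, M) and
# (batch, T - M) position indices that together partition the T positions.
def separate(hidden, important_ids):
    # Keep only the rows of the important tokens: (batch, M, d_model).
    return tf.gather(hidden, important_ids, axis=1, batch_dims=1)

def merge(important_hidden, unimportant_hidden, important_ids, unimportant_ids):
    # Restore the original token order: sort the concatenated rows by their
    # original positions, giving a (batch, T, d_model) tensor again.
    order = tf.argsort(tf.concat([important_ids, unimportant_ids], axis=1), axis=1)
    merged = tf.concat([important_hidden, unimportant_hidden], axis=1)
    return tf.gather(merged, order, axis=1, batch_dims=1)
```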
, l (cid:48) 1 .", "See Section 5 for empirical studies.", "Determining l and l (cid:48) .", "We leave details on determining l and l (cid:48) to later sections.", "Empirically, l = LE 2 1 and l (cid:48) = LE 1 consistently lead to good performance, where LE is the total number of encoder layers.", "For instance, if LE = 12 , then the full layers in L f (i.e., layers in which the query, key, and value matrices all have T rows) would be layers 1 through 5 as well as layer 12.", "At test-time or when we fine-tune on downstream tasks, all the encoder layers are full layers, meaning", "meaning we do not do any token dropping.", "Given the mismatch between the neural network in stage 1 and the neural network used for fine-tuning and test-time, during stage 2, we simply pretrain using the full model (i.e., all tokens passing through all layers).", "Stage-2 pretraining requires only a smaller number of steps, compared to stage-1 pretraining.", "However, stage-2 pretraining turns out to be unnecessary, which we discuss in later sections.", "In this subsection, we elaborate on which tokens to drop (i.e., which corresponding rows to discard in the query, key, and value matrices) in a given sequence.", "First, we never drop special tokens including [MASK] , [CLS] , and [SEP] .", "In other words, we always treat these tokens as important tokens.", "Recall that we use sequence packing in all of our experiments, unless noted otherwise.", "Therefore, there are no padding tokens [PAD] .", "6 We introduce two approaches for identifying important tokens in the following sub-sections.", "In the ablation studies (Section 4.2), we will introduce more straightforward approaches as baselines.", "Updating the cumulative loss vector.", "We use a vector m R |V| to approximate the difficulty of learning a specific token in the vocabulary V .", "The vector m is updated throughout the pretraining.", "Recall that BERT pretraining involves the masked language modeling (MLM) objective, where the model is asked to predict the tokens of the masked-out input tokens.", "Suppose n tokens in a sequence are masked out, then we would obtain n MLM negative log-likelihood (NLL) losses.", "For each token, we update the corresponding entry in the cumulative loss vector as follows: m i m i + (1 ) (cid:96) i , (1) where (cid:96) i is the NLL loss that corresponds to the token i and (0 , 1) is a coefficient that is close to 1.", "In particular, we never update the cumulative losses corresponding to the aforementioned special tokens ( [MASK] , [CLS] , and [SEP] ).", "The losses for those tokens are set to a large number such as 10 4 .", "7 6 If we do not use sequence packing, we would always drop the [PAD] tokens.", "Deciding which tokens are unimportant.", "We need to drop the rows in the query, key, and value matrices corresponding to the unimportant tokens.", "To decide which tokens will be treated as unimportant ones, given a sequence of 512 tokens, we simply look up the 512 corresponding cumulative losses using m , and label the tokens that correspond to the smallest cumulative losses as unimportant tokens.", "In other words, suppose we have a sequence x = ( x 1 , x 2 , . . . , x T ) where T is the sequence length.", "Use [ T ] to denote { 1 , 2 , . . . , T } .", "Suppose : [ T ] [ T ] is a function such that x (1) , x (2) , . . . , x ( T ) are the tokens sorted in decreasing order of the aforementioned cumulative loss.", "Then, we are treating x (1) , . . . 
, x ( M ) as important tokens (i.e., the tokens to keep), where M is a positive integer (e.g., M = int( T/", "2) ).", "We are treating x ( M +1) , . . . , x ( T ) as unimportant tokens.", "Optionally: adding randomness.", "We can optionally assign every token with a nonzero probability to be selected as an important token, which can potentially make the model generalize well on full sequences.", "For example, let J = int(0 . 05 T ) , given x (1) , x (2) , . . . , x ( T ) as described above, we replace the last J important tokens x ( M J +1) , . . . , x ( M ) with J tokens randomly chosen from x ( M J +1) , . . . , x ( T ) .", "Then, the J randomly chosen tokens will be treated as important tokens.", "In later sections, we will empirically investigate whether the randomness is helpful.", "Before the start of pretraining, we count the number of occurrences of each token in the vocabulary V .", "During pretraining, given a sequence, suppose there are s special tokens.", "This approach assigns the special tokens as well as the M s tokens that correspond to the lowest frequency as important tokens, where M is the target number of important tokens in a sequence.", "It treats the rest of the tokens as unimportant tokens.", "the padding token has the smallest lossgiven that NLL loss is always non-negative for all other tokens.", "We use the sequence-packed version of the dataset (Section 2.1) so as to ensure that we have to drop meaningful tokens instead of the [PAD] tokens.", "Downstream tasks.", "We fine-tune on GLUE tasks (Wang et al., 2018), whose datasets are on the larger end.", "We only use the 6 largest GLUE datasets: MNLI, where we use MNLI-m to denote MNLI-matched and MNLI-mm to denote MNLI-mismatched (Williams et al., 2018), QNLI (Ra-jpurkar et al., 2016), QQP 8 , SST (Socher et al., 2013), and the GLUE diagnostics set AX (Wang et al., 2018).", "Additionally, we also experiment on the question answering datasets: SQuAD v1.1 (Ra-jpurkar et al., 2016) and SQuAD v2.0 (Rajpurkar et al., 2018).", "The evaluation metric for each task can be found in Table 1.", "By default, the total training steps of each model is 1 million, using the settings in Section 4.4.", "We experiment with the following models.", "First, we have the baseline models.", "baseline (no sequence packing) : The original BERT with the non-sequence-packed input.", "baseline : The original BERT with the sequence-packed input.", "baseline (75% steps) : The original BERT with the sequence-packed input but only trained for 75 % of the steps.", "This baseline is trained using a similar amount of computation as our proposed token dropping methods.", "Next, we have the following methods that aim to save pretraining time.", "For token dropping methods, we drop 50% of the tokens (unless mentioned otherwise) in order to compare with the average pooling method (Dai et al., 2020) which reduces the sequence length by half.", "token drop : We perform stage-1 pretraining using the cumulative-loss token-dropping for 1M steps.", "token drop (rand) : Similar to the token drop method, except that we randomly drop 50% non-special tokens in a sequence, instead of dropping unimportant tokens.", "Special tokens like [CLS] and [SEP] are not dropped.", "8 https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs 3778 token drop (half-rand) : It is similar to the token drop method except adding extra randomness to the important token selection, as introduced in Section 3.3.", "This half-random method can be viewed as a combination of 
"Before the start of pretraining, we count the number of occurrences of each token in the vocabulary V.", "During pretraining, given a sequence, suppose there are s special tokens.", "This approach assigns the special tokens, as well as the M − s tokens that correspond to the lowest frequencies, as important tokens, where M is the target number of important tokens in a sequence.", "It treats the rest of the tokens as unimportant tokens.", "We use the sequence-packed version of the dataset (Section 2.1) so as to ensure that we have to drop meaningful tokens instead of the [PAD] tokens.", "Downstream tasks.", "We fine-tune on GLUE tasks (Wang et al., 2018), whose datasets are on the larger end.", "We only use the 6 largest GLUE datasets: MNLI, where we use MNLI-m to denote MNLI-matched and MNLI-mm to denote MNLI-mismatched (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), QQP [8], SST (Socher et al., 2013), and the GLUE diagnostics set AX (Wang et al., 2018).", "Footnote 8: https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs", "Additionally, we also experiment on the question answering datasets SQuAD v1.1 (Rajpurkar et al., 2016) and SQuAD v2.0 (Rajpurkar et al., 2018).", "The evaluation metric for each task can be found in Table 1.", "By default, the total number of training steps for each model is 1 million, using the settings in Section 4.4.", "We experiment with the following models.", "First, we have the baseline models.", "baseline (no sequence packing): the original BERT with the non-sequence-packed input.", "baseline: the original BERT with the sequence-packed input.", "baseline (75% steps): the original BERT with the sequence-packed input but only trained for 75% of the steps.", "This baseline is trained using a similar amount of computation as our proposed token dropping methods.", "Next, we have the following methods that aim to save pretraining time.", "For token dropping methods, we drop 50% of the tokens (unless mentioned otherwise) in order to compare with the average pooling method (Dai et al., 2020), which reduces the sequence length by half.", "token drop: we perform stage-1 pretraining using the cumulative-loss token dropping for 1M steps.", "token drop (rand): similar to the token drop method, except that we randomly drop 50% of the non-special tokens in a sequence, instead of dropping unimportant tokens.", "Special tokens like [CLS] and [SEP] are not dropped.", "token drop (half-rand): similar to the token drop method except for adding extra randomness to the important token selection, as introduced in Section 3.3.", "This half-random method can be viewed as a combination of token drop and token drop (rand).", "token drop (layer rearranged): similar to the token drop method except for moving the last layer that processes all tokens to the beginning of the model.", "In other words, the layers in Figure 2 are rearranged such that full-sequence layers are only at the bottom.", "token drop (freq): similar to the token drop method, except that we identify important tokens using the frequency-based token-dropping scheme, as discussed in Section 3.3.2.", "token avg: similar to the token drop method, except that we use average pooling to compress the sequences instead of token dropping.", "Suppose layer l ∈ L_f and the immediate next layer l′ ∈ L_h, as described in Section 3.", "Instead of dropping rows of the query, key, and value matrices, we apply average pooling with a window size of 2 and a stride of 2.", "In other words, suppose q_1, . . . , q_T are the rows of the query matrix.", "Then, the T/2 new query vectors are (q_1 + q_2)/2, (q_3 + q_4)/2, . . . , (q_{T−1} + q_T)/2, assuming that T is an even number.", "This idea is introduced in Funnel-Transformer (Dai et al., 2020).",
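For comparison, the token avg variant can be sketched as a Funnel-Transformer-style pooling of adjacent rows; tensor shapes and the function name are assumptions, not the paper's implementation.

```python
import torch

def average_pool_rows(q, k, v):
    """Halve the sequence length with window-2, stride-2 average pooling over the
    rows of the query, key, and value matrices (each of shape (T, d), T even)."""
    pool = lambda x: 0.5 * (x[0::2] + x[1::2])   # (q1+q2)/2, (q3+q4)/2, ..., (q_{T-1}+q_T)/2
    return pool(q), pool(k), pool(v)
```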
"We also experiment with adding the optional stage-2 pretraining phase to the methods described above.", "In such cases, we first perform stage-1 pretraining for 900k steps and then stage-2 pretraining for 100k steps.", "To distinguish these from the stage-1-only methods, we add + stage-2 at the end of the method description.", "The BERT architectures are the same as the ones in Devlin et al. (2019).", "We experiment on both BERT-base and BERT-large.", "For each BERT architecture, we train with two different input sequence lengths: 512 and 128.", "We use the sequence-packed input data, unless otherwise noted.", "We use TPUv3 to pretrain the BERT models.", "The batch size of each pretraining step is 512.", "We train each BERT model for 1 million steps.", "We use the AdamW optimizer (Loshchilov and Hutter, 2019).", "We adopt a peak learning rate of 1e-4 and use the linear decay scheduler for the learning rate.", "We conduct extensive hyperparameter tuning for downstream tasks.", "For all GLUE tasks, we test numbers of training epochs in {2, 3, 4, 5, 6, 8, 10} and peak learning rate values in {5e-6, 1e-5, 2e-5, 3e-5, 4e-5} using the baseline pretrained BERT model.", "Epochs in {3, 6} and learning rates in {1e-5, 2e-5} give the best overall results.", "Thus, for every pretrained model, we fine-tune on each individual GLUE task using the combinations of the two best epoch and learning-rate values (four settings in total) and take the best validation result.", "For SQuAD tasks, we test epochs in {1, 2, 3, 4, 5, 6, 8} and learning rates in {5e-5, 6e-5, 8e-5, 1e-4, 1.2e-4} using the baseline pretrained BERT model and find that epochs in {4, 8} and learning rates in {2e-5, 4e-5} produce the best results overall.", "Thus, we fine-tune every model with these settings and report the best validation result.", "We apply the linear decay learning rate schedule that ends at zero for all experiments.", "For each method, we pretrain two models with different random seeds.", "Then these two models are fine-tuned separately on individual downstream tasks.", "We then report the averaged result as the final result for each task.", "Table 1 shows the ablation study.", "As mentioned, each number in the table corresponds to the average performance of two pretrained models (using different random seeds) that are then separately fine-tuned.", "On whether stage-2 pretraining is useful.", "There is a mismatch between the neural network in stage-1 pretraining and the neural network used for fine-tuning and test time.", "Therefore, we propose stage-2 pretraining, where there is no token dropping, so as to address the train-test mismatch.", "Comparing token drop with token drop + stage-2 in Table 1, we see that the models trained with and without stage-2 pretraining perform similarly.",

Table 1: Evaluating different pretraining methods by fine-tuning pretrained models on downstream tasks (BERT-base).
| Method | AX (corr.) | MNLI-mm (acc.) | MNLI-m (acc.) | QNLI (acc.) | QQP (F1) | SST (acc.) | GLUE-avg | SQuAD-v1 (F1) | SQuAD-v2 (F1) | SQuAD-avg |
| baseline (no sequence packing) | 76.36 | 84.61 | 84.28 | 91.56 | 90.94 | 95.73 | 87.25 | 90.11 | 78.89 | 84.50 |
| baseline | 76.52 | 84.47 | 84.44 | 90.58 | 90.97 | 96.18 | 87.19 | 89.71 | 79.00 | 84.35 |
| baseline (75% steps) | 76.38 | 84.43 | 84.36 | 90.21 | 90.82 | 96.00 | 87.04 | 89.33 | 78.14 | 83.73 |
| token drop (proposed) | 77.77 | 85.28 | 85.20 | 91.25 | 91.00 | 95.54 | 87.67 | 90.44 | 81.09 | 85.77 |
| token drop + stage-2 | 77.70 | 84.91 | 85.04 | 91.40 | 91.00 | 95.98 | 87.67 | 90.32 | 79.90 | 85.11 |
| token drop (half-rand) | 77.08 | 84.92 | 84.81 | 91.36 | 90.94 | 96.80 | 87.65 | 90.34 | 80.38 | 85.36 |
| token drop (half-rand) + stage-2 | 77.25 | 85.19 | 84.89 | 91.52 | 90.67 | 94.94 | 87.41 | 90.47 | 79.81 | 85.14 |
| token drop (rand) + stage-2 | 76.88 | 84.56 | 84.56 | 91.27 | 90.78 | 95.65 | 87.28 | 89.65 | 78.61 | 84.13 |
| token drop (freq) + stage-2 | 76.19 | 84.35 | 84.27 | 91.05 | 90.80 | 96.48 | 87.19 | 89.38 | 77.32 | 83.35 |
| token avg + stage-2 | 76.92 | 84.83 | 84.69 | 90.94 | 90.89 | 97.03 | 87.55 | 90.23 | 79.35 | 84.79 |
| token pass + stage-2 | 77.04 | 84.58 | 84.86 | 91.36 | 90.89 | 95.67 | 87.40 | 89.98 | 79.85 | 84.92 |
| token drop (layer rearranged) + stage-2 | 76.61 | 84.52 | 84.37 | 90.78 | 90.76 | 96.65 | 87.28 | 90.05 | 78.38 | 84.21 |

"We hypothesize that the train-test mismatch can be easily addressed during downstream task fine-tuning.", "On determining which tokens are important.", "Figure 1 shows which tokens are labeled as important using three examples from our token drop model.", "Additionally, in Section 3.3, we propose to optionally replace the important tokens that have the lowest cumulative losses with unimportant tokens.", "Comparing token drop with token drop (half-rand) and token drop (rand) in Table 1, we see that adding randomness does not help.", "Finally, we see that the cumulative-loss-based dropping performs better than frequency-based dropping and random dropping.", "On how many tokens to drop.", "We report results with different token dropping percentages when training the BERT-base model in Table 4.", "We see that dropping more than 62.5% of the tokens yields worse results.", "By default, our experiments drop 50% of the tokens.", "On determining which layers to drop.", "Comparing token drop (half-rand) + stage-2 with token drop (layer rearranged) + stage-2, we can see that putting one full-sequence layer at the end of the model yields better results.", "On token dropping vs. token passing.", "Comparing token drop + stage-2 with token pass + stage-2, we see that passing the unimportant tokens instead of dropping them does not affect the performance.", "Recall that for layers where unimportant tokens are dropped, token dropping makes the input to such layers correspond to incoherent sentences, which could impact BERT's learning ability.", "However, we find that doing token passing makes pretraining slightly less efficient while providing no improvement on downstream performance.", "On token dropping vs. token averaging.",
"Comparing token drop + stage-2 with token avg + stage-2, we see that average pooling instead of dropping unimportant tokens yields slightly worse results.", "This means that our importance-driven token selection is more effective than directly averaging embeddings across every nearby token pair.", "We test our method on BERT-base and BERT-large with sequence lengths of 128 and 512.", "We report the results in Table 2.", "Overall, our proposed method performs similarly to the baseline method.", "As shown in Table 3, when taking the average across all GLUE and SQuAD scores, across all four settings (two BERT models times two sequence lengths) and two pretraining runs with different random seeds, our proposed token dropping method outperforms the baseline method by 0.3% (85.16% to 85.45%), in addition to the 25% pretraining time reduction.", "One strategy to improve data efficiency during language model pretraining is designing better pretraining objectives (Lan et al., 2020; Clark et al., 2020; Raffel et al., 2020).", "Concurrently, researchers have also been exploring certain hardware properties to improve pretraining efficiency, e.g., mixed-precision training (Shoeybi et al., 2019) and huge-batch distributed training (You et al., 2020).", "Recently, Wu et al. (2021) propose to tackle the efficient pretraining problem through rare words or phrases, and they provide rare words with a note embedding to make models better aware of the contextual information in a sequence.", "The faster depth-adaptive transformer approach is applied to text classification tasks (Liu et al., 2021).", "It identifies important tokens by either computing the mutual information between each token and the given sequence label, or using a separate BERT model to exhaustively evaluate the masked language model loss for each token.", "There is a rich body of literature on faster inference for sequence generation problems, such as early layer exits during translation (Elbayad et al., 2020; Han et al., 2021), non-autoregressive machine translation (Gu et al., 2018; Tu et al., 2020b), and amortizing the cost of complex decoding objectives (Chen et al., 2018; Tu et al., 2020a; Pang et al., 2021a).", "Several ideas are particularly relevant to token-wise layer dropping: Zhang and He (2020) propose to use a fixed probability to drop an entire layer during pretraining; here, we use the more fine-grained token-wise layer dropping.", "The dynamic halting algorithm (Dehghani et al., 2019), motivated by the finding that transformers fail to generalize to many simple tasks, stops the processing of a token through upper layers if its representation is good enough.", "However, the implementation does not improve training time, as its goal is to improve performance.", "We present a simple yet effective approach to save BERT pretraining time.", "Our approach identifies unimportant tokens with practically no computational overhead and cuts unnecessary computation on these unimportant tokens during training.", "Experiments show that BERT models pretrained in this manner save 25% pretraining time, while generalizing similarly well on downstream tasks.", "We show that our token dropping approach performs better than average pooling along the sequence dimension.", "Future work will involve extending token dropping to pretraining transformer models that can process a much longer context, as well as extending this algorithm to a wider range of transformer-based tasks, including translation and text generation.",
"The authors thank the anonymous reviewers for helpful feedback." ]
[ "abstain", "objective", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "method", "method", "objective", "abstain", "abstain", "objective", "objective", "result", "abstain", "objective", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "method", "abstain", "result", "abstain", "other" ]
[ "Temporal common sense (e.g., duration and frequency of events) is crucial for understanding natural language.", "However, its acquisition is challenging, partly because such information is often not expressed explicitly in text, and human annotation on such concepts is costly.", "This work proposes a novel sequence modeling approach that exploits explicit and implicit mentions of temporal common sense, extracted from a large corpus, to build TACOLM, 1 a t empor a l co mmon sense l anguage m odel.", "Our method is shown to give quality predictions of various dimensions of temporal common sense (on UDST and a newly collected dataset from Real-News).", "It also produces representations of events for relevant tasks such as duration comparison, parent-child relations, event coreference and temporal QA (on TimeBank, HiEVE and MCTACO ) that are better than using the standard BERT.", "Thus, it will be an important component of temporal NLP.", "Time is crucial when describing the evolving world.", "It is thus important to understand time as expressed in natural language text.", "Indeed, many natural language understanding (NLU) applications, including information retrieval, summarization, causal inference, and QA (UzZaman et al., 2013; Chambers et al., 2014; Llorens et al., 2015; Bethard et al., 2016; Leeuwenberg and Moens, 2017; Ning et al., 2018b), rely on understanding time.", "However, understanding time in natural language text heavily relies on common sense inference.", "Such inference is challenging since commonsense information is rarely made explicit in text (e.g., how long does it take to open a door?) Even when such information is mentioned, it is often 1 https://cogcomp.seas.upenn.edu/page/ publication_view/904 0 0.1 0.2 0.3 0.4 seconds hour week year Dr. 
affected by another type of reporting bias: people rarely say the obvious, in order to communicate more efficiently, but sometimes highlight rarities (Schubert, 2002; Van Durme, 2009; Gordon and Van Durme, 2013; Zhang et al., 2017; Bauer et al., 2018; Tandon et al., 2018).", "This is an even more pronounced phenomenon when it comes to temporal common sense (TCS) (Zhou et al., 2019).", "In Example 1, human readers know that a typical vacation is likely to last at least a few days, and they would choose 'will not' to fill in the blank for the first sentence; instead, with a slight change of context (vacation to walking outside), people typically prefer 'will' for the second one.", "Similarly, any system which correctly answers this example for the right reason would need to incorporate TCS in its reasoning.", "In Example 1, the durations of taking a vacation and taking a walk are not expressed explicitly, so systems are required to read between the lines to support the inference.", "A pre-trained language model may not handle this issue well, as it cannot identify the TCS dimensions in temporal mentions and effectively learn from them.", "As a result, it cannot generalize well to similar events without explicit temporal mentions.", "To handle this problem, we design syntactic rules that can collect a vast amount of explicit mentions of TCS from an unannotated corpus such as Gigaword (Graff et al., 2003) (§3.3).", "We use this data to pre-train our model so that it distinguishes different dimensions.", "A second challenge occurs when the text is highlighting rare and special cases.", "As a result, temporal mentions in natural text may follow a distorted distribution in which certain kinds of common events are under-represented.", "For instance, we may rarely see mentions of I opened the door in 3 seconds, but we may see it took me an hour to open this door in text.", "To overcome this challenge, we exploit the joint relationship among temporal dimensions.", "Although we rarely observe the true duration of opening the door in free-form text, we may see phrases like I opened my door during the fire alarm, providing an upper bound on the duration of the event (i.e., opening the door does not take longer than the alarm).", "We believe that we are the first to exploit such phenomena among temporal dimensions.", "This paper studies several important dimensions of TCS inference: duration (how long an event takes), frequency (how often an event occurs) and typical time (when an event typically happens). [2]", "Footnote 2: E.g., typical time in a day (the morning), typical day of a week (on Sunday), and typical time of a year (summer).", "As a highlight, Fig. 1 shows the distributions (over time units) we predict for the duration and frequency of three events.",
"We can see that taking a vacation lasts from days to months, while taking a walk lasts from minutes to hours.", "As shown, our model is able to produce different and sensible distributions for the take event, depending on the context in which take occurs.", "Our work builds upon pre-trained contextual language models (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019).", "However, a standard language modeling objective does not lead to a model that handles the two challenges mentioned above; in addition, other systematic issues limit its ability to handle TCS.", "In particular, language models do not directly utilize the ordinal relationships among temporal units.", "For example, hours is longer than minutes, and minutes are longer than seconds. [3]", "Footnote 3: The relationship can be more complex. E.g., hours is closer to minutes than centuries is; the days of a week form a circle: Mon. is followed by Tue. and preceded by Sun.", "Fig. 2 shows that BERT does not produce a meaningful duration distribution for a set of events with a gold duration of day (extracted in §3.3).", "Our proposed system, on the other hand, is able to utilize the ordinal relationships and produce unimodal distributions around the correct labels in both Fig. 1 and Fig. 2.", "Contributions.", "This work proposes an augmented pre-training for language models to improve their understanding of several important temporal phenomena.", "We address two kinds of reporting biases by effectively acquiring weak supervision from free-form text and utilizing it to learn multiple temporal dimensions jointly.", "Our model incorporates other desirable properties of time in its objective (ordinal relations between temporal phrases, the circularity of certain dimensions, etc.) to improve temporal modeling.", "Our experiments show a 19% relative improvement over BERT in intrinsic evaluations, and 5-10% improvements in most extrinsic evaluations done on three time-related datasets.", "Furthermore, the ablation study shows the value of each proposed component of our construction.", "Overall, this is the first work to incorporate a wide range of temporal phenomena within a contextual language model.", "The rest of this paper is organized as follows.", "We distinguish our work from the prior work in §2.", "The core of our construction, including extraction of cheap supervision from raw data and augmenting a language model objective function with temporal signals, is in §3.", "We conclude by showing intrinsic and extrinsic experiments in §4.", "2 Related Work", "Common sense has been a popular topic in recent years, and existing NLP works have mainly investigated the acquisition and evaluation of common sense reasoning in the physical world.", "These works include but are not limited to size, weight, and strength (Bagherinezhad et al., 2016; Forbes and Choi, 2017; Elazar et al., 2019), roundness and deliciousness (Yang et al., 2018), and intensity (Cocos et al., 2018).", "A handful of these works use cheap supervision.", "For example, Elazar et al. (2019) recently proposed a general framework that discovers distributions of quantitative attributes (e.g., length, mass, speed, and duration) from explicit mentions (or co-occurrences) of these attributes in a large corpus.", "However, Elazar et al. (2019) restrict events to be verb tokens, while we handle verb phrases containing more detailed information (e.g., taking a vacation is very different from taking a break, although they share the same verb take).",
"Besides, there has been no report on the effectiveness of this method on temporal attributes.", "On the other hand, time has long been an important research area in NLP.", "Prior works have focused on the extraction and normalization of temporal expressions (Strotgen and Gertz, 2010; Angeli et al., 2012; Lee et al., 2014; Vashishtha et al., 2019), temporal relation extraction (Ning et al., 2017, 2018c; Vashishtha et al., 2019), and timeline construction (Leeuwenberg and Moens, 2018).", "Recently, MCTACO (Zhou et al., 2019) summarized five types of TCS, and the three temporal dimensions studied here are all in their proposal. [4]", "Footnote 4: They additionally propose typical order of events and stationarity (whether a state holds for a very long time or indefinitely).", "MCTACO shows that modern NLU techniques are still a long way behind humans on TCS understanding, suggesting that further research on this topic is needed.", "There have been works on temporal common sense, such as event duration (Pan et al., 2006; Gusev et al., 2011; Williams, 2012; Vempala et al., 2018; Vashishtha et al., 2019), typical temporal ordering (Chklovski and Pantel, 2004; Ning et al., 2018a,b), and script learning (i.e., what happens next after certain events) (Granroth-Wilding and Clark, 2016; Li et al., 2018; Peng et al., 2019).", "Those on duration are highly relevant to this work.", "Pan et al. (2006) annotate a subset of documents from TimeBank (Pustejovsky et al., 2003) with less-than-one-day and more-than-one-day annotations and provide the first baseline system for this dataset.", "Vempala et al. (2018) significantly improve earlier work by using additional aspectual features for this task.", "Vashishtha et al. (2019) annotate the UDS-T dataset with event duration annotations and propose a joint method that extracts both temporal relations and event durations.", "Our approach has two notable differences from this line of work.", "First, we work jointly on three dimensions of TCS, namely duration, frequency, and typical time, while the works above only focused on duration.", "Second, we focus more on obtaining cheap supervision signals from unlabeled data, while these other works all have access to human annotations.", "With respect to harnessing cheap supervision, Williams (2012) and Gusev et al. (2011) propose to mine web data using a collection of hand-designed query patterns.", "In contrast to our approach, they are based on counting instead of machine learning and cannot handle the contextualization of events.", "In this work, we focus on three major temporal dimensions of events, namely Duration, Frequency and Typical Time.", "Here, Typical Time means the typical occurring time of events during a day, day of a week, and month or season of a year.", "We follow the same definition for each of the dimensions (also called properties) as in Zhou et al. (2019).",
(2019).", "As mentioned earlier, commonsense information extraction comes with the challenge of reporting biases.", "For example, people may not report the duration of opening the door, or the frequency of going to work.", "However, it is often possible to get supportive signals from other dimensions, as people mention going to work associated mostly with a day in a week, hence we may know the frequency of such an event.", "We argue that many temporal dimensions are interrelated and a joint learning scheme would suit this task.", "Beyond duration, frequency and typical time, we also introduce auxiliary dimensions that are not meant to be used by themselves but will Figure 3: Examples of the extraction process for each temporal dimensions.", "The temporal arguments are marked orange and the result of the extraction are tuples of the form ( event,value,dimension ).", "help the prediction of other dimensions.", "The auxiliary dimensions we define here are event Duration Upper-bound and event Relative Hierarchy .", "The former represents values that are upper-bounds to an event's duration but not necessarily the exact duration.", "The latter consists of two sub-relations, namely temporal ordering and duration inclusion of event-pairs.", "We collect a few pattern-based extraction rules based on SRL parses for each temporal dimension (including the auxiliary dimensions).", "We design the rules to have high precision, while not compromising too much on recall.", "We overcome the potential sparsity issue (and the resulting low recall problem) by extracting from a massive amount of data.", "Fig. 3 provides some examples of the input/output for each dimension, as we describe the specific extraction process below.", "We first process the entire Gigaword (Graff et al., 2003) corpus and use AllenNLP's SRL model (Gardner et al., 2018; Shi and Lin, 2019) to annotate all sentences.", "We extract the ones that contain at least one temporal argument (i.e., the arg-tmp constituent of SRL annotations) and use textual patterns to categorize each sentence into a corresponding dimension with respect to an associated verb.", "These patterns are inspired by earlier works and are extensively improved with iterative manual error analysis.", "The rest of this section is devoted to explaining the key design ideas used for these patterns.", "the temporal unit word, and normalize them into the nearest unit among the nine units in our scope: (second, minute, hour, day, week, month, year, decade, century.) We ignore particular phrases such as for a second chance where the semantic of second is not temporal related.", "We found that for is the only high-precision preposition that indicates exact values of duration.", "Frequency.", "Such temporal arguments are usually composed of a duration phrase and a numerical head (e.g., four times per) indicating the frequency within the duration (e.g., week).", "Thus, we check for multiple keywords that indicate the start of a frequency expression, including every, per, once, . . . times.", "If so, we extract the duration value as well as the numerical head's value.", "We ignore any temporal phrases that contain when since they often convey semantics that does not fit any of our temporal categories; e.g., when everyday life... 
"Frequency.", "Such temporal arguments are usually composed of a duration phrase and a numerical head (e.g., four times per) indicating the frequency within the duration (e.g., week).", "Thus, we check for multiple keywords that indicate the start of a frequency expression, including every, per, once, and . . . times.", "If so, we extract the duration value as well as the numerical head's value.", "We ignore any temporal phrases that contain when, since they often convey semantics that does not fit any of our temporal categories; e.g., when everyday life . . . is not describing the frequency of the corresponding verb.", "We represent the frequency with a duration d, meaning that the event occurs once every time d elapses.", "For example, the frequency of four times per week is represented as 1.75 days.", "Similarly, we normalize this into the nearest unit among the nine duration units described above, and 1.75 days is extracted as days.",
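Reusing the hypothetical UNIT_SECONDS and nearest_unit helpers from the sketch above, the once-every-d representation can be illustrated as follows.

```python
def frequency_to_unit(times, per_unit):
    """'times per per_unit' -> one occurrence every (per_unit / times) seconds,
    then snapped to the nearest duration unit."""
    d_seconds = UNIT_SECONDS[per_unit] / times
    return nearest_unit(d_seconds)

# "four times per week" -> 604800 / 4 = 151200 s (~1.75 days) -> "day"
assert frequency_to_unit(4, "week") == "day"
```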
"Typical Time.", "We pre-define a list of typical time keywords, including the time of day (e.g., morning), day of the week (e.g., Monday), month (e.g., January) and season (e.g., winter).", "We check if any of the typical time keywords appear in the temporal argument and verify that the temporal argument is, in fact, describing the time of occurrence.", "This is done by filtering out the temporal arguments that contain a set of invalid prepositions, including until, since, and following, since such keywords often do not indicate the actual time of occurrence.", "Duration Upper-bound.", "Many temporal arguments describe a duration upper bound instead of the exact duration value.", "For example, as described in Gusev et al. (2011), did [activity] yesterday indicates something that happened within a day.", "We extend the set of patterns to include in [temporal expression] or keywords such as next (e.g., the next day), last (e.g., last week), previous (e.g., previous month), or recent (e.g., recent years).", "We normalize the values into the same label set of nine unit words as the duration dimension.", "Event Relative Hierarchy.", "A system can learn about an event through comparisons to other ones, as we show in §1.", "To acquire hierarchical relationships between events, we check whether the SRL temporal argument starts with a keyword that indicates a relation between the main event and another event phrase.", "We consider five such keywords, namely before, after, during, while and when.", "We use these keywords to label the relative relationship between the two events.", "Here, we assume that during and while are the same, indicating that the main event is not longer than the one in the argument.", "Note that certain keywords might have meanings that do not suggest temporal relationships (e.g., while has a different sense similar to whereas).", "We rely on SRL annotations to identify the appropriate sense of the keywords.", "We use the temporal keyword as the label, but keep the entire event phrase in the SRL temporal argument for later use in §3.5.", "Resulting data.", "We collect 25 million instances that are successfully parsed into one of our temporal dimensions from the entire Gigaword corpus (Graff et al., 2003).", "Each instance is in the form of an (event, value, dimension) tuple (Fig. 3), with a dimension distribution shown in Fig. 4.", "For all events, we remove the related temporal argument so that it does not contain direct information about the dimension or the value.", "For example, as shown in Fig. 3, for 2 hours is removed, and only Jack rested before the speech is kept, so that the target duration is not present in the event.", "Note that value is also called and used as label in later contexts related to classification tasks.", "The temporal values in one dimension are naturally related to each other via a certain ordering and appropriate distance measures.", "To account for and utilize this external knowledge, we use a soft cross-entropy loss to encourage predictions that are aligned with the external knowledge: L = − Σ_{x ∈ D} Σ_i y[i] · log p(i | x), where D is the set of instances in the training data and y represents the degree to which the target labels align with the external knowledge.", "Thus, y is a probability vector, i.e., it has non-negative values that sum to 1.0.", "We now describe how we construct y to apply the aforementioned external knowledge.", "Duration, Frequency, and Upper-bound take the same set of labels of duration units.", "We first define a function logsec(·) which takes a unit and normalizes it to its logarithmic value in seconds (e.g., minute → 60 → log 60 ≈ 4.1).", "For each instance in these dimensions, with an observed gold label g, we assume a normal distribution with a mean value of μ = logsec(g) and a fixed standard deviation of σ = 4.", "Then, we construct y so that y[i] = (1 / (√(2π) σ)) · exp(−(logsec(l) − μ)² / (2σ²)), (2) where l is the i-th label.", "For typical time, the labels are placed at approximately equal distances in a circular fashion.", "For example, Monday is before Tuesday and after Sunday.", "We assume adjacent units have a distance of 1, and we generate y based on a Gaussian distribution with a standard deviation of 0.5.", "In other words, we assume the two immediate neighbors of a gold label are reasonably possible.", "For hierarchy, we construct y as a one-hot vector where only the gold label has a value of 1, and the rest are zeros.",
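A minimal sketch of the soft-label construction in Eq. (2), assuming natural-log seconds and a final renormalization so that y is a probability vector; the unit table reuses the hypothetical UNIT_SECONDS above.

```python
import numpy as np

UNITS = list(UNIT_SECONDS)                 # ordered: second ... century

def soft_labels(gold_unit, sigma=4.0):
    """Gaussian in log-seconds centered at the gold label, renormalized over labels."""
    logsec = np.log([UNIT_SECONDS[u] for u in UNITS])
    mu = np.log(UNIT_SECONDS[gold_unit])
    y = np.exp(-((logsec - mu) ** 2) / (2 * sigma ** 2))
    return y / y.sum()

def soft_cross_entropy(y, log_probs):
    """-sum_i y[i] * log p[i], with log_probs taken from the model's prediction."""
    return float(-(y * np.asarray(log_probs)).sum())
```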
"Our goal is to build a model that is able to predict temporal labels (values) given events and dimensions.", "Instead of building a classification layer on top of a pre-trained model, we follow previous work (Huang et al., 2019) and place the label into the input sequence.", "We mask the label in the sequence and use the masked token prediction objective as the classification objective.", "To produce more general representations, we also keep the temporal label and instead mask the event tokens with a certain probability, so that we are able to maximize both P(Tmp-Label | Event) and P(Event | Tmp-Label) in the same learning process, where Tmp-Label refers to the temporal label associated with the event.", "Specifically, we use the reserved unused tokens in the BERT-base model lexicon to construct a 1-to-1 mapping from every value in every dimension to the new vocabulary.", "We choose not to use the existing representations for temporal terms that are already included in BERT's in-use lexicon, such as minutes or weeks, because these keywords have different temporal semantics in different dimensions.", "Instead, we assign unique and separate lexicon entries to different values in different dimensions, even though the values may share the same surface forms.", "Consider each (event, value, dimension) tuple: we map value and dimension to their new vocabulary entries [Val] and [Dim], and we use [W_1, W_2, . . . , W_n] to represent the tokens in the sentence and W_verb the event verb anchor from SRL.", "We now form the sequence [W_1, W_2, . . . , [Vrb], W_verb, . . . , W_n, [SEP], [Vrb], [Dim], [Val], [Arg-Tmp-Event]], where [Vrb] is a marker token that is the same across all instances.", "[Arg-Tmp-Event] is the event phrase in the SRL temporal argument, as described in hierarchy.", "[Arg-Tmp-Event] is empty for all dimensions other than hierarchy.", "We mask [Val] with probability p_mask and [Dim] with probability p_dim.", "We individually mask each event token with probability p_event when we mask neither [Val] nor [Dim] (see the sketch below).", "Soft cross-entropy is used when predicting [Val], and regular cross-entropy is used for other tokens.", "We use the pre-trained token-recovery layer, and follow BERT's setting of randomly keeping a token's surface or changing it to noise during recovery.",
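A sketch of this sequence construction and masking, under one plausible reading of the probabilities above; the token strings and function names are illustrative, not the authors' code.

```python
import random

def build_sequence(tokens, verb_idx, dim_tok, val_tok, arg_tmp_tokens=()):
    """[W1 .. [Vrb] W_verb .. Wn] [SEP] [Vrb] [Dim] [Val] [Arg-Tmp-Event]"""
    seq = tokens[:verb_idx] + ["[Vrb]"] + tokens[verb_idx:]
    return seq + ["[SEP]", "[Vrb]", dim_tok, val_tok] + list(arg_tmp_tokens)

def mask_sequence(seq, val_pos, dim_pos, event_positions,
                  p_mask=0.6, p_dim=0.1, p_event=0.15):
    seq = list(seq)
    if random.random() < p_mask:
        seq[val_pos] = "[MASK]"          # predict the temporal value
    elif random.random() < p_dim:
        seq[dim_pos] = "[MASK]"          # occasionally predict the dimension
    else:                                # otherwise mask event tokens individually
        for pos in event_positions:
            if random.random() < p_event:
                seq[pos] = "[MASK]"
    return seq
```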
"In the experiments, we explore a set of configurations of the system.", "We explore the effect of having only one sentence or the two additional neighboring sentences as input contexts.", "We also experiment with all-event-masking, where we mask tokens in the event with a much higher probability.", "The goal of this masking scheme is to reduce the predictability of event tokens from other event tokens, to alleviate prior biases and focus more on the temporal argument.", "For example, BERT predicts coffee for the [MASK] in I had a cup of [MASK] this evening because of the strong prior of cup of.", "By masking more tokens in the event, the remaining ones will be conditioned more on the temporal cue.", "The label imbalance in the training data largely hinders our goal, as we should not assume a prior distribution as expressed in natural language.", "For example, seconds appears around ten times less often than years in the data we collected for duration, leading to a biased model.", "We use weight adjustment to fix this.", "Specifically, we apply weight adjustment to the total loss with a weight factor calculated from each observed label's count relative to the number of all instances.", "We experiment with several variants of the proposed system to study the effect of each change.", "Input Size.", "A model with three input sentences (including the event sentence's left/right neighbors) is labeled MS.", "Non-MS models use only the one sentence in which the event occurs.", "All Event Masking.", "A model with p_event = 0.6 is labeled AM, and p_event = 0.15 otherwise.", "Final Model.", "Our final model includes all auxiliary dimensions (AUX) (mentioned in §3.2), uses the soft cross-entropy loss (SL) and applies weight adjustment (ADJ) (mentioned in §3.6).", "We study each change's effect by ablating it individually.", "To deal with the skew present in the training data (§3), we down-sample to ensure roughly the same number of occurrences of each dimension (except for frequency, because of its low quantity).", "As a result, 4.3 million sentences were used in pre-training (down-sampled from 25 million mined sentences).", "We employ a learning rate of 2e-5 for 3 epochs and set p_mask = 0.6 and p_dim = 0.1.", "Other parameters are the same as those of the BERT-base model.", "We use epoch 2's model for extrinsic evaluations to favor generalization, and epoch 3's model for intrinsic evaluations as it achieves the best performance across tasks.", "We evaluate our method on the temporal value recovery task, where the inputs are a sentence representing the event, an index to the event's verb, and a target dimension.", "The goal is to recover the temporal value of the given event in the given dimension.", "Datasets.", "To ensure a fair comparison, we sample instances from a new corpus, RealNews (Zellers et al., 2019), that have no document overlap with our pre-training data, while at the same time making the data not strictly in-domain.", "We apply the same pattern extraction process mentioned in §3.3 to the new data and collect instances that are uniformly distributed across dimensions and values.", "In addition, we ask annotators on Mechanical Turk to filter out the events that cannot be recovered by common sense.", "For example, I brush my teeth [Month] will be discarded because all candidate answers are approximately uniformly distributed, so one cannot identify a subgroup of labels as more likely.", "Specifically, we ask one annotator to select from 4 choices regarding each (event, temporal value) tuple.", "The choices are: 1) the event is unclear and abstract; 2) the event has a uniform distribution across all labels within the dimension; 3) the given label is one of the top 25% choices among all other labels within the dimension; and 4) the given label is not very likely.", "We keep the instances for which the annotator selects option 3), verifying that the label is a very likely choice for the given dimension.", "For the RealNews corpus, we annotate 1,774 events that are roughly uniformly distributed across dimensions and labels, among which 300 events are preserved.", "We also apply the same process to the UDS-T dataset.", "We find the majority of the original annotations to be unsuitable, as there are many annotations of events that are seemingly undecidable by common sense.", "We first apply an initial filtering by using only events whose anchor word is a verb and requiring all existing annotations from Vashishtha et al. (2019) of the same instance to have an average distance of less than two units.", "We then use our method to annotate 1,047 events, and eventually 142 instances are left.", "Systems.", "In both datasets, we compare our proposed system with BERT.", "To use BERT's predictions of temporal values without supervision, we artificially add prepositions querying the target dimension, as well as a masked token, right after the verb.", "For example, I ran to the campus will be transformed into I ran for 1 [MASK] to the campus.", "The specific prepositions added are for 1 (duration), every (frequency), in the (time of the day), on (day of week), in (month), and in (season).", "We then rank the temporal keywords (singular) in the given dimension according to the masked token's predictions.", "For our model, we follow the sequence formulation described above, and recover and rank the masked [Val] token.", "In addition, we also compare with a baseline system called BERT + naive finetune, which is BERT fine-tuned on the same pre-training data we used for our proposed models, with a higher probability of masking a temporally related keyword (i.e., all values we used in all dimensions).", "Unlike our model, this baseline only uses the soft cross-entropy loss and does not distinguish the dimensions each keyword expresses.",
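The unsupervised BERT probe can be approximated with Hugging Face's fill-mask pipeline; the model name and candidate list below are illustrative, not the paper's exact setup.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def rank_candidates(masked_sentence, candidates):
    """Score each candidate keyword for the [MASK] slot and sort by probability."""
    results = fill(masked_sentence, targets=list(candidates))
    return sorted(((r["token_str"], r["score"]) for r in results),
                  key=lambda pair: pair[1], reverse=True)

# e.g., rank_candidates("I ran for 1 [MASK] to the campus.",
#                       ["second", "minute", "hour", "day", "week", "month", "year"])
```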
"Metrics.", "Following Vashishtha et al. (2019), we employ a distance metric that measures the rank difference between a system's top prediction and the gold label with respect to an ordered label set.", "For duration and frequency, where values are in a one-directional order, we use the absolute difference of the label ranks.", "For the other dimensions, where the labels are in circular relationships, we use the minimal distance between two labels in either direction, so that January has a distance of 1 from December.", "This is similar to an MAE metric, and we report the number averaged across instances (see the sketch below).",
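A small sketch of this distance, with the circular case handled via the minimum of the two directions; the label lists are illustrative.

```python
def rank_distance(pred, gold, labels, circular=False):
    """Rank difference between prediction and gold within an ordered label set."""
    i, j = labels.index(pred), labels.index(gold)
    d = abs(i - j)
    return min(d, len(labels) - d) if circular else d

MONTHS = ["January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December"]
assert rank_distance("January", "December", MONTHS, circular=True) == 1
```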
"[Figure 5: Representations of events (whose durations were labeled as seconds, weeks, or centuries) obtained from the original BERT-base model.]", "The results on the filtered RealNews dataset and the filtered UDS-T dataset are shown in Table 1.", "We see that our proposed final model is mostly better than the other variants, and achieves a 19% improvement over BERT on average on the normalized scale.", "We plot the embedding space of events with durations of seconds, weeks or centuries in Fig. 5 and Fig. 6.", "We take the verb's contextual representation, apply PCA to reduce the dimension from 768 to 50, and then t-SNE to reduce it further to 2.", "Comparing the two plots, we see that the clusters formed by BERT embeddings have a wider distribution over the space, and the clusters have more points in overlap, even though the three sets of events have drastically different duration values.", "Our proposed model's embedding is able to better cluster the events based on this temporal feature, which is expected.", "Beyond unsupervised intrinsic experiments, we also evaluate the capability of the event temporal representations produced by our model.", "That is, we fine-tune both the BERT baseline and our model with the same process to compare the internal representations of the transformers.", "We use TimeML (Saurei et al., 2005; Pan et al., 2006), a dataset with event durations annotated as lower and upper bounds.", "The task is to decide whether a given event has a duration longer or shorter than a day.", "This is a suitable task to evaluate the embeddings because deciding longer/shorter than a day requires reasoning with more than one label, and would also benefit from auxiliary dimensions like duration upper-bound.", "The dataset contains 2,251 events, and we split the events based on sentences into 1,248/1,003 train/test.", "We formulate the training as a sequence classification task by taking the entire sentence and adding a special marker to the left of the verb indicating its position.", "The marker is unseen by both BERT and our model.", "We use the transformer output of the first token and feed it to an MLP for classification.", "We use a learning rate of 5e-5 and train for 3 epochs, and we repeat every reported number with 3 different random initializations and take the average.", "Table 2 shows the results of the TimeBank experiment.", "We see around a 7-11% improvement over BERT on this task.", "Compared with the state of the art (Vempala et al., 2018), which uses a different training/testing split, our model is within 1.5% of the best results but uses 25% less training data.", "We apply our event representations to the task of event sub-super relation extraction.", "This is a proper evaluation because the task naturally benefits from temporal commonsense knowledge.", "Intuitively, a short duration or high frequency indicates the event being at a lower hierarchy, and vice versa.", "We test whether the temporally focused event representations improve this task.", "We use HiEVE (Glavas et al., 2014), a dataset with annotations of four event relationships: no relation (NoRel), coreference (Coref), Child-Parent (C-P) and Parent-Child (P-C).", "There is no official split for this dataset, so we randomly split the data 80/20 at the document level and down-sample negative NoRel instances with a probability of 0.4.", "Similarly, we formulate the problem as a sequence classification task, where two events are put into one sequence separated by [SEP], and verbs are marked by adding a marker token to their left.", "We use the representation of the first token and feed it to an MLP for classification.", "We train each model with a 5e-5 learning rate and 3 epochs.", "Each reported number is an average of 3 runs under different random initializations.", "During inference, the probability scores for non-negative relations are averaged over the same event pair's sequences in both orders (see the sketch below).", "Table 3 shows the results of the HiEVE experiment.", "We see that TACOLM improves over BERT by 4% and 8% on the coreference and parent-child tasks, respectively.",
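A sketch of the pairing and the bidirectional score averaging used at inference; the classifier interface, the marker convention, and the flipping of C-P/P-C when the pair is reversed are our assumptions.

```python
def pair_sequence(sent_a, verb_a, sent_b, verb_b, marker="[E]"):
    """Two events in one token sequence, each verb preceded by a marker token."""
    mark = lambda toks, v: toks[:v] + [marker] + toks[v:]
    return mark(sent_a, verb_a) + ["[SEP]"] + mark(sent_b, verb_b)

def bidirectional_scores(classify, ev1, ev2):
    """Average non-negative relation probabilities over both pair orders; a
    Parent-Child score in one order corresponds to Child-Parent in the other."""
    p_ab = classify(pair_sequence(*ev1, *ev2))   # dict: {"Coref": ., "C-P": ., "P-C": .}
    p_ba = classify(pair_sequence(*ev2, *ev1))
    return {"Coref": (p_ab["Coref"] + p_ba["Coref"]) / 2,
            "C-P": (p_ab["C-P"] + p_ba["P-C"]) / 2,
            "P-C": (p_ab["P-C"] + p_ba["C-P"]) / 2}
```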
"We also evaluate on MCTACO (Zhou et al., 2019), a question answering dataset that requires a comprehensive understanding of temporal common sense and reasoning.", "We compare the exact-match score across the 5 dimensions defined in MCTACO, although this work only focuses on 3 of them.", "We use the original baseline system and interchange transformer weights to compare between BERT and ours.", "However, because our model replaces temporal expressions with special tokens, it is at a disadvantage when directly evaluated on the original dataset with temporal expressions in natural language.", "To fix this, we run the same extraction system from §3.3, with modifications, to identify the dimension a question is asking about, and augment candidate answers with our special tokens representing the temporal values (if any) mentioned.", "This introduces rule-based dimension identification as well as coarse unit normalization to the systems, so we train/evaluate the BERT baseline on the same modified data as well.", "Each number is an average of 5 runs with different random initializations.", "Results on MCTACO are shown in Table 4.",

Table 4: Performance on MCTACO.
| System | Duration | Ordering | Stationarity | Frequency | Typical Time |
| BERT | 33.4 | 36.5 | 57.6 | 43.3 | 39.5 |
| TACOLM | 34.6 | 35.1 | 57.9 | 45.1 | 40.9 |

"As expected, we find that our model achieves better performance on the three dimensions that are the focus of this work (i.e., duration, frequency, and typical time), as well as on stationarity.", "However, the improvements are not very substantial, indicating the difficulty of this task and motivating future work.", "The model also does slightly worse on ordering, which is worth investigating in future work.", "Temporal common sense (TCS) is an important yet challenging research topic.", "Despite the existence of several prior works on event duration, this is the first attempt to jointly model three key dimensions of TCS (duration, frequency, and typical time) from cheap supervision signals mined from unannotated free text.", "The proposed sequence modeling framework improves over BERT in terms of handling reporting bias, taking into account the ordinal relations between temporal phrases, and exploiting interactions among multiple dimensions of time.", "The success of this model is confirmed by intrinsic evaluations on RealNews and UDS-T (where we see a 19% improvement), as well as extrinsic evaluations on TimeBank, HiEVE and MCTACO.", "The proposed method may be an important module for future applications related to time.", "This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2019-19051600006 under the BETTER Program and by Contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA).", "The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.", "This research is also supported by a grant from the Allen Institute for Artificial Intelligence (allenai.org)." ]
[ "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "abstain", "objective", "method", "result", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "objective", "result", "objective", "abstain", "method", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other" ]
[ "Self-training has proven effective for improving NMT performance by augmenting model training with synthetic parallel data.", "The common practice is to construct synthetic data based on a randomly sampled subset of large-scale monolingual data, which we empirically show is sub-optimal.", "In this work, we propose to improve the sampling procedure by selecting the most informative monolingual sentences to complement the parallel data.", "To this end, we compute the uncertainty of monolingual sentences using the bilingual dictionary extracted from the parallel data.", "Intuitively, monolingual sentences with lower uncertainty generally correspond to easy-to-translate patterns which may not provide additional gains.", "Accordingly, we design an uncertainty-based sampling strategy to efficiently exploit the monolingual data for self-training, in which monolingual sentences with higher uncertainty would be sampled with higher probability.", "Experimental results on large-scale WMT English German and English Chinese datasets demonstrate the effectiveness of the proposed approach.", "Extensive analyses suggest that emphasizing the learning on uncertain monolingual sentences by our approach does improve the translation quality of high-uncertainty sentences and also benefits the prediction of low-frequency words at the target side.", "1 1 Introduction Leveraging large-scale unlabeled data has become an effective approach for improving the performance of natural language processing (NLP) models (Devlin et al., 2019; Brown et al., 2020; Jiao et al., 2020a).", "As for neural machine translation Work was mainly done when Wenxiang Jiao was interning at Tencent AI Lab.", "1 The source code is available at https://github.", "com/wxjiao/UncSamp (NMT), compared to the parallel data, the monolingual data is available in large quantities for many languages.", "Several approaches on boosting the NMT performance with the monolingual data have been proposed, e.g., data augmentation (Sen-nrich et al., 2016a; Zhang and Zong, 2016), semi-supervised training (Cheng et al., 2016; Zhang et al., 2018; Cai et al., 2021), pre-training (Siddhant et al., 2020; Liu et al., 2020).", "Among them, data augmentation with the synthetic parallel data (Sen-nrich et al., 2016a; Edunov et al., 2018) is the most widely used approach due to its simple and effective implementation.", "It has been a de-facto standard in developing the large-scale NMT systems (Has-san et al., 2018; Ng et al., 2019; Wu et al., 2020; Huang et al., 2021).", "Self-training (Zhang and Zong, 2016) is one of the most commonly used approaches for data augmentation.", "Generally, self-training is performed in three steps: (1) randomly sample a subset from the large-scale monolingual data; (2) use a teacher NMT model to translate the subset data into the target language to construct the synthetic parallel data; (3) combine the synthetic and authentic parallel data to train a student NMT model.", "Recent studies have shown that synthetic data manipulation (Edunov et al., 2018; Caswell et al., 2019) and training strategy optimization (Wu et al., 2019b; Wang et al., 2019) in the last two steps can boost the self-training performance significantly.", "However, how to efficiently and effectively sample the subset from the large-scale monolingual data in the first step has not been well studied.", "Intuitively, self-training simplifies the complexity of generated target sentences (Kim and Rush, 2016; Zhou et al., 2019; Jiao et al., 2020b), and easy patterns in 
"Related work on computer vision also reveals that easy patterns in unlabeled data with deterministic predictions may not provide additional gains (Mukherjee and Awadallah, 2020).", "In this work, we investigate and identify the uncertain monolingual sentences, which implicitly hold difficult patterns, and exploit them to boost the self-training performance.", "Specifically, we measure the uncertainty of the monolingual sentences by using a bilingual dictionary extracted from the authentic parallel data (§2.1).", "Experimental results show that NMT models benefit more from the monolingual sentences with higher uncertainty, except on those with excessively high uncertainty (§2.3).", "By conducting a linguistic property analysis, we find that extremely uncertain sentences contain relatively poor translation outputs, which may hinder the training of NMT models (§2.4).", "Inspired by the above finding, we propose an uncertainty-based sampling strategy for self-training, in which monolingual sentences with higher uncertainty would be selected with higher probability (§3.1).", "Large-scale experiments on WMT English→German and English→Chinese datasets show that self-training with the proposed uncertainty-based sampling strategy significantly outperforms that with random sampling (§3.3).", "Extensive analyses on the generated outputs confirm our claim by showing that our approach improves the translation of uncertain sentences and the prediction of low-frequency target words (§3.4).", "Contributions.", "Our main contributions are: We demonstrate the necessity of distinguishing monolingual sentences for self-training.", "We propose an uncertainty-based sampling strategy for self-training, which selects more complementary sentences for the authentic parallel data.", "We show that NMT models benefit more from uncertain monolingual sentences in self-training, which improves the translation quality of uncertain sentences and the prediction accuracy of low-frequency words.", "In this section, we aim to understand the effect of uncertain monolingual data on self-training.", "We first introduce the metric for identifying uncertain monolingual sentences, then the experimental setup, and finally our preliminary results.", "Notations.", "Let X and Y denote the source and target languages, and let 𝒳 and 𝒴 represent the sentence domains of the corresponding languages.", "Let B = {(x_i, y_i)}_{i=1}^{N} denote the authentic parallel data, where x_i ∈ 𝒳, y_i ∈ 𝒴 and N is the number of sentence pairs.", "Let M_x = {x_j}_{j=1}^{M_x} denote the collection of monolingual sentences in the source language, where x_j ∈ 𝒳 and M_x is the size of the set.", "Our objective is to obtain a translation model f : 𝒳 ↦ 𝒴 that can translate sentences from language X to language Y.", "Data Complexity.", "According to Zhou et al. (2019), the complexity of a parallel corpus can be measured by adding up the translation uncertainty of all source sentences.",
"According to Zhou et al. (2019), the complexity of a parallel corpus can be measured by adding up the translation uncertainty of all source sentences.", "Formally, the translation uncertainty of a source sentence x with its translation candidates can be operationalized as conditional entropy: $H(\mathcal{Y} \mid X = x) = -\sum_{y \in \mathcal{Y}} p(y \mid x) \log p(y \mid x)$ (1) $\approx \sum_{t=1}^{T_x} H(y \mid x = x_t)$, (2) where $T_x$ denotes the length of the source sentence, and $x$ and $y$ represent a word in the source and target vocabularies, respectively.", "Generally, a high $H(\mathcal{Y} \mid X = x)$ denotes that a source sentence x would have more possible translation candidates.", "Equation (2) estimates the translation uncertainty of a source sentence with all possible translation candidates in the parallel corpus.", "It cannot be directly applied to the sentences in monolingual data due to the lack of corresponding translation candidates.", "One potential solution to the problem is utilizing a trained model to generate multiple translation candidates.", "However, generation may lead to biased estimation due to the generation diversity issue (Li et al., 2016; Shu et al., 2019).", "More importantly, generation is extremely time-consuming for large-scale monolingual data.", "Monolingual Uncertainty.", "To address the problem, we modify Equation (2) to reflect the uncertainty of monolingual sentences.", "We estimate the target word distribution conditioned on each source word based on the authentic parallel corpus, and then use the distribution to measure the translation uncertainty of a monolingual example.", "Specifically, we measure the uncertainty of monolingual sentences based on the bilingual dictionary.", "For a given monolingual sentence $x_j \in \mathcal{M}_x$, its uncertainty U is calculated as: $\mathrm{U}(x_j \mid \mathcal{A}_b) = \frac{1}{T_x} \sum_{t=1}^{T_x} H(y \mid \mathcal{A}_b, x = x_t)$, (3) which is normalized by $T_x$ to avoid length bias.", "A higher value of U indicates a higher translation uncertainty of the monolingual sentence.", "In Equation (3), the word-level entropy $H(y \mid \mathcal{A}_b, x = x_t)$ captures the translation modalities of each source word by using the bilingual dictionary $\mathcal{A}_b$.", "The bilingual dictionary records all the possible target words for each source word, as well as the translation probabilities.", "It can be built from the word alignments produced by external alignment toolkits on the authentic parallel corpus.", "For example, given a source word $x$ with three word translations $y_1$, $y_2$ and $y_3$ and the translation probabilities $p(y_1 \mid x)$, $p(y_2 \mid x)$ and $p(y_3 \mid x)$, respectively, the word-level entropy can be calculated as follows: $H(y \mid \mathcal{A}_b, x_i) = -\sum_{y_j \in \mathcal{A}_b(x_i)} p(y_j \mid x_i) \log p(y_j \mid x_i)$.", "Data.", "We conducted experiments on two large-scale benchmark translation datasets, i.e., WMT English→German (En→De) and WMT English→Chinese (En→Zh).", "The authentic parallel data for the two tasks consists of about 36.8M and 22.1M sentence pairs, respectively.", "The monolingual data we used is from the newscrawl corpora released by WMT2020.", "We combined the newscrawl data from years 2011 to 2019 for the English monolingual corpus, consisting of about 200M sentences.", "We randomly sampled 40M monolingual sentences for En→De and 20M for En→Zh unless otherwise stated.", "We adopted newstest2018 as the validation set and used newstest2019/2020 as the test sets.", "For each language pair, we applied Byte Pair Encoding (BPE; Sennrich et al., 2016b) with 32K merge operations.",
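To make Equations (1)–(3) concrete, here is a minimal sketch of the monolingual uncertainty computation, assuming the bilingual dictionary has already been extracted into a plain mapping from each source word to its aligned target words and translation probabilities; the names and data layout are hypothetical.

```python
import math
from typing import Dict, List

# src word -> {tgt word: p(y | x)}, built from word alignments
BilingualDict = Dict[str, Dict[str, float]]

def word_entropy(dictionary: BilingualDict, src_word: str) -> float:
    """H(y | A_b, x): entropy of the aligned target-word distribution."""
    translations = dictionary.get(src_word, {})
    return -sum(p * math.log(p) for p in translations.values() if p > 0.0)

def sentence_uncertainty(dictionary: BilingualDict, tokens: List[str]) -> float:
    """U(x | A_b) from Eq. (3): length-normalized sum of word-level entropies."""
    if not tokens:
        return 0.0
    return sum(word_entropy(dictionary, t) for t in tokens) / len(tokens)
```

Note that unseen source words contribute zero entropy in this sketch; how such words are handled in practice is a design choice not pinned down here.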
"Model.", "We chose the state-of-the-art TRANSFORMER (Vaswani et al., 2017) network as our model, which consists of an encoder of 6 layers and a decoder of 6 layers.", "We adopted the open-source toolkit Fairseq (Ott et al., 2019) to implement the model.", "We used the TRANSFORMER-BASE model for the preliminary experiments (§2.3) and the constrained scenario (§3.2) for efficiency.", "For the unconstrained scenario (§3.3), we adopted the TRANSFORMER-BIG model.", "Results on these models with different capacities can also reflect the robustness of our approach.", "For the TRANSFORMER-BASE model, we trained it for 150K steps with 32K (4096×8) tokens per batch.", "For the TRANSFORMER-BIG model, we trained it for 30K steps with 460K (3600×128) tokens per batch with the cosine learning rate schedule (Wu et al., 2019a).", "We used 16 Nvidia V100 GPUs to conduct the experiments and selected the final model by the best perplexity on the validation set.", "Evaluation.", "We evaluated the models by BLEU score (Papineni et al., 2002) computed by SacreBLEU (Post, 2018).", "[Footnote 2: BLEU+case.mixed+lang.[Task]+numrefs.1+smooth.exp+test.wmt[Year]+tok.[Tok]+version.1.4.14, with Task=en-de/en-zh, Year=19/20, Tok=13a/zh]", "For the En→Zh task, we added the option --tok zh to SacreBLEU.", "We measured the statistical significance of improvement with paired bootstrap resampling (Koehn, 2004) using compare-mt (Neubig et al., 2019).", "[Footnote 3: https://github.com/neulab/compare-mt]", "First of all, we investigated the effect of monolingual data uncertainty on the self-training performance in NMT.", "We conducted the preliminary experiments on the WMT En→De dataset with the TRANSFORMER-BASE model.", "We sampled 8M bilingual sentence pairs from the authentic parallel data and randomly sampled 40M monolingual sentences for the self-training.", "To ensure the quality of the synthetic parallel data, we trained a TRANSFORMER-BIG model for translating the source monolingual data to the target language.", "We generated translations using beam search with beam width 5, and followed Edunov et al. (2018) to filter the generated sentence pairs (see Appendix A.1).", "[Footnote 4: https://github.com/pytorch/fairseq/tree/master/examples/backtranslation]", "After that, we ranked all the 40M monolingual sentences and grouped them into 5 equally-sized bins (i.e., 8M sentences per bin) according to their uncertainty scores.", "At last, we performed self-training with each bin of monolingual data.", "[Figure 2: Relationship between uncertainty of monolingual data and the corresponding NMT performance.]", "We report the translation performance in Figure 2.", "As seen, there is a trend of performance improvement with the increase of monolingual data uncertainty (e.g., bins 1 to 4) until the last bin.", "The last bin consists of sentences with excessively high uncertainty, which may contain erroneous synthetic target sentences.", "Training on these sentences forces the models to over-fit to the incorrect synthetic data, resulting in the confirmation bias issue (Arazo et al., 2020).", "These results corroborate prior studies (Chang et al., 2017; Mukherjee and Awadallah, 2020) in that learning on certain examples brings little gain, while learning on excessively uncertain examples may even hurt model training.",
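The ranking-and-binning step used in the preliminary study above can be sketched as follows (names hypothetical): sort the monolingual sentences by their precomputed uncertainty and split the ordering into five equally-sized bins, from most certain (bin 1) to most uncertain (bin 5).

```python
from typing import List, Sequence

def rank_into_bins(
    sentences: Sequence[str],
    uncertainties: Sequence[float],
    num_bins: int = 5,
) -> List[List[str]]:
    """Rank sentences by uncertainty and split into equally-sized bins."""
    order = sorted(range(len(sentences)), key=lambda i: uncertainties[i])
    bin_size = len(order) // num_bins
    bins = []
    for b in range(num_bins):
        start = b * bin_size
        # the last bin absorbs any remainder
        end = start + bin_size if b < num_bins - 1 else len(order)
        bins.append([sentences[i] for i in order[start:end]])
    return bins
```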
"Self-training vs. Data Size.", "We first examined the performance of standard self-training and its relationship with data size.", "Figure 1 shows the results.", "Obviously, self-training with 8M synthetic sentence pairs can already improve the NMT performance by a significant margin (36.2 average BLEU on the WMT En→De newstest2019 and newstest2020 sets).", "Increasing the size of the added monolingual data does not bring much more benefit.", "With all the 40M monolingual sentences, the final performance reaches only 36.5 BLEU points.", "This indicates that simply adding more monolingual data is not a promising way to improve self-training, and more sophisticated approaches for exploiting the monolingual data are desired.", "Self-training vs. Uncertainty.", "In this experiment, we first adopted fast-align to establish word alignments between source and target words in the authentic parallel corpus and used the alignments to build the bilingual dictionary $\mathcal{A}_b$.", "[Footnote 5: https://github.com/clab/fast_align]", "Then we used the bilingual dictionary to compute the data uncertainty expressed in Equation (3) for the sentences in the monolingual dataset.", "We further analyzed the differences between the monolingual sentences with varied uncertainty to gain a deeper understanding of the uncertain data.", "Specifically, we performed a linguistic analysis on the five data bins in terms of three properties: 1) sentence length, which counts the tokens in the sentence; 2) word rarity (Platanios et al., 2019), which measures the frequency of words in a sentence, with a higher value indicating a rarer sentence; and 3) translation coverage (Khadivi and Ney, 2005), which measures the ratio of source words aligned with any target words.", "The first two reflect properties of the monolingual sentences, while the last one reflects the quality of the synthetic sentence pairs.", "We also present the results of the synthetic target sentences for reference.", "Details of the linguistic properties are in Appendix A.2.", "The results are reported in Figure 3.", "For the length property, we find that monolingual sentences with higher uncertainty are usually longer, except for those with excessively high uncertainty (e.g., bin 5).", "The monolingual sentences in the last data bin noticeably contain more rare words than the other bins in Figure 3(b), and the rare words in these sentences pose a great challenge in the NMT training process (Gu et al., 2020).", "In Figure 3(c), the overall coverage in bin 5 is the lowest among the self-training bins.", "In contrast, bin 1 with the lowest uncertainty has the highest coverage.", "[Figure 3: Linguistic properties of the source and synthetic target sentences across the five data bins: (a) sentence length, (b) word rarity, (c) translation coverage.]", "These observations suggest that monolingual sentences in bin 1 indeed contain the easiest patterns, while monolingual sentences in bin 5 are the most difficult ones, which may explain their relatively weak performance in Figure 2.",
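For reference, the two sentence-level properties and the coverage measure can be sketched as below. The exact formulations live in the paper's Appendix A.2, so treat this as one plausible reading; the unigram probability table and the alignment set are assumed inputs.

```python
import math
from typing import Dict, List, Set, Tuple

def word_rarity(tokens: List[str], unigram_prob: Dict[str, float]) -> float:
    """Average negative log-probability of a sentence's words; a higher
    value indicates a rarer sentence (after Platanios et al., 2019)."""
    if not tokens:
        return 0.0
    floor = 1e-9  # probability floor for unseen words (an assumption)
    return -sum(math.log(unigram_prob.get(t, floor)) for t in tokens) / len(tokens)

def translation_coverage(num_src_tokens: int,
                         alignment: Set[Tuple[int, int]]) -> float:
    """Fraction of source positions aligned to at least one target word
    (after Khadivi and Ney, 2005)."""
    if num_src_tokens == 0:
        return 0.0
    aligned_src = {i for i, _ in alignment}
    return len(aligned_src) / num_src_tokens
```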
"3 Exploiting Monolingual Uncertainty.", "By analyzing the effect of monolingual data uncertainty on self-training in Section 2, we understood that monolingual sentences with relatively high uncertainty are more informative while also of high quality, which motivates us to emphasize the training on these sentences.", "In this section, we introduce the uncertainty-based sampling strategy for self-training and the overall framework.", "With the aforementioned measure of monolingual data uncertainty in Section 2.1, we propose the uncertainty-based sampling strategy for self-training, which prefers to sample monolingual sentences with relatively high uncertainty.", "To ensure data diversity and avoid the risk of being dominated by excessively uncertain sentences, we sample monolingual sentences according to the uncertainty distribution with the highest uncertainty penalized.", "Specifically, given a budget of $N_s$ sentences to sample, we set two hyper-parameters to control the sampling probability as follows: $p = \frac{[\alpha \cdot \mathrm{U}(x_j \mid \mathcal{A}_b)]^{\gamma}}{\sum_{x_j \in \mathcal{M}_x} [\alpha \cdot \mathrm{U}(x_j \mid \mathcal{A}_b)]^{\gamma}}$ (5), with $\alpha = 1$ if $\mathrm{U}(x_j \mid \mathcal{A}_b) \le \mathrm{U}_{\max}$, and $\alpha = \max\big(\frac{2\mathrm{U}_{\max}}{\mathrm{U}(x_j \mid \mathcal{A}_b)} - 1, 0\big)$ otherwise (6), where $\alpha$ is used to penalize excessively high uncertainty over a maximum uncertainty threshold $\mathrm{U}_{\max}$ (see Figure 4(a)), and the power rate $\gamma$ is used to adjust the distribution such that a larger $\gamma$ gives more probability mass to the sentences with high uncertainty (see Figure 4(b)).", "The maximum uncertainty threshold $\mathrm{U}_{\max}$ is set to the uncertainty value such that R% of the sentences in the authentic parallel corpus have monolingual data uncertainty below it.", "R is assumed to be as high as 80 to 100, because monolingual data with uncertainty higher than this threshold may not be translated correctly by the teacher model, as there are too few such sentences in the authentic parallel data for the model to learn from.", "As a result, monolingual sentences with uncertainty higher than $\mathrm{U}_{\max}$ should be penalized in terms of the sampling probability.", "Overall Framework.", "[Figure 5: Framework of the proposed uncertainty-based sampling strategy for self-training.]", "Figure 5 presents the framework of our uncertainty-based sampling for self-training, which includes four steps: 1) train a teacher NMT model and an alignment model on the authentic parallel data simultaneously; 2) extract the bilingual dictionary from the alignment model and perform uncertainty-based sampling for monolingual sentences; 3) use the teacher NMT model to translate the sampled monolingual sentences to construct the synthetic parallel data; 4) train a student NMT model on the combination of the synthetic and authentic parallel data.",
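A minimal sketch of the sampling strategy in Equations (5)–(6): the penalty α equals 1 up to the threshold U_max, then decays and zeroes out sentences whose uncertainty exceeds 2·U_max, while the power rate γ sharpens the distribution. For brevity this sketch draws with replacement via the standard library, which is a simplification of budgeted sampling.

```python
import random
from typing import List, Sequence

def sampling_probabilities(
    uncertainties: Sequence[float],
    u_max: float,
    gamma: float = 2.0,
) -> List[float]:
    """Eq. (5)-(6): weight each sentence by [alpha * U]^gamma, normalize."""
    weights = []
    for u in uncertainties:
        # alpha = 1 below u_max; decays to 0 at u = 2 * u_max (Eq. 6)
        alpha = 1.0 if u <= u_max else max(2.0 * u_max / u - 1.0, 0.0)
        weights.append((alpha * u) ** gamma)
    total = sum(weights)
    if total == 0.0:  # degenerate case: fall back to uniform probabilities
        return [1.0 / len(weights)] * len(weights)
    return [w / total for w in weights]

def uncertainty_sample(sentences: Sequence[str], probs: Sequence[float],
                       budget: int, seed: int = 0) -> List[str]:
    """Draw a budget of sentences according to the probabilities
    (with replacement here, for brevity)."""
    rng = random.Random(seed)
    return rng.choices(list(sentences), weights=list(probs), k=budget)
```

Setting `u_max` from the R-th percentile of uncertainties on the authentic parallel data, as described above, keeps the two hyper-parameters (R and γ) interpretable.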
"We first validated the proposed sampling approach in a constrained scenario, where we followed the experimental configuration in Section 2.3 with the TRANSFORMER-BASE model, the 8M bitext, and the 40M monolingual data.", "This allows efficient evaluation of our approach with varied combinations of hyper-parameters, as well as comparison with related methods.", "Specifically, we performed our approach by sampling 8M sentences from the 40M monolingual data and then combining the corresponding 8M synthetic data with the 8M bitext to train the TRANSFORMER-BASE model.", "Table 1 reports the impact of $\gamma$ and R on the BLEU score.", "As shown, sampling high-uncertainty sentences while penalizing those with excessively high uncertainty improves the translation performance from 36.6 to 36.9.", "In these experiments, the uncertainty thresholds $\mathrm{U}_{\max}$ used for penalizing are 2.90 and 2.74, which are determined by the 90% and 80% (R=90 and 80 in Table 1) most certain sentences in the authentic parallel data, respectively.", "Obviously, the proposed uncertainty-based sampling strategy achieves the best performance with R at 90 and $\gamma$ at 2.", "In the following experiments, we use R = 90 and $\gamma$ = 2 as the default setting for our sampling strategy if not otherwise stated.", "Effect of Sampling.", "One may suspect that the final translation quality is affected by the quality of the teacher model: translations of high-uncertainty sentences could contain many errors, so it is better to add results with oracle translations in order to discuss the sampling effect and the quality of the pseudo-sentences separately.", "To dispel this doubt, we still used the aforementioned 8M bitext as the bilingual data, and used the rest of the WMT19 En-De data (28.8M) as the held-out data (with oracle translations) for sampling.", "The results are listed in Table 2.", "Clearly, our uncertainty-based sampling strategy (UNCSAMP) outperforms the random sampling strategy (RANDSAMP) when manual translations are used (Rows 2 vs. 3), demonstrating the effectiveness of our sampling strategy based on the uncertainty.", "[Table 4: Translation performance on the WMT En→De and En→Zh test sets (columns: newstest2019 / newstest2020 / Avg per language pair). Wu et al. (2019b): BITEXT 37.3, +RANDSAMP 39.8 (En→De 2019 only); Shi et al. (2020): BITEXT 38.6, +RANDSAMP 41.9 (En→De 2019 only); This work: BITEXT 39.6/31.0/35.3 (En→De) and 37.1/42.5/39.8 (En→Zh); +RANDSAMP 41.6/33.1/37.3 and 37.6/43.8/40.7; +SRCLM 41.7/33.1/37.4 and 37.3/44.0/40.7; +UNCSAMP 42.5/34.4/38.4 and 38.2/44.3/41.3.]",
"Another interesting finding is that using the pseudo-sentences outperforms using the manual translations (Rows 4 vs. 2, 5 vs. 3).", "One possible reason is that the TRANSFORMER-BIG model used to construct the pseudo-sentences was trained on the whole WMT19 En-De data, which contains the held-out data; this serves as self-training and decently improves the supervised baseline (He et al., 2019).", "Comparison with Related Work.", "We compared our sampling approach with two related works, i.e., difficult words by frequency (DWF; Fadaee and Monz, 2018) and source language model (SRCLM; Lewis, 2010).", "The former was proposed for monolingual data selection for back-translation, in which sentences with low-frequency words are selected to boost the performance of back-translation.", "The latter was proposed for in-domain data selection for in-domain language models.", "Details of the implementation of the related work are in Appendix A.3.", "The results suggest that the data selection technique developed for back-translation (DWF) may not work for self-training.", "As for SRCLM, it achieves only a marginal improvement over RANDSAMP.", "The proposed UNCSAMP approach outperforms the baseline RANDSAMP by +0.7 BLEU point, which demonstrates the effectiveness of our approach.", "In addition to our UNCSAMP approach, we also utilized an N-gram language model at the target side to further filter out synthetic data with potentially erroneous target sentences.", "By filtering out 20% of the sentences from the sampled 8M sentences, our UNCSAMP approach achieves a further improvement of up to +0.9 BLEU point.", "We extended our sampling approach to the unconstrained scenario, where the scale of the data and the capacity of the NMT models for self-training are increased significantly.", "We conducted experiments on the high-resource En→De and En→Zh translation tasks with all the authentic parallel data, including 36.8M sentence pairs for En→De and 22.1M for En→Zh, respectively.", "For monolingual data, we considered all the 200M English newscrawl sentences to perform sampling.", "We trained the TRANSFORMER-BIG model for these experiments.", "Table 4 lists the main results of large-scale self-training on the high-resource language pairs.", "As shown, our TRANSFORMER-BIG models trained on the authentic parallel data achieve performance competitive with or even better than the submissions to the WMT competitions.", "Based on such strong baselines, self-training with RANDSAMP improves the performance by +2.0 and +0.9 BLEU points on the En→De and En→Zh tasks respectively, demonstrating the effectiveness of large-scale self-training for NMT models.", "With our uncertainty-based sampling strategy UNCSAMP, self-training achieves a further significant improvement of +1.1 and +0.6 BLEU points over the random sampling strategy, which demonstrates the effectiveness of exploiting uncertain monolingual sentences.", "In this section, we conduct analyses to understand how the proposed uncertainty-based sampling approach improves the translation performance.", "Concretely, we analyzed the translation outputs of the WMT En→De newstest2019 set from the TRANSFORMER-BIG model in Table 4.", "Uncertain Sentences.", "As we propose to emphasize high-uncertainty sentences in self-training, one remaining question is whether our UNCSAMP approach improves the translation quality of high-uncertainty sentences.", "Specifically, we ranked the source sentences in newstest2019 by monolingual uncertainty, and divided them into three equally sized groups, namely Low, Medium and High uncertainty.", "The translation performance on these three groups is reported in Table 5.",
"The first observation is that sentences with high uncertainty have relatively low BLEU scores (i.e., 31.0), indicating that it is more difficult for NMT models to correctly decode source sentences with higher uncertainty.", "Our UNCSAMP approach improves the translation performance on all sentences, especially on the sentences with high uncertainty (+10.9%), which confirms our motivation of emphasizing the learning on uncertain sentences for self-training.", "[Table 5: Translation performance on uncertain sentences (BLEU; Δ(%) is the relative gain of UNCSAMP over BITEXT). Low: BITEXT 38.1, RANDSAMP 39.7, UNCSAMP 41.5, Δ 8.9; Med: 34.2, 36.7, 37.4, Δ 9.3; High: 31.0, 33.4, 34.4, Δ 10.9.]", "Low-Frequency Words.", "Partially motivated by Fadaee and Monz (2018), we hypothesized that the addition of monolingual data in self-training has the potential to improve the prediction of low-frequency words at the target side for the NMT models.", "Therefore, we investigated whether our approach gives a further boost to the prediction of low-frequency words.", "We calculated the word accuracy of the translation outputs with respect to the reference in newstest2019 using compare-mt.", "Following Wang et al. (2020), we divided words into three categories based on their frequency: High, the 3,000 most frequent words; Medium, the 3,001st to 12,000th most frequent words; and Low, the remaining words.", "Table 6 lists the word accuracy on these three groups, evaluated by F-measure.", "First, we observe that low-frequency words in BITEXT are more difficult to predict than medium- and high-frequency words (i.e., 52.3 vs. 65.2 and 70.3), which is consistent with Fadaee and Monz (2018).", "Second, adding monolingual data by self-training improves the prediction performance on low-frequency words.", "Our UNCSAMP approach outperforms RANDSAMP significantly on the low-frequency words.", "These results suggest that emphasizing the learning on uncertain monolingual sentences also brings additional benefits for the learning of low-frequency words at the target side.", "Data augmentation by synthetic parallel data has been the simplest and most effective way to utilize monolingual data for NMT, which can be achieved by self-training (He et al., 2019) and back-translation (Sennrich et al., 2016a).", "While back-translation has dominated the NMT area for years (Fadaee and Monz, 2018; Edunov et al., 2018; Caswell et al., 2019), recent works on translationese (Marie et al., 2020; Graham et al., 2019) suggest that NMT models trained with back-translation may lead to distortions in automatic and human evaluation.", "To address the problem, starting from WMT2019 (Barrault et al., 2019), the test sets only include naturally occurring text at the source side, which is a more realistic scenario for practical translation usage.", "In this new testing setup, forward-translation (Zhang and Zong, 2016), i.e., self-training in NMT, becomes a more promising method, as it also introduces naturally occurring text at the source side.", "Therefore, we focus on the data sampling strategy in the self-training scenario, which is different from these prior studies.", "Data Uncertainty in NMT.", "Data uncertainty in NMT has been investigated in the last few years.", "Ott et al. (2018) analyzed NMT models with data uncertainty by observing the effect of data uncertainty on model fitting and beam search.",
"Wang et al. (2019) and Zhou et al. (2020) computed the data uncertainty on the back-translation data and the authentic parallel data, respectively, and proposed uncertainty-aware training strategies to improve model performance.", "Wei et al. (2020) proposed an uncertainty-aware semantic augmentation method to bridge the discrepancy of the data distribution between the training and inference phases.", "In this work, we propose to explore monolingual data uncertainty to perform data sampling for self-training in NMT.", "In this work, we demonstrate the necessity of distinguishing monolingual sentences for self-training in NMT, and propose an uncertainty-based sampling strategy to sample monolingual data.", "By sampling monolingual data with relatively high uncertainty, our method outperforms random sampling significantly on the large-scale WMT English→German and English→Chinese datasets.", "Further analyses demonstrate that our uncertainty-based sampling approach does improve the translation quality of high-uncertainty sentences and also benefits the prediction of low-frequency words at the target side.", "The proposed technology has been applied to TranSmart (Huang et al., 2021), an interactive machine translation system at Tencent, to improve the performance of its core translation engine.", "Future work includes investigating the confirmation bias issue of self-training and the effect of decoding strategies on self-training sampling.", "This work is partially supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 2410021, Research Impact Fund (RIF), R5034-18; CUHK 14210717, General Research Fund), and the Tencent AI Lab RhinoBird Focused Research Program (GF202036).", "We sincerely thank the anonymous reviewers for their insightful suggestions on various aspects of this work." ]
[ "abstain", "result", "objective", "method", "abstain", "method", "abstain", "objective", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "objective", "abstain", "result", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "abstain", "abstain", "other", "other" ]
[ "As NLP models become larger, executing a trained model requires significant computational resources incurring monetary and environmental costs.", "To better respect a given inference budget, we propose a modification to contextual representation fine-tuning which, during inference, allows for an early (and fast) exit from neural network calculations for simple instances, and late (and accurate) exit for hard instances.", "To achieve this, we add classifiers to different layers of BERT and use their calibrated confidence scores to make early exit decisions.", "We test our proposed modification on five different datasets in two tasks: three text classification datasets and two natural language inference benchmarks.", "Our method presents a favorable speed/accuracy tradeoff in almost all cases, producing models which are up to five times faster than the state of the art, while preserving their accuracy.", "Our method also requires almost no additional training resources (in either time or parameters) compared to the baseline BERT model.", "Finally, our method alleviates the need for costly retraining of multiple models at different levels of efficiency; we allow users to control the inference speed/accuracy tradeoff using a single trained model, by setting a single variable at inference time.", "We publicly release our code.", "1 1 Introduction The large increase in the size of artificial intelligence models often increases production costs (Amodei and Hernandez, 2018; Schwartz et al., 2019), and can also limit adoption on real-time devices.", "Compared to training , which is a one-time large investment, inference costs are incurred for every instance in production, and can thus add up Research completed during an internship at AI2.", "significantly.", "For instance, Microsoft reports that using BERT (Devlin et al., 2019) to process Bing queries requires more than 2,000 GPUs concurrently.", "2 We present a method to reduce the inference cost of today's common models in NLP: fine-tuned contextual word representations.", "Our method exploits variation along two axes: models differ in size and cost, and instances vary in difficulty.", "Our method assesses the complexity of each test instance and matches it with the most efficient model in our toolbelt. 
"As a result, some instances, which we refer to in this paper as easy or simple, can be solved by small models, leading to computational savings, while other instances (termed hard or difficult) have access to larger models, thus preserving accuracy.", "We apply our method to the BERT-large model, modifying its fine-tuning procedure by adding multiple output layers to some of its original ℓ = 24 layers.", "[Footnote 4: For simplicity, we refer to these output layers as classifiers, though our method can also be applied to non-classification tasks.]", "A classifier at the k-th layer is more efficient, though (presumably) less accurate, than a classifier at a later ℓ-th layer (where ℓ > k).", "At inference time, we run each instance on these classifiers in increasing order of depth.", "For each classification decision, we use its confidence as an inference-stopping criterion, continuing to the next, larger classifier only if the current classifier is not confident enough in its prediction.", "Since confidence scores play an important role, we use calibration techniques to make them more reliable.", "Associating classifiers with different layers of the same network allows us to reuse the computation performed by the simple classifiers for the complex ones.", "See Figure 1 for an illustration.", "We experiment with three text classification benchmarks and two natural language inference (NLI) benchmarks.", "We consider each of our classifiers with different BERT layers as individual baselines.", "We find that using our method leads to a consistently better speed/accuracy tradeoff in almost all cases.", "In particular, in some cases, we obtain similar performance while being as much as five times faster than our strongest baseline (the original BERT-large model with a single classification layer after the last layer).", "Our approach, while allowing substantially faster inference compared to the standard BERT-large model, is neither slower to fine-tune nor significantly larger in terms of parameters, requiring less than 0.005% additional parameters.", "Moreover, our method is quite flexible: unlike other approaches for inference speed-up such as model distillation or pruning, which require training a different model for each point along the speed/accuracy curve, our method only requires training a single model and, by setting a single variable (the confidence threshold) at inference time, supports each point along that curve.", "Finally, our method is orthogonal to compression methods such as model distillation (Hinton et al., 2014).", "Our experiments with a distilled version of BERT (Jiao et al., 2019) show that our method further improves the speed/accuracy curve on top of that model.", "We publicly release our code.", "[Footnote 5: github.com/allenai/sledgehammer]", "Our goal in this paper is to make model inference more efficient.", "Our premise relies on two general observations: first, as NLP models become bigger (e.g., in number of parameters), they become both better (in terms of downstream task accuracy) and slower to run.", "This trend is consistently observed, most notably in recent contextual representations work that compares different variants of the same model (Devlin et al., 2019; Radford et al., 2019; Raffel et al., 2019, inter alia).", "Second, inputs are not equally difficult.",
"For example, instances differ in length and wealth of linguistic phenomena, which affects the amount of processing required to analyze them.", "Consider the examples below for the task of sentiment analysis: (1) The movie was awesome. (2) I can't help but wonder whether the plot was written by a 12 year-old or by an award-winning writer.", "Sentence 1 is short and simple to process.", "In contrast, Sentence 2 is long, contains misleading positive phrases (award-winning writer), and uses figurative speech (the plot was written by a 12 year-old).", "As a result, it is potentially harder to process.", "[Footnote 6: Note that simplicity is task-dependent. For example, in topic classification, models often accumulate signal across a document, and shorter inputs (with less signal) may be more difficult than longer ones. See Section 6.]", "This work leverages these two observations by introducing a method to speed up inference by matching simple instances with small models, and complex instances with large models.", "Motivation.", "We assume a series of n trained models m_1, …, m_n for a given task, such that for each 1 < i ≤ n, m_i is both more accurate than m_{i-1} (as measured by performance on validation data) and more expensive to execute.", "Current practice in NLP, which favors accuracy rather than efficiency (Schwartz et al., 2019), would typically run m_n on each test instance, as it would likely lead to the highest test score.", "However, many of the test instances could be solved by simpler (and faster) models; if we had an oracle that identifies the smallest model that solves a given instance, we could use it to substantially speed up inference.", "Our goal is to create an automatic measure which approximates the behavior of such an oracle, and identify the cheapest accurate model for each instance.", "BERT-large.", "To demonstrate our approach, we consider the BERT-large model (Devlin et al., 2019), based on a transformer architecture (Vaswani et al., 2017) with 24 layers.", "To apply BERT-large to some downstream task, an output layer is typically added to the final layer of the model, and the model is fine-tuned on training data for that task.", "To make a prediction using the classifier on the final layer, the computation goes through all the layers sequentially, requiring more computation than a shallower model with fewer layers, which would suffice in some cases.", "Suite of models.", "Our approach leverages BERT's multilayered structure by adding an output layer to intermediate layers of the model.", "For k < ℓ, the output layer after k BERT layers exits the model earlier than a deeper output layer ℓ, and therefore yields a more efficient (but potentially less accurate) prediction.", "Confidence scores for early exit decisions.", "To make early exit decisions, we calculate the layer-wise BERT representations sequentially.", "As we reach a classification layer, we use it to make predictions.", "We interpret the label scores output by the softmax as confidence scores.", "We use these confidence scores to decide whether to exit early or continue to the next (more expensive and more accurate) classifier.", "See Figure 1 for an illustration.",
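The inference-stopping criterion described above fits in a short, framework-agnostic loop. In this hypothetical sketch, `encode_layers` stands in for the fine-tuned BERT blocks and `classifier_at` for the added output layers; neither name comes from the paper's code.

```python
from typing import Callable, Dict, List, Sequence, Tuple

def early_exit_predict(
    hidden,                                # instance representation (any type)
    encode_layers: Sequence[Callable],     # the model's blocks, in order
    classifier_at: Dict[int, Callable],    # layer index -> classifier head
    threshold: float,
) -> Tuple[int, int]:
    """Run layers sequentially; exit at the first classifier whose top
    (calibrated) probability reaches the threshold. Computation done so far
    is reused when continuing. Assumes the deepest layer has a classifier."""
    best, exit_layer = None, None
    for k, layer in enumerate(encode_layers):
        hidden = layer(hidden)
        head = classifier_at.get(k)
        if head is None:
            continue
        probs: List[float] = head(hidden)  # label distribution
        best = max(range(len(probs)), key=probs.__getitem__)
        exit_layer = k
        if probs[best] >= threshold:
            break                          # early (and fast) exit
    return best, exit_layer
```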
"Training details.", "To train the model, we use the standard way of applying BERT to downstream tasks: fine-tuning the pre-trained weights while learning the weights of the randomly initialized classifiers, where here we learn multiple classifiers instead of one.", "As our loss function, we sum the losses of all classification layers, such that lower layers are trained both to be useful as feature generators for the higher layers and as input to their respective classifiers.", "This also means that every output layer is trained to perform well on all instances.", "Importantly, we do not perform early exits during training, but only during inference.", "Calibration.", "Classifiers' confidence scores are not always reliable (Jiang et al., 2018).", "One way to mitigate this concern is to use calibration, which encourages the confidence level to correspond to the probability that the model is correct (DeGroot and Fienberg, 1983).", "In this paper we use temperature calibration, which is a simple technique that has been shown to work well in practice (Guo et al., 2017), in particular for BERT fine-tuning (Desai and Durrett, 2020).", "The method learns a single parameter, denoted temperature or T, and divides each of the logits {z_i} by T before applying the softmax function: $\mathrm{pred} = \arg\max_i \frac{\exp(z_i / T)}{\sum_j \exp(z_j / T)}$.", "We select T to maximize the log-likelihood of the development dataset.", "Note that temperature calibration is monotonic and thus does not influence predictions.", "It is only used in our model to make early-exit decisions.", "Discussion.", "Our approach has several attractive properties.", "First, if m_i is not sufficiently confident in its prediction, we reuse the computation and continue towards m_{i+1} without recomputing the BERT layers up to m_i.", "Second, while our model is larger in terms of parameters compared to the standard approach due to the additional classification layers, this difference is marginal compared to the total number of trainable parameters: our experiments used 4 linear output layers instead of 1, which results in an increase of 6K (binary classification) to 12K (4-way classification) parameters.", "For the BERT-large model with 335M trainable parameters, this is less than 0.005% of the parameters.", "Third, as our experiments show (Section 5), while presenting a much better inference time/accuracy tradeoff, fine-tuning our model is as fast as fine-tuning the standard model with a single output layer.", "Moreover, our model allows for controlling this tradeoff by setting the confidence threshold at inference time, allowing users to better utilize the model for their inference budget.", "[Footnote 7: We also considered feeding the output of previous classifiers as additional features to subsequent classifiers, known as stacking (Wolpert, 1992). Preliminary experiments did not yield any benefits, so we did not further pursue this direction.]",
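As a concrete illustration of the temperature calibration described above, a minimal sketch follows; it shows both the scaled softmax and the development-set objective used to select T. Since dividing logits by a positive T is monotonic, the argmax (and thus the prediction) is unchanged; only the confidence changes.

```python
import math
from typing import List, Sequence

def calibrated_confidence(logits: List[float], temperature: float) -> List[float]:
    """Temperature scaling (Guo et al., 2017): softmax over logits / T."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def dev_log_likelihood(batch_logits: Sequence[List[float]],
                       gold_labels: Sequence[int],
                       temperature: float) -> float:
    """Objective for selecting T: log-likelihood of the development set.
    In practice T can be found with a simple 1-D grid or gradient search."""
    ll = 0.0
    for logits, gold in zip(batch_logits, gold_labels):
        probs = calibrated_confidence(logits, temperature)
        ll += math.log(max(probs[gold], 1e-12))
    return ll
```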
"To test our approach, we experiment with three text classification and two natural language inference (NLI) tasks in English.", "NLI is a pairwise sentence classification task, where the goal is to predict whether a hypothesis sentence entails, contradicts or is neutral to a premise sentence (Dagan et al., 2005).", "Below we describe our datasets, our baselines, and our experimental setup.", "Datasets.", "For text classification, we experiment with the AG news topic identification dataset (Zhang et al., 2015) and two sentiment analysis datasets: IMDB (Maas et al., 2011) and the binary Stanford sentiment treebank (SST; Socher et al., 2013).", "[Footnote 8: For SST, we only used full sentences, not phrases.]", "For NLI, we experiment with the SNLI (Bowman et al., 2015) and MultiNLI (MNLI; Williams et al., 2018) datasets.", "We use the standard train-development-test splits for all datasets except for MNLI, for which there is no public test set.", "As MNLI contains two validation sets (matched and mismatched), we use the matched validation set as our validation set and the mismatched validation set as our test set.", "See Table 1 for dataset statistics.", "Baselines.", "We use two types of baselines: running BERT-large in the standard way, with a single output layer on top of the last layer, and three efficient baselines of increasing size (Figure 2).", "Each is a fine-tuned BERT model with a single output layer after some intermediate layer.", "Importantly, these baselines offer a speed/accuracy tradeoff, but not within a single model like our approach.", "As all baselines have a single output layer, they all have a single loss term, such that BERT layers 1, …, k only focus on a single classification layer, rather than multiple ones as in our approach.", "As with our model, the single output layer in each of our baselines is given as input a learned weighted sum of all BERT layers up to the current layer.", "As an upper bound to our approach, we consider a variant of our model that uses the exact amount of computation required to solve a given instance.", "It does so by replacing the confidence-based early-exit decision function with an oracle that returns the fastest classifier that is able to solve that instance, or the fastest classifier for instances that are not correctly solved by any of the classifiers.", "Experimental setup.", "We experiment with BERT-large-uncased (24 layers).", "We add output layers at four layers: 0, 4, 12 and 23.", "[Footnote 9: Preliminary experiments with other configurations, including ones with more layers, led to similar results.]", "We use the first three layer indices for our efficient baselines (the last one corresponds to our standard baseline).", "See Appendix A for implementation details.", "For training, we use the largest batch size that fits in our GPU memory for each dataset, for both our baselines and our model.", "Our approach relies on discrete early-exit decisions that might differ between instances in a batch.", "For the sake of simplicity, we use a batch size of 1 during inference.", "This is useful for production setups where instances arrive one by one.", "Larger batch sizes can be applied using methods such as budgeted batch classification (Huang et al., 2018), which specify a budget for the batch and select a subset of the instances to fit that budget, while performing early exit for the rest of the instances.", "We defer the technical implementation of this idea to future work.", "To measure efficiency, we compute the average runtime of a single instance across the test set.", "We repeat each validation and test experiment five times and report the mean and standard deviation.", "At prediction time, our method takes as input a threshold between 0 and 1, which is applied to each confidence score to decide whether to exit early.", "Lower thresholds result in earlier exits, with 0 implying that the most efficient classifier is always used.", "A threshold of 1 always uses the most expensive and accurate classifier.", "A better speed/accuracy tradeoff.", "Figure 3 presents our test results.", "[Footnote 10: For increased reproducibility (Dodge et al., 2019a), we also report validation results in Appendix B.]", "The blue line shows our model, where each point corresponds to an increasingly large confidence threshold.", "The leftmost (rightmost) point is threshold 0 (1), with the x-value showing the fraction of processing time relative to the standard baseline.",
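Since each point on the reported curves comes from re-running inference with a different threshold on the same trained model, tracing the curve is a simple sweep. A hypothetical sketch (all names are assumptions), reusing an early-exit predictor like the one sketched earlier:

```python
from typing import Callable, List, Sequence, Tuple

def speed_accuracy_curve(
    instances: Sequence,
    gold: Sequence[int],
    predict_with_exit: Callable[[object, float], Tuple[int, int]],
    thresholds: Sequence[float],
    layer_cost: Callable[[int], float],   # relative cost of exiting at a layer
) -> List[Tuple[float, float, float]]:
    """Sweep confidence thresholds on one trained model; return tuples of
    (threshold, average relative cost, accuracy)."""
    curve = []
    for tau in thresholds:
        correct, cost = 0, 0.0
        for x, y in zip(instances, gold):
            label, exit_layer = predict_with_exit(x, tau)
            correct += int(label == y)
            cost += layer_cost(exit_layer)
        curve.append((tau, cost / len(gold), correct / len(gold)))
    return curve
```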
Prediction Layer 0 Layer i Layer k Layer n Input Layer l Layer j", "(rightmost) point is threshold 0 (1), with x -value showing the fraction of processing time relative to the standard baseline.", "Our first observation is that our efficient baselines constitute a fast alternative to the standard BERT-large model.", "On AG, a classifier trained on layer 12 of BERT-large is 40% faster and within 0.5% of the standard model.", "On SNLI and IMDB a similar speedup results in 2% loss in performance.", "Most notably, our approach presents a similar or better tradeoff in almost all cases.", "Our model is within 0.5% of the standard model while being 40% (IMDB) and 80% (AG) faster.", "For SST, our curve is strictly above two of the efficient baselines, while being below the standard one.", "In the two NLI datasets, our curve is slightly above the curve for the medium budgets, and below it for lower ones.", "Finally, the results of the oracle baseline indicate the further potential of our approach: in all cases, the oracle outperforms the original baseline by 1.8% (AG) to 6.9% (MNLI), while being 46 times faster.", "These results motivate further exploration of better early-exit criteria (see Section 6).", "They also highlight the diversity of the different classifiers.", "One might expect that the set of correct predictions by the smaller classifiers will be contained in the corresponding sets of the larger classifiers.", "The large differences between the original baseline and our oracle indicate that this is not the case, and motivate future research on efficient ensemble methods which reuse much of the computation across different models.", "Extreme case analysis Our results hint that combining the loss terms of each of our classifiers hurts their performance compared to our baselines, which use a single loss term.", "For the leftmost point in our graphsalways selecting the most efficient classifierwe observe a substantial drop in performance compared to the corresponding most efficient baseline, especially for the NLI datasets.", "For our rightmost point (always selecting the most accurate classifier), we observe a smaller drop, mostly in SST and MNLI, compared to the corresponding baseline, but also slower runtime, probably due to the overhead of running the earlier classifiers.", "These trends further highlight the potential of our method, which is able to outperform the baseline speed-accuracy curves despite the weaker starting point.", "It also suggests ways to further improve our method by studying more sophisticated methods to combine the loss functions of our classifiers, and encourage them to be as precise as our baselines.", "We defer this to future work.", "Similar training time Fine-tuning BERT-large with our approach has a similar cost to fine-tuning the standard BERT-large model, with a single output layer.", "Table 2 shows the fine-tuning time of our model and the standard BERT-large baseline.", "Our model is not slower to fine-tune in four out of five cases, and is even slightly faster in three of them.", "11 This property makes our approach appealing compared to other approaches for reducing runtime such as pruning or model distillation (Section 7).", "These require, in addition to training the full model, also training another model for each point along the speed/accuracy curve, therefore substantially increasing the overall training time required to gen-11 We note that computing the calibration temperature requires additional time, which ranges between 3 minutes (SST) to 24 
"In contrast, our single model allows for full control over this tradeoff by adjusting the confidence threshold, without increasing the training time compared to the standard, most accurate model.", "Combination with model distillation.", "A key property of our approach is that it can be applied to any multi-layer model.", "Particularly, it can be combined with other methods for making models more efficient, such as model distillation.", "To demonstrate this, we repeat our IMDB experiments with tinyBERT (Jiao et al., 2019), which is a distilled version of BERT-base.", "[Footnote 12: While we experimented with BERT-large and not BERT-base, the point of this experiment is to illustrate the potential of our method to be combined with distillation, and not to directly compare to our main results.]", "We experiment with the tinyBERT v2 6-layer-768dim version.", "[Footnote 13: Jiao et al. (2019) also suggested a task-specific version of tinyBERT which distills the model based on the downstream task. For consistency with our BERT-large experiments, we use the general version.]", "Figure 4 shows our IMDB results.", "Much like for BERT-large, our method works well for tinyBERT, providing a better speed/accuracy tradeoff compared to both the standard tinyBERT baseline and the efficient tinyBERT baselines.", "Second, while tinyBERT is a distilled version of BERT-base, its speed/accuracy tradeoff is remarkably similar to that of our BERT-large efficient baselines, which hints that our efficient baselines are a simpler alternative to tinyBERT, and as effective for model compression.", "Finally, our method applied to BERT-large provides the best overall speed/accuracy tradeoff, especially with higher budgets.", "Our approach is motivated by the inherent variance in the level of complexity of text instances, and leverages this variance to obtain a better speed/accuracy tradeoff compared to our baselines.", "Our method also automatically identifies instances on which smaller models are highly confident in their predictions.", "Here we analyze our data using other definitions of difficulty.", "Perhaps surprisingly, we find that the various definitions are not strongly correlated with ours.", "The results we observe below, combined with the performance of our oracle baseline (Section 5), motivate further study on more advanced methods for early exiting, which could potentially yield even larger computational gains.", "Shorter is easier?", "We first consider the length of instances: is our model more confident in its decisions on short documents compared to longer ones?", "To address this, we compute Spearman's correlation between the confidence level of our most efficient classifier and the document's length.", "The results in Table 3 show that the correlations across all datasets are generally low (|ρ| < 0.2).",
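The correlation itself is a one-liner with SciPy; a small sketch for completeness, assuming the per-instance confidences and lengths have already been collected:

```python
from scipy.stats import spearmanr

def confidence_length_correlation(confidences, lengths):
    """Spearman correlation between the most efficient classifier's
    confidence and document length, as in the analysis above."""
    rho, p_value = spearmanr(confidences, lengths)
    return rho, p_value
```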
"Moreover, as expected, across four out of five datasets, the (weak) correlation between confidence and length is negative; our model is somewhat more confident in its predictions on shorter documents.", "The fifth dataset (AG) shows the opposite trend: confidence is positively correlated with length.", "This discrepancy might be explained by the nature of the tasks we consider.", "For instance, IMDB and SST are sentiment analysis datasets, where longer texts might include conflicting evidence and thus be harder to classify.", "In contrast, AG is a news topic detection dataset, where a conflict between topics is uncommon, and longer documents provide more opportunities to find the topic.", "Consistency and difficulty.", "Our next criterion for difficulty is the consistency of model predictions.", "Toneva et al. (2019) proposed a notion of unforgettable training instances: once the model has predicted such an instance correctly, it never predicts it incorrectly for the remainder of training.", "Such instances can be thought of as easy or memorable examples.", "Similarly, Sakaguchi et al. (2019) defined test instances as predictable if multiple simple models predict them correctly.", "Inspired by these works, we define the criterion of consistency: whether all classifiers in our model agree on the prediction of a given instance, regardless of whether it is correct or not.", "Table 3 shows Spearman's correlation between the confidence of the most efficient classifier and this measure of consistency.", "Our analysis reveals a medium correlation between confidence and consistency across all datasets (0.37–0.47), which indicates that the measure of confidence generally agrees with the measure of consistency.",
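The consistency criterion defined above is straightforward to compute from the per-classifier predictions; a minimal sketch (data layout assumed):

```python
from typing import List, Sequence

def is_consistent(predictions: Sequence[int]) -> bool:
    """All classifiers agree on the prediction, correct or not."""
    return len(set(predictions)) == 1

def consistency_scores(per_classifier_preds: List[List[int]]) -> List[int]:
    """per_classifier_preds[k][i] is classifier k's label for instance i;
    returns 1 for consistent instances and 0 otherwise."""
    num_instances = len(per_classifier_preds[0])
    return [
        int(is_consistent([preds[i] for preds in per_classifier_preds]))
        for i in range(num_instances)
    ]
```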
"Comparison with hypothesis-only criteria.", "Gururangan et al. (2018) and Poliak et al. (2018) showed that some NLI instances can be solved by looking only at the hypothesis; these were artifacts of the annotation process.", "They argued that such instances are easier for machines than those which require access to the full input, which they considered harder.", "Table 4 shows the correlation between the confidence of each of our classifiers on the SNLI and MNLI datasets and the confidence of a hypothesis-only classifier.", "Similarly to the consistency results, we see that the confidence of our most efficient classifier is reasonably correlated with the predictions of the hypothesis-only classifier.", "As expected, as we move to larger, more accurate classifiers, which presumably are able to make successful predictions on harder instances, this correlation decreases.", "Inter-annotator consensus.", "Both NLI datasets include labels from five different annotators.", "We treat the inter-annotator consensus (IAC) as another measure of difficulty: the higher the consensus, the easier the instance.", "We compute IAC for each example as the fraction of annotators who agreed on the majority label, hence this number ranges from 0.6 to 1.0 for five annotators.", "Table 4 shows the correlation between the confidence of our classifiers and the IAC measure on SNLI and MNLI.", "The correlation with our most efficient classifiers is rather weak, only 0.08 and 0.14.", "Surprisingly, as we move to larger models, the correlation increases, up to 0.32 for the most accurate classifiers.", "This indicates that the two measures perhaps capture a different notion of difficulty.", "Confidence across labels.", "Figure 5 shows the proportion of instances in our validation set that are predicted with high confidence by our calibrated model (90% threshold) for each dataset, label, and model size.", "We first note that across all datasets, and almost all model sizes, different labels are not predicted with the same level of confidence.", "For instance, for AG, the layer 0 model predicts the tech label with 87.8% average confidence, compared to 96.8% for the sports label.", "Moreover, in accordance with the overall performance, across almost all datasets and model sizes, the confidence levels increase as the models get bigger.", "Finally, in some cases, as we move towards larger models, the gaps in confidence close (e.g., IMDB and SST), although the relative ordering hardly ever changes.", "Two potential explanations come up when observing these results: either some labels are easier to predict than others (and thus the models are more confident when predicting them), or the models are biased towards some classes compared to others.", "To help differentiate between these two hypotheses, we plot in Figure 6 the average confidence level and the average F1 score of the most efficient classifier across labels and datasets.", "The plot indicates that both hypotheses are correct to some degree.", "Some labels, such as sports for AG and positive for IMDB, are both predicted with high confidence and solved with high accuracy.", "In contrast, our model is overconfident in its prediction of some labels (business for AG, positive for SST), and underconfident in others (tech for AG, entailment for MNLI).", "These findings might indicate that while our method is designed to be globally calibrated, it is not necessarily calibrated for each label individually.", "Such observations relate to existing concerns regarding fairness when using calibrated classifiers (Pleiss et al., 2017).",
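For completeness, the IAC measure defined above reduces to a majority-vote fraction; a minimal sketch:

```python
from collections import Counter
from typing import Sequence

def inter_annotator_consensus(labels: Sequence[str]) -> float:
    """Fraction of annotators agreeing on the majority label; with five
    annotators this ranges from 0.6 to 1.0."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)
```

For example, four annotators choosing entailment and one choosing neutral gives an IAC of 0.8.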
"Methods for making inference more efficient have received considerable attention in NLP over the years (Eisner and Satta, 1999; Goldberg and Elhadad, 2010, inter alia).", "As the field has converged on deep neural architecture solutions, most efforts focus on making models smaller (in terms of model parameters) in order to save space as well as potentially speed up inference.", "In model distillation (Hinton et al., 2014), a smaller model (the student) is trained to mimic the behavior or structure of the original, larger model (the teacher).", "The result is typically a student that is as accurate as the teacher, but smaller and faster (Kim and Rush, 2016; Jiao et al., 2019; Tang et al., 2019; Sanh et al., 2019).", "Pruning (LeCun et al., 1990) removes some of the weights in the network, resulting in a smaller, potentially faster network.", "The basic pruning approach removes individual weights from the network (Swayamdipta et al., 2018; Gale et al., 2019).", "More sophisticated approaches induce structured sparsity, which removes full blocks (Michel et al., 2019; Voita et al., 2019; Dodge et al., 2019b).", "Liu et al. (2018) and Fan et al. (2020) pruned deep models by applying dropout to different layers, which allows dynamic control of the speed/accuracy tradeoff of the model without retraining.", "[Figure 5: Instances with different labels are predicted with different degrees of confidence.]", "Our method also allows for controlling this tradeoff with a single training pass, and yields computational savings in an orthogonal manner: by making early exit decisions.", "Quantization is another popular method to decrease model size, which reduces the numerical precision of the model's weights, and therefore both speeds up numerical operations and reduces model size (Wrobel et al., 2018; Shen et al., 2019; Zafrir et al., 2019).", "Some works introduced methods to allocate fewer resources to certain parts of the input (e.g., certain words), thereby potentially reducing training and/or inference time (Graves, 2016; Seo et al., 2018).", "Our method also puts fewer resources into some of the input, but does so at the document level rather than for individual tokens.", "A few concurrent works have explored similar ideas for dynamic early exits in the transformer model.", "Elbayad et al. (2020) and Dabre et al. (2020) introduced early stopping for sequence-to-sequence tasks (e.g., machine translation).", "Bapna et al. (2020) modify the transformer architecture with control symbols which determine whether components are short-circuited to optimize budget.", "Finally, Liu et al. (2020) investigated several inference-time cost optimizations (including early stopping) in a multilingual setting.", "In computer vision, prior work introduced methods for dynamically skipping convolutional layers.", "Bolukbasi et al. (2017) and Huang et al. (2018) learned early exit policies for computer vision architectures, observing substantial computational gains.",
(2018) learned early exit policies for computer vision architectures, observing substantial computational gains.", "We presented a method that improves the speed/accuracy tradeoff for inference using pre-trained language models.", "Our method makes early exits for simple instances that require less processing, and thereby avoids running many of the layers of the model.", "Experiments with BERT-large on five text classification and NLI datasets yield substantially faster inference compared to the standard approach, up to 80% faster, while maintaining similar performance.", "Our approach requires neither additional training time nor a significant number of additional parameters compared to the standard approach.", "It also allows for controlling the speed/accuracy tradeoff using a single model, without retraining it for any point along the curve.", "The authors thank the members of Noah's ARK at the University of Washington, the researchers at the Allen Institute for AI, and the anonymous reviewers for their valuable feedback." ]
[ "abstain", "objective", "result", "objective", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "other", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "result", "abstain", "abstain", "result", "abstain", "other" ]
[ "One of the main challenges in conversational question answering (CQA) is to resolve the conversational dependency, such as anaphora and ellipsis.", "However, existing approaches do not explicitly train QA models on how to resolve the dependency, and thus these models are limited in understanding human dialogues.", "In this paper, we propose a novel framework, EXCORD ( Ex plicit guidance on how to resolve Co nve r sational D ependency) to enhance the abilities of QA models in comprehending conversational context.", "EXCORD first generates self-contained questions that can be understood without the conversation history, then trains a QA model with the pairs of original and self-contained questions using a consistency-based regularizer.", "In our experiments, we demonstrate that EXCORD significantly improves the QA models' performance by up to 1.2 F1 on QuAC (Choi et al., 2018), and 5.2 F1 on CANARD (Elgohary et al., 2019), while addressing the limitations of the existing approaches.", "1 1 Introduction Conversational question answering (CQA) involves modeling the information-seeking process of humans in a dialogue.", "Unlike single-turn question answering (QA) tasks (Rajpurkar et al., 2016; Kwiatkowski et al., 2019), CQA is a multi-turn QA task, where questions in a dialogue are context-dependent; 2 hence they need to be understood with the conversation history (Choi et al., 2018; Reddy et al., 2019).", "As illustrated in Figure 1, to answer Corresponding author 1 Our models and code are available at: https://github.com/dmis-lab/excord 2 While the term context usually refers to the evidence document from which the answer is extracted, in CQA, it refers to conversational context.", "the current question Was he close with anyone else?, a model should resolve the conversational dependency, such as anaphora and ellipsis, based on the conversation history.", "A line of research in CQA proposes the end-to-end approach, where a single QA model jointly encodes the evidence document, the current question, and the whole conversation history (Huang et al., 2018; Yeh and Chen, 2019; Qu et al., 2019a).", "In this approach, models are required to automatically learn to resolve conversational dependencies.", "However, existing models have limitations to do so without explicit guidance on how to resolve these dependencies.", "In the example presented in Figure 1, models are trained without explicit signals that he refers to Leonardo da Vinci, and anyone else can be more elaborated with other than his pupils, Salai and Melzi .", "Another line of research proposes a pipeline approach that decomposes the CQA task into question rewriting (QR) and QA, to reduce the complexity of the task (Vakulenko et al., 2020).", "Based on the conversation history, QR models first generate self-contained questions by rewriting the original questions, such that the self-contained questions can be understood without the conversation history.", "For instance, the current question q 3 is reformulated as the self-contained question q 3 by a QR model in Figure 1.", "After rewriting the question, QA models are asked to answer the self-contained questions rather than the original questions.", "In this approach, QA models are trained to answer relatively simple questions whose dependencies have been resolved by QR models.", "Thus, this limits reasoning abilities of QA models for the CQA task, and causes QA models to rely on QR models.", "In this paper, we emphasize that QA models can be enhanced by using both types of questions with 
explicit guidance on how to resolve the conversational dependency.", "Accordingly, we propose EXCORD (Explicit guidance on how to resolve Conversational Dependency), a novel training framework for the CQA task.", "In this framework, we first generate self-contained questions using QR models.", "We then pair the self-contained questions with the original questions, and jointly encode them to train QA models with consistency regularization (Laine and Aila, 2016; Xie et al., 2019).", "Specifically, when original questions are given, we encourage QA models to yield answers similar to those yielded when self-contained questions are given.", "This training strategy helps QA models to better understand the conversational context, while circumventing the limitations of previous approaches.", "To demonstrate the effectiveness of EXCORD, we conduct extensive experiments on three CQA benchmarks.", "In the experiments, our framework significantly outperforms the existing approaches by up to 1.2 F1 on QuAC (Choi et al., 2018) and by 5.2 F1 on CANARD (Elgohary et al., 2019).", "In addition, we find that our framework is also effective on CoQA (Reddy et al., 2019), a dataset that does not have self-contained questions generated by human annotators.", "This indicates that the proposed framework can be adopted on various CQA datasets in future work.", "We summarize the contributions of this work as follows: We identify the limitations of previous approaches and propose a unified framework to address them.", "Our novel framework improves QA models by incorporating QR models, while reducing the reliance on them.", "Our framework encourages QA models to learn how to resolve the conversational dependency via consistency regularization.", "To the best of our knowledge, our work is the first to apply the consistency training framework to the CQA task.", "We demonstrate the effectiveness of our framework on three CQA benchmarks.", "Our framework is model-agnostic and systematically improves the performance of QA models.", "In CQA, a single instance is a dialogue, which consists of an evidence document $d$, a list of questions $q = [q_1, \ldots, q_T]$, and a list of answers for the questions $a = [a_1, \ldots, a_T]$, where $T$ represents the number of turns in the dialogue.", "For the $t$-th turn, the question $q_t$ and the conversation history $H_t = [(q_1, a_1), \ldots, (q_{t-1}, a_{t-1})]$ are given, and a model should extract the answer from the evidence document as: $\hat{a}_t = \arg\max_{a_t} \mathrm{P}(a_t \mid d, q_t, H_t)$ (1), where $\mathrm{P}(\cdot)$ represents a likelihood function over all the spans in the evidence document, and $\hat{a}_t$ is the predicted answer.", "Unlike single-turn QA, since the current question $q_t$ is dependent on the conversation history $H_t$, it is important to effectively encode the conversation history and resolve the conversational dependency in CQA.", "A naive approach to solving CQA is to train a model in an end-to-end manner (Figure 2a).", "Since standard QA models generally are ineffective in the CQA task, most studies attempt to develop a QA model structure or mechanism for encoding the conversation history effectively (Huang et al., 2018; Yeh and Chen, 2019; Qu et al., 2019a,b).", "Although these efforts improved performance on the CQA benchmarks, existing models remain limited in understanding conversational context.", "In this paper, we emphasize that QA models can be further improved with explicit guidance using
self-contained questions effectively.", "Recent studies decompose the CQA task into two subtasks to reduce its complexity.", "The first subtask, question rewriting, involves generating self-contained questions by reformulating the original questions.", "Neural-net-based QR models are commonly used to obtain self-contained questions (Lin et al., 2020; Vakulenko et al., 2020).", "The QR models are trained on the CANARD dataset (Elgohary et al., 2019), which consists of 40K pairs of original QuAC questions and their self-contained versions that are generated by human annotators.", "After generating the self-contained questions, the next subtask, question answering, is carried out.", "Since it is assumed that the dependencies in the questions have already been resolved by QR models, existing works usually use standard QA models (not specialized to CQA); however, conversational QA models can also be used (the dotted line in Figure 2b).", "We formulate the process of predicting the answer in the pipeline approach as: $\mathrm{P}(a_t \mid d, q_t, H_t) \approx \mathrm{P}_{rewr}(q'_t \mid q_t, H_t) \cdot \mathrm{P}_{read}(a_t \mid d, q'_t)$ (2), where $\mathrm{P}_{rewr}(\cdot)$ and $\mathrm{P}_{read}(\cdot)$ are the likelihood functions of the QR and QA models, respectively.", "$q'_t$ is a self-contained question rewritten by the QR model.", "The main limitation of the pipeline approach is that QA models are never trained on the original questions, which limits their abilities to understand the conversational context.", "Moreover, this approach makes QA models dependent on QR models; hence QA models suffer from error propagation from QR models.", "On the other hand, our framework enhances QA models' reasoning abilities for CQA by jointly utilizing original and self-contained questions.", "In addition, QA models in our framework do not rely on QR models at inference time and thus do not suffer from error propagation.", "We introduce a unified framework that jointly encodes the original and self-contained questions, as illustrated in Figure 2c.", "Our framework consists of two stages: (1) generating self-contained questions using a QR model (Section 3.1) and (2) training a QA model with the original and self-contained questions via consistency regularization (Section 3.2).", "Similar to the pipeline approach, we utilize a QR model to obtain self-contained questions.", "We use the obtained questions for explicit guidance in the next stage.", "As shown in Equation 2, the QR task is to generate a self-contained question given an original question and a conversation history.", "Following Lin et al.
(2020), we adopt a T5-based sequence generator (Raffel et al., 2020) as our QR model, which achieves performance comparable to that of humans in QR.", "For training and evaluating the QR model, we use the CANARD dataset, following previous works on QR (Lin et al., 2020; Vakulenko et al., 2020).", "During inference, we utilize top-k random sampling decoding based on beam search with an adjusted softmax temperature (Fan et al., 2018; Xie et al., 2019).", "Our goal is to enhance the QA model's ability to understand conversational context.", "Accordingly, we use consistency regularization (Laine and Aila, 2016; Xie et al., 2019), which enforces a model to make consistent predictions in response to transformations of the inputs.", "We encourage the model's predicted answers from the original questions to be similar to those from the self-contained questions (Section 3.1).", "Our consistency loss is defined as: $\mathcal{L}^{cons}_t = \mathrm{KL}\big(\mathrm{P}_{read}(a_t \mid d, q'_t, H_t; \tilde{\theta}) \,\|\, \mathrm{P}_{read}(a_t \mid d, q_t, H_t; \theta)\big)$ (3), where $\mathrm{KL}(\cdot)$ represents the Kullback-Leibler divergence function between two probability distributions.", "$\theta$ denotes the model's parameters, and $\tilde{\theta}$ is a fixed copy of $\theta$.", "With the consistency loss, QA models are regularized to make consistent predictions, regardless of whether the given question is self-contained or not.", "In order to output an answer distribution that is closer to $\mathrm{P}_{read}(a_t \mid d, q'_t, H_t; \tilde{\theta})$, QA models should treat original questions as if they were rewritten into self-contained questions by referring to the conversation history.", "On CANARD, our QR model achieved performance comparable to human performance in preliminary experiments.", "Through this process, our consistency regularization method serves as explicit guidance that encourages QA models to resolve the conversational dependency.", "In our framework, $\mathrm{P}_{read}(a_t \mid \cdot)$ is the answer span distribution over all evidence document tokens.", "In contrast to Asai and Hajishirzi (2020), by using all probability values in the answer distributions, the signals of self-contained questions can be effectively propagated to the QA model.", "In addition to using all probability values, we also sharpened the target distribution $\mathrm{P}_{read}(a_t \mid d, q'_t, H_t; \tilde{\theta})$ by adjusting the temperature (Xie et al., 2019) to strengthen the QA model's training signal.", "Finally, we calculate the final loss as: $\mathcal{L} = \mathcal{L}_{orig} + \lambda_1 \mathcal{L}_{self} + \lambda_2 \mathcal{L}_{cons}$ (4), where $\lambda_1$ and $\lambda_2$ are hyperparameters.", "$\mathcal{L}_{orig}$ and $\mathcal{L}_{self}$ are calculated as the negative log-likelihood between the predicted answers and the gold standards given the original and self-contained questions, respectively.", "Comparison with previous works Consistency training has mainly been studied as a method for regularizing model predictions to be invariant to small noises that are injected into the input samples (Sajjadi et al., 2016; Laine and Aila, 2016; Miyato et al., 2016; Xie et al., 2019).", "The intuition behind consistency training is to push noisy inputs closer towards their original versions.", "Therefore, only the original parameters (i.e., $\theta$) are updated, while the copied model parameters (i.e., $\tilde{\theta}$) are fixed.", "In contrast to the original concept of consistency training, our goal is to go in the opposite direction and update the original parameters.", "Thus, we fix the parameters $\tilde{\theta}$ used with the self-contained questions, and solely update $\theta$ for each training step, as shown in Equation 3.", "QuAC QuAC (Choi et al., 2018) comprises 100k QA pairs in information-seeking dialogues, where a student asks questions
based on a topic with background information provided, and a teacher provides the answers in the form of text spans in Wikipedia documents.", "Since the test set is only available in the QuAC challenge, we evaluate models on the development set.", "For validation, we use a subset of the original training set of QuAC, which consists of questions that correspond to the self-contained questions in CANARD's development set.", "The remaining data is used for training.", "CANARD CANARD (Elgohary et al., 2019) consists of 31K, 3K, and 5K QA pairs for the training, development, and test sets, respectively.", "The questions in CANARD are generated by rewriting a subset of the original questions in QuAC.", "We use the training and development sets for training and validating QR models, and the test set for evaluating QA models.", "CoQA CoQA (Reddy et al., 2019) consists of 127K QA pairs and evidence documents in seven domains.", "In terms of the question distribution, CoQA significantly differs from QuAC (see Section 5.3).", "We use CoQA to test the transferability of EXCORD, where a QR model trained on CANARD generates the self-contained questions in a zero-shot manner.", "Subsequently, we train a QA model by using the original and synthetic questions.", "Similar to QuAC, the test set of CoQA is only available in the CoQA challenge.", "Therefore, we randomly sample 5% of the QA dialogues in the training set and adopt them as our development set.", "Following Choi et al. (2018), we use the F1, HEQ-Q, and HEQ-D metrics for QuAC and CANARD.", "HEQ-Q measures whether a model finds more accurate answers than humans (or the same answers) for a given question.", "HEQ-D measures the same thing, but for a given dialogue instead of a question.", "For CoQA, we report the F1 scores for each domain (children's stories, literature from Project Gutenberg, middle and high school English exams, news articles from CNN, Wikipedia) and the overall F1 score, as suggested by Reddy et al. (2019).", "Note that the baseline approaches and our framework do not limit the structure of QA models.", "For a fair comparison of the baseline approaches and EXCORD, we test the same QA models in all approaches.", "The selected QA models are commonly used and have been proven to be effective in CQA.", "BERT BERT (Devlin et al., 2019) is a contextualized word representation model that is pretrained on large corpora.", "BERT also works well on CQA datasets, although it is not designed for CQA.", "It receives the evidence document, the current question, and the conversation history of the previous turn as input.", "BERT+HAE BERT+HAE is a BERT-based QA model with a CQA-specific module.", "Following Qu et al.
(2019a), we add the history answer embedding (HAE) to BERT's word embeddings.", "HAE encodes the information of the answer spans from the previous questions.", "RoBERTa RoBERTa (Liu et al., 2019) improves BERT by using pretraining techniques to obtain robustly optimized weights on larger corpora.", "In our experiments, we found that RoBERTa performs well in CQA, achieving performance comparable to the previous SOTA model, HAM (Qu et al., 2019b), on QuAC.", "Thus, we adopt RoBERTa as our main baseline model owing to its simplicity and effectiveness.", "It receives the same input as BERT, unless otherwise specified.", "The CANARD training set provides 31,527 self-contained questions for the original QuAC questions.", "Therefore, we can obtain 31,527 pairs of original and self-contained questions without question rewriting.", "For the rest of the original questions, we automatically generate self-contained questions by using our QR model.", "Finally, we obtain 83,568 question pairs and use them in our consistency training.", "We denote the original questions, the self-contained questions generated by humans, and the self-contained questions generated by a QR model as $Q$, $Q_{human}$, and $Q_{syn}$, respectively.", "Additional implementation details are described in Appendix B. Table 1 presents the performance comparison of the baseline approaches and our framework on QuAC and CANARD.", "Compared to the end-to-end approach, EXCORD consistently improves the performance of QA models on both datasets.", "Also, these improvements are significant: EXCORD improves the performance of RoBERTa by 1.2 and 2.3 absolute F1 and of BERT by 1.2 and 5.2 absolute F1 on QuAC and CANARD, respectively.", "From these results, we conclude that the consistency training with original and self-contained questions is effective.", "Table 1: Comparison in performance of the baseline approaches and our framework on QuAC and CANARD; each cell reports F1 / HEQ-Q / HEQ-D, with the change relative to End-to-end in parentheses. BERT: End-to-end 61.5/57.1/5.0 on QuAC and 57.4/52.9/3.2 on CANARD; Pipeline 61.2 (-0.3) / 56.8 (-0.3) / 5.0 (0.0) and 62.2 (+4.8) / 57.8 (+4.9) / 6.0 (+2.8); Ours 62.7 (+1.2) / 58.4 (+1.3) / 6.0 (+1.0) and 62.6 (+5.2) / 58.2 (+5.3) / 6.4 (+3.2). BERT+HAE: End-to-end 62.0/57.3/5.5 and 58.2/53.5/5.5; Pipeline 61.1 (-0.9) / 56.3 (-1.0) / 5.0 (-0.5) and 62.4 (+4.2) / 57.8 (+4.3) / 6.0 (+0.5); Ours 63.2 (+1.2) / 58.9 (+1.6) / 5.7 (+0.2) and 63.1 (+4.9) / 58.4 (+4.9) / 5.7 (+0.2). RoBERTa: End-to-end 66.5/62.4/7.2 and 65.8/62.2/7.1; Pipeline 65.2 (-1.3) / 60.9 (-1.5) / 7.1 (-0.1) and 66.9 (+1.1) / 63.2 (+1.0) / 7.3 (+0.2); Ours 67.7 (+1.2) / 64.0 (+1.6) / 9.3 (+2.1) and 68.1 (+2.3) / 64.2 (+2.0) / 8.4 (+1.3).", "On QuAC, the pipeline approach underperforms the end-to-end approach in all baseline models.", "This indicates that training a QA model solely with self-contained questions is ineffective when human rewrites are not given at the inference phase.", "On the other hand, EXCORD improves QA models by using both types of questions.", "As presented in Table 1, our framework significantly outperforms the baseline approaches on QuAC.", "On CANARD, the pipeline approach is significantly more effective than the end-to-end approach.", "Since QA models are trained with self-contained questions in the pipeline approach, they perform well on CANARD questions.", "Nevertheless, EXCORD still outperforms the pipeline approach in most cases.", "Compared to the pipeline approach, our framework improves the performance of RoBERTa by 1.2 absolute F1.", "We elaborate on analyses regarding component ablation and transferability.", "We also describe a case study carried
out to highlight the differences between our approach and the baseline approaches.", "In this section, we comprehensively explore the factors contributing to this improvement: (1) using self-contained questions that are rewritten by humans ($Q_{human}$) as additional data, (2) using self-contained questions that are synthetically generated by the QR model ($Q_{syn}$), and (3) training a QA model with our consistency framework.", "In Table 2, we present the performance gaps when each component is removed from our framework.", "We use RoBERTa on QuAC in this experiment.", "We first explore the effects of $Q_{human}$ and $Q_{syn}$.", "As shown in Table 2, excluding $Q_{human}$ degrades the performance of RoBERTa in our framework.", "Although automatically generated, $Q_{syn}$ contributes to the performance improvement.", "Therefore, both types of self-contained questions are useful in our framework.", "To investigate the effect of our framework, we simply augment $Q_{orig}$ with $Q_{human}$ and $Q_{syn}$, which we call Question Augment (question data augmentation).", "We find that Question Augment slightly improves the performance of RoBERTa on CANARD, whereas it degrades the performance on QuAC.", "This shows that simply augmenting the questions is ineffective and does not guarantee improvement.", "On the other hand, our consistency training approach significantly improves performance, showing that EXCORD is a better way to utilize self-contained questions.", "We analyze several cases that the baseline approaches answered incorrectly, but our framework answered correctly.", "We also explore how our framework improves the reasoning ability of QA models, compared to the baseline approaches.", "These cases are presented in Table 3.", "Error case #1: Title: Montgomery Clift; Section Title: Film career; Document d: His second movie was The Search.", "Clift was unhappy with the quality of the script, and edited it himself.", "The movie was awarded a screenwriting Academy Award for the credited writers.", "The first case in Table 3 shows the predictions of the two RoBERTa models trained in the end-to-end approach and our framework, respectively.", "Note that the film in the current question does not refer to The Search (red box) in the document d, but to Red River (blue box) in $a_1$.", "When trained in the end-to-end approach, the model failed to comprehend the conversational context and misunderstood what the film refers to, resulting in an incorrect prediction.", "On the other hand, when trained with EXCORD, the model predicted the correct answer because EXCORD enhances the ability to resolve conversational dependency.", "In the second case, we compare the pipeline approach to EXCORD.", "In this case, the QR model misinterpreted the pronoun my in the current question and replaced it with the band's name, Train's.", "Consequently, the QA model received the erroneous self-contained question, resulting in an incorrect prediction.", "On the other hand, the QA model trained in our framework predicted the correct answer based on the original question $q_6$.", "We train a QR model to rewrite QuAC questions into CANARD questions.", "Then, self-contained questions can be generated for the samples that do not have human rewrites.", "This results in the improvement of QA models' performance on QuAC and CANARD (Section 4.5).", "However, it is questionable whether the QR model can successfully rewrite questions when the original questions significantly differ from those in QuAC.", "To answer this, we test our framework on another CQA dataset, CoQA.", "We first
analyze how the question distributions of QuAC and CoQA differ.", "We found that the question types in QuAC and CoQA are significantly different, such that QR models could suffer from the gap between the question distributions of the two datasets (see details in Appendix A).", "To test the transferability of EXCORD, we compare the end-to-end approach to our framework on the CoQA dataset.", "Using a QR model trained on CANARD, we generate the self-contained questions for CoQA and train QA models with our framework.", "As presented in Table 4, our framework performs well on CoQA.", "The improvement for BERT is 0.5 overall F1, and the performance of RoBERTa is also improved by 0.6 overall F1.", "Improvements are also consistent across most of the document domains.", "Therefore, we conclude that our framework can be simply extended to other datasets and improve QA performance even when question distributions are significantly different.", "We plan to improve the transferability of our framework by fine-tuning QR models on target datasets in future work.", "Conversational Question Answering Recently, several works introduced CQA datasets such as QuAC (Choi et al., 2018) and CoQA (Reddy et al., 2019).", "We classify the methods proposed to solve these datasets into two approaches: (1) end-to-end and (2) pipeline.", "Most works based on the end-to-end approach focused on developing a model structure (Zhu et al., 2018; Ohsugi et al., 2019; Qu et al., 2019a,b) or a training strategy, such as multitask learning with rationale tagging (Ju et al., 2019), that is specialized for the CQA task or datasets.", "Several works demonstrated the effectiveness of the flow mechanism in CQA (Huang et al., 2018; Chen et al., 2019; Yeh and Chen, 2019).", "With the advent of a dataset consisting of self-contained questions rewritten by human annotators (Elgohary et al., 2019), the pipeline approach has recently drawn attention as a promising method for CQA (Vakulenko et al., 2020).", "The approach is particularly useful for the open-domain CQA or passage-retrieval (PR) tasks (Dalton et al., 2019; Ren et al., 2020; Anantha et al., 2020; Qu et al., 2020), since self-contained questions can be fed into existing non-conversational search engines such as BM25.", "Note that our framework can be used jointly with the pipeline approach in the open-domain setting because our framework can improve QA models' ability to find the answers in the retrieved documents.", "We will test our framework in the open-domain setting in future work.", "Question Rewriting QR has been studied for augmenting training data (Buck et al., 2018; Sun et al., 2018; Zhu et al., 2019; Liu et al., 2020) or clarifying ambiguous questions (Min et al., 2020).", "In CQA, QR can be viewed as a task of simplifying difficult questions that include anaphora and ellipsis in a conversation.", "Elgohary et al. (2019) first proposed the question rewriting task as a sub-task of CQA, along with the CANARD dataset for the task, which consists of pairs of original and self-contained questions that are generated by human annotators.", "Vakulenko et al. (2020) used a coreference-based model (Lee et al., 2018) and GPT-2 (Radford et al., 2019) as QR models and tested the models on the QR and PR tasks.", "Lin et al. (2020) conducted the QR task using T5 (Raffel et al., 2020) and achieved performance comparable to humans on CANARD.", "Following Lin et al. (2020),
we use T5 in our experiments to generate high-quality questions for enhancing QA models.", "Consistency Training Consistency regularization (Laine and Aila, 2016; Sajjadi et al., 2016) has mainly been explored in the context of semi-supervised learning (SSL) (Chapelle et al., 2009; Oliver et al., 2018) and has been adopted in the textual domain as well (Miyato et al., 2016; Clark et al., 2018; Xie et al., 2020).", "However, the consistency training framework is also applicable when only labeled samples are available (Miyato et al., 2018; Jiang et al., 2019; Asai and Hajishirzi, 2020).", "Consistency regularization requires adding noise to the samples, which can be either discrete (Xie et al., 2020; Asai and Hajishirzi, 2020) or continuous (Miyato et al., 2016; Jiang et al., 2019).", "Existing works regularize the predictions for the perturbed samples to be equivalent to those for the originals.", "On the other hand, our method encourages the models' predictions for the original questions to be similar to those for the rewritten questions, i.e., the synthetic ones.", "We propose a consistency training framework for conversational question answering, which enhances QA models' abilities to understand conversational context.", "Our framework leverages both the original and self-contained questions for explicit guidance on how to resolve conversational dependency.", "In our experiments, we demonstrate that our framework significantly improves the QA model's performance on QuAC and CANARD, compared to the existing approaches.", "In addition, we verified that our framework can be extended to CoQA.", "In future work, the transferability of our framework can be further improved by fine-tuning the QR model on target datasets.", "Furthermore, future work would include applying our framework to the open-domain setting.", "We thank Sean S. Yi, Miyoung Ko, and Jinhyuk Lee for providing valuable comments and feedback.", "This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ICT Creative Consilience program (IITP-2021-2020-0-01819) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation).", "This research was also supported by the National Research Foundation of Korea (NRF-2020R1A2C3010638)." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "method", "abstain", "objective", "result", "result", "abstain", "objective", "objective", "method", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "abstain", "objective", "objective", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other" ]
[ "We propose a simple and accurate model for coordination boundary identification.", "Our model decomposes the task into three subtasks during training; finding a coordinator, identifying inside boundaries of a pair of conjuncts, and selecting outside boundaries of it.", "For inference, we make use of probabilities of coordinators and conjuncts in the CKY parsing to find the optimal combination of coordinate structures.", "Experimental results demonstrate that our model achieves state-of-the-art results, ensuring that the global structure of coordinations is consistent.", "Coordination is a frequently occurring structure that consists of conjuncts joined by a coordinator word.", "Since conjunct spans are one of the major ambiguities, identifying them is difficult, even for humans.", "For instance, in the sentence Toshiba's line of portables, for example, features the T-1000, which is in the same weight class but is much slower and has less memory, and the T-1600, which also uses a 286 microprocessor, but which weighs almost twice as much and is three times the size , we cannot find correct conjuncts for each coordinator at a glance.", "The presence of coordination makes a sentence more ambiguous and longer, resulting in errors in syntactic parsing.", "To identify the conjuncts of a given coordinator, previous studies have explored two properties of coordinate structures: (1) similarity conjuncts tend to be similar; (2) replaceability conjuncts can be replaced.", "Ficler and Goldberg (2016b) combine the syntactic parser and neural networks to compute the similarity and replaceability features of conjuncts.", "Teranishi et al. (2017) also exploit the two properties without deploying any syntactic parser, and achieve state-of-the-art results.", "Although both approaches outperform the similarity-based approaches (Shimbo and Hara, 2007; Hara et al., 2009), they cannot handle more than two conjuncts in a coordination, and multiple coordinations in a sentence at one time.", "Hence, their systems may produce coordinations that con-flict with each other.", "In contrast, Hara et al. (2009) define production rules for coordination in order to output consistent coordinate structures.", "Here, we propose a new framework for coordination boundary identification.", "We generalize a scoring function that takes a pair of spans with a coordinator and returns a higher score when the two spans appear to be coordinated.", "Using this function in the CKY parsing with production rules for coordination, our system produces globally consistent coordinations in a given sentence.", "To obtain such a function, we decompose the task into three independent subtasks finding a coordinator, identifying the inner boundaries of a pair of conjuncts and delineating its outer boundaries.", "We use three different neural networks for the tasks, and the networks are trained on the basis of their local decisions.", "Our method is inspired by recent successes with locally-trained models for structured inference problems such as constituency parsing (Teng and Zhang, 2018) and dependency parsing (Dozat and Manning, 2017) without globally-optimized training.", "Experimental results reveal that our model outperforms existing systems and our strong baseline, an extension of Teranishi et al. 
(2017), and ensures that the global structure of the coordinations is consistent.", "We propose a simple framework that trains a generalized scoring function for a pair of conjuncts and uses it for inference.", "We decompose the task and use three local models that interoperate in CKY parsing.", "We establish a system that can accommodate more than two conjuncts in a sentence.", "Our system outperforms existing ones, particularly because it produces globally consistent coordinate structures.", "A coordinate structure, or coordination, is a syntactic structure in which two or more elements, known as conjuncts, are linked by coordinator(s).", "In addition to coordinating words, such as and, or, or but, some punctuation marks function secondarily to connect two conjuncts.", "We refer to those punctuation marks as sub-coordinators.", "Sub-coordinators cannot independently conjoin phrases to form a coordinate structure.", "The presence of a coordination is usually signaled by the appearance of a coordinator; however, coordinating words do not always lead to coordinations.", "For instance, but is not a coordinator when it functions as a preposition.", "In this paper, we refer to a word that can be a coordinator or sub-coordinator as a coordinator key.", "The task of coordination boundary identification is to find the conjunct spans of a given coordinating word.", "If a coordinating word does not act as a coordinator, a system must return NONE, denoting the absence of a coordinate structure.", "The difficulties in this task arise when there are multiple coordinate structures in a sentence or more than two conjuncts in a single coordinate structure.", "If there is more than one coordinate structure in a sentence, each coordinate structure must be isolated from the others or integrated into the other(s).", "In other words, coordinate structures cannot partially overlap.", "When there are more than two conjuncts in a coordinate structure, it has to be ascertained whether the punctuation marks are sub-coordinators that bring in one more conjunct, and if so, which coordinate structure they belong to.", "Thus, we must identify how many conjuncts a coordinate structure contains and the location of those conjuncts in the coordinate structure, i.e., whether it is nested in or isolated from other coordinate structures.", "Following Shimbo and Hara (2007), we use a tree to represent the coordinate structures in a sentence.", "We call this tree a coordinate tree.", "Figure 1 shows an example of a coordinate tree.", "Tree structures are particularly suitable because the ranges of coordinate structures are always consistent, and conjuncts are shown as nodes without being limited by the frequency of their occurrence.", "Our system produces a coordinate tree using the CKY algorithm and then retrieves well-formed coordinate structures from the tree.", "In this work, we focus on how to learn the scoring function that assigns higher scores to probable pairs of conjuncts for CKY parsing.", "Our proposed model consists of three parts: a coordinator classifier and the inner- and outer-boundary scoring models.", "Figure 2 gives an overview of our framework.", "The coordinator classifier is a binary classifier that ascertains whether a word functions as a coordinator or not.", "The inner-boundary scoring model computes the score for a pair of conjuncts on the basis of their boundaries that are in proximity to a coordinator.", "This means that the model produces a score based on the end of the left conjunct and the
beginning of the right conjunct.", "Similarly, the outer-boundary scoring model assigns a score to a pair consisting of the beginning of the left conjunct and the end of the right conjunct.", "Using the inner- and outer-boundary scoring models, our model calculates all possible combinations of the four boundaries, and then produces their probabilities.", "Given the local probabilities, we run the CKY algorithm to find the globally optimal coordinate structures in the sentence.", "In this section, we formulate our model based on the details of the neural networks' architecture; afterward, we describe the parsing method.", "Given a sentence that consists of $N$ words $w_{1:N} = w_1, \ldots, w_N$ with the corresponding part-of-speech (POS) tags $p_{1:N} = p_1, \ldots, p_N$, our model outputs a set of coordinate structures $\{\langle c, \{[b_1, e_1], \ldots, [b_n, e_n]\}\rangle\}$ $(n \geq 2)$, where $c$ is a coordinator and $[b_k, e_k]$ is the $k$-th conjunct spanning from the $b_k$-th word to the $e_k$-th word.", "Although we cannot know the number of coordinate structures and conjuncts in each coordinate structure, we can use coordinator keys as clues to find pairs of conjuncts.", "Our model tries to find pairs of conjuncts, rather than coordinate structures, in a sentence.", "$X = \{w_{1:N}, p_{1:N}, C\}$, $C = \{t \mid w_t \in S_{cc} \cup S_{sub\text{-}cc}\}$, $Y = \{\langle y^{ckey}_t, y^{pair}_t \rangle \mid t \in C\}$ (1), where $y^{ckey}_t$ is a label that indicates whether $w_t$ is the actual coordinator ($y^{ckey}_t = 1$) or not ($y^{ckey}_t = 0$), and $y^{pair}_t$ is a pair of conjunct spans.", "$y^{pair}_t = \emptyset$ when $y^{ckey}_t = 0$.", "When $t = 1$ or $t = N$, $y^{ckey}_t = 0$ because it does not form a coordinate structure within the sentence.", "In this paper, we define $S_{cc}$ and $S_{sub\text{-}cc}$ as {and, or, but, nor, and/or} and {,, ;, :}, respectively.", "We use two different models to identify the inner and outer boundaries of $y^{pair}_t$, because enumerating all possible inner and outer boundaries of $y^{pair}_t$ requires time complexity $O(N^2) + O(N^2) = O(N^2)$, whereas enumerating all possible $y^{pair}_t$ requires time complexity $O(N^4)$.", "The inner-boundary scoring model assigns a score to a pair of conjunct spans on the basis of their inner boundaries.", "We use $b_l, e_l, b_r, e_r$ to denote the beginning of a left conjunct, the end of the left conjunct, the beginning of a right conjunct, and the end of the right conjunct, respectively.", "For the division of the four boundaries, two beginnings and two ends, or a left span and a right span, could be chosen instead.", "In preliminary experiments, left-span and right-span models performed poorly, and two-beginnings and two-ends models performed well, but worse than the inner- and outer-boundary models.", "The score of the inner-boundary pair $(e_l, b_r)$ for a coordinator key $w_t$ is calculated as follows: $\mathrm{SCORE}_{inner}(e_l, b_r, w_t) = f_{inner}(e_l, b_r, w_t)$ (4).", "The probabilities of the inner boundaries are normalized distributions over all possible inner-boundary pairs: $I_{w_t} = \{(1, t+1), (1, t+2), \ldots, (1, N), (2, t+1), \ldots
, (t-1, N)\}$ (5); $P(y^{pair}_t = ([\cdot, e_l], [b_r, \cdot]) \mid w_t, \theta) = \frac{\exp(\mathrm{SCORE}_{inner}(e_l, b_r, w_t))}{\sum_{(e'_l, b'_r) \in I_{w_t}} \exp(\mathrm{SCORE}_{inner}(e'_l, b'_r, w_t))}$ (6); $\ell_{inner}(X, Y) = -\sum_{\langle y^{ckey}_t, y^{pair}_t \rangle \in Y} y^{ckey}_t \log P(y^{pair}_t \mid w_t, \theta)$ (7). The term $y^{ckey}_t \log P(y^{pair}_t \mid w_t, \theta)$ means that the cross-entropy loss is activated only for positive coordinator keys ($y^{ckey}_t = 1$) and is disabled otherwise ($y^{ckey}_t = 0$).", "Similarly to the inner-boundary scoring model, we define the probability $P(y^{pair}_t = ([b_l, \cdot], [\cdot, e_r]) \mid w_t, \theta)$ based on the set of all the outer-boundary pairs $O_{w_t}$; the loss is defined as $\ell_{outer}$ using the scoring function $\mathrm{SCORE}_{outer}(b_l, e_r, w_t) = f_{outer}(b_l, e_r, w_t)$.", "Note that $I_{w_t}$ and $O_{w_t}$ are identical because their possible pairs are the same.", "Based on the inner-pair probability $P(y^{pair}_t = ([\cdot, e_l], [b_r, \cdot]) \mid w_t, \theta)$ and the outer-pair probability $P(y^{pair}_t = ([b_l, \cdot], [\cdot, e_r]) \mid w_t, \theta)$, the most probable pair is produced by: $\hat{y}^{pair}_t = \big(\arg\max_{(e_l, b_r)} P(([\cdot, e_l], [b_r, \cdot]) \mid w_t, \theta),\ \arg\max_{(b_l, e_r)} P(([b_l, \cdot], [\cdot, e_r]) \mid w_t, \theta)\big)$ (8).", "Our three models predict coordinators, including sub-coordinators, and the inner and outer boundaries of their coordinating conjuncts.", "Such local predictions may cause conflicts between different coordinate structures.", "Table 1 lists the non-terminals (COORD: coordination; CONJ: conjunct; CC: coordinating conjunction; CC-SUB: sub-coordinator; W: word; N: non-coordination; S: sentence) together with the production rules for coordinations, the first of which begins COORD -> CONJ N? ...", "Furthermore, two conjuncts linked by a sub-coordinator must be embedded in another coordinate structure formed by a coordinator.", "To overcome these limitations, we use the CKY algorithm to find the optimal coordinations in a sentence.", "In particular, we define the CFG rules to produce a coordinate tree, as used in Hara et al. (2009).", "Our CFG rules, distinct from those of Hara et al.
(2009), are shown in Table 1.", "Based on these rules, we can map a coordinate tree to the one-to-one corresponding syntactic tree, covering 99.5% of the coordinations in the Penn Treebank.", "We give scores only to coordination nodes, denoted as COORD, and to pre-terminals.", "When scoring pre-terminals, we assign $\log P(w_k = 1)$ to CC and CC-SUB, and $\log P(w_k = 0)$ to W if $w_k \in S_{cc} \cup S_{sub\text{-}cc}$, and $0$ otherwise.", "When scoring the COORD, we take the left conjunct and the right conjunct which are linked by the CC.", "Our rules can produce coordinate structures that contain arbitrary-length phrase(s) around coordinators, while conjuncts always appear next to coordinators in their rules.", "Most of the non-derivable coordinations are of a form like A and B and C, where a coordinating word is regarded as a sub-coordinator.", "Even so, this expression can be parsed as a nested coordinate structure by the rules.", "Thus, in rule (2), the conjunct pair linked by a CC-SUB is the incoming CONJ and the leftmost CONJ in the child COORD.", "Using a coordinator and its pair of conjuncts, we assign $\log P(([i, j], [l, m])) = \log P(([\cdot, j], [l, \cdot])) + \log P(([i, \cdot], [\cdot, m]))$ to the COORD.", "The best-scoring coordinate tree can be found efficiently using dynamic programming with time complexity $O(N^3)$.", "We use neural networks as instantiations of the $f_{ckey}$, $f_{inner}$, and $f_{outer}$ that we have introduced in this section.", "To get sentence-level representations for a sequence of words and POS tags, we use bidirectional long short-term memories (BiLSTMs) (Hochreiter and Schmidhuber, 1997).", "The dimensionality of each resulting vector $h_t$ is $2 d_{hidden}$.", "For the BiLSTM inputs, we use $f_{input}$ to map words and POS tags onto their representations.", "We can use different word representations, including a pretrained word model, ELMo (Peters et al., 2018), BERT (Devlin et al., 2018), or character-level LSTMs/convolutional neural networks (CharCNNs).", "We demonstrate the differences between the different choices in Section 4.", "The entire network consisting of $f_{input}$ and the BiLSTMs is referred to as the encoder; it is shared by the three neural networks in the higher layer.", "We use a linear transformation of the sentence-level representation of a coordinator key for $f_{ckey}$: $f_{ckey}(w_t) = W_{ckey} h_t + b_{ckey}$ (10), where $W_{ckey} \in \mathbb{R}^{2 \times 2 d_{hidden}}$ and $b_{ckey} \in \mathbb{R}^2$ are the model parameters of the classifier.", "From the sentence-level representations produced by the encoder, the inner-boundary scoring model concatenates the two representations of the inner boundaries, and then feeds the produced vector into a multilayered perceptron (MLP), where $W^1_{in} \in \mathbb{R}^{d_{in} \times 4 d_{hidden}}$, $b^1_{in} \in \mathbb{R}^{d_{in}}$, $w^2_{in} \in \mathbb{R}^{d_{in}}$, and $b^2_{in} \in \mathbb{R}^{1}$ are the parameters of the inner-boundary scoring model.", "Using the sentence-level representations, the outer-boundary scoring model takes two vectors that are calculated by subtracting the vectors adjacent to the coordinator from the boundary vectors.", "These subtraction operations are intended to capture the semantic distance and relatedness between two spans (Teranishi et al., 2017).", "The model then passes the vector to an MLP, where $W^1_{out} \in \mathbb{R}^{d_{out} \times 4 d_{hidden}}$, $b^1_{out} \in \mathbb{R}^{d_{out}}$, $w^2_{out} \in \mathbb{R}^{d_{out}}$, and $b^2_{out} \in \mathbb{R}^{1}$ are the parameters of the outer-boundary scoring model.", "To train the set of parameters $\theta$ of our neural networks, we minimize the following loss function: $L(\theta) = \sum_{(X, Y) \in D} \big(\ell_{ckey}(X, Y) + \ell_{inner}(X, Y) + \ell_{outer}(X, Y)\big)$ (14).",
"Instead of learning the scoring functions on the basis of local decisions, we can directly train our models combined with the CKY parsing using a structured max-margin objective between the scores of the best predicted and gold trees.", "In preliminary experiments, however, such global training requires careful hyperparameter tuning and is hard to optimize stably, resulting in only slightly better performance than the method of Teranishi et al. (2017).", "We use the coordination-annotated Penn Treebank (Ficler and Goldberg, 2016a) (PTB) and the Genia Treebank beta (Kim et al., 2003) (GENIA).", "Unlike the evaluations by Teranishi et al. (2017) and Ficler and Goldberg (2016b), we strip the PTB of all quotation marks to normalize irregular coordinations such as ⟨ . . . Daybreak, Daywatch, Newsday, and Newsnight, . . . ⟩.", "We follow the standard train/development/test split on the PTB.", "For the GENIA, we do not apply the preprocessing described above.", "We evaluate the model through five-fold cross-validation, as in Hara et al. (2009).", "We use pretrained word vectors, POS tags, and character vectors produced by the CharCNN (Ma and Hovy, 2016), regarded as the default.", "We also investigate the performance of the model using three different word representations for the encoder: (1) pretrained word embeddings: GloVe (Pennington et al., 2014) for the PTB and BioASQ (Tsatsaronis et al., 2012) for the GENIA; (2) contextualized sentence embeddings: ELMo; (3) randomly initialized word vectors.", "For the PTB, POS tags are obtained using the Stanford POS Tagger (Toutanova et al., 2003) with 10-way jackknifing.", "For the GENIA, we use the gold POS tags, as in Hara et al. (2009).", "To optimize the model parameters, we use Adam (Kingma and Ba, 2015).", "Other hyperparameters are described in Appendix A.", "We adopt our implementation of Teranishi et al. (2017) as the baseline.", "The original model of Teranishi et al. (2017) predicts the beginning and the end of a coordinate structure, and then splits it into conjuncts by commas.", "Their model decides the boundary of each coordinate structure individually, which may cause conflicts with those of other coordinate structure(s).", "Thus, we extend their model to find the best combination of coordinate structures, greedily choosing the most probable boundaries without conflicts.", "For the baseline model, we use the same encoder as that of our default model.", "Hereinafter, we refer to this baseline model as Teranishi+17:+ext.", "We evaluate the systems on the basis of their ability to predict conjunct spans, with the precision, recall, and F1 measures on the PTB.", "To compare the performance of our model with Teranishi et al.
(2017), we judge the predicted conjuncts to be correct based on the following metrics.", "whole: matches at the beginning of the first conjunct and the end of the last conjunct.", "outer: matches in the first conjunct and the last conjunct.", "inner: matches in the two conjuncts adjacent to the coordinator.", "exact: matches in all the conjuncts.", "In addition, we pay particular attention to the evaluation of NP coordination.", "For the GENIA, we measure the recall values of coordinate structures by the aforementioned metrics; previous studies, on the other hand, evaluated their systems based only on the whole metric.", "Also, we evaluate the performance of our model based on syntactic categories.", "Tables 2 and 3 show the experimental results on the PTB and GENIA datasets.", "On the PTB, our model outperforms the baseline and existing methods for all metrics.", "We cannot directly compare its performance with that of existing methods because of our preprocessing of quotation marks; nevertheless, our model achieves significant improvements.", "Our model is more accurate than the baseline because ours learns both the inner and outer boundaries of conjunct pairs, including those of sub-coordinators, while the baseline learns only the coordination boundaries.", "On the GENIA, our model also outperforms the baseline on the exact metric.", "While our model has some limitations when it comes to predicting the beginning and the end of coordinations, it performs better on the inner metric.", "In contrast, Teranishi+17:+ext achieves the best results on the whole metric, whereas it performs poorly on the other metrics.", "This performance reflects the differences between the algorithms of the two systems.", "Our model builds a coordinate tree in a bottom-up manner and predicts inner conjuncts accurately.", "On the other hand, the baseline model predicts the entire span of a coordinate structure and splits it into conjuncts in a top-down fashion.", "That is why the baseline model cannot predict coordinated clauses labeled as S, which are likely to be longer and to contain non-coordinating commas.", "Table 2: Evaluation per coordination by the different metrics; each cell is P/R/F, in the order Development-All, Development-NP, Test-All, Test-NP. Ours: whole 78.60/78.41/78.51, 79.26/78.71/78.98, 76.88/77.16/77.02, 78.75/78.50/78.62; outer 77.18/77.00/77.09, 78.57/78.03/78.30, 75.33/75.61/75.47, 77.95/77.70/77.83; inner 79.19/79.00/79.10, 80.64/80.09/80.36, 77.60/77.88/77.74, 80.19/79.93/80.06; exact 76.95/76.76/76.85, 78.11/77.57/77.84, 75.33/75.61/75.47, 77.95/77.70/77.83. Teranishi+17:+ext: whole 78.78/77.94/78.36, 78.52/77.80/78.16, 77.36/76.52/76.94, 78.72/78.34/78.53; outer 74.49/73.70/74.09, 76.67/75.97/76.32, 72.03/71.24/71.63, 75.36/75.00/75.17; inner 76.04/75.23/75.63, 77.82/77.11/77.47, 74.14/73.33/73.74, 77.44/77.07/77.25; exact 74.13/73.34/73.74, 76.21/75.51/75.86, 71.48/70.70/71.08, 75.20/74.84/75.01. Teranishi+17*: whole 75.92/72.87/74.36, 77.90/75.05/76.45, test results not reported; outer 72.48/69.57/70.99, 76.24/73.45/74.82, test results not reported; inner 74.07/71.10/72.56, 77.43/74.59/75.99, 73.46/72.16/72.81, 75.87/74.76/75.31; exact 72.11/69.22/70.63, 75.77/72.99/74.35, test results not reported. Ficler+16*: inner 72.34/72.25/72.29, 75.17/74.82/74.99, 72.81/72.61/72.7, 76.91/75.31/76.1.", "The shortcoming of our model is that our bottom-up parsing may cause errors due to wrong decisions in the early stage of the parsing; this is observed as poor performance in the whole metric.", "We investigate the ability of our system to predict all the coordinate structures in a sentence precisely.", "We categorize sentences into the
following four groups.", "All: all sentences that have any coordinate structure.", "Simple: sentences that have only one coordinate structure consisting of two conjuncts.", "Note that Consecutive and Multiple both contain sentences that are both Consecutive and Multiple.", "Complex: sentences that are categorized as Consecutive and/or Multiple.", "Consecutive: sentences that have a coordinate structure consisting of more than two conjuncts.", "Multiple: sentences that have multiple coordinate structures.", "Sentences categorized as All are the union of the mutually exclusive sets of Simple and Complex.", "Table 4 shows the complete match rates on the PTB.", "Both on the development and test sets, our system records a significant gain, in comparison to Teranishi+17:+ext, on Simple coordination sentences.", "It might be because the inner- and outer-boundary scoring models learn to predict the four boundaries of two spans, whereas the baseline model predicts only the two outer boundaries on Simple coordination sentences.", "Table 4: Complete match rates of coordinations per sentence (development / test). Ours: All 489/673 = 72.65 / 619/873 = 70.90; Simple 378/481 = 78.58 / 476/609 = 78.16; Complex 111/192 = 57.81 / 143/264 = 54.16; Consecutive 41/66 = 62.12 / 56/96 = 58.33; Multiple 79/146 = 54.10 / 96/197 = 48.73. Teranishi+17:+ext: All 468/673 = 69.53 / 577/873 = 66.09; Simple 358/481 = 74.42 / 444/609 = 72.90; Complex 110/192 = 57.29 / 133/264 = 50.37; Consecutive 40/66 = 60.60 / 48/96 = 50.00; Multiple 78/146 = 53.42 / 92/197 = 46.70.", "Since an appositive or adverbial phrase can appear between a coordinator and its conjunct, the assumption that two conjuncts must be next to a coordinator fails and causes errors.", "Our system also outperforms Teranishi+17:+ext on Consecutive and Multiple coordination sentences.", "Teranishi+17:+ext predicts a coordination span, and then splits it into conjunct spans.", "Therefore, it can mistakenly segment coordinations when false sub-coordinators appear in a sentence.", "In contrast, our approach ascertains whether sub-coordinating words are true sub-coordinators; thus, it can handle Consecutive coordination sentences more robustly.", "We conduct an ablation study for our model.", "Table 5 shows the results.", "Without the POS tags, the model performs poorly.", "It is worth noting that the pretrained word embeddings provide beneficial information for the task.", "On the other hand, the use of a contextual embedding, ELMo, does not improve performance.", "We deduce that POS tags and morphological information, rather than contextual word senses, are the clues for shorter and similar coordinations such as NP coordinations.", "For the feature extraction function of the outer-boundary scoring model, a concat function that performs the same role as in the inner-boundary scoring model does not achieve a competitive advantage.", "The feature function described in Eq. 12 is designed to capture the similarity and replaceability of two spans, while the concat function has only the contextual information of the outer boundaries of a pair.", "For the coordination identification task in Japanese, Kurohashi and Nagao (1994) used a chart to find the most similar pair of conjuncts using dynamic programming.", "Hogan (2007) developed a generative parsing model for coordinated noun phrases, incorporating symmetry in conjunct structures and head words.", "Shimbo and Hara (2007) proposed a discriminative model that computes scores based on the syntactic and morphological
features assigned to edges and nodes in a sequence alignment.", "While their method focused on non-nested coordinations, Hara et al. (2009) extended their work to accommodate nested coordinations using CFG rules.", "A consistent global structure of coordinations is produced using discriminative functions based on the similarity of conjuncts with dynamic programming.", "Our use of CKY parsing is borrowed from their work; however, a key difference of our approach lies in how it computes the scores of conjuncts and trains the score function.", "Hanamoto et al. (2012) used dual decomposition to combine HPSG parsing with the discriminative model developed by Hara et al. (2009).", "Kawahara and Kurohashi (2008) focused on resolving the ambiguities of coordinate structures without the use of any similarities.", "Their method relied on the dependency relations surrounding the conjuncts and the generative probabilities of phrases.", "Yoshimoto et al. (2015) extended the Eisner algorithm by adding new rules to accommodate coordinations during dependency parsing.", "Ficler and Goldberg (2016b) used neural networks for the coordination boundary identification task.", "They incorporated the replaceability property between conjuncts, in addition to the similarity property, in the computation of a score for a pair of conjuncts.", "They first used a binary classifier for coordinating words; then, they extracted probable candidate pairs of conjuncts using the Berkeley Parser (Petrov et al., 2006); afterward, they assigned scores to the pairs using neural networks.", "However, the shortcoming of their work is that it is highly dependent on the external parser.", "Teranishi et al. (2017) developed an end-to-end model, as opposed to the pipeline approach of Ficler and Goldberg (2016b).", "They also used similarity and replaceability feature representations without information from a syntactic parser.", "While Ficler and Goldberg (2016b) cut off improbable pairs of conjuncts ahead of training, Teranishi et al. (2017) calculated scores for all possible pairs of the beginning and the end of coordinate structures instead of conjuncts.", "We apply the same strategy to the inner-boundary pairs and the outer-boundary pairs, because assigning low probabilities to improbable inner and outer pairs makes the model robust for CKY parsing.", "We proposed a simple and accurate model for coordination boundary identification.", "Our system decomposes this task into three subtasks and uses three different neural networks to tackle them.", "For inference, the CKY algorithm is applied using the CFG rules in order to produce globally consistent coordinate structures in a sentence.", "Experimental results demonstrated that our locally-trained models interoperate to obtain the optimal combination of coordinate structures and outperform existing systems and the strong baseline.", "Through empirical analysis, we found that our system performs better than the baseline in complete matches of sentences that contain more than two conjuncts and/or multiple coordinations.", "This work was partly supported by JST CREST Grant Number JPMJCR1513 and JSPS KAKENHI Grant Number 18K18109.", "We are grateful to the anonymous reviewers for their helpful insights and comments." ]
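The four per-coordination metrics defined earlier (whole, outer, inner, exact) can be made concrete with a short sketch. This is our own illustration rather than the authors' evaluation script; it assumes each coordination is given as a left-to-right list of conjunct token spans and, for the inner metric, that the coordinator directly precedes the last conjunct (as in "A, B and C").

# Minimal sketch of the four conjunct-matching metrics (illustrative only).
def matches(pred, gold, metric):
    """pred, gold: lists of (start, end) conjunct spans, ordered left to right."""
    if not pred or not gold:
        return False
    if metric == "whole":   # beginning of first conjunct + end of last conjunct
        return pred[0][0] == gold[0][0] and pred[-1][1] == gold[-1][1]
    if metric == "outer":   # first and last conjuncts match entirely
        return pred[0] == gold[0] and pred[-1] == gold[-1]
    if metric == "inner":   # the two conjuncts adjacent to the coordinator
        # Assumption: the coordinator sits before the last conjunct.
        return pred[-2:] == gold[-2:]
    if metric == "exact":   # all conjuncts match
        return pred == gold
    raise ValueError(f"unknown metric: {metric}")

# Example: gold "apples, oranges and pears" vs. a prediction that merges
# the first two conjuncts; only the outermost boundaries still agree.
gold = [(0, 0), (2, 2), (4, 4)]
pred = [(0, 2), (4, 4)]
print([m for m in ("whole", "outer", "inner", "exact") if matches(pred, gold, m)])
# -> ['whole']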
[ "objective", "result", "result", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "result", "method", "method", "result", "objective", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "objective", "result", "other", "other" ]
[ "Recent entity and relation extraction works focus on investigating how to obtain a better span representation from the pre-trained encoder.", "However, a major limitation of existing works is that they ignore the interrelation between spans (pairs).", "In this work, we propose a novel span representation approach, named Packed Levitated Markers (PL-Marker), to consider the interrelation between the spans (pairs) by strategically packing the markers in the encoder.", "In particular, we propose a neighborhood-oriented packing strategy, which considers the neighbor spans integrally to better model the entity boundary information.", "Furthermore, for those more complicated span pair classification tasks, we design a subject-oriented packing strategy, which packs each subject and all its objects to model the interrelation between the same-subject span pairs.", "The experimental results show that, with the enhanced marker feature, our model advances baselines on six NER benchmarks, and obtains a 4.1%-4.3% strict relation F1 improvement with higher speed over previous state-of-the-art models on ACE04 and ACE05.", "Our code and models are publicly available at https://github.com/ thunlp/PL-Marker .", "Recently, pre-trained language models (PLMs) (De-vlin et al., 2019; Liu et al., 2019) have achieved significant improvements in Named Entity Recognition (NER, Luo et al. (2020); Fu et al. (2021)) and Relation Extraction (RE, Wadden et al. (2019); Zhou and Chen (2021)), two key sub-tasks of information extraction.", "Recent works (Wang et al., 2021c; Zhong and Chen, 2021) regard these two tasks as span classification or span pair classification, and thus focus on extracting better span representations from the PLMs.", "[O]David[/O] assaulted a pair of restaurant workers during a night out with national squad teammates in [S]Copenhagen[/S].", "David assaulted a pair of restaurant [O]workers[/O] during a night out with national squad teammates in [S]Copenhagen[/S].", "David assaulted a pair of restaurant workers during a night out with national squad [O]teammates[/O] in [S]Copenhagen[/S].", "Three span representation extraction methods are widely used: (1) T-Concat (Lee et al., 2017; Jiang et al., 2020) concatenates the representation of the span's boundary (start and end) tokens to obtain the span representation.", "It collects information at the token level but ignores the connection between boundary tokens of a span when they pass through the network; (2) Solid Marker (Soares et al., 2019; Xiao et al., 2020) explicitly insert two solid markers before and after the span to highlight the span in the input text.", "And it inserts two pair of markers to locate the subject and object of a span pair.", "However, the method cannot handle multiple span pairs at the same time because of its weakness in specifying the solid markers of a span pair from more than two pairs of markers in the sequence.", "(3) 4904 Levitated Marker (Zhong and Chen, 2021) first sets a pair of levitated markers to share the same position with the span's boundary tokens and then ties a pair of markers by a directional attention.", "To be specific, the markers within a pair are set to be visible to each other in the attention mask matrix, but not to the text token and other pairs of markers.", "Existing work (Zhong and Chen, 2021) simply replaces solid markers with levitated markers for an efficient batch computation, but sacrifices the model performance.", "As the RE example shown in Figure 1, to correctly identify that David, 
"As the RE example in Figure 1 shows, to correctly identify that David, workers and teammates are located_in Copenhagen, it is important to separate out that David attacked the restaurant workers and had a social relation with his teammates.", "However, prior works with markers (Zhong and Chen, 2021) independently process the span pairs with different insertions of markers in the training phase, and thus ignore the interrelation between spans (pairs) (Sorokin and Gurevych, 2017; Luan et al., 2019; Wadden et al., 2019).", "In this work, we introduce Packed Levitated Marker (PL-Marker), to model the interrelation between spans (pairs) by strategically packing levitated markers in the encoding phase.", "A key challenge of packing levitated markers together for span classification tasks is that the increasing number of inserted levitated markers would exacerbate the complexity of PLMs quadratically (Ye et al., 2021).", "Thus, we have to divide spans into several groups to control the length of each input sequence for higher speed and feasibility.", "In this case, it is necessary to consider the neighbor spans integrally, which could help the model compare neighboring spans, e.g. spans with the same start token, to acquire more precise entity boundaries.", "Hence, we propose a neighborhood-oriented packing strategy, which packs spans with the same start token into one training instance as much as possible to better distinguish the entity boundary.", "For the more complicated span pair classification tasks, an ideal packing scheme is to pack all the span pairs together with multiple pairs of levitated markers, to model all the span pairs integrally.", "However, since each pair of levitated markers is already tied by directional attention, if we continue to apply directional attention to bind two pairs of markers, the levitated marker will not be able to identify its partner marker of the same span.", "Hence, we adopt a fusion of solid markers and levitated markers, and use a subject-oriented packing strategy to model the subject with all its related objects integrally.", "To be specific, we emphasize the subject span with solid markers and pack all its candidate object spans with levitated markers.", "Moreover, we apply an object-oriented packing strategy for intact bidirectional modeling (Wu et al., 2020).", "We examine the effect of PL-Marker on two typical span (pair) classification tasks, NER and end-to-end RE.", "The experimental results indicate that PL-Marker with the neighborhood-oriented packing scheme performs much better than the model with a random packing scheme on NER, which shows the necessity of considering the neighbor spans integrally.", "Our model also advances the T-Concat model on six NER benchmarks, which demonstrates the effectiveness of the features obtained via span markers.", "Moreover, compared with the previous state-of-the-art RE model, our model gains a 4.1%-4.3% strict relation F1 improvement with higher speed on ACE04 and ACE05 and also achieves better performance on SciERC, which shows the importance of considering the interrelation between the subject-oriented span pairs.", "In recent years, span representation has attracted great attention, as it facilitates various NLP applications, such as named entity recognition (Ouchi et al., 2020), relation and event extraction (Luan et al., 2019), coreference resolution (Lee et al., 2017), semantic role labeling (He et al., 2018) and question answering (Lee et al., 2016).", "Existing methods to enhance span representation can be roughly grouped into three categories.",
"Span Pre-training: The span pre-training approaches enhance the span representation for PLMs via span-level pre-training tasks.", "Sun et al. (2019); Lewis et al. (2020); Raffel et al. (2020) mask and learn to recover random contiguous spans rather than random tokens.", "Joshi et al. (2020) further learn to store span information in the span's boundary tokens for downstream tasks.", "Knowledge Infusion: This series of methods focuses on infusing external knowledge into the models.", "Zhang et al. (2019); Peters et al. (2019); Wang et al. (2021a) learn to use external entity embeddings from a knowledge graph or a synonym net to acquire knowledge.", "Soares et al. (2019); Xiong et al. (2020); Wang et al. (2021b); Yamada et al. (2020) conduct specific entity-related pre-training to incorporate knowledge into their models with the help of Wikipedia anchor texts.", "Structural Extension: The structural extension methods add reasoning modules to existing models, such as biaffine attention (Wang et al., 2021d), graph propagation (Wadden et al., 2019) and memory flow (Shen et al., 2021).", "With the support of modern pre-trained encoders (e.g. BERT), a simple model with solid markers can achieve state-of-the-art results in RE (Zhou and Chen, 2021; Zhong and Chen, 2021).", "However, it is hard to specify the solid markers of a span pair when more than two pairs of markers appear in the sequence.", "Hence, previous work (Zhong and Chen, 2021) has to process span pairs independently, which is time-consuming and ignores the interrelation between the span pairs.", "In this work, we introduce the neighborhood-oriented and the subject-oriented packing strategies to take advantage of the levitated markers and provide an integral modeling of spans (pairs).", "To the best of our knowledge, we are the first to apply levitated markers to NER.", "On RE, the closest work to ours is PURE (Approx.) (Zhong and Chen, 2021), which independently encodes each span pair with two pairs of levitated markers in the training phase and batches multiple pairs of markers to accelerate the inference process.",
"Compared to their work, our model adopts a fusion subject-oriented packing scheme and thus handles multiple span pairs well in both the training and inference processes.", "We detail the differences between our work and PURE in Section 4.4.2 and explain why our model performs better.", "In this section, we first introduce the architecture of the levitated marker.", "Then, we present how we pack the levitated markers to obtain the span representation and the span pair representation.", "The levitated marker is used as an approximation of solid markers, which allows models to classify multiple pairs of entities simultaneously to accelerate the inference process (Zhong and Chen, 2021).", "A pair of levitated markers, associated with a span, consists of a start token marker and an end token marker.", "These two markers share the same position embeddings with the start and end tokens of the corresponding span, while the position ids of the original text tokens are kept unchanged.", "In order to specify multiple pairs of levitated markers in parallel, a directional attention mask matrix is applied.", "Specifically, each levitated marker is visible to its partner marker within its pair in the attention mask matrix, but not to the text tokens and other levitated markers.", "In the meantime, the levitated markers are able to attend to the text tokens to aggregate information for their associated spans.", "Benefiting from the parallelism of levitated markers, we can flexibly pack a series of related spans into one training instance.", "In practice, we append multiple associated levitated markers to an input sequence to conduct a comprehensive modeling of each span.", "However, even though the entity length is restricted, some span classification tasks still contain a large number of candidate spans.", "Hence, we have to group the markers into several batches to equip the model with higher speed and feasibility in practice.", "To better model the connection between spans with the same start tokens, we adopt a neighborhood-oriented packing scheme.", "As shown in Figure 2, we first sort the pairs of levitated markers by taking the position of the start marker as the first keyword and the position of the end marker as the second keyword.", "After that, we split them into groups of size up to K and thus gather adjacent spans into the same group.", "We pack each group of markers and process the groups separately in multiple runs.", "Formally, given a sequence of $N$ text tokens $X = \{x_1, \ldots, x_N\}$ and a maximum span length $L$, we define the candidate span set as $S(X) = \{(1,1), \ldots, (1,L), \ldots, (N,N)\}$.", "We first divide $S(X)$ into multiple groups of size up to $K$, in order.", "For example, we cluster the first $K$ spans, $\{(1,1), (1,2), \ldots, (\lceil K/L \rceil, K - \lfloor (K-1)/L \rfloor L)\}$, into a group $S_1$.", "We associate a pair of levitated markers with each span in $S_1$ (the grouping procedure is sketched in code below).",
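The neighborhood-oriented grouping reduces to a sort followed by a chunking step. The following is a minimal Python sketch (our own illustration, not the released PL-Marker code); the toy token count, the group size and the maximum span length are assumptions.

def enumerate_spans(n_tokens, max_len):
    # All candidate spans (start, end) of length at most max_len.
    return [(i, j) for i in range(n_tokens)
            for j in range(i, min(i + max_len, n_tokens))]

def neighborhood_pack(spans, group_size):
    # Sort with the start position as first keyword and the end position as
    # second keyword, so spans sharing a start token become adjacent, then
    # chunk into groups of up to group_size spans.
    ordered = sorted(spans, key=lambda s: (s[0], s[1]))
    return [ordered[i:i + group_size]
            for i in range(0, len(ordered), group_size)]

spans = enumerate_spans(n_tokens=100, max_len=8)
groups = neighborhood_pack(spans, group_size=256)
# Each group is encoded in its own run: one pair of levitated markers is
# appended to the input sequence for every span in the group.
print(len(spans), len(groups), len(groups[0]))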
"Then, we provide the combined sequence of the text tokens and the inserted levitated markers to the PLM (e.g. BERT) to obtain the contextualized representations of the start token markers, $H^{(s)} = \{h^{(s)}_i\}$, and of the end token markers, $H^{(e)} = \{h^{(e)}_i\}$.", "Here, $h^{(s)}_a$ and $h^{(e)}_b$ are associated with the span $s_i = (a, b)$, for which we obtain the span representation $\phi(s_i) = [h^{(s)}_a; h^{(e)}_b]$ (1), where $[A; B]$ denotes the concatenation operation.", "[Figure 2: An overview of our neighborhood-oriented packing and subject-oriented packing strategies.]", "For instance, we apply the levitated markers to a typical overlapping span classification task, NER, which aims to assign an entity type or a non-entity type to each possible span in a sentence.", "We obtain the span representation from the PLM via the packed levitated markers and then combine the features of PL-Marker and T-Concat to better predict the entity type of the candidate span.", "To obtain a span pair representation, a feasible method is to adopt levitated markers to emphasize a series of subject and object spans simultaneously.", "Commonly, each pair of levitated markers is tied by directional attention.", "But if we continue to apply directional attention to bind two pairs of markers, the levitated marker will not be able to identify its partner marker of the same span.", "Hence, as shown in Figure 2, our span pair model adopts a fusion subject-oriented packing scheme to offer an integral modeling of the same-subject spans.", "Formally, given an input sequence $X$, a subject span $s_i = (a, b)$, and its candidate object spans $(c_1, d_1), (c_2, d_2), \ldots, (c_m, d_m)$, we insert a pair of solid markers [S] and [/S] before and after the subject span.", "Then, we apply levitated markers [O] and [/O] to all candidate object spans, and pack them into one instance.",
"Let $X'$ denote this modified sequence with inserted markers: $X' = \ldots, [S], x_a, \ldots, x_b, [/S], \ldots, x_{c_1}/[O1], \ldots, x_{d_1}/[/O1], \ldots, x_{c_2}/[O2], \ldots, x_{d_2}/[/O2], \ldots$, where tokens joined by '/' share the same position embedding.", "We apply a pre-trained encoder to $X'$ and finally obtain the span pair representation for $s_i = (a, b)$ and $s_j = (c, d)$: $\phi(s_i, s_j) = [h_{a-1}; h_{b+1}; h^{(s)}_c; h^{(e)}_d]$ (2), where $[\,;\,]$ denotes the concatenation operation (a code sketch of this feature construction follows below).", "$h_{a-1}$ and $h_{b+1}$ denote the contextualized representations of the inserted solid markers for $s_i$; $h^{(s)}_c$ and $h^{(e)}_d$ are the contextualized representations of the inserted levitated markers for $s_j$.", "Compared to the method that applies two pairs of solid markers to the subject and object respectively (Zhong and Chen, 2021), our fusion marker scheme replaces the solid markers with levitated markers for the object span, which would impair the emphasis on the object span to some extent.", "To provide the supplemental information, we introduce an inverse relation from the object to the subject for a bidirectional prediction (Wu et al., 2020).", "For instance, we evaluate our model on a typical span pair classification task, end-to-end RE, which concentrates on identifying whether all span pairs are related and their relation types.", "Following Zhong and Chen (2021), we first use an NER model to filter candidate entity spans, and then acquire the span pair representations of the filtered entity span pairs to predict the relations between them.", "Moreover, to build the connection between entity types and relation types, we add an auxiliary loss for predicting the type of the object entity (Zhou and Chen, 2021; Han et al., 2021).", "Dominated by the large feed-forward network, the computation of a PLM rises almost linearly with sequence length while sequences are short (Dai et al., 2020; Ye et al., 2021).", "As the sequence length continues to grow, the computation gradually dilates quadratically due to the self-attention module (Vaswani et al., 2017).", "Obviously, the insertion of levitated markers extends the length of the input sequence.", "For the span pair classification tasks, the number of candidate spans is relatively small; thus the increased computation is limited.", "For the span classification tasks, we group the markers into several batches, which can keep the sequence length within the interval in which the complexity increases nearly linearly.", "For NER, we enumerate candidate spans in a short sentence and then use its context words to expand the sentence to 512 tokens; in practice, the number of candidate spans in a sentence is usually smaller than the context length.", "Hence, with a small number of packing groups, the complexity of PL-Marker remains near-linear relative to that of previous models.", "Moreover, to further alleviate the inference cost, we adopt PL-Marker as a post-processing module of a two-stage model, in which it is used to identify entities from a small number of candidate entities proposed by a simpler and faster model.", "For the NER task, we conduct experiments on both flat and nested benchmarks.", "Firstly, for flat NER, we adopt CoNLL03 (Sang and Meulder, 2003), OntoNotes 5.0 (Pradhan et al., 2013) and Few-NERD (Ding et al., 2021).", "Then, for nested NER, we use ACE04 (Doddington et al., 2004), ACE05 (Walker et al., 2006) and SciERC (Luan et al., 2018).", "The three nested NER datasets are also used to evaluate the end-to-end RE.",
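The span pair feature of Eq. (2) concatenates the solid-marker states around the subject with the levitated-marker states of each packed object. Below is a simplified PyTorch-style sketch (ours, not the released code); it assumes the marker positions in the encoded sequence are already known and omits the shared position ids and the directional attention mask.

import torch

def span_pair_features(hidden, subj, objs):
    """hidden: (seq_len, d) encoder output over the marker-augmented sequence.
    subj: (a, b); solid markers are assumed to sit at positions a-1 and b+1.
    objs: (start_marker_pos, end_marker_pos) of each object's levitated pair."""
    h_subj = torch.cat([hidden[subj[0] - 1], hidden[subj[1] + 1]])  # [h_{a-1}; h_{b+1}]
    feats = []
    for ms, me in objs:
        # [h_{a-1}; h_{b+1}; h^(s)_c; h^(e)_d] for one subject-object pair
        feats.append(torch.cat([h_subj, hidden[ms], hidden[me]]))
    return torch.stack(feats)  # (num_objects, 4d): input to the relation classifier

hidden = torch.randn(50, 768)
print(span_pair_features(hidden, subj=(10, 12), objs=[(40, 41), (42, 43)]).shape)
# torch.Size([2, 3072])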
"We follow Luan et al. (2019) to split ACE04 into 5 folds and split ACE05 into train, development, and test sets.", "Table 1: The statistics of the adopted datasets (Dataset / #Sents / #Ents (#Types) / #Rels (#Types)): CoNLL03 — 22.1k, 35.1k (4), –; OntoNotes 5.0 — 103.8k, 161.8k (18), –; Few-NERD — 188.2k, 491.7k (66), –; ACE05 — 14.5k, 38.3k (7), 7.1k (6); ACE04 — 8.7k, 22.7k (7), 4.1k (6); SciERC — 2.7k, 8.1k (6), 4.6k (7).", "For the other datasets, we adopt the official splits.", "Table 1 shows the statistics of each dataset.", "For the NER task, we follow a span-level evaluation setting, where the entity boundary and the entity type are both required to be correctly predicted.", "For the end-to-end RE, we report two evaluation metrics: (1) Boundaries evaluation (Rel) requires the model to correctly predict the boundaries of the subject entity and the object entity, and the entity relation; (2) Strict evaluation (Rel+) further requires the model to predict the entity types on top of the requirements of the boundary prediction.", "Moreover, following Wang et al. (2021d), we regard each symmetric relational instance as two directed relational instances.", "We adopt the bert-base-uncased (Devlin et al., 2019) and albert-xxlarge-v1 (Lan et al., 2020) encoders for ACE04 and ACE05.", "For SciERC, we use the in-domain scibert-scivocab-uncased (Beltagy et al., 2019) encoder.", "For flat NER, we adopt the roberta-large encoder.", "We also leverage cross-sentence information (Luan et al., 2019; Luoma and Pyysalo, 2020), which extends each sentence by its context and ensures that the original sentence is located in the middle of the expanded sentence as much as possible.", "As discussed in Section 4.4.1, for the packing scheme on NER, we set the group size to 256 to improve efficiency.", "We run all experiments with 5 different seeds and report the average score.", "See the appendix for the standard deviations and the detailed training configuration.", "Our packing scheme allows the model to apply levitated markers to process a massive number of spans, and to the best of our knowledge, we are the first to apply levitated markers to the NER task.",
"We compare our neighborhood-oriented packing scheme with the Random Packing scheme, which randomly packs the candidate spans into groups.", "Table 2: Micro F1 on the test sets for flat NER (CoNLL03 / OntoNotes 5.0 / Few-NERD): Ma and Hovy (2016) — 91.0 / 86.3 / –; Devlin et al. (2019) — 92.8 / 89.2 / 68.9; Li et al. (2020) — 93.0 / 91.1 / –; Yu et al. (2020) — 93.5 / 91.3 / –; Yan et al. (2021) — 93.2 / 90.4 / –; SeqTagger (our impl.) — 93.6 / 91.2 / 69.0; T-Concat (our impl.) — 93.0 / 91.7 / 70.6; Random Packing — 93.9 / 91.8 / 61.5; PL-Marker (our model) — 94.0 / 91.9 / 70.9.", "We adopt two common NER models: (1) SeqTagger (Devlin et al., 2019) regards NER as a sequence tagging task and applies a token-level classifier to distinguish the IOB2 tags for each word (Sang and Veenstra, 1999).", "(2) T-Concat (Jiang et al., 2020; Zhong and Chen, 2021) assigns an entity type or a non-entity type to each span based on its T-Concat span representation.", "Note that solid markers cannot deal with overlapping spans simultaneously; thus it is too inefficient to apply solid markers independently for the NER task.", "We show the flat NER results in Table 2 and the nested NER results in the Ent column of Table 3, where PURE (Zhong and Chen, 2021) applies the T-Concat feature in its NER module.", "We summarize several observations from the experimental results: (1) The model with our neighborhood-oriented packing strategy outperforms the model with the random packing strategy on all three flat NER datasets, especially obtaining a 9.4% improvement on Few-NERD.", "Few-NERD contains longer sentences and thus includes 325 candidate spans on average, while CoNLL03 and OntoNotes 5.0 only contain 90 and 174 respectively.", "This shows that the neighborhood-oriented packing strategy can handle datasets with longer sentences and more groups of markers well, better modeling the interrelation among neighboring spans.", "(2) With the same large pre-trained encoder, PL-Marker achieves an absolute F1 improvement of +0.1%-1.1% over T-Concat on all six NER benchmarks, which shows the advantage of levitated markers in aggregating span-wise representations for entity type prediction.", "(3) PL-Marker outperforms SeqTagger by an absolute F1 of +0.4%, +0.7% and +1.9% on CoNLL03, OntoNotes 5.0 and Few-NERD respectively, which contain 4, 18 and 66 entity types respectively.", "Such improvements prove the effectiveness of PL-Marker in handling the diverse interrelations between entities of diverse types.", "For the end-to-end RE, we compare our model, PL-Marker, with a series of state-of-the-art models.", "Here, we introduce two of the most representative works with T-Concat and solid-marker span representations: (1) DyGIE++ (Wadden et al., 2019) first acquires the T-Concat span representation, and then iteratively propagates coreference and relation type confidences through a span graph to refine the representation; (2) PURE (Zhong and Chen, 2021) adopts independent NER and RE models, where the RE model processes each possible entity pair in one pass.", "In their work, PURE (Full) adopts two pairs of solid markers to emphasize a span pair, and PURE (Approx.) employs two pairs of levitated markers to underline the span pair.", "As shown in Table 3, with the same BERT-base encoder, our approach outperforms previous methods by a strict F1 of +1.7% on ACE05 and +2.5% on ACE04.", "With the SciBERT encoder, our approach also achieves the best performance on SciERC.", "Using a larger encoder, ALBERT-xxlarge, both our NER and RE models are further improved.", "Compared to the previous state-of-the-art model, PURE (Full), our model gains a substantial +4.1% and +4.3% strict relation F1 improvement on ACE05 and ACE04 respectively.", "Such improvements over PURE indicate the effectiveness of modeling the interrelation between the same-subject or the same-object entity pairs in the training process.",
"In this section, we compare the models' inference speed on an A100 GPU with a batch size of 32.", "We use the BASE-size encoders for ACE05 and SciERC in the experiments and the LARGE-size encoder for the flat NER models.", "We evaluate PL-Marker with different group sizes K on CoNLL03 and Few-NERD.", "We also evaluate a cascade Two-stage model, which uses a fast BASE-size T-Concat model to filter candidate spans for our model.", "Table 3: Overall entity and relation F1 scores on the test sets of ACE04, ACE05 and SciERC (Ent / Rel / Rel+ per dataset; '–' = not reported; Rep types as given: T, S, L, S&L). With BASE-size encoders: Li and Ji (2014) — ACE05: 80.8 / 52.1 / 49.5; ACE04: 79.7 / 48.3 / 45.3. SPtree (Miwa and Bansal, 2016; LSTM, T) — ACE05: 83.4 / – / 55.6; ACE04: 81.8 / – / 48.4. DYGIE (Luan et al., 2019; ELMo, T) — ACE05: 88.4 / 63.2 / –; ACE04: 87.4 / 59.7 / –; SciERC: 65.2 / 41.6 / –. Multi-turn QA (Li et al., 2019; BERT-large) — ACE05: 84.8 / – / 60.2; ACE04: 83.6 / – / 49.4. OneIE (Lin et al., 2020; T) — ACE05: 88.8 / 67.5 / –. DYGIE++ (Wadden et al., 2019; BERT-base/SciBERT, T) — ACE05: 88.6 / 63.4 / –. TriMF (Shen et al., 2021; T) — ACE05: 87.6 / 66.5 / 62.8; SciERC: 70.2 / 52.4 / –. UniRE (Wang et al., 2021d; T) — ACE05: 88.8 / – / 64.3; ACE04: 87.7 / – / 60.0; SciERC: 68.4 / – / 36.9. PURE-F (Zhong and Chen, 2021; S) — ACE05: 90.1 / 67.7 / 64.8; ACE04: 89.2 / 63.9 / 60.1; SciERC: 68.9 / 50.1 / 36.8. PURE-A (Zhong and Chen, 2021; L) — ACE05: – / 66.5 / –; SciERC: – / 48.1 / –. PL-Marker (our model; S&L) — ACE05: 89.8 / 69.0 / 66.5; ACE04: 88.8 / 66.7 / 62.6; SciERC: 69.9 / 53.2 / 41.6. With ALBERT-xxlarge: TableSeq (Wang and Lu, 2020; T) — ACE05: 89.5 / 67.6 / 64.3; ACE04: 88.6 / 63.3 / 59.6. UniRE (T) — ACE05: 90.2 / – / 66.0; ACE04: 89.5 / – / 63.0. PURE-F (S) — ACE05: 90.9 / 69.4 / 67.0; ACE04: 90.3 / 66.1 / 62.2. PL-Marker (S&L) — ACE05: 91.1 / 73.0 / 71.1; ACE04: 90.4 / 69.7 / 66.5.", "As shown in Table 4, PL-Marker achieves a 0.4 F1 improvement on CoNLL03 but sacrifices 60% speed compared to the SeqTagger model.", "We also observe that our proposed Two-stage model achieves similar performance to PL-Marker with a 3.1x speedup on Few-NERD, which shows it is more efficient to use PL-Marker as a post-processing module to refine the coarse predictions of a simpler model.", "In addition, when the group size grows to 512, PL-Marker slows down due to the increased complexity of the Transformer.", "Hence, we choose a group size of 256 in practice.", "We apply the subject-oriented and the object-oriented packing strategies to levitated markers for RE.", "Here, we compare our model with two other marker-based models.", "Firstly, PURE (Full) (Zhong and Chen, 2021) applies solid markers to process each entity pair independently.", "Secondly, PURE (Approx.) packs the levitated markers of all entity pairs into one instance for batch computation.", "Since the performance and the running time of the above methods rely on the quality and the number of predicted entities, for a fair comparison, we adopt the same entity input from the entity model of PURE for all the RE models.", "Table 5 shows the relation F1 scores and the inference speed of the above three methods.", "On both datasets, our RE model, PL-Marker, achieves the best performance, and PURE (Approx.) has the highest efficiency in the inference process.",
"[Table 6: Case study. Named Entity Recognition example text: 'This is the Cross Strait program on CCTV International Channel.']", "Compared to PURE (Full), our model obtains a 2.2x-2.8x speedup and better performance on ACE05 and SciERC.", "Compared to PURE (Approx.), our model achieves a 2.8%-4.0% relation F1 (boundaries) improvement on ACE05 and SciERC, which again demonstrates the effectiveness of our fusion markers and packing strategy.", "Overall, our model, with a novel subject-oriented packing strategy for markers, has proven effective in practice, with satisfactory accuracy and affordable cost.", "We show several cases to compare our span model with T-Concat and to compare our span pair model with PURE (Full).", "As shown in Table 6, our span model can collect contextual information, such as Taiwan and mainland, for the underlined span, Cross Strait, assisting in predicting its type as organization rather than work of art.", "Our span pair model learns to integrally consider the interrelation between the same-object relational facts in the training phase, and thus successfully obtains the fact that both Liana and her parents are located in Manhattan.", "In this section, we conduct ablation studies to investigate the contribution of different components to our RE model, where we apply the BASE-size encoders in the experiments.", "Two Pairs of Levitated Markers: We evaluate the w/o solid marker baseline, which applies two pairs of levitated markers to the subject and object respectively and packs all the span pairs into one instance.", "Table 7: Ablation results (relation F1 with gold / end-to-end entities). PL-Marker — ACE05: 74.0 / 69.0; SciERC: 72.5 / 53.2.", "As shown in Table 7, compared to PL-Marker, the model without solid markers drops a substantial 2.0%-3.8% F1 on ACE05 and SciERC when the gold entities are given.", "This result demonstrates that it is sub-optimal to continue to apply directional attention to bind two pairs of levitated markers, since a pair of levitated markers is already tied by directional attention.", "Inverse Relation: We establish an inverse relation for each asymmetric relation for a bidirectional prediction.", "We evaluate the model without inverse relations, which replaces the constructed inverse relation with a non-relation type and adopts a unidirectional prediction.", "As shown in Table 7, the model without inverse relations drops 0.9%-1.1% F1 on both datasets with the gold entities given, indicating the significance of modeling the information from the object entity to the subject entity in our asymmetric framework.", "Entity Type: We add an auxiliary entity type loss to the RE model to introduce the entity type information.", "As shown in Table 7, when the gold entities are given, the model without the entity type loss drops 0.4%-0.7% F1 on both datasets, which shows the importance of entity type information in RE.", "Moreover, we try to apply the type markers (Zhong and Chen, 2021), such as [Subject:PER] and [Object:GPE], to inject entity type information predicted by the NER model into the RE model.", "We find that the RE model with type markers performs slightly worse than the model with the entity type loss in the end-to-end setting.", "This shows that the entity type prediction errors from the NER model may be propagated to the RE model if we adopt the type markers as input features.", "Finally, we discuss when to use the entity type prediction from the RE model to refine the NER prediction in the Appendix; based on their dataset statistics, we refine entity types for ACE04 and ACE05, but not for SciERC.",
statistic.", "In this work, we present a novel packed levitated markers, with a neighborhood-oriented packing strategy and a subject-oriented packing strategy, to obtain the span (pair) representation.", "Considering the interrelation between spans and span pairs, our model achieves the state-of-the-art F1 scores and a promising efficiency on both NER and RE tasks across six standard benchmarks.", "In future, we will further investigate how to generalize the marker-based span representation to more NLP tasks.", "This work is supported by the National Key R&D Program of China (No. 2020AAA0106502), Institute Guo Qiang at Tsinghua University, and International Innovation Center of Tsinghua University, Shanghai, China.", "We thank Chaojun Xiao and other members of THUNLP for their helpful discussion and feedback.", "Deming Ye conducted the experiments.", "Deming Ye, Yankai Lin, Xiaojun Xie and Peng Li wrote the paper.", "Maosong Sun provided valuable advices to the research." ]
[ "abstain", "abstain", "objective", "objective", "method", "result", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "method", "method", "objective", "method", "abstain", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "method", "method", "method", "method", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "other", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "objective", "result", "objective", "other", "other", "other", "other", "other" ]
[ "We present AGGGEN (pronounced again' ), a data-to-text model which re-introduces two explicit sentence planning stages into neural data-to-text systems: input ordering and input aggregation.", "In contrast to previous work using sentence planning, our model is still end-to-end: AGGGEN performs sentence planning at the same time as generating text by learning latent alignments (via semantic facts) between input representation and target text.", "Experiments on the WebNLG and E2E challenge data show that by using fact-based alignments our approach is more interpretable, expressive, robust to noise, and easier to control, while retaining the advantages of end-to-end systems in terms of fluency.", "Our code is available at https://github.com/XinnuoXu/ AggGen .", "Recent neural data-to-text systems generate text end-to-end (E2E) by learning an implicit mapping between input representations (e.g. RDF triples) and target texts.", "While this can lead to increased fluency, E2E methods often produce repetitions, hallucination and/or omission of important content for data-to-text (Du sek et al., 2020) as well as other natural language generation (NLG) tasks (Cao et al., 2018; Rohrbach et al., 2018).", "Traditional NLG systems, on the other hand, tightly control which content gets generated, as well as its ordering and aggregation.", "This process is called sentence planning (Reiter and Dale, 2000; Duboue and McKeown, 2001, 2002; Konstas and Lapata, 2013; Gatt and Krahmer, 2018).", "Figure 1 shows two different ways to arrange and combine the representations in the input, resulting in widely different generated target texts.", "In this work, we combine advances of both paradigms into a single system by reintroducing William Anders dateOfRetirement 1969-09-01 Apollo 8 commander Frank Borman William Anders member_of Apollo 8 Apollo 8 backup_pilot Buzz Aldrin Apollo 8 operator NASA Input DBpedia Triples William Anders served as a crew member on Apollo 8 operated by nasa.", "The backup pilot was Buzz Aldrin.", "Frank Borman was also an Apollo 8 commander.", "William Anders retired on September 1st, 1969.", "William Anders retired on 1969-09-01.", "He was a crew member of nasa 's Apollo 8.", "Frank Borman was also a commander with Buzz Aldrin as the backup pilot.", "operator backup_pilot commander dateOfRetirement dateOfRetirement operator backup_pilot commander dateOfRetirement commander member_of backup_pilot operator member_of member_of William Anders, who retired on September 1st, 1969, was a crew member on Apollo 8 and served under commander Frank Borman.", "Apollo 8 was operated by NASA with Buzz Aldrin as backup pilot.", "Human-authored Text Sentence Plan 1: Generated Target Text 1: Sentence Plan 2: Generated Target Text 2: Figure 1: Two different sentence plans with their corresponding generated target texts from our model on the WebNLG dataset.", "sentence planning into neural architectures.", "We call our system AGGGEN (pronounced again' ).", "AGGGEN jointly learns to generate and plan at the same time.", "Crucially, our sentence plans are interpretable latent states using semantic facts 1 (ob-tained via Semantic Role Labelling (SRL)) that align the target text with parts of the input representation.", "In contrast, the plan used in other neural plan-based approaches is usually limited in terms of its interpretability, control, and expressivity.", "For example, in (Moryossef et al., 2019b; Zhao et al., 2020) the sentence plan is created independently, incurring error propagation; Wiseman et 
"(Footnote 1: Each fact roughly captures who did what to whom.)", "We also infer the alignment between the input representation and the target text, using a separate inference algorithm based on dynamic programming.", "Crucially, this enables us to directly evaluate and inspect the model's planning and alignment performance by comparing to manually aligned reference texts.", "We demonstrate this for two data-to-text generation tasks: the E2E NLG (Novikova et al., 2017) and the WebNLG Challenge (Gardent et al., 2017a).", "We work with a triple-based semantic representation, where a triple consists of a subject, a predicate and an object.", "For instance, in the last triple in Figure 1, Apollo 8, operator and NASA are the subject, predicate and object, respectively.", "Our contributions are as follows: We present a novel interpretable architecture for jointly learning to plan and generate, based on modelling ordering and aggregation by aligning facts in the target text to input representations with an HMM and a Transformer encoder-decoder.", "We show that our method generates output with higher factual correctness than vanilla encoder-decoder models without semantic information.", "We also introduce an intrinsic evaluation framework for inspecting sentence planning, with a rigorous human evaluation procedure to assess factual correctness in terms of alignment, aggregation and ordering performance.", "Factual correctness is one of the main issues for data-to-text generation: how to generate text according to the facts specified in the input triples without adding, deleting or replacing information?", "The prevailing sequence-to-sequence (seq2seq) architectures typically address this issue via reranking (Wen et al., 2015a; Dusek and Jurcicek, 2016; Juraska et al., 2018) or sophisticated training techniques (Nie et al., 2019; Kedzie and McKeown, 2019; Qader et al., 2019).", "For applications where structured inputs are present, neural graph encoders (Marcheggiani and Perez-Beltrachini, 2018; Rao et al., 2019; Gao et al., 2020) or decoding of explicit graph references (Logan et al., 2019) are applied for higher accuracy.", "Recently, large-scale pretraining has achieved SoTA results on WebNLG by fine-tuning T5 (Kale and Rastogi, 2020).", "Several works aim to improve accuracy and controllability by dividing the end-to-end architecture into sentence planning and surface realisation.", "Castro Ferreira et al. (2019) feature a pipeline with multiple planning stages, and Elder et al. (2019) introduce a symbolic intermediate representation in multi-stage neural generation.", "Moryossef et al. (2019b,a) use pattern matching to approximate the required planning annotation (entity mentions, their order and sentence splits).", "Zhao et al. (2020) use a planning stage in a graph-based model: the graph is first reordered into a plan; the decoder conditions on both the input graph encoder and the linearized plan.", "Similarly, Fan et al. (2019) use a pipeline approach for story generation via SRL-based sketches.", "However, all of these pipeline-based approaches either require additional manual annotation or depend on a parser for the intermediate steps.", "Other works, in contrast, learn planning and realisation jointly.", "For example, Su et al. (2018) introduce a hierarchical decoding model generating different parts of speech at different levels, while filling in slots between previously generated tokens.",
"Puduppully et al. (2019) include a jointly trained content selection and ordering module that is applied before the main text generation step. The model is trained by maximizing the log-likelihood of the gold content plan and the gold output text.", "Li and Rush (2020) utilize posterior regularization in a structured variational framework to induce which input items are being described by each token of the generated text.", "Wiseman et al. (2018) aim for better semantic control by using a Hidden Semi-Markov Model (HSMM) for splitting target sentences into short phrases corresponding to templates, which are then concatenated to produce the outputs.", "However, it trades fluency for controllability.", "Similarly, Shen et al. (2020) explicitly segment target text into fragment units, while aligning them with their corresponding input.", "Shao et al. (2019) use a Hierarchical Variational Model to aggregate input items into a sequence of local latent variables and realize sentences conditioned on the aggregations.", "The aggregation strategy is controlled by sampling from a global latent variable.", "In contrast to these previous works, we achieve input ordering and aggregation, input-output alignment and text generation control via interpretable states, while preserving fluency.", "We jointly learn to generate and plan by aligning facts in the target text with parts of the input representation.", "We model this alignment using a Hidden Markov Model (HMM) that follows a hierarchical structure comprising two sets of latent states, corresponding to ordering and aggregation.", "The model is trained end-to-end and all intermediate steps are learned in a unified framework.", "Let $x = \{x_1, x_2, \ldots, x_J\}$ be a collection of $J$ input triples and $y$ their natural language description (human-written target text).", "We first segment $y$ into a sequence of $T$ facts $y_{1:T} = y_1, y_2, \ldots, y_T$, where each fact roughly captures who did what to whom in one event.", "We follow the approach of Xu et al. (2020), where facts correspond to predicates and their arguments as identified by SRL (see Appendix B for more details).", "For example: William Anders, who retired in 1969, was a crew member on Apollo 8.", "Each fact $y_t$ consists of a sequence of tokens $y_t^1, y_t^2, \ldots, y_t^{N_t}$.", "Unlike the text itself, the planning information, i.e. input aggregation and ordering, is not directly observable due to the absence of labelled datasets.",
"AGGGEN therefore utilises an HMM probabilistic model, which assumes that there is an underlying hidden process that can be modeled by a first-order Markov chain.", "At each time step, a latent variable (in our case, input triples) is responsible for emitting an observed variable (in our case, a fact text segment).", "The HMM specifies a joint distribution over the observations and the latent variables.", "Here, a latent state $z_t$ emits a fact $y_t$, representing the group of input triples that is verbalized in $y_t$.", "We write the joint likelihood as: $p(z_{1:T}, y_{1:T} \mid x) = p(z_{1:T} \mid x)\, p(y_{1:T} \mid z_{1:T}, x) = \left[ p(z_1 \mid x) \prod_{t=2}^{T} p(z_t \mid z_{t-1}, x) \right] \left[ \prod_{t=1}^{T} p(y_t \mid z_t, x) \right]$.", "I.e., it is a product of the probabilities of each latent state transition (transition distribution) and the probability of the observations given their respective latent states (emission distribution).", "Latent State.", "A latent state $z_t$ represents the input triples that are verbalized in the observed fact $y_t$.", "It is not guaranteed that one fact always verbalizes only one triple (see the bottom example in Figure 1).", "Thus, we represent state $z_t$ as a sequence of latent variables $o_t^1, \ldots, o_t^{L_t}$, where $L_t$ is the number of triples verbalized in $y_t$.", "Figure 2 shows the structure of the model.", "Let $o_t^l \in Q = \{1, \ldots, K\}$ be the set of possible latent variables; then $K^{L_t}$ is the size of the search space for $z_t$.", "If $o_t^l$ maps to unique triples, the search space becomes intractable for a large value of $K$.", "To make the problem tractable, we decrease $K$ by representing triples by their predicate.", "$Q$ thus stands for the collection of all predicates appearing in the corpus.", "To reduce the search space for $z_t$ further, we limit $L_t \le L$, where $L = 3$.", "(Footnote 3: By aligning the triples to facts using a rule-based aligner (see Section 5), we found that the chance of aggregating more than three triples into a fact is under 0.01% in the training sets of both the WebNLG and E2E datasets.)", "Transition Distribution.", "The transition distribution between latent variables (T1 in Figure 2) is a $K \times K$ matrix of probabilities, where each row sums to 1.", "We define this matrix as $p(o_t^l \mid o_t^{(l-1)}, x) = \mathrm{softmax}(AB \odot M(q))$ (1), where $\odot$ denotes the Hadamard product (a code sketch of this masked transition matrix follows below).", "$A \in \mathbb{R}^{K \times m}$ and $B \in \mathbb{R}^{m \times K}$ are matrices of predicate embeddings with dimension $m$.", "$q = \{q_1, q_2, \ldots, q_J\}$ is the set of predicates of the input triples $x$, and each $q_j \in Q$ is the predicate of the triple $x_j$.", "$M(q)$ is a $K \times K$ masking matrix, where $M_{ij} = 1$ if $i \in q$ and $j \in q$, and otherwise $M_{ij} = 0$.", "We apply a row-wise softmax over the resulting matrix to obtain probabilities.",
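Eq. (1) can be pictured with a small PyTorch sketch (ours, not the released code). The text writes softmax(AB ⊙ M(q)); in this sketch we additionally set the masked-out entries to -inf before the row-wise softmax so that absent predicates receive exactly zero probability, which is one common reading of a masked softmax. The sizes are toy values.

import torch

K, m = 6, 4                    # |Q| predicates, embedding size (toy values)
A = torch.randn(K, m)          # A in R^{K x m}
B = torch.randn(m, K)          # B in R^{m x K}

def transition_matrix(q):
    """q: set of predicate ids appearing in the input triples."""
    M = torch.zeros(K, K)
    idx = torch.tensor(sorted(q))
    M[idx.unsqueeze(1), idx] = 1.0         # M_ij = 1 iff i in q and j in q
    scores = (A @ B) * M                   # Hadamard product AB ⊙ M(q)
    scores = scores.masked_fill(M == 0, float("-inf"))
    return torch.softmax(scores, dim=-1)   # row-wise softmax

P = transition_matrix({0, 2, 5})
print(P[0])  # probability mass only on predicates 0, 2 and 5
# Rows of predicates absent from q are never queried during planning.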
, o L t 1 ( t 1) | x (cid:17) p (cid:16) o 1 t | o L t 1 ( t 1) , x (cid:17) p (cid:16) o 0 t , . . . , o L t t | x (cid:17) , where o L t 1 ( t 1) denotes the last latent variable in latent state z t 1 , while o t 1 denotes the first latent variable (other than the start-state) in latent state z t .", "We use two sets of parameters { A in , B in } and { A out , B out } to describe the transition distribution between latent variables within and across latent states, respectively.", "Emission Distribution.", "The emission distribution p ( y t | z t , x ) (T4 in Figure", "2) describes the generation of fact y t conditioned on latent state z t and input triples x .", "We define the probability of generating a fact as the product over token-level probabilities, p ( y t | z t , x ) = p ( y 1 t | z t , x ) N t (cid:89) i =2 p ( y it | y 1:( i 1) t , z t , x ) .", "The first and last token of a fact are marked fact-start and fact-end tokens.", "We adopt Transformer (Vaswani et al., 2017) as the model's encoder and decoder.", "Each triple is linearized into a list of tokens following the order: subject, predicate, and object.", "In order to represent individual triples, we insert special [SEP] tokens at the end of each triple.", "A special [CLS] token is inserted before all input triples, representing the beginning of the entire input.", "An example where the encoder produces a contextual embedding for the tokens of two input triples is shown in Figure 6 in Appendix E. At time step t , the decoder generates fact y t token-by-token autoregressively, conditioned on both the contextually-encoded input and the latent state z t .", "To guarantee that the generation of y t conditions only on the input triples whose predicate is in z t , we mask out the contextual embeddings of tokens from other unrelated triples for the encoder-decoder attention in all Transformer layers.", "Autoregressive Decoding.", "Autoregressive Hidden Markov Model (AR-HMM) introduces extra links into HMM to capture long-term correlations between observed variables, i.e., output tokens.", "Following Wiseman et al. 
"Each token distribution depends on all previously generated tokens, i.e., we define the token-level probabilities as $p(y_t^i \mid y_{1:(t-1)}, y_t^{1:(i-1)}, z_t, x)$ instead of $p(y_t^i \mid y_t^{1:(i-1)}, z_t, x)$.", "During training, at each time step $t$, we teacher-force the generation of the fact $y_t$ by feeding the ground-truth history $y_{1:(t-1)}$ to the word-level Transformer decoder.", "However, since only $y_t$ depends on the current hidden state $z_t$, we only calculate the loss over $y_t$.", "We apply the backward algorithm (Rabiner, 1989) to learn the parameters introduced in Section 3.2, where we maximize $p(y \mid x)$, i.e., the marginal likelihood of the observed facts $y$ given input triples $x$, over all the latent states $z$ and $o$ on the entire dataset using dynamic programming.", "Following Murphy (2012), and given that the latent state at time $t$ is $C$, we define the conditional likelihood of future evidence as $\beta_t(C) \triangleq p(y_{t+1:T} \mid z_t = C, x)$ (2).", "Here, $C$ denotes a group of predicates that are associated with the emission of $y$.", "The size of $C$ ranges from 1 to $L$, and each component is from the collection of predicates $Q$ (see Section 3.2).", "Then, the backward recurrences are: $\beta_{t-1}(C') = p(y_{t:T} \mid z_{t-1} = C', x) = \sum_{C} \beta_t(C)\, p(y_t \mid z_t = C, x)\, p(z_t = C \mid z_{t-1} = C', x)$, with the base case $\beta_T(C) = 1$ (a code sketch of this recurrence follows below).", "In Equation 2, the size of the search space for $C$ is $\sum_{\ell=1}^{L} K^{\ell}$, where $K = |Q|$, i.e., the number of unique predicates appearing in the dataset.", "[Figure 3: The inference process (Section 3.4) — the predicates of the input triples are first ordered (Input Ordering), then grouped via binary wait/emit states (Input Aggregation), and finally verbalized (Text Generation); the first two steps constitute sentence planning.]", "The problem can still be intractable due to a high $K$, despite the simplifications explained in Section 3.2 (cf. predicates).",
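The backward recurrences above amount to a standard dynamic program over enumerated latent states. A small sketch (ours), with the emission and transition probabilities assumed to be given as lookup tables:

def backward(states, T, emit, trans):
    """emit[t][C] = p(y_t | z_t = C, x);
    trans[(C_prev, C)] = p(z_t = C | z_{t-1} = C_prev, x)."""
    beta = [{C: 0.0 for C in states} for _ in range(T + 1)]
    for C in states:
        beta[T][C] = 1.0                   # base case: beta_T(C) = 1
    for t in range(T, 0, -1):              # beta_{t-1}(C') = sum_C beta_t(C) * ...
        for C_prev in states:
            beta[t - 1][C_prev] = sum(
                beta[t][C] * emit[t][C] * trans[(C_prev, C)]
                for C in states
            )
    return beta

states = ["p1", "p2"]                      # toy predicate groups
emit = {1: {"p1": 0.6, "p2": 0.2}, 2: {"p1": 0.1, "p2": 0.7}}
trans = {(a, b): 0.5 for a in states for b in states}
print(backward(states, T=2, emit=emit, trans=trans)[0])

The marginal likelihood p(y | x) maximized during training would then combine the initial-state probabilities with beta_1 in the same fashion.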
"To tackle this issue and reduce the search space of $C$, we: (1) only explore permutations of $C$ that include predicates appearing in the input; and (2) introduce a heuristic based on the overlap of tokens between a triple and a fact: if a certain fact mentions most tokens appearing in the predicate and object of a triple, we hard-align it to this triple.", "(Footnote 4: This heuristic uses the rule-based aligner introduced in Section 5 with a threshold to rule out alignments in which the triples are not covered over 50%, since our model emphasises precision; thus, not all triples are aligned to a fact.)", "As a result, we discard the permutations that do not include the aligned predicates.", "After the joint learning process, the model is able to plan, i.e., order and aggregate the input triples in the most likely way, and then generate a text description following the planning results.", "Therefore, the joint prediction of $(y, z)$ is defined as: $(y, z) = \arg\max_{(y', z'),\, z' \in \{z^{(i)}\}} p(y', z' \mid x) = \arg\max_{(y', z'),\, z' \in \{z^{(i)}\}} p(y' \mid z', x)\, p(z' \mid x)$ (3).", "Here, $\{z^{(i)}\}$ denotes a set of planning results, $y$ is the text description, and $z$ is the planning result that $y$ is generated from.", "The entire inference process (see Figure 3) includes three steps: input ordering, input aggregation, and text generation.", "The first two steps are responsible for the generation of $\{z^{(i)}\}$ together with their probabilities $\{p(z^{(i)} \mid x)\}$, while the last step performs the text generation $p(y' \mid z^{(i)}, x)$.", "Planning: Input Ordering.", "The aim is to find the top-k most likely orderings of the predicates appearing in the input triples.", "In order to make the search process more efficient, we apply left-to-right beam-search based on the transition distribution introduced in Equation 1.", "(Footnote 5: We use beam search since Viterbi decoding aims at getting $z = \arg\max_z p(z_{1:T} \mid y_{1:T})$, but $y_{1:T}$ is not available at this stage.)", "Specifically, we use the transition distribution between latent variables within latent states, calculated with the predicate embeddings $A_{\mathrm{in}}$ and $B_{\mathrm{in}}$ (see Section 3.2).", "To guarantee that the generated sequence does not suffer from omission or duplication of predicates, we constantly update the masking matrix $M(q)$ by removing generated predicates from the set $q$.", "The planning process stops when $q$ is empty.", "Planning: Input Aggregation.", "The goal is to find the top-n most likely aggregations for each result of the Input Ordering step.", "To implement this process efficiently, we introduce a binary state for each predicate in the sequence: 0 indicates wait and 1 indicates emit (green squares in Figure 3).", "Then we list all possible combinations of the binary states for the Input Ordering result (a code sketch of this enumeration appears at the end of this record).", "(Footnote 6: We assume that each fact comprises at most $L$ triples; to match this assumption, we discard combinations containing a group that aggregates more than $L$ predicates.)", "For each combination, the aggregation algorithm proceeds left-to-right over the predicates and groups those labelled as emit with all immediately preceding predicates labelled as wait.", "In turn, we rank all the combinations with the transition distribution introduced in Equation 1.", "In contrast to the Input Ordering step, we use the transition distribution between latent variables across latent states, calculated with the predicate embeddings $A_{\mathrm{out}}$ and $B_{\mathrm{out}}$.", "That is, we do not take into account transitions between two consecutive predicates if they belong to the same group.", "Instead, we only consider consecutive predicates across two connected groups, i.e., the last predicate of the previous group with the first predicate of the following group.",
group.", "Text Generation.", "The final step generates a text description conditioned on the input triples and the planning result (obtained from the Input Aggregation step).", "We use beam search and the planning-conditioned generation process described in Section 3.2 (Emission Distribution).", "While the jointly learnt model is capable of fully automatic generation including the planning step (see Section 3.4), the discrete latent space allows direct access to manually control the planning component, which is useful in settings which require", "5 We use beam search since Viterbi decoding aims at getting z = arg max z ( z 1: T | y 1: T ) , but y 1: T is not available at this", "stage.", "6 We assume that each fact is comprised of L triples at most.", "To match this assumption, we discard combinations containing a group that aggregates more than L predicates.", "increased human supervision and is a unique feature of our architecture.", "The plans (latent variables) can be controlled in two ways: (1) hyperparameter.", "Our code offers a hyperparameter that can be tuned to control the level of aggregation: no aggregation, aggregate one, two triples, etc.", "The model can predict the most likely plan based on the input triples and the hyperparameter and generate a corresponding text description; (2) the model can directly adopt human-written plans, e.g. using the notation [eatType][near customer-rating] , which translates to: first generate eatType' as an independent fact and then aggregate the predicates near' and customer-rating' in the following fact and generate their joint description.", "We tested our approach on two widely used data-to-text tasks: the E2E NLG (Novikova et al., 2017) and WebNLG 7 (Gardent et al., 2017a).", "Compared to E2E, WebNLG is smaller, but contains more predicates and has a larger vocabulary.", "Statistics with examples can be found in Appendix C. We followed the original training-development-test data split for both datasets.", "Generation Evaluation focuses on evaluating the generated text with respect to its similarity to human-authored reference sentences.", "To compare to previous work, we adopt their associated metrics to evaluate each task.", "The E2E task is evaluated using BLEU (Papineni et al., 2002), NIST (Dod-dington, 2002), ROUGE-L (Lin, 2004), METEOR (Lavie and Agarwal, 2007), and CIDEr (Vedantam et al., 2015).", "WebNLG is evaluated in terms of BLEU, METEOR, and TER (Snover et al., 2006).", "Factual Correctness Evaluation tests if the generated text corresponds to the input triples (Wen et al., 2015b; Reed et al., 2018; Dusek et al., 2020).", "We evaluated on the E2E test set using automatic slot error rate (SER), 8 i.e., an estimation of the occurrence of the input attributes (predicates) and their values in the outputs, implemented by Dusek et al. 
"To evaluate the contributions of the planning component, we choose the vanilla Transformer model (Vaswani et al., 2017) as our baseline, trained on pairs of linearized input triples and target texts.", "In addition, we choose two types of previous works for comparison: (1) best-performing models reported on the WebNLG 2017 (seen) and E2E datasets, i.e. T5 (Kale and Rastogi, 2020), PlanEnc (Zhao et al., 2020), ADAPT (Gardent et al., 2017b), and TGen (Dusek and Jurcicek, 2016); (2) models with explicit planning, i.e. TILB-PIPE (Gardent et al., 2017b), NTemp+AR (Wiseman et al., 2018) and Shen et al. (2020).", "To make our HMM-based approach converge faster, we initialized its encoder and decoder with the baseline model parameters and fine-tuned them during training of the transition distributions.", "Encoder and decoder parameters were chosen based on validation results of the baseline model for each task (see Appendix D for details).", "Table 1 shows the generation results on the WebNLG seen category (Gardent et al., 2017b).", "Our model outperforms TILB-PIPE and Transformer, but performs worse than T5, PlanEnc and ADAPT.", "However, unlike these three models, our approach does not rely on large-scale pretraining, extra annotation, or heavy pre-processing using external resources.", "Table 2 shows the results when training and testing on the original E2E set.", "AGGGEN outperforms NTemp+AR and is comparable with Shen et al. (2020), but performs slightly worse than TGen and the Transformer baseline on the surface metrics.", "Table 2: Evaluation of Generation (middle) and Factual Correctness (right), trained/tested on the original E2E data (see Section 5 for metrics description).
Model | BLEU | NIST | MET | R-L | CIDEr | Add | Miss | Wrong | SER
TGen | 66.41 | 8.5565 | 45.07 | 69.17 | 2.2253 | 0.14 | 4.11 | 0.03 | 4.27
NTemp+AR | 59.80 | 7.5600 | 38.75 | 65.01 | 1.9500 | - | - | - | -
Shen et al. (2020) | 65.10 | - | 45.50 | 68.20 | 2.2410 | - | - | - | -
Transformer | 68.23 | 8.6765 | 44.31 | 69.88 | 2.2153 | 0.30 | 4.67 | 0.20 | 5.16
AGGGEN | 64.14 | 8.3509 | 45.13 | 66.62 | 2.1953 | 0.32 | 1.66 | 0.71 | 2.70
AGGGEN OD | 58.90 | 7.9100 | 43.21 | 62.12 | 1.9656 | 1.65 | 2.99 | 3.01 | 7.65
AGGGEN AG | 44.00 | 6.0890 | 43.75 | 58.24 | 0.8202 | 8.74 | 0.45 | 0.92 | 10.11",
"However, the results in Table 3 demonstrate that our model does outperform the baselines on most surface metrics if trained on the noisy original E2E training set and tested on clean E2E data (Dusek et al., 2019).", "This suggests that the previous performance drop was due to text references in the original dataset that did not verbalize all triples or added information not present in the triples, which may have down-voted the fact-correct generations.", "This also shows that AGGGEN produces correct outputs even when trained on a noisy dataset.", "Since constructing high-quality data-to-text training sets is expensive and labor-intensive, this robustness towards noise is important.", "The results for factual correctness evaluated using SER on the original E2E test set are shown in Table 2.", "The SER of AGGGEN is the best among all models.", "In particular, the high Miss scores for TGen and Transformer demonstrate the high chance of information omission in vanilla seq2seq-based generators.", "In contrast, AGGGEN shows much better coverage over the input triples while keeping a low level of hallucination (low Add and Wrong scores).", "(Footnote 9: We also trained and tested models on the cleaned E2E data; the full results, including the factual correctness evaluation, are shown in Table 8 in Appendix F, with a similar trend as in Table 3 compared to Transformer.)", "To explore the effect of input planning on text generation, we introduced two model variants: AGGGEN OD, where we replaced the Input Ordering with randomly shuffling the input triples before input aggregation, and AGGGEN AG, where the Input Ordering result was passed directly to the text generation and the text decoder generated a fact for each input triple individually.", "The generation evaluation results on both datasets (Table 1 and Table 2) show that AGGGEN outperforms AGGGEN OD and AGGGEN AG substantially, which means both Input Ordering and Input Aggregation are critical.", "Table 2 shows that the factual correctness results for the ablative variants are much worse than for the full AGGGEN, indicating that planning is essential for factual correctness.", "An exception is the lower number of missed slots in AGGGEN AG.", "This is expected since AGGGEN AG generates a textual fact for each triple individually, which decreases the possibility of omissions at the cost of much lower fluency.", "This strategy also leads to a steep increase in added information.", "Additionally, AGGGEN AG performs even worse on the E2E dataset than on the WebNLG set.", "This result is also expected, since input aggregation is more pronounced in the E2E dataset with a higher number of facts and input triples per sentence (cf. Appendix C).",
Appendix C).", "We manually examined a sample of 100 outputs (50 from each dataset) with respect to their factual correctness and fluency.", "For factual correctness, we follow the definition of SER and check whether there are hallucinations, substitutions or omissions in generated texts.", "For fluency, we check whether the generated texts suffer from grammar mistakes, redundancy, or contain unfinished sentences.", "Fig-the cricketers is a chinese restaurant near all bar one in the city centre .", "it is children friendly and has an average customer rating .", "william anders birthplace british hong kong apollo 8 backup_pilot buzz aldrin apollo 8 crewmembers frank borman apollo 8 operator nasa william anders was born in british hong kong and served as a crew member on apollo 8.", "frank borman was a crewman aboard the nasa operated apollo 8 mission.", "the backup pilot was buzz aldrin.", "william anders retired on september 1st , 1969 .", "william anders (born in british hong kong) was a crew member of nasa's apol 8 alongside frank borman.", "william anders retired on september 1st, 1969 .", "[birthPlace] [crew_member] [operator crewMembers] [backup_pilot] [Retirement] [eatType priceRange] [food customerrating] [familyFriendly area near] the cricketers area city centre the cricketers customerrating average the cricketers eattype restaurant the cricketers familyfriendly yes the cricketers food chinese the cricketers near all bar one the cricketers pricerange high I npu t s T r a n s A gg G e n the cricketers is a chinese restaurant with a high price range.", "it has an average customer rating and is children friendly near all bar one in the city centre.", "william anders retirement 1969-09-01 william anders crew_member apollo 8 I npu t s T r a n s A gg G e n Figure 4: Examples of input and system-generated target text for E2E (top) and WebNLG (bottom).", "ure 4 shows two examples of generated texts from Transformer and AGGGEN (more examples, including target texts generated by AGGGEN OD and AGGGEN AG , are shown in Table 6 and Table 7 in Appendix A).", "We observe that, in general, the seq2seq Transformer model tends to compress more triples into one fluent fact, whereas AGGGEN aggregates triples in more but smaller groups, and generates a shorter/simpler fact for each group.", "Therefore, the texts generated by Transformer are more compressed, while AGGGEN 's generations are longer with more sentences.", "However, the planning ensures that all input triples will still be mentioned.", "Thus, AGGGEN generates texts with higher factual correctness without trading off fluency.", "10 6 Intrinsic Evaluation of Planning We now directly inspect the performance of the planning component by taking advantage of the readability of SRL-aligned facts.", "In particular, we investigate: (1) Sentence planning performance.", "We study the agreement between model's planning and reference planning for the same set of input triples; (2) Alignment performance we use AGGGEN as an aligner and examine its ability to align segmented facts to the corresponding input triples.", "Since both studies require ground-truth triple-to-fact alignments, which are not part of the WebNLG and E2E data, we first introduce a human annotation process in Section 6.1.", "10 The number of fluent generations for Transformer and AGGGEN among the examined 100 examples are 96 and 95 respectively.", "The numbers for AGGGEN OD and AGGGEN AG are 86 and 74, which indicates that both Input Ordering and Input Aggregation are critical for 
generating fluent texts.", "We asked crowd workers on Amazon Mechanical Turk to align input triples to their fact-based text snippets to derive a reference plan for each target text.", "11 Each worker was given a set of input triples and a corresponding reference text description, segmented into a sequence of facts.", "The workers were then asked to select the triples that are verbalised in each fact.", "12 We sampled 100 inputs from the WebNLG 13 test set for annotation.", "Each input was paired with three reference target texts from WebNLG.", "To guarantee the correctness of the annotation, three different workers annotated each input-reference pair.", "We only consider the alignments where all three annotators agree.", "Using Fleiss Kappa (Fleiss, 1971) over the facts aligned by each judge to each triple, we obtained an average agreement of 0.767 for the 300 input-reference pairs, which is considered high agreement.", "We now check the agreement between the model-generated and reference plans based on the top-1 Input Aggregation result (see Section 3.4).", "We introduce two metrics: Normalized Mutual Information (NMI) (Strehl and Ghosh, 2002) to evaluate aggregation.", "We represent each plan as a set of clusters of triples, where a cluster contains the triples sharing the same fact verbalization.", "Using NMI we measure mutual information between two clusters, normalized into the 0-1 range, where 0 and 1 denote no mutual information and perfect correlation, respectively.", "Kendall's tau ( ) (Kendall, 1945) is a ranking based measure which we use to evaluate both ordering and aggregation.", "We represent each plan as a ranking of the input triples, where the rank of each triple is the position of its associated fact verbalization in the target text.", "measures rank correlation, ranging from -1 (strong disagreement) to 1 (strong agreement).", "In the crowdsourced annotation (Section 6.1), each set of input triples contains three reference texts with annotated plans.", "We fist evaluate the correspondence among these three reference plans by 11 The evaluation requires human annotations, since anchor-based automatic alignments are not accurate enough (86%) for the referred plan annotation.", "See Table 5 (RB) for details.", "12 The annotation guidelines and an example annotation task are shown in Figure 7 in Appendix G. 
"(Footnote 13: We chose WebNLG over E2E for its domain and predicate diversity.)", "We first evaluate the correspondence among these three reference plans by calculating NMI and τ between one plan and the remaining two.", "In the top row of Table 4, the high average and maximum NMI indicate that the reference texts' authors tend to aggregate input triples in similar ways.", "On the other hand, the low average τ shows that they are likely to order the aggregated groups differently.", "Then, for each set of input triples, we measure NMI and τ of the top-1 Input Aggregation result (the model's plan) against each of the corresponding reference plans and compute average and maximum values (bottom row in Table 4).", "Compared to the strong agreement among reference plans on the input aggregation, the agreement between the model's and the reference plans is slightly weaker.", "Our model has slightly lower agreement on aggregation (NMI), but if we consider aggregation and ordering jointly (τ), the agreement between our model's plans and the reference plans is comparable to the agreement among reference plans.", "In this study, we use the HMM model as an aligner and assess its ability to align input triples with their fact verbalizations on the human-annotated set.", "Given the sequence of observed variables, a trained HMM-based model is able to find the most likely sequence of hidden states $z^* = \arg\max_z p(z_{1:T} \mid y_{1:T})$ using Viterbi decoding.", "Similarly, given a set of input triples and a fact-segmented text, we use Viterbi with our model to align each fact with the corresponding input triple(s).", "We then evaluate the accuracy of the model-produced alignments against the crowdsourced alignments.", "The alignment evaluation results are shown in Table 5.", "We compare the Viterbi (Vtb) alignments with the ones calculated by a rule-based aligner (RB) that aligns each triple to the fact with the greatest word overlap.", "The precision of the Viterbi aligner is higher than that of the rule-based aligner.", "However, the Viterbi aligner tends to miss triples, which leads to a lower recall.", "Since HMMs are locally optimal, the model cannot guarantee that input triples are annotated once and only once.", "We show that explicit sentence planning, i.e., input ordering and aggregation, helps substantially to produce output which is both semantically correct and natural-sounding.", "Crucially, this also enables us to directly evaluate and inspect both the model's planning and alignment performance by comparing to manually aligned reference texts.", "Our system outperforms vanilla seq2seq models when considering semantic accuracy and word-overlap-based metrics.", "Experimental results also show that AGGGEN is robust to noisy training data.", "We plan to extend this work in three directions: Other Generation Models.",
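The Viterbi decoding used for the alignment study above admits an equally small sketch; as with the backward-recursion example earlier, log-probability callables are assumed and the names are illustrative.

def viterbi(states, y, log_init, log_transition, log_emission):
    # delta[t][C] = best log score of any state path ending in C at step t.
    T = len(y)
    delta = [{C: log_init(C) + log_emission(y[0], C) for C in states}]
    back = []
    for t in range(1, T):
        scores, ptr = {}, {}
        for C in states:
            best = max(states, key=lambda Cp: delta[t - 1][Cp] + log_transition(Cp, C))
            scores[C] = delta[t - 1][best] + log_transition(best, C) + log_emission(y[t], C)
            ptr[C] = best
        delta.append(scores)
        back.append(ptr)
    z = [max(states, key=lambda C: delta[-1][C])]   # best final state
    for ptr in reversed(back):                      # follow back-pointers
        z.append(ptr[z[-1]])
    return list(reversed(z))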
"We plan to plug other text generators, e.g. pre-training based approaches (Lewis et al., 2020; Kale and Rastogi, 2020), into AGGGEN to enhance their interpretability and controllability via sentence planning and generation.", "Zero/Few-shot scenarios.", "Kale and Rastogi (2020)'s work on low-resource NLG uses a pre-trained language model with a schema-guided representation and hand-written templates to guide the representation in unseen domains and slots.", "These techniques can be plugged into AGGGEN, which allows us to examine the effectiveness of the explicit sentence planning in zero/few-shot scenarios.", "Including Content Selection.", "In this work, we concentrate on the problem of faithful surface realization based on E2E and WebNLG data, which both operate under the assumption that all input predicates have to be realized in the output.", "In contrast, more challenging tasks such as RotoWire (Wiseman et al., 2017) include content selection before sentence planning.", "In the future, we plan to include a content selection step to further extend AGGGEN's usability.", "This research received funding from the EPSRC project AISec (EP/T026952/1), Charles University project PRIMUS/19/SCI/10, a Royal Society research grant (RGS/R1/201482), and a Carnegie Trust incentive grant (RIG009861).", "This research also received funding from Apple to support research at Heriot-Watt University and Charles University.", "We thank Alessandro Suglia, Jindrich Helcl, and Henrique Ferrolho for their suggestions.", "We thank the anonymous reviewers for their helpful comments." ]
[ "method", "objective", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "method", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other" ]
[ "While neural networks produce state-of-the-art performance in several NLP tasks, they generally depend heavily on lexicalized information, which transfer poorly between domains.", "We present a combination of two strategies to mitigate this dependence on lexicalized information in fact verification tasks.", "We present a data distillation technique for delexicalization, which we then combine with a model distillation method to prevent aggressive data distillation.", "We show that by using our solution, not only does the performance of an existing state-of-the-art model remain at par with that of the model trained on a fully lexicalized data, but it also performs better than it when tested out of domain.", "We show that the technique we present encourages models to extract transferable facts from a given fact verification dataset.", "Neural networks have matched, and in several cases even surpassed, human performance in several supervised learning problems.", "However, such successes come at a cost.", "These neural networks typically need a great deal of human support in the form of man power required for curating domain specific datasets.", "Further, it has been shown (Gu-rurangan et al., 2018; Poliak et al., 2018; Thorne and Vlachos, 2020) that several such models depend heavily on certain statistical nuances found in these datasets, information that transfers poorly between domains.", "The ideal solution to this problem is the creation of models that do not rely on such statistical nuances in the given datasets, but instead encode the true underlying semantics of the task, that are in turn transferable to other domains.", "Fact verification is the task of verifying the truthfulness of claims by estimating their assertions against credible evidences.", "Specifically, given a pair of claim and evidence statements, they have to be classified into one of the 3 class labels, agree , disagree , or neutral .", "Fact verification datasets, which often constitute real life news articles, have the added advantage of being used in practical problems such as fake news detection.", "More recently, several neural network models (Nie et al., 2020; Liu et al., 2020, inter alia) built on top of the transformers (Vaswani et al., 2017), have achieved excellent performance in fact verification tasks.", "However these methods are not devoid of the shortcomings that besiege other neural networks in natural language processing tasks.", "It has been shown that these approaches depend heavily on lexical artifacts that transfer poorly between domains (Panenghat et al., 2020; Karimi Mahabadi et al., 2020; Schuster et al., 2019).", "For example, Suntwal et al. (2019) observed that out of all the statements containing the phrase American Author' in the FEVER dataset (Thorne et al., 2018), 91% of them belonged to one class label.", "Further, they demonstrated that neural methods put unnecessary emphasis on such lexical artifacts, which limits their transfer to other fact verification datasets such as the Fake News Challenge (FNC) (Pomerleau and Rao, 2017).", "To mitigate the dependency on such artifacts, Suntwal et al. 
(2019) proposed a data distillation (or delexicalization) approach, which replaces some lexical artifacts such as named entities with their type and a unique id to indicate occurrences of the same artifact in claim and evidence.", "While promising, the risk of this direction is discarding too much information through the delexicalization process.", "For example, replacing 'China' with its named entity (NE) type (COUNTRY) in an evidence sentence discards the fact that the text is about an Asian country, which might be relevant in the context.", "In this work we propose a solution that combines data distillation with model distillation to reduce the risk of over-delexicalization.", "In particular, we introduce a teacher-student architecture inspired by that of Tarvainen and Valpola (2017).", "In our architecture, the student model is trained on delexicalized data (to take advantage of data distillation), but is also guided by a teacher trained on the original lexicalized data (as a form of model distillation) to mitigate the possibility of discarding too much lexical information.", "The contributions of our work are as follows: (1) To our knowledge, we are the first to explore the combination of data and model distillation as a strategy to improve domain transfer of fact verification methods.", "Note that while our training process is more costly due to the combination of the student and teacher models, the output is a single individual model (the student), which has the same runtime cost as an individual classifier.", "Further, our approach is classifier-agnostic, and can be coupled with any fact verification method.", "(2) We investigate the domain transfer of our method between two fact verification tasks (FNC and FEVER), where we train on one and test on the other.", "For these experiments we couple our method with the state-of-the-art fact verification approach based on Transformers (Vaswani et al., 2017).", "Our results indicate that our method achieves a cross-domain accuracy of 73.17% in one of the experiments and 74.58% in the other, outperforming other methods that do not use the data distillation-model distillation combination.", "All the software for our proposed approach is open-source and publicly available on GitHub at https://github.com/clulab/releases/tree/master/naacl2021-student-teacher.", "Suntwal et al. (2019) demonstrated that named entities are most prone to overfitting for fact verification.",
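A minimal sketch of this delexicalization idea follows; it is illustrative only. A real system would use a full NER pipeline (FIGER or CoreNLP, as described below), whereas `ner` here is a stand-in returning (surface form, type) pairs; the #Cn/#En id convention matches the alignment scheme described in the next paragraphs.

def delexicalize(claim, evidence, ner):
    ids, counter = {}, 0
    def replace(text, source_tag):
        nonlocal counter
        for surface, etype in ner(text):
            if surface not in ids:           # first sighting fixes the id
                counter += 1
                ids[surface] = f"{etype}{source_tag}{counter}"
            text = text.replace(surface, ids[surface])
        return text
    new_claim = replace(claim, "C")          # entities first seen in the claim
    new_evidence = replace(evidence, "E")    # evidence-only entities get E ids
    return new_claim, new_evidence

# e.g. if ner finds ("China", "country") in the claim, both claim and
# evidence mentions of China become "countryC1".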
"Based on this observation, we also replace named entities with their type (and a unique id).", "However, unlike their work, we have observed in early experiments that more fine-grained NE types yield better models.", "In particular, we utilize the FIGER named entity recognizer (NER) (Ling and Weld, 2012) to detect and replace named entities with the most specific label returned by the NER.", "Further, we also process the text with the CoreNLP NER (Manning et al., 2014) to delexicalize additional NER classes not covered by FIGER.", "We include in this list mentions of date, time, money, number, and ordinal.", "Next, we align the named entities between the claim and the evidence.", "That is, any named entity that appears first in the claim is assigned an id postfixed with #Cn; if an entity mention appears only in the evidence then it is postfixed with #En, where C indicates that the entity appeared first in the claim, E indicates that the entity first appeared in the evidence, and n indicates the n-th observed entity.", "Table 1 shows an example output for this data distillation process.", "We propose a model distillation strategy to mitigate the risk of overly aggressive data distillation.", "In particular, we introduce a teacher-student architecture (shown in Figure 1) (Hinton et al., 2015; Tarvainen and Valpola, 2017; Laine and Aila, 2016; Sajjadi et al., 2016), where the teacher is trained on the original, lexicalized data, and the student is trained on the data delexicalized with the approach described in the previous sub-section.", "The intuition behind our model distillation approach is that the proposed teacher model will pull the student model towards the original underlying semantics, which are partially obscured to the student due to the delexicalization of its training data.", "More formally, this is captured through a consistency loss that minimizes the difference in predicted label distributions between the student and the teacher.", "The consistency loss is implemented as a mean squared error between the label scores predicted by the student and the teacher.", "Additionally, both the student and the teacher components include a regular classification loss on their respective data, which is implemented using cross entropy.", "This encourages both the student and the teacher to fit the gold labels on their respective data.", "[Figure 1: The teacher-student architecture for model distillation.]", "We experiment with a state-of-the-art method for fact verification, Transformers (Vaswani et al., 2017), which have achieved state-of-the-art results not only in the task of fact verification but in several other NLP tasks.", "Specifically, we use the PyTorch implementation of BERT (Devlin et al., 2019) from HuggingFace (Wolf et al., 2019).", "We experimented with several pre-trained BERT-base models and found that the one which gave the highest performance was the BERT-cased model when used with a sequence length of 128.", "Further, to distinguish the vocabulary of the delexicalized data from the lexicalized data, we augment the base vocabulary of BERT with tokens specific to the delexicalized data.", "For example, as mentioned before, during delexicalization we use personC1 to denote the first occurrence of the named entity in the claim paragraph.", "However, to ensure that the BERT BasicTokenizer does split personC1 into person and C1, we added the token C1 to the BERT vocabulary.", "Tokenizers for each of the lexicalized and delexicalized datasets are initially created using the BERT BasicTokenizer, but then use the aforementioned vocabulary created for the specific data type.",
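The combined objective just described can be sketched in a few lines of PyTorch. The loss weighting and other training details are assumptions, not the paper's exact recipe; only the structure (two cross-entropy terms plus an MSE consistency term over the predicted label distributions) follows the text above.

import torch.nn.functional as F

def joint_loss(student_logits, teacher_logits, labels, consistency_weight=1.0):
    ce_student = F.cross_entropy(student_logits, labels)   # delexicalized input
    ce_teacher = F.cross_entropy(teacher_logits, labels)   # lexicalized input
    consistency = F.mse_loss(
        F.softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1))
    return ce_student + ce_teacher + consistency_weight * consistency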
"We use two distinct fact verification datasets for our experiments: FEVER (Thorne et al., 2018) and FNC (Pomerleau and Rao, 2017).", "The Fact Extraction and Verification (FEVER) dataset: This dataset consists of 145,449 data points, each having a claim and evidence pair.", "These claim-evidence pairs typically contain one or more sentences compiled from Wikipedia using an information retrieval (IR) module and are classified into three classes: supports, refutes and not enough info.", "The evidence for data points that had the gold label of not enough info was retrieved (using a task-provided IR component) either by finding the nearest neighbor to the claim or randomly.", "Even though the training partition of the FEVER dataset was publicly released, the gold test labels used in the final shared task were not.", "We therefore built our own test partition by dividing the randomized training partition into 80% (119,197 data points) and 20% (26,252 data points).", "The Fake News Challenge (FNC) dataset: This dataset comprises claim-evidence pairs that were divided into four classes: agree, disagree, discuss and unrelated.", "These claim-evidence pairs were created using the headlines and content sections of real news articles, respectively.", "While the training partition of the publicly available dataset comprised 49,972 data points, the testing partition had 25,413 data points.", "We further divided the training partition into 40,904 data points for training and 9,068 data points for development.", "Cross-domain labels: In order to evaluate the proposed methods in a cross-domain setting, we modified the label space of the source domain to match that of the target domain.", "In particular, when training on FEVER and testing on FNC, the data points in FEVER that belong to the class supports were relabeled as agree, and those in refutes as disagree.", "Further, the data points belonging to the third class, not enough info (NEI), were divided into discuss and unrelated.", "Specifically, of all the claim-evidence pairs that belonged to the NEI class, the ones whose evidence was retrieved using the nearest-neighbor component of FEVER were relabeled as discuss, since they were more likely to be topically relevant to the claim.", "The rest were assigned the label unrelated.", "Table 2: In-domain and cross-domain accuracies for various methods (Train Domain → Eval Domain).
Configuration | FEVER→FEVER | FEVER→FNC | FNC→FNC | FNC→FEVER
BERT Lex | 94.15% | 68.93% | 96.39% | 73.21%
BERT Delex (OA-NER) | 82.31% | 53.59% | 65.85% | 46.47%
BERT Delex (OA-NER + SS) | 75.26% | 46.71% | 45.51% | 51.77%
BERT Delex (FIGER) | 91.97% | 54.27% | 96.22% | 62.99%
BERT TS (FIGER) | 89.42% | 73.14%* | 98.89% | 74.58%*",
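The FEVER-to-FNC label mapping just described is mechanical enough to state as code; the retrieval flag comes from the FEVER IR component and is passed in here as a boolean, which is an implementation assumption for illustration.

def fever_to_fnc(label, retrieved_by_nearest_neighbor=False):
    mapping = {"supports": "agree", "refutes": "disagree"}
    if label in mapping:
        return mapping[label]
    # "not enough info": topically relevant evidence -> discuss, else unrelated
    return "discuss" if retrieved_by_nearest_neighbor else "unrelated"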
all models on one dataset (e.g., FEVER) and evaluate their accuracy on the other dataset (e.g., FNC).", "Table 2 summarizes the results of our experiments with various models tested in-domain and cross-domain.", "All scores reported are averaged across three random seeds.", "We use BERT Lex' as the baseline model which is the stand alone model trained on the original lexicalized data.", "BERT Delex' denotes the standalone models trained on delexicalized data, along with the corresponding delexicalization techniques used.", "OA-NER uses the Overlap Aware Named Entity Recognizer for delexicalization of data and SS uses Super Sense tags (Suntwal et al., 2019).", "FIGER delexicalizes the data using a fine-grained named entity recognizer (Ling and Weld, 2012).", "BERT TS' denotes the student in the proposed teacher-student architecture.", "Since the delexicalization used by the best performing BERT Delex' models in the cross-domain setting was FIGER, we chose it as the preferred delexicalization technique for this student.", "Note that the lexicalized models, which perform well in-domain, tend to transfer poorly to a new domain.", "For example, the BERT model trained on lexicalized FEVER data, gave an accuracy of 94.15% when tested on FEVER, but reduced to 68.93% when tested on FNC.", "This verifies our findings that the signal the model learns from unmasked text does not generalize well.", "In contrast, in all our experiments, the student models trained under the teacher-student architecture outperform the other models trained using lexicalized data, in a cross-domain setting.", "For example, the student model of the teacher-student architecture trained on FEVER, gave an accuracy of 89.42% when tested on FEVER and an accuracy of 73.14% when tested on FNC.", "Similarly in the other direction, when the same model was trained on FNC, it gave an accuracy of 98.89% when tested on FNC, and an accuracy of 74.58% when tested on FEVER.", "Note that in both the directions the accuracy of the student model of the teacher-student architecture surpasses the corresponding accuracy of the model trained on lexicalized data in a cross-domain setting.", "These experiments were repeated under a bootstrap resampling test with 1,000 samples, and p -value < 0.035 to ensure statistical sig-nificance.", "We believe that the improved performance of the student model in the TS architecture is due to the", "fact that the TS architecture provides additional information over the ground labels.", "The key addition of our TS approach is that the delexicalized student learns to mimic the label probability distributions of the teacher through the consistency loss.", "As discussed earlier, we conjecture that this pulls the student model closer to the teacher.", "Another possible interpretation is that the model distillation has a regularization effect since the consistency loss essentially averages the behavior of both models.", "Importantly, our results indicate that too much delexicalization risks discarding useful information.", "We believe this is why the standalone delexicalized model performs worse out of domain, and why the TS delexicalized student performs better.", "Understanding how much delexicalization to apply given a task opens up interesting avenues for future research.", "Nevertheless, overall this paper demonstrates that data distillation and model distillation can be combined as a strategy to improve domain transfer of fact verification methods.", "Lastly, we also inspected the word-level attention weights 
(Bahdanau et al., 2014) to further understand what these models are learning.", "Specifically, we analyze the weights assigned by the last attention head in the last layer of the respective Transformer models.", "Table 3 shows the tokens that were assigned the highest weights by the model trained on lexicalized data and by the teacher-student model.", "It can be seen that the tokens that were given the highest weights by the model trained on lexicalized data contain more named entities (e.g., 'Apple', 'State').", "This suggests potential overfitting, since the specific named entities should not be relevant for the task.", "(Footnote 1: Stop words and other BERT-specific tokens like [SEP], [CLS], [PAD], etc., are removed from this list.)", "On the other hand, the tokens that were given the highest weights by the teacher-student model contain more generic named entity labels (e.g., country, person).", "Also, we found that out of all the attention weights assigned by the model trained on lexicalized data, 15.60% were given to named entities.", "Further, in the TS student model only 7.44% were assigned to named entity labels.", "These findings demonstrate that by using the data distillation and model distillation techniques we are able to reduce the importance that models place on lexical artifacts.", "This not only helps them achieve accuracies on par with their counterparts trained on plain-text data in an in-domain setting, but also lets them outperform those counterparts in a cross-domain setting.", "We present a new strategy to improve domain transfer of fact verification methods, which combines data distillation and model distillation.", "We show that the performance of existing state-of-the-art models degrades significantly in a cross-domain setting, hence motivating the necessity of robust data distillation techniques such as delexicalization to minimize overfitting on lexical artifacts.", "We further combine delexicalization with a teacher-student architecture as a form of model distillation to reduce the risk of over-delexicalization.", "We hope that this solution will encourage the development of architectures capable of reducing the dependency of models on lexical artifacts in an effort to learn domain-transferable knowledge in the task of fact verification.", "This work was supported by the Defense Advanced Research Projects Agency (DARPA) under the World Modelers program, grant number W911NF1810014, and by the Bill and Melinda Gates Foundation HBGDki Initiative.", "Mihai Surdeanu declares a financial interest in lum.ai.", "This interest has been properly disclosed to the University of Arizona Institutional Review Committee and is managed in accordance with its conflict of interest policies.", "The authors would also like to thank Becky Sharp and Marco Valenzuela-Escárcega for all their valuable comments and reviews." ]
[ "abstain", "method", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "result", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "other", "other", "other", "other" ]
[ "Transformer-based pre-trained language models like BERT, though powerful in many tasks, are expensive in both memory and computation, due to their large number of parameters.", "Previous works show that some parameters in these models can be pruned away without severe accuracy drop.", "However, these redundant features contribute to a comprehensive understanding of the training data and removing them weakens the model's representation ability.", "In this paper, we propose GhostBERT, which generates more features with very cheap operations from the remaining features.", "In this way, GhostBERT has similar memory and computational cost as the pruned model, but enjoys much larger representation power.", "The proposed ghost module can also be applied to unpruned BERT models to enhance their performance with negligible additional parameters and computation.", "Empirical results on the GLUE benchmark on three backbone models (i.e., BERT, RoBERTa and ELECTRA) verify the efficacy of our proposed method.", "Recently, there is a surge of research interests in compressing the transformer-based pre-trained language models like BERT into smaller ones using various compression methods, i.e., knowledge distillation (Sanh et al., 2019; Sun et al., 2019; Jiao et al., 2020), pruning (Michel et al., 2019; Fan et al., 2019), low-rank approximation (Lan et al., 2020), weight-sharing (Lan et al., 2020), dynamic networks with adaptive depth and/or width (Liu et al., 2020; Hou et al., 2020; Xin et al., 2020; Zhou et al., 2020), and quantization (Shen et al., 2020; Fan et al., 2020; Zhang et al., 2020; Bai et al., 2021).", "Previous works show that there are some redundant features in the BERT model, and unimportant attention heads or neurons can be pruned away = = = = = = = = = = = = = = = = = = = = Figure 1: Average GLUE development accuracy versus #params and FLOPs with the (pruned) BERT and our GhostBERT.", "without severe accuracy degradation (Michel et al., 2019; Hou et al., 2020).", "However, for computer vision (CV) tasks, it is shown in (Han et al., 2020) that redundant features in convolutional neural networks also contribute positively to the performance, and using cheap linear operations to generate more ghost feature maps enhances the performance with few additional parameters.", "On the other hand, it is shown in (Voita et al., 2019; Kovaleva et al., 2019; Rogers et al., 2020) that many attention maps in pre-trained language models exhibit typical positional patterns, e.g., diagonal or vertical, which can be easily generated from other similar ones using operations like convolution.", "Based on the above two aspects, in this paper, we propose to use cheap ghost modules on top of the remaining important attention heads and neurons to generate more features, so as to compensate for the pruned ones.", "Considering that the convolution operation (1) encodes local context dependency, as a complement of the global self-attention in Transformer models (Wu et al., 2020); and (2) can generate some BERT features like positional attention maps from similar others, in this work, we propose to use the efficient 1-Dimensional Depthwise Separable Convolution (Wu et al., 2019) as the basic operation in the ghost module.", "To ensure the generated ghost features have similar scales or ghost features original features DWConv [CLS]the the mat [SEP] catsaton [CLS]the themat[SEP] cat sat on N d (pruned) MHA (pruned) FFN Input G-MHA Softmax G-FFN", "as the original ones, we use a softmax function to normalize the 
convolution kernel.", "Afterwards, we fine-tune the parameters in both the BERT backbone model and the added ghost modules.", "Note that the ghost modules are not necessarily applied to pruned models.", "They can also be directly applied to pre-trained language models for better performance while with negligible additional parameters and floating-point operations (FLOPs).", "Figure 1 summarizes the average accuracy versus parameter size and FLOPs on the GLUE benchmark, where adding ghost modules to both the unpruned ( m = 12 / 12 ) and pruned ( m < 1 ) BERT models perform better than the counterparts without ghost modules.", "More experiments on the GLUE benchmark show that with only 0 .", "4% more parameters and 0 .", "9% more FLOPs, the proposed ghost modules improve the average accuracy of BERT-base, RoBERTa-base and ELECTRA-small by 0 .", "9 , 0 .", "6 , 2 .", "4 points, respectively.", "When applying ghost modules to small or pruned models, the resultant models outperform other BERT compression methods.", "In this section, we first introduce where to add ghost modules in a BERT model (Section 2.1), and then discuss the components and optimization details of the ghost module (Section 2.2).", "The BERT model is built with Transformer layers, each of which contains a Multi-Head Attention (MHA) layer and a Feed-Forward Network (FFN), as well as skip connections and layer normalizations.", "Hou et al. (2020) show that the computations for attention heads of MHA and neurons in the intermediate layer of FFN can be performed in parallel.", "Thus the BERT model can be compressed in a structured manner by pruning parameters associated with these heads and neurons (Hou et al., 2020).", "In this paper, after pruning the unimportant heads and neurons, we employ cheap ghost modules upon the remaining ones to generate more ghost features to compensate for the pruned ones.", "For simplicity of notation, we omit the bias terms in linear and convolution operations where applicable in the rest of this work.", "Following (Hou et al., 2020), we divide the computation of MHA into the computation of each attention head.", "Specifically, suppose the sequence length and hidden state size are n and d , respectively.", "Each transformer layer consists of NH attention heads.", "For input matrix X R n d , the h -th attention head computes its output as H h ( X ) = Softmax (1 / d XW Qh WK (cid:62) h X (cid:62) ) XW Vh WO (cid:62) h , where W Qh , W Kh , W Vh , W Oh R d d h with d h = d/N H are the projection matrices associated with it.", "In multi-head attention, NH heads are computed in parallel to get the final output: MHA ( X ) = NH (cid:88) h =1 H h ( X ) .", "Given a width multiplier m 1 , we keep M = (cid:98) N h m (cid:99) heads and use them to generate F ghost features.", "The f th ghost feature is generated by G f ( X )= Nonlinear (cid:32) M (cid:88) h =1 G f,h ( H h ( X )) (cid:33) .", "where G f,h is the proposed cheap ghost module which generates features from the h th attention head's representation to the f th ghost feature.", "ReLU is used as the nonlinearity function.", "Thus the computation of MHA in the GhostBERT is: Ghost-MHA ( X )= M (cid:88) h =1 H h ( X )+ F (cid:88) f =1 G f ( X ) .", "Similar to the attention heads in MHA, the computation of FFN can also be divided into computations for each neuron in the intermediate layer of FFN (Hou et al., 2020).", "With a slight abuse of notation, we still use X R n d as the input to FFN.", "Denote the number of neurons in the intermediate layer as d 
"Similar to the attention heads in MHA, the computation of FFN can also be divided into computations for each neuron in the intermediate layer of FFN (Hou et al., 2020).", "With a slight abuse of notation, we still use $X \in \mathbb{R}^{n \times d}$ as the input to FFN.", "Denote the number of neurons in the intermediate layer as $d_{ff}$; the computation of FFN can be written as $\mathrm{FFN}(X) = \sum_{i=1}^{d_{ff}} \mathrm{GeLU}(X W^1_{:,i})\, W^2_{i,:}$, where $W^1, W^2$ are the weights in FFN.", "For simplicity, we also use the width multiplier $m$ for FFN as for MHA, and divide these neurons into $N_H$ folds, where each fold contains $d_f = d_{ff}/N_H$ neurons.", "For the $h$-th fold, its output can be computed as $\mathrm{H}_h(X) = \mathrm{GeLU}(X W^1_h)\, W^2_h$, where $W^1_h = W^1_{:,\,(h-1)d_f : h d_f}$ and $W^2_h = W^2_{(h-1)d_f : h d_f,\,:}$ are the parameters associated with it.", "In FFN, $N_H$ folds are computed in parallel to get the output: $\mathrm{FFN}(X) = \sum_{h=1}^{N_H} \mathrm{H}_h(X)$ (Equation 4).", "For width multiplier $m$, we keep $M$ folds of neurons and use ghost modules to generate $F$ ghost features as in Equation (2).", "Thus the computation of FFN in the GhostBERT can be written as: $\mathrm{Ghost\text{-}FFN}(X) = \sum_{h=1}^{M} \mathrm{H}_h(X) + \sum_{f=1}^{F} \mathrm{G}_f(X)$ (Equation 5).", "In the previous section, we discussed where we insert the ghost modules in the Transformer layer.", "In this section, we elaborate on the components and normalization of the ghost modules.", "Generally speaking, any function can be used as the ghost module $\mathcal{G}$ in Equation (2).", "Considering that (i) convolution operation can encode local context dependency, as a compensation for the global self-attention (Wu et al., 2020; Jiang et al., 2020); and (ii) features like diagonal or vertical attention maps (Kovaleva et al., 2019; Rogers et al., 2020) can be easily generated by convolving similar others, we consider using convolution as the basic operation in the ghost module.", "With a slight abuse of notation, here we still use $X \in \mathbb{R}^{n \times d}$ as the input to the convolution, i.e., the output $\mathrm{H}_h$ of the $h$-th head in MHA or the $h$-th fold of neurons in FFN.", "Denote $O \in \mathbb{R}^{n \times d}$ as the output of the convolution in the ghost module.", "1-Dimensional convolution (Conv1D) over the sequence direction encodes local dependency over contexts, and has shown remarkable performance for NLP tasks (Wu et al., 2019, 2020).", "To utilize the representation power of Conv1D without too much additional memory and computation, we choose 1-Dimensional Depthwise Separable Convolution (DWConv) (Wu et al., 2019) for the ghost module.", "Compared with Conv1D, DWConv performs a convolution independently over every channel, and reduces the number of parameters from $d^2 k$ to $dk$ (where $k$ is the convolution kernel size).", "Denote the weight of the DWConv operation as $W \in \mathbb{R}^{d \times k}$.", "After applying DWConv, the output for the $i$-th token and $c$-th channel can be written as: $O_{i,c} = \mathrm{DWConv}(X_{:,c}, W_{c,:}, i, c) = \sum_{m=1}^{k} W_{c,m}\, X_{i - \lceil \frac{k+1}{2} \rceil + m,\, c}$ (Equation 6).", "Since the parameters of the BERT backbone model and the ghost modules can have quite different scales and optimization behaviors, we use a softmax function to normalize each convolution kernel $W_{c,:}$ across the sequence dimension as $\mathrm{Softmax}(W_{c,:})$ before convolution, as in Wu et al. (2019).",
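A PyTorch sketch of this softmax-normalized depthwise convolution is given below. The padding choice (keeping the sequence length for odd k) is an assumption for illustration; the key points from Equations (6)-(7) are one kernel per channel and a softmax over the k taps of each kernel.

import torch
import torch.nn.functional as F

class SoftmaxDWConv(torch.nn.Module):
    def __init__(self, d, k=3):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(d, 1, k))  # one kernel per channel
        self.k = k

    def forward(self, x):                     # x: (batch, n, d)
        w = F.softmax(self.weight, dim=-1)    # each kernel's taps sum to 1
        x = x.transpose(1, 2)                 # -> (batch, d, n) for conv1d
        out = F.conv1d(x, w, padding=self.k // 2, groups=x.size(1))
        return out.transpose(1, 2)            # back to (batch, n, d)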
(2019).", "By softmax normalization, the weights in one kernel are summed up to 1, ensuring that the convolved output has a similar scale as the input.", "Thus after applying the ghost module, the output for the i th token and c th channel can be written as: O i,c = DWConv ( X : ,c , Softmax ( W c, : ) , i, c ) .", "To turn a pre-trained BERT model into a smaller-sized GhostBERT, we do the following three steps:", "Pruning.", "For a certain width multiplier m , we prune the attention heads in MHA and neurons in the intermediate layer of FFN from a pre-trained BERT-based model following (Hou et al., 2020).", "Distillation.", "Then we add ghost modules to the pruned model as in Section 2.1.", "Suppose there are L Transformer layers.", "We distill the knowledge from the embedding (i.e., the output of the embedding layer) E , hidden states M l after MHA and F l after FFN (where l = 1 , 2 , , L ) from the full-sized teacher model to E m , M ml , F ml of the student GhostBERT.", "Following (Jiao et al., 2020), we use the augmented data for distillation.", "Denote MSE as the mean squared error, the three loss terms are (cid:96) emb = MSE ( E m , E ) , (cid:96) mha = (cid:80) Ll =1 MSE ( M ml , M l ) , and (cid:96) ffn = (cid:80) Ll =1 MSE ( F ml , F l ) , respectively.", "Thus, the distillation loss function is: L distill = (cid:96) emb + (cid:96) mha + (cid:96) ffn .", "Note that instead of being applied to pruned models, the cheap ghost modules can also be directly applied to a pre-trained model for better performance while with negligible additional parameters and FLOPs.", "In this case, the training procedure contains only the distillation and fine-tuning steps.", "Empirically, to save memory and computation, we generate one ghost feature for each MHA or FFN (i.e., F = 1 in Equations (3) and (5)), and let all ghost modules G f,h share the same parameters with each other.", "As will be shown in Section 3, adding these simplified ghost modules already achieve clear performance gain empirically.", "In this section, we show the efficacy of the proposed method with (pruned) BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and ELECTRA (Clark et al., 2020) as backbone models.", "Experiments are performed on the GLUE benchmark (Wang et al., 2019), which consists of various natural language understanding tasks.", "More statistics about the GLUE datasets are in Appendix A.1.", "Following (Clark et al., 2020), we report Spearman correlation for STS-B, Matthews correlation for CoLA and accuracy for the other tasks.", "For MNLI, we report the results on the matched section.", "The convolution kernel size in the ghost module is set as 3 unless otherwise stated.", "The detailed hyperparameters for training the GhostBERT are in Appendix A.2.", "The model with the best development set performance is used for testing.", "For each method, we also report the number of parameters Model FLOPs(G) #params(M) MNLI QNLI QQP RTE SST-2 MRPC CoLA STS-B Avg.", "We compare our proposed method against the following methods:", "(i) baseline pre-trained language models: BERT-base (Devlin et al., 2019), RoBERTa-base (Liu et al., 2019) and ELECTRA-small (Clark et al., 2020);", "(ii) BERT compression methods: TinyBERT (Jiao et al., 2020), Con-vBERT (Jiang et al., 2020), and MobileBERT (Sun et al., 2020).", "The development set results of RoBERTa-base are from Hou et al. (2020).", "The test set results of ELECTRA, BERT-base and Con-vBERT are from Jiang et al. 
(2020).", "The others are from their original papers or repositories.", "Table 1 shows the GLUE development set results of the baseline pre-trained language models and our proposed method.", "When the cheap ghost modules are directly applied to these unpruned pre-trained models, better performances are achieved with only negligible additional parameters and FLOPs.", "Specifically, adding ghost modules to BERT-base, RoBERTa-base and ELECTRA-small increases the average development accuracy by 0 .", "9 , 0 .", "6 , 2 .", "4 points with only 55 .", "3 K more parameters, and 14 .", "2 M more FLOPs.", "For the test set, the average performance gains are 0 .", "8 , 1 .", "1 , 2 .", "4 points.", "Comparison with Baseline Models.", "From Table 1, when the ghost modules are applied to the pruned BERT (or RoBERTa) model with m < 1 , the proposed GhostBERT or GhostRoBERTa also achieves comparable performances as BERT-base or RoBERTa-base with fewer FLOPs.", "Specifically, GhostBERT ( m = 6/12) and GhostRoBERTa ( m = 9/12) perform similarly or even better than BERT-base and RoBERTa-base with only 50% and 75% FLOPs, respectively.", "In particular, when the compression ratio increases (i.e., m = 3/12, 1/12), we still achieve 99.6% performance (resp. 96.3%) with only 25% FLOPs (resp. 8%) of BERT-base model.", "Comparison with Other Compression Methods.", "Table 2 shows the comparison between the proposed method and other popular BERT compression methods.", "Under similar parameter sizes or FLOPs, the proposed GhostBERT performs comparably as the other BERT compression methods, while GhostRoBERTa often outperforms them.", "In particular, GhostELECTRA-small has over 1.5 points or higher accuracy gain than other similar-sized small models like ELECTRA-small, TinyBERT 4 and ConvBERT-small.", "In Table 3 and Figure 1, we also compare the pruned BERT with and without ghost modules.", "For fair comparison, for the pruned model without ghost module, we use the same training procedure as Section 2.3.", "As can be seen, adding the ghost modules achieves considerable improvement with negligible additional memory and computation.", "In this section, we perform ablation study in the", "(i) training procedure: including data augmentation (DA) and knowledge distillation (KD);", "(ii) ghost module: including convolution kernel size, softmax normalization over the convolution kernel and nonlinearity for each ghost feature in Equation (2).", "Training Procedure.", "Table 4 verifies the effectiveness of the Data Augmentation (DA) and Knowledge Distillation (KD) upon the GhostBERT model with width multiplier m { 3 / 12 , 1 / 12 } .", "The GhostBERT incurs severe accuracy drop without DA and KD.", "with a drop of 3 .", "5 and 6 .", "4 points on average, for m = 3 / 12 and 1 / 12 , respectively.", "Ghost Module.", "Table 4 also shows the effectiveness of the softmax normalization over the convolution kernel and ReLU nonlinearity in Equation (2).", "As can be seen, dropping the softmax normalization or ReLU nonlinearity reduces the average accuracy by 0 .", "8 and 1 .", "6 points respectively for m = 3 / 12 , and 0 .", "9 and 2 .", "2 points respectively for m = 1 / 12 .", "Further, we explore whether the kernel size plays an important role in the DWConv in the ghost module.", "Figure 3 shows the results of GhostBERT with width multipliers m { 3 / 12 , 1 / 12 } , with various convolution kernel sizes in DWConv.", "Average accuracy over five tasks is reported.", "Detailed results for each task can be found in Table 9 in Appendix 
"Further, we explore whether the kernel size plays an important role in the DWConv in the ghost module.", "Figure 3 shows the results of GhostBERT with width multipliers m ∈ {3/12, 1/12} and various convolution kernel sizes in DWConv.", "Average accuracy over five tasks is reported.", "Detailed results for each task can be found in Table 9 in Appendix B.1.", "As can be seen, the performance of GhostBERT first increases and then decreases gradually as the kernel size increases.", "For both width multipliers, kernel size 3 performs best and is used as the default kernel size in other experiments unless otherwise stated.", "In this section, we discuss different choices of which type of convolution to use in the ghost module (Section 4.1), and where to position the ghost modules in a BERT model (Section 4.2).", "Besides the DWConv in Section 2.2, in this section we discuss more options for the convolution in the ghost module.", "We follow the notation in Section 2.2 and denote the input, output and kernel size of the convolution as X, W and k, respectively.", "1-Dimensional Convolution.", "If the kernel convolves the input over the sequence direction (abbreviated as Conv1D_S), the number of input and output channels is d, and the weight W has shape W ∈ R^{d×d×k}.", "After applying Conv1D_S, the output for the i-th token and c-th channel is: O_{i,c} = Conv1D_S(X, W_{c,:,:}, i, c) = Σ_{j=1}^{d} Σ_{m=1}^{k} W_{c,j,m} X_{i−⌈(k+1)/2⌉+m, j}.", "If the kernel convolves the input over the feature direction (abbreviated as Conv1D_F), the number of input and output channels is n, and the weight has shape W ∈ R^{n×n×k}.", "After applying Conv1D_F, the output for the i-th token and c-th channel is: O_{i,c} = Conv1D_F(X, W_{i,:,:}, i, c) = Σ_{j=1}^{n} Σ_{m=1}^{k} W_{i,j,m} X_{j, c−⌈(k+1)/2⌉+m}.", "2-Dimensional Convolution (Conv2D).", "For Conv2D, the numbers of input and output channels are both 1, and thus the weight W has shape W ∈ R^{1×1×k×k}.", "After applying Conv2D, the output for the i-th token and c-th channel is: O_{i,c} = Conv2D(X, W, i, c) = Σ_{w=1}^{k} Σ_{h=1}^{k} W_{:,:,h,w} X_{i−⌈(k+1)/2⌉+h, c−⌈(k+1)/2⌉+w}.", "Table 5 shows the comparison of using different convolutions for the ghost module.", "For 1-dimensional convolution, Conv1D_S performs better than Conv1D_F.", "This may be because convolving over the sequence direction encourages the model to learn the dependencies among tokens.",
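Since the weight shapes given above fully determine the cost of each variant, the efficiency comparison that follows can be sanity-checked with a quick parameter count (the sizes below are illustrative, not values from the paper):

    # Parameter counts implied by the weight shapes defined above,
    # for hidden size d, sequence length n and kernel size k.
    d, n, k = 768, 128, 3
    params = {
        "Conv1D_S": d * d * k,     # W in R^{d x d x k}, convolves over the sequence
        "Conv1D_F": n * n * k,     # W in R^{n x n x k}, convolves over the features
        "Conv2D":   1 * 1 * k * k, # W in R^{1 x 1 x k x k}
        "DWConv":   d * k,         # depthwise: one length-k kernel per channel
    }
    for name, p in params.items():
        print(f"{name:9s} {p:,} parameters")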
"Though 2-dimensional convolution (Conv2D) is quite successful in CV tasks, it performs much worse than Conv1D_S here.", "This may be because the two dimensions of feature maps in CV tasks encode similar information, while the two dimensions of hidden states in Transformers encode quite different information (i.e., feature and sequence).", "Thus Conv2D results in worse performance than Conv1D_S, though it requires far fewer parameters and FLOPs.", "On the other hand, DWConv achieves performance comparable to Conv1D_S while being much more efficient in terms of the number of parameters and FLOPs, by performing the convolution independently over every feature dimension.", "In this section, we explore more possible positions for adding the ghost module.", "For MHA, besides adding the ghost module after the projection layer (After O in Figure 4(c)) as in Section 2.1.1, we can also add it right after calculating the attention score (After QK in Figure 4(a)), or after multiplying the attention score and the value layer (After V in Figure 4(b)).", "For FFN, besides adding the ghost module after the second linear layer (After FFN2 in Figure 4(e)) as in Section 2.1.1, we can also add it after the intermediate layer (After FFN1 in Figure 4(d)).", "Note that we use Conv2D as the ghost module for After QK because the attention map encodes attention probabilities in both dimensions.", "For After QK and After V, to match the dimensions of the other parameters, the numbers of input and output channels are M and N_H − M, respectively.", "Table 6 shows the results of adding one ghost module at the same position in each Transformer layer.", "As can be seen, adding the ghost module on the attention maps (After QK) performs best.", "However, since the parameters in the value and projection layers of MHA are left unpruned, After QK has many more parameters and FLOPs than the other positions.", "Adding ghost modules to the other four positions yields similar average accuracy.", "Thus in this work, for MHA, we choose the most memory- and computation-efficient strategy, After O.", "Similarly, for FFN, we also add ghost modules to the final output (After FFN2).", "From Table 6, our way of adding ghost modules achieves performance comparable to After QK, while being much more efficient in parameter size and FLOPs.", "Pruning removes unimportant connections or neurons in the network.", "Compared with pruning connections (Yu et al., 2019; Gordon et al., 2020; Sanh et al., 2020), structured pruning prunes away a group of parameters without changing the model topology and is more favored for hardware and real inference speedup.", "In the width direction, Michel et al. (2019) and Voita et al. (2019) retain the performance after pruning a large percentage of attention heads in a structured manner.", "Besides attention heads, McCarley et al. (2019) also prune the neurons and the embeddings.", "In the depth direction, pruning Transformer layers is proposed in LayerDrop (Fan et al., 2019) via structured dropout.", "Efficient choice of Transformer layers at inference via early exit is also proposed in (Liu et al., 2020; Xin et al., 2020; Zhou et al., 2020).", "Hou et al. (2020) perform structured pruning in both the width and depth directions.", "The importance of attention heads and neurons in the intermediate layer of the feed-forward network is measured by their impact on the loss, and the least important heads and neurons are pruned away.", "Various methods have been proposed to use linear or convolution operations to enhance the representation of the Transformer layers.", "The first group of research works replaces the self-attention mechanism or feed-forward networks with simpler and more efficient convolution operations, while maintaining comparable results.", "Wu et al. (2019) introduce token-based dynamic depth-wise convolution to compute the importance of context elements, and achieve better results in various NLP tasks.", "Iandola et al. (2020) replace all the feed-forward networks with grouped convolutions.", "AdaBERT (Chen et al., 2020) uses differentiable neural architecture search to find more efficient convolution-based NLP models.", "The second group uses linear or convolutional modules alongside the self-attention mechanism for a more powerful representation.", "The new module can be incorporated through a serial connection to the original self-attention mechanism (Mehta et al., 2020), or be used in parallel with the original self-attention mechanism (Wu et al., 2020; Jiang et al., 2020) to capture both local and global context dependency.", "Serial and parallel connections of these linear or convolution operations to Transformer layers are also extended to multi-task (Houlsby et al., 2019; Stickland and Murray, 2019) and multilingual (Pfeiffer et al., 2020) settings.", "Note that the proposed ghost modules are orthogonal to the above methods, in that these modules are used to generate more features for the Transformer models and can easily be integrated into existing methods to boost their performance.", "In this paper, we propose GhostBERT to generate more features in pre-trained models with cheap operations.", "We use softmax-normalized 1-dimensional convolutions as ghost modules and add them to the outputs of the MHA and FFN of each Transformer layer.", "Empirical results on BERT, RoBERTa and ELECTRA demonstrate that adding the proposed ghost modules enhances the representation power and boosts the performance of the original model by supplying more features.", "We thank MindSpore, a new deep learning computing framework, for partial support of this work.", "Given the superior performance of the Huawei Ascend AI Processor and the MindSpore computing framework, our code will be released based on MindSpore at https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/nlp/ghostbert ." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "other", "result", "abstain", "abstain", "abstain", "objective", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "other", "other" ]
[ "Recent studies constructing direct interactions between the claim and each single user response to capture evidence have shown remarkable success in interpretable claim verification.", "Owing to different single responses convey different cognition of individual users, the captured evidence belongs to the perspective of individual cognition.", "However, individuals' cognition of social things is not always able to truly reflect the objective.", "There may be one-sided or biased semantics in their opinions on a claim.", "The captured evidence correspondingly contains some unobjective and biased information.", "In this paper, we propose a Dual-view model based on the views of Collective and Individual Cognition (CICD) for interpretable claim verification.", "For collective cognition, we not only capture the word-level semantics based on individual users, but also focus on sentence-level semantics (i.e., the overall responses) among all users to generate global evidence.", "For individual cognition, we select the topk articles with high degree of difference and interact with the claim to explore the local key evidence fragments.", "To weaken the bias of individual cognition-view evidence, we devise an inconsistent loss to suppress the divergence between global and local evidence for strengthening the consistent shared evidence between the both.", "Experiments on three benchmark datasets confirm the effectiveness of CICD.", "The problem of claim credibility has seriously affected the media ecosystem.", "Research (Allen et al., 2020) illustrates that the prevalence of fake news' has decreased trust in public institutions, and undermined democracy.", "Meanwhile, massive infodemic' during COVID-19 has taken a great toll on health-care systems and lives (Fleming, 2020).", "Therefore, how to verify the claims spread in networks has become a crucial issue.", "Current approaches on claim verification could be divided into two categories: 1) The first category reClaim: In such a hot and rainy season, we should pay attention to the prevention of dengue fever.", "lies on traditional machine learning and deep learning methods to capture semantics (Yang et al., 2019), sentiments (Ajao et al., 2019), writing styles (Przybyla, 2020), and stances (Kumar and Carley, 2019) from claim content, and meta-data features, such as user profiles (Shu et al., 2019; Wu et al., 2020b) for verification.", "Such approaches could improve verification performance, but they are hard to make reasonable explanations for the verified results, i.e., where false claims go wrong; 2) To tackle this issue, many researchers further focus on interpretable claim verification (the second category) by establishing interactive models between claims and each individual relevant article (or comment) to explore coherent (Ma et al., 2019; Wu et al., 2021), similar (Nie et al., 2019; Wu et al., 2020a), or conflicting (Zhou et al., 2020) semantics as evidence for verifying the false parts of claims.", "In interpretable claim verification, the majority of models construct interactions between claims and each single user response (i.e., a comment or a relevant article) to capture evidence, which could effectively learn some of errant aspects of false claims.", "Due to different single responses reflect the cognition of different individual users, the evidence captured by these models is usually confined to individual cognition.", "However, individuals' cognition of social things is not always able to truly reflect the objective (Greenwald et 
"Because individuals are affected by factors such as emotional tendency (Ji et al., 2019), traditional beliefs (Willard and Norenzayan, 2017), and selective capture of information (Hoffman, 2018), there are considerable differences in the cognition of different individuals, and they are prone to cognitive biases such as the primacy effect (Troyer, 2011) and the halo effect (Goldstein and Naglieri, 2011); there may therefore be one-sided or biased semantics in their expressed opinions.", "Thus, the captured evidence also correspondingly contains some unobjective and biased evidence fragments, deteriorating task performance.", "For instance, as shown in Figure 1, when facing a claim to be verified, different individual users (here, ordinary users on social media, not journalists or professionals) have different reactions.", "R2 (i.e., response 2 or relevant article 2) and R3, released by users, contain unreliable and biased information perceived by those individuals, which may lead existing interactive models to capture some misleading information as evidence.", "Therefore, how to explore users' collective cognition of claims is a major challenge for interpretable claim verification.", "To address these deficiencies, we propose a unified Dual-view model based on Collective and Individual Cognition (CICD) for interpretable claim verification, which focuses on discovering global evidence and local key evidence, respectively, and then strengthens the consistent shared evidence between the two.", "Specifically, to explore users' collective cognition and capture global evidence, we design a Collective cognition view-based Encoder-Decoder module (CED).", "CED develops a claim-guided encoder that not only learns word-level semantics based on each individual user, but also captures sentence-level semantics (i.e., the overall opinions) among all users.", "Here, a relevant article (a response) released by an individual user is usually a sentence sequence, so all sentence-level semantics together convey the overall opinions of all users.", "Then, CED develops a hierarchical attention decoder to generate global evidence by adjusting the weights of word-level and sentence-level semantics.", "To further acquire local key evidence based on individual cognition, we develop an Individual cognition view-based Selected Interaction module (ISI) to screen the representative top-k articles with a high degree of difference and interact them with the claim to gain local key evidence fragments.", "To weaken the bias of the individual cognition view and strengthen the consistent shared evidence between global and local evidence, we introduce an inconsistency loss to suppress the divergence.", "Experimental results not only reveal the effectiveness of CICD but also demonstrate its interpretability.", "Our contributions are summarized as follows: a novel framework integrating interdisciplinary knowledge for interpretable claim verification is explored, which discovers global and local evidence from the perspectives of collective and individual cognition to interpret verified results.", "The proposed CED captures word-level (individual) and sentence-level (holistic) opinions, and reasonably adjusts the proportion between them, generating global evidence from the view of all users.", "Experiments on three competitive datasets demonstrate that CICD achieves better performance than other strong baselines.", "Automatic verification approaches rely on neural networks to extract content-based features, such as semantics (Popat et al., 2018; Wu et al., 2019), sentiments (Nguyen et al., 2020), and writing styles (Przybyla, 2020), as well as metadata-based features, such as user-profile-based (Kumar and Carley, 2019) and comment-based (Bovet and Makse, 2019) features, for verification.",
"These methods can improve the accuracy of claim verification, but they lack interpretability for the verified results.", "To tackle this, interpretable claim verification has received great attention.", "Its basic principle is to obtain queried, corrected, and rumor-refuting semantics from the articles (or comments) related to claims in order to interpret the credibility of the claims.", "At present, methods for this task generally focus on direct interactions between claims and relevant articles to identify their matching degree (Nie et al., 2019), consistency (Ma et al., 2019), implication (Liu et al., 2019), conflict (Wu et al., 2020c), etc., so as to learn practical evidence.", "For instance, HAN (Ma et al., 2019) and EHIAN (Wu et al., 2020c) learn implication relationships between claims and relevant articles to capture semantic conflicts as evidence, which reflects a certain interpretability.", "However, since all relevant articles are involved, the captured conflicts may be affected by some low-quality articles with noisy semantics, which can easily invalidate the evidence.", "In our model, we design the ISI module to screen all relevant articles and capture the valuable representative articles with differential semantics, so as to learn local key evidence fragments.", "In addition, some methods, such as GEAR (Zhou et al., 2019) and KGAT (Liu et al., 2020), rely on graph-based networks to conduct semantic aggregation and reasoning over relevant articles, so as to capture global evidence.", "Nevertheless, these models treat an entire article (at the sentence level) as a node and ignore the importance of word-level semantics in each article.", "To overcome these defects, our model constructs a hierarchical attention decoder to fuse sentence-level and word-level semantics for fine-grained generation of global evidence.", "The overall architecture of CICD is illustrated in Figure 2.", "Inputs and Outputs.", "For cognitive input representations, the inputs of CED are a claim sequence and the concatenation of all its N relevant articles, while the inputs of ISI are a claim sequence and each relevant article.", "Given a sequence of l words X = {x_1, x_2, ..., x_l}, each word x_i ∈ R^d is a d-dimensional vector obtained from a pre-trained BERT model (Devlin et al., 2019).", "In particular, the length of each sequence in the relevant articles is l and that of the claim sequence is p.", "Thus, we obtain the representations of the i-th relevant article and the claim as X^r_i ∈ R^{l×d} and X^c ∈ R^{p×d}, respectively.", "For the outputs of the model, the outputs of CED are the generated global evidence sequence of o words G = {g_1, g_2, ..., g_o}, where g_t is the representation of the t-th generated word and o is the length of G.", "The outputs of ISI are the integrated vector of the top-k local key evidence fragments I = [I_1; I_2; ...; I_k], where ; denotes the concatenation operation.",
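To make these dimensions concrete, here is a small illustrative sketch of the tensors involved, using the sizes reported later in the implementation details (d = 768, l = 100, p = 20). The width 2d of each local fragment I_i is inferred from the concatenation in Eq. (14) below and is an assumption of this sketch.

    import numpy as np

    # Illustrative CICD input/output shapes; N varies with the claim.
    N, l, p, d = 12, 100, 20, 768
    X_r = np.random.randn(N, l, d)   # N relevant articles, l words each
    X_c = np.random.randn(p, d)      # the claim, p words
    o, k = 30, 5                     # generated length; number of selected articles
    G = np.zeros((o, d))             # CED output: global-evidence word sequence
    I = np.zeros(k * 2 * d)          # ISI output: k concatenated local fragments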
"To explore users' collective cognition of claims, we first rely on a claim-guided encoder to capture word-level and sentence-level semantics from all relevant articles, and then adjust the proportion between the two with a hierarchical attention decoder to generate global evidence.", "The claim-guided encoder consists of a sequence encoding layer and a matching layer.", "Sequence Encoding Layer.", "We rely on BiLSTMs to encode all relevant articles and the claim into contextual representations.", "We use the produced hidden states H^r = {h^r_1, h^r_2, ..., h^r_{l_all}} (where l_all denotes the total length of all articles) and H^c = {h^c_1, h^c_2, ..., h^c_p} to denote the contextual representations of the relevant articles and the claim, respectively, where each h_i (i.e., h^r_i or h^c_i) is defined as h_i = [→h_i; ←h_i] (1), where →h_i ∈ R^{d_h} and ←h_i ∈ R^{d_h} are the i-th hidden states of the forward and backward LSTMs for the word x_i, respectively, and ; is the concatenation operation.", "The Attention-based Matching Layer is engaged to aggregate the relevant information from the claim for each word within the context of the relevant articles.", "The aggregation operation a_i = attn(h^r_i, H^c) is as follows: a_i = Σ_{j=1}^{p} α_{i,j} h^c_j (2), α_{i,j} = exp(s_{i,j}) / Σ_{k=1}^{p} exp(s_{i,k}) (3), s_{i,j} = (h^r_i)^T W_1 h^c_j (4), where a_i is the aggregated vector for the i-th word of the articles and α_{i,j} is the normalized attention score between h^r_i and h^c_j.", "Here, the purpose of adopting the claim to guide the encoding of relevant articles is twofold: 1) strengthening the focus on consistent semantics associated with the claim in the relevant articles, i.e., exploring how relevant articles evaluate the claim; and 2) making the encoded semantics purer.", "We observe that relevant articles contain some advertisements or useless information.", "This guidance effectively filters out noise irrelevant to the claim from the relevant articles, and consolidates the generation of relevant semantics in the decoder module.", "Furthermore, we output the hidden state corresponding to the last word encoded in each relevant article to form consistent sentence-level representations, where h^s_i denotes the sentence-level representation of the i-th relevant article.", "In particular, we use the word-level representations H^r = {h^r_1, h^r_2, ..., h^r_{l_all}} (which can also be written per relevant article, i.e., H^r = {h^r_{1,1}, h^r_{1,2}, ..., h^r_{N,l}}, where l_all = N × l) and the sentence-level representations H^{rs} = {h^s_1, h^s_2, ..., h^s_N} as the memory bank for decoder generation.",
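A minimal NumPy sketch of the attention-based matching layer in Eqs. (2)-(4), assuming the BiLSTM states have width 2·d_h; all variable names are illustrative, not the authors' code:

    import numpy as np

    def softmax_rows(Z):
        E = np.exp(Z - Z.max(axis=-1, keepdims=True))
        return E / E.sum(axis=-1, keepdims=True)

    def claim_guided_matching(Hr, Hc, W1):
        # Eq. (4): s_{i,j} = (h^r_i)^T W_1 h^c_j for every article word i, claim word j.
        S = Hr @ W1 @ Hc.T
        # Eq. (3): normalize the scores over the claim positions j.
        alpha = softmax_rows(S)
        # Eq. (2): a_i = sum_j alpha_{i,j} h^c_j.
        return alpha @ Hc

    dh = 120                             # matches d_h in the implementation details
    Hr = np.random.randn(300, 2 * dh)    # word-level states of all articles
    Hc = np.random.randn(20, 2 * dh)     # claim states (p = 20)
    A = claim_guided_matching(Hr, Hc, np.random.randn(2 * dh, 2 * dh))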
"To capture collective cognition-view evidence from the relevant articles, we devise a hierarchical attention decoder that considers the consistent semantics of the relevant articles at different granularities to generate global evidence.", "Specifically, we employ a unidirectional LSTM as the decoder, and at each decoding time-step we calculate the sentence-level and word-level attention weights in parallel: β_i = (h^s_i)^T W_2 h^d_t, γ_{i,j} = (h^r_{i,j})^T W_3 h^d_t (5), α_{i,j} = γ_{i,j} β_i / Σ_{i,j} γ_{i,j} β_i (6).", "[Figure 2 architecture diagram omitted] Figure 2: The general architecture of CICD.", "Here h^d_t is the hidden state of the decoder at the t-th time-step, and W_2 and W_3 are trainable parameters.", "The word-level attention ascertains how to distribute attention over the words in each sentence (each article), which can identify salient evidence segments in each article, while the sentence-level attention determines how much each article should contribute to the generation at the current time-step, which can capture potential global semantics across all articles.",
"Then the context vector c_t is derived as a combination of all word-level representations reweighted by the combined attention α: c_t = Σ_{i,j} α_{i,j} h^r_{i,j} (7).", "The attentional vector is then calculated as h̃^d_t = tanh(W_4 [h^d_t; c_t]) (8).", "Finally, the predicted probability distribution over the vocabulary V at the current step is P_V = softmax(W_V h̃^d_t + b_V) (9), where W_4, W_V, and b_V are trainable parameters.", "We adopt G = {g_1, g_2, ..., g_o} to denote the generated sequence, which is rich in global evidence.",
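Putting Eqs. (5)-(9) together, one decoding step can be sketched as follows. Eq. (6) is applied literally here, so a practical implementation would additionally guard the normalization; all names and sizes are illustrative assumptions:

    import numpy as np

    def decoder_step(h_d, Hr, Hs, W2, W3, W4, Wv, bv):
        # Eq. (5): sentence-level scores beta_i and word-level scores gamma_{i,j}.
        beta = Hs @ W2 @ h_d                    # (N,)
        gamma = (Hr @ W3) @ h_d                 # (N, l)
        # Eq. (6): combined attention, normalized over all (i, j) pairs.
        combo = gamma * beta[:, None]
        alpha = combo / combo.sum()
        # Eq. (7): context vector over all word-level representations.
        c_t = (alpha[:, :, None] * Hr).sum(axis=(0, 1))
        # Eq. (8): attentional vector; Eq. (9): distribution over the vocabulary.
        h_tilde = np.tanh(W4 @ np.concatenate([h_d, c_t]))
        logits = Wv @ h_tilde + bv
        e = np.exp(logits - logits.max())
        return e / e.sum()

    N, l, dr, dd, V = 5, 100, 240, 240, 1000
    P = decoder_step(np.random.randn(dd), np.random.randn(N, l, dr),
                     np.random.randn(N, dr), np.random.randn(dr, dd),
                     np.random.randn(dr, dd), np.random.randn(dd, dd + dr),
                     np.random.randn(V, dd), np.zeros(V))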
"To capture evidence fragments from the individual cognition view, we design the ISI module with the following layers: 1) a sentence-level representation layer for capturing high-level representations of relevant articles; 2) a selected mechanism for screening the representative top-k relevant articles by degree of difference; and 3) a co-interaction layer that makes the claim and the selected articles interact with each other to explore local key evidence fragments.", "We exploit a BiLSTM to encode each relevant article and take the output of the last hidden state as the sentence-level representation, where the encoding process is similar to the sequence encoding layer in Section 3.2.1 and the sentence-level representation of the i-th article is h^{rs}_i.", "To capture the representative top-k articles, we develop a selected mechanism that calculates the difference between each article and the other articles in an automated manner.", "To do this, the selected mechanism learns and optimizes an inter-sentential attention matrix A ∈ R^{N×N}.", "The entry (m, n) of A holds the difference between article m and article n (1 ≤ m, n ≤ N and m ≠ n) and is computed as u_m = σ(W_m h^{rs}_m + b_m), u_n = σ(W_n h^{rs}_n + b_n) (10), A[m, n] = exp(u_m · u_n) / Σ_{i=1}^{N} exp(u_i · u_n) (11), where σ is an activation function, W_m and W_n are weight matrices, b_m and b_n are biases, and · denotes the dot product.", "The larger the entry A[m, n], the more similar articles m and n are.", "Conversely, a smaller A[m, n] means that articles m and n contain more differential semantics, and we finally screen the top-k relevant articles with the highest difference for the downstream interaction.", "The co-interaction layer aims to explore local key evidence fragments.", "Specifically, this layer enables the claim to focus on the i-th article to discover specific evidence fragments, while the i-th article pays close attention to the claim to explore the possibly false parts of the claim.", "Finally, we combine the two interactions to constitute the individual local key evidence fragments: H^{rin}_i = h^{rs}_i + softmax(h^{rs}_i (H^{cs})^T) H^{cs} (12), H^{cin} = H^{cs} + softmax(H^{cs} (h^{rs}_i)^T) h^{rs}_i (13), I_i = [H^{rin}_i; H^{cin}] (14), where H^{rin}_i is the evidence fragment of the i-th article, H^{cin} is the false part of the claim, and H^{cs} is the output of the last time step of H^c.", "For all top-k articles, we integrate the local evidence fragments by concatenation: I = [I_1; I_2; ...; I_k] (15).",
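A hedged sketch of the selected mechanism in Eqs. (10)-(11) follows; tanh stands in for the unspecified activation σ, and the rule for aggregating A into a per-article difference score is an assumption, since the excerpt does not spell it out:

    import numpy as np

    def select_top_k(Hs, Wm, bm, Wn, bn, k):
        # Eq. (10): project each article's sentence-level state.
        Um = np.tanh(Hs @ Wm.T + bm)
        Un = np.tanh(Hs @ Wn.T + bn)
        # Eq. (11): A[m, n] = exp(u_m . u_n) / sum_i exp(u_i . u_n),
        # i.e. a column-wise softmax over m for each fixed n.
        S = Um @ Un.T
        E = np.exp(S - S.max(axis=0, keepdims=True))
        A = E / E.sum(axis=0, keepdims=True)
        np.fill_diagonal(A, 0.0)            # m != n
        # Aggregate similarity per article; low total similarity = high difference.
        sim = A.sum(axis=1) + A.sum(axis=0)
        return np.argsort(sim)[:k]          # indices of the k most different articles

    N, d = 12, 240
    idx = select_top_k(np.random.randn(N, d), np.random.randn(d, d), np.zeros(d),
                       np.random.randn(d, d), np.zeros(d), k=5)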
"3.3 Dual-View Classification.", "To alleviate the bias of the individual cognition-view evidence fragments and strengthen the consistent shared evidence between global and local evidence, we introduce an inconsistency loss to penalize the disagreement between the two types of evidence.", "We define the inconsistency loss as the Kullback-Leibler (KL) divergence between G and I: Loss_in = Σ_k G'_k log(G'_k / I'_k) (16), where G'_k is the k-th element of the concatenation of the words in G, and I'_k is the k-th element of I.", "Furthermore, we fuse the two types of penalized evidence and adopt a softmax function to emit the probability distribution for training, where a cross-entropy loss forces the model to minimize the error for a training sample with ground-truth label y: Loss = −Σ y log p (17), p = softmax(W_p [G; I] + b_p) (18), where W_p and b_p are learnable parameters.", "To ensure the effective synergy of the two cognition views, we combine all the losses mentioned above for joint training: L = Loss + λ Loss_in (19), where λ is a hyper-parameter.",
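The training objective in Eqs. (16)-(19) can be sketched as follows, assuming the global and local evidence have been flattened to a common length and normalized to distributions (the excerpt does not specify the normalization), and passing the class logits of Eq. (18) in directly:

    import numpy as np

    def cicd_loss(G, I, logits, y, lam=0.2):
        # Normalize both evidence vectors to distributions (an assumption).
        Gp = np.exp(G - G.max()); Gp /= Gp.sum()
        Ip = np.exp(I - I.max()); Ip /= Ip.sum()
        loss_in = np.sum(Gp * np.log(Gp / Ip))           # Eq. (16): KL(G' || I')
        p = np.exp(logits - logits.max()); p /= p.sum()  # Eq. (18): softmax
        loss_ce = -np.log(p[y])                          # Eq. (17), one-hot label y
        return loss_ce + lam * loss_in                   # Eq. (19), lambda = 0.2

    L = cicd_loss(np.random.randn(64), np.random.randn(64),
                  np.random.randn(3), y=1)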
sets.", "For parameter configurations, we adjust them according to the performance of development sets, we set the word embedding size d to 768.", "The dimensionality of LSTM hidden states d h is 120.", "The length l of each relevant article is 100 and that of the claim p is assigned as 20.", "Due to no parameters depend on the number of articles N , instead of intercepting a fixed number, we set N to vary with claims.", "Initial learning rate is set to 2e-3.", "The loss weight coefficient is trained to 0.2.", "The dropout rate is 0.4, and we set the mini-batch size of the three datasets as 32, 32, and 64, respectively.", "Additionally, an Adam (Kingma and Ba, 2015) optimizer with 1 as 0.9 and 2 as 0.999 is used to optimize all trainable parameters.", "We compare CICD and several competitive baselines: 1) DeClarE (Popat et al., 2018) models joint interactions between claims and articles and aggregates word-level credibility signals from external articles for evidence-aware assessment; 2) BERT (Devlin et al., 2019), we employ pre-trained BERT classifier to verify claims; 3) HAN (Ma et al., 2019), a hierarchical attention network, constructs the interactions between claims and relevant articles for capturing sentence-64", "level evidence by considering their topical coherence and semantic inference strength; 4) HAN-ba (Ma et al., 2019) is a variant of HAN, where the gated attention is replaced to biaffine attention for acquiring evidence; and 5) EHIAN (Wu et al., 2020c) is an evidence-aware hierarchical interactive attention network, which focuses on the direct interaction between claim and relevant articles to explore key evidence fragments.", "As shown in Table 1, we observe that: BERT achieves at least 6.5% improvement on micF1 than DeClarE, which illustrates pre-trained model can learn rich semantic context features to improve performance, which is also the reason that we adopt BERT to train word embeddings.", "HAN consistently outperforms BERT, which indicates HAN capturing the coherence between relevant articles could help improve the task performance.", "In interpretable methods, CICD outperforms DeClarE, which is because our model not only focuses on word-level semantics like DeClarE, but also grasps the holistic sentence-level features.", "Moreover, owing to HAN and HAN-ba drive all relevant articles to participate in the interaction, prompting them to gain a small boost in precision on Snopes, but this way may introduce noise from nonsignificant articles.", "CICD effectively avoids this problem by selecting vital articles for interaction, which obtains significant improvements in other metrics compared with HAN and HAN-ba.", "Furthermore, CICD consistently outperforms EHIAN on Snopes and PolitiFact.", "The superiority is clear: CICD not only values individual cognition view to capture key evidence fragments, but also generates collective cognition-view evidence for claim verification.", "In order to evaluate the impact of each component of CICD, we ablate CICD into the following simplified models: 1) -matching U represents the attention-based matching layer of CED is removed; 2) -CED", "means CED is deleted from our model; 3) -selected I refers to the selected mechanism is removed from ISI; 4) -interaction I represents the co-interaction unit of ISI is replaced by concatenation operation; 5) -ISI corresponds to ISI is separated; and 6) -inconsistency loss means the inconsistency loss is removed.", "As shown in Table 2, we observe that: The removal of each module (-CED or -ISI) 
"As shown in Table 2, we observe the following: the removal of either module (-CED or -ISI) weakens the performance of CICD, yielding a 4.2% to 5.5% degradation in micF1, and stripping out individual layers of each module (like -selected and -interaction) also reduces performance, by at least 2.4% in micF1, which demonstrates the effectiveness of each component and the organic integrity of CICD.", "-CED shows the lowest performance among all the simplified models, decreasing micF1 by 5.5% and 4.6% on the two datasets, respectively, which demonstrates the effectiveness of CICD in capturing collective cognition-view global evidence.", "Meanwhile, -ISI underperforms CICD, with 4.3% and 4.2% degradation in micF1 on the two datasets, respectively, which conveys the necessity of exploring local key evidence fragments from the individual cognition view.", "Compared with -inconsistency loss, CICD significantly improves performance on both datasets with the help of the inconsistency loss unit, which verifies the effectiveness of relying on the inconsistency loss to discover shared valuable semantics between global and local evidence.", "To obtain a more detailed understanding of the superiority of our co-interaction networks (CoI), we compare CoI with the following prevalent interaction networks: 1) MLP (Multilayer Perceptron), which acts as an interaction strategy to automatically abstract the integrated representation of claims and articles; 2) Self-Att (Self-attention Networks) (Vaswani et al., 2017), which adopts the claim as the query and the relevant articles as values and keys for interaction; 3) Biaf-Att (Biaffine Attention) (Ma et al., 2019), which measures the degree of semantic matching for interaction; and 4) Symm-Intr (Symmetric interaction attention) (Tao et al., 2019), which is exploited to model the interaction between claims and articles.", "Specifically, we investigate the performance and time cost of these methods on Snopes and PolitiFact on Linux CentOS with an NVIDIA TITAN Xp GPU, as shown in Figure 3.", "We observe the following: in terms of overall performance, our method is optimal, outperforming the other methods by more than 5.1% and 5.6% in micF1 on the two datasets, respectively.", "In terms of time cost, our method saves a great deal of time.", "Compared with Self-Att and Symm-Intr, our method saves 500 to 1,000 seconds on the two datasets, respectively.", "The reason is that the multiple mappings of self-attention networks and the repeated stacks of symmetric attention reduce efficiency.", "Although the time cost of our method is higher than that of MLP and Biaf-Att, the performance of both of those methods is unsatisfactory, lower than our method by at least 2.6% and 3.7% in micF1 on the two datasets.", "On the whole, these results adequately demonstrate the superiority of our method.", "To verify the effectiveness of the internal structure of the hierarchical attention decoder (HAD) in CED, we ablate HAD with the following models: -word., -sentence., and -merge. respectively denote HAD with the word-level attention, sentence-level attention, and merged semantics removed, while decoder. represents the vanilla decoder.", "Experimental results are shown in Table 3, and we observe the following: first, the removal of any module of HAD weakens the performance of the model, which confirms the effectiveness of each module.", "Second, beyond the basic decoder, our model achieves the most prominent boost with the support of sentence-level attention, which proves the effectiveness of HAD in fusing sentence-level semantics to capture global semantics.",
"To further investigate the contribution of sentence-level semantics to the global evidence, we take Figure 1 as an example and visualize the global evidence generated by our model with and without sentence-level attention, respectively.", "As shown in Figure 4, we observe that the model with sentence-level attention focuses more on the sentences with maximum weight, i.e., R4, such as the words 'do not spread' and 'refuted it spreads in the air', while the model without sentence-level attention cannot identify which relevant articles are more valuable and concentrates more on R2 and R3, e.g., 'get infected husband' and 'not all types of mosquitoes'.", "These results demonstrate the effectiveness of sentence-level semantics for the generation of global evidence.", "[Figure 4 example sequences. (a) Our model with sentence-level attention: 'Do not spread this news, we prevent the transmission of dengue fever through mosquito. It is refuted that it spreads in the air.' (b) Our model without sentence-level attention: 'I get infected after my husband, it maybe true that dengue fever could be transmitted through mosquitoes and air, but not all types of mosquitoes.' Figure 4: The sequences generated by our model with and without sentence-level attention, respectively.]", "To examine the extensibility of our model, we also compare CICD with the following state-of-the-art baselines on the FEVER dataset: 1) NSMN, a pipeline-based system, the Neural Semantic Matching Network (Nie et al., 2019), which conducts document retrieval, sentence selection, and claim verification jointly for fact extraction and verification; 2) HAN, introduced in Section 4.3.1; 3) GEAR, a graph-based evidence aggregating and reasoning model (Zhou et al., 2019) that enables information to be transferred over a fully-connected evidence graph and then utilizes different aggregators to collect multi-evidence information; and 4) KGAT, a kernel graph attention network (Liu et al., 2020) that conducts more fine-grained fact verification with kernel-based attention, using a BERT (Base) encoder with ESIM-retrieved sentences.", "As shown in Table 4, we observe the following: CICD outperforms the two pipelines (NSMN and HAN) by 4.3% to 11.0% in accuracy.", "This is because these two baselines lack an integration and reasoning process between relevant articles when capturing evidence.", "CICD also boosts performance in comparison with GEAR and KGAT, showing at least 1.8% and 1.5% improvement in accuracy on the development and testing sets, respectively.", "The reason may be that although the two graph-based models aggregate and reason over information from relevant articles to collect multi-evidence, they treat each relevant article equally, so individual-cognition relevant articles with biased semantics interfere with their reasoning process.", "It is more feasible for our model to discover global evidence and local key evidence fragments comprehensively from the perspectives of collective and individual cognition.", "To interpret the results of our model more transparently and intuitively, we visualize the outputs of each module of CICD in Figure 5, where Figure 5(a) is the sequence generated by the CED module, and the highlighted words in Figures 5(b) and 5(c) are, respectively, the words captured by CICD to interpret the results and the words selected by the ISI module as evidence fragments.",
fragments.", "We could learn: ISI ignores some articles with pale and feeble semantics (R2 and R4), and selects the articles with more valuable semantics (R1, R3, and R5) and captures multiple local evidence fragments, such as this video screenshot shows' (E1), se-rious error' (E2), and screenshot of the video is one-sided' (E3).", "Particularly, fragment E1 is misleading, which reflects the deviation of individual cognition.", "The sequence generated by CED effectively gains available evidence 120 million Americans a serious error' and the screenshot is one-sided' through balancing the possible evidence semantics in relevant articles from a global perspective.", "By constraining global and local evidence, CICD disciplines the misleading evidence fragment E1 captured by ISI, and finally highlights the shared salient evidence between the both as the final interpretability of the verification results.", "In this paper, we proposed a unified dual-view model based on the perspectives of collective and individual cognition for interpretable claim verification, which constructed collective cognition view-based", "encoder-decoder module to generate global evidence and designed individual cognition view-based selected interaction module to explore local key evidence segments.", "Besides, we introduced inconsistent loss to penalize the disagreement between global and local evidence for promoting the capture of consistent shared evidence.", "Experiments on three different widely used datasets demonstrated the effectiveness and interpretability of our model.", "In the future, we plan to expand the work as follows: 1) Developing questioning mechanism to filter the suspicious evidence; and 2) Integrating social cognition, psychology, and other interdisciplinary knowledge to improve the interpretability of claim verification.", "The research work was supported by National Key Research and Development Program in China (2019YFB2102300); The World-Class Universities (Disciplines) and the Characteristic Development Guidance Funds for the Central Universities of China (PY3A022); Ministry of Education Fund Projects (18JZD022 and 2017B00030); Shenzhen Science and Technology Project (JCYJ20180306170836595); Basic Scien-tific Research Operating Expenses of Central Universities (ZDYF2017006); Xi'an Navinfo", "Corp.& Engineering Center of Xi'an Intelligence Spatial-temporal Data Analysis Project (C2020103); Beilin District of Xi'an Science & Technology Project (GX1803).", "We would like to thank the anonymous reviewers for their valuable and constructive comments." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "other", "method", "method", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "other", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "other", "other", "other" ]
[ "Past research has demonstrated that large neural language models (LMs) encode surprising amounts of factual information: however, augmenting or modifying this information requires modifying a corpus and retraining, which is computationally expensive.", "To address this problem, we develop a neural LM that includes an interpretable neuro-symbolic KB in the form of a fact memory.", "Each element of the fact memory is formed from a triple of vectors, where each vector corresponds to a KB entity or relation.", "Our LM improves performance on knowledge-intensive question-answering tasks, sometimes dramatically, including a 27 point increase in one setting of WebQuestionsSP over a state-of-the-art open-book model, despite using 5% of the parameters.", "Most interestingly, we demonstrate that the model can be modified, without any re-training, by updating the fact memory.", "Neural language models (LMs) (Peters et al., 2018; Devlin et al., 2019; Raffel et al., 2019) that have been pre-trained by self-supervision on large corpora contain rich knowledge about the syntax and semantics of natural language (Tenney et al., 2019), and are the basis of much recent work in NLP.", "Pretrained LMs also contain large amounts of factual knowledge about the world (Petroni et al., 2019; Roberts et al., 2020; Brown et al., 2020).", "However, while large LMs can be coerced to answer factual queries, they still lack many of the properties that knowledge bases (KBs) typically have.", "In particular, it is difficult to distinguish answers produced by memorizing factual statements in the pre-training corpus from lower-precision answers produced by linguistic generalization (Poerner et al., 2019).", "It is also difficult to add or remove factual information without retraining the LM, an expensive process 1 .", "The difficulty of updating knowledge in neural LMs contrasts with symbolic KBs, where it is very easy to add or modify triples, and is a major disadvantage of using a LM as a KBas in many domains (news, product reviews, scientific publications, etc) the set of known facts changes frequently.", "Symbolic KBs thus remain practically important (Google, 2012; Dong, 2017), especially for NLP applications where text is hard to automatically process (e.g., scientific, technical, or legal) or tasks rich in information that exists only in structured form (e.g., technical specifications of a new product, where no product page or review text discussing it yet exists).", "Motivated by this, past work has sought to combine the benefits of neural LMs with the large, broad-coverage KBs that now exist (Bollacker et al., 2008; Auer et al., 2007; Vrandecic and Krtzsch, 2014).", "This paper continues this research program with a new knowledge-augmented LM called Fact Injected Language Model (FILM).", "FILM is a masked LM, where masks can be filled either from the token vocabulary or an entity vocabulary.", "The vector representation of each entity in a KB is jointly learned alongside other parameters of a Transformer LM, and stored in a separate entity memory .", "FILM also includes a fact memory where each element is derived from a triple of vectors, representing a KB entity or relation.", "Since these triples are defined compositionally from (rep-resentations of) entities and relations, they have an interpretable symbolic meaning: e.g., if e mtv is the vector representation of KB entity Mountain View, CA and e google and r hq similarly correspond to Google Inc and the relation headquartered in, these vectors can be used to 
construct a memory element f(e_google, r_hq, e_mtv) for the KB assertion 'Google, Inc is headquartered in Mountain View, CA'.", "[Footnote 1: Models large enough to achieve good factual coverage require extreme amounts of compute, and the largest neural LMs now cost millions of dollars to train (Brown et al., 2020).]", "[Figure 1: Fact Injected Language Model architecture. The model takes a piece of text (a question during fine-tuning or arbitrary text during pre-training) and first contextually encodes it with an entity-enriched transformer. FILM uses the contextually encoded MASK token as a query to the fact memory. In this case, the contextual query chooses the fact key (Charles Darwin, born_in), which returns the set of values {United Kingdom} (the value set can contain multiple entity objects, as is the case when calling the key [United Kingdom, has_city]). The returned object representation is incorporated back into the context in order to make the final prediction. Note that the entity representations in the facts (both in keys and values) are shared with the entity memory. The portion within the dashed line follows the procedure from Févry et al. (2020).]", "This means that the fact memory can easily be extended with new facts.", "In analyses on four benchmark question-answering datasets, we show that FILM improves significantly, and sometimes dramatically, over several strong baselines (e.g., BART (Lewis et al., 2019) and T5 (Raffel et al., 2019)), and this improvement is even larger when train-test overlap is removed.", "In one setting of WebQuestionsSP, we outperform the next best performing model (RAG (Lewis et al., 2020a)) by 27 points despite using only 5% of the number of parameters.", "Most interestingly, we demonstrate that FILM models can be updated without any re-training, by modifying the fact memory.", "Specifically, in Section 4.1, we show that we can inject new fact memories at inference time, enabling FILM to correctly answer questions about pairs of entities that were never observed in training (either during pre-training or fine-tuning).", "In Section 4.2 we also evaluate updating the model by inserting contra-positive facts that contradict facts mentioned in the pre-training data, and we show that FILM can correctly answer novel questions in this scenario as well.", "To summarize, this paper's contributions are:", "1. a neural LM that incorporates a symbolic fact memory.", "2. We outperform most baselines on several benchmark open-domain QA datasets, dramatically so if test-train overlap in the datasets is removed.", "3. We show FILM can easily adapt to newly injected and modified facts without retraining.", "The Fact Injected Language Model (FILM) (see Figure 1) extends the Transformer (Vaswani et al., 2017) architecture of BERT (Devlin et al., 2019) with additional entity and fact memories.", "These memories store semantic information which can later be retrieved and incorporated into the representations of the transformer.",
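As a rough illustration of the key-value fact memory sketched in Figure 1 (keys built from subject-entity and relation embeddings, values the associated object entities), consider the following toy lookup; all names and the scoring function are assumptions for illustration, not the paper's implementation:

    import numpy as np

    d = 4
    entity_emb = {"charles_darwin": np.random.randn(d),
                  "united_kingdom": np.random.randn(d)}
    relation_emb = {"born_in": np.random.randn(d)}

    # One fact-memory element per (subject, relation) key; values are object sets.
    fact_keys = {("charles_darwin", "born_in"): np.concatenate(
        [entity_emb["charles_darwin"], relation_emb["born_in"]])}
    fact_values = {("charles_darwin", "born_in"): ["united_kingdom"]}

    def query_fact_memory(query):
        # Return the object set of the best-matching key for a contextual query.
        scores = {k: float(v @ query) for k, v in fact_keys.items()}
        best = max(scores, key=scores.get)
        return fact_values[best]

    print(query_fact_memory(np.random.randn(2 * d)))  # e.g. ['united_kingdom']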
"Similar to the approach in Févry et al. (2020), entity embeddings will (ideally) store information about the textual contexts in which each entity appears and, by inference, the entity's semantic properties.", "The fact memory encodes triples from a symbolic KB, constructed compositionally from the learned embeddings of the entities that comprise them, and is implemented as a key-value memory which is used to retrieve entities given their KB properties.", "This combination results in a neural LM which learns to access information from a symbolic KB.", "We represent a knowledge base K as a set of triples (s, r, o), where s, o ∈ E are the subject and object entities and r ∈ R is the relation, with E and R being pre-defined vocabularies of entities and relations.", "A text corpus C is a collection of paragraphs [2] {p_1, ..., p_|C|}.", "Let M be the set of entity mentions in the corpus C.", "A mention m_i is encoded as (e_m, s^p_m, t^p_m), indicating that entity e_m is mentioned in paragraph p starting at token position s^p_m and ending at t^p_m.", "We will usually drop the superscript p and use s_m and t_m for brevity.", "The input to our model is a piece of text: either a question during fine-tuning (see Appendix A.2.2) or a paragraph in pre-training (see Appendix A.2.1).", "Pretraining is formulated as a cloze-type question answering (QA) task: given a paragraph p = {w_1, ..., w_|p|} with mentions {m_1, ..., m_n}, we sample a single mention m_i to act as the cloze answer and replace all tokens of m_i with [MASK] tokens.", "The entity in E named by the masked mention is the answer to the cloze question q ('United Kingdom' in the example input of Figure 1).", "Mentions in the paragraph other than m_i are referred to below as context mentions.", "In the following sections we describe how our model learns to jointly link context entities (Section 2.3) and predict answer entities (Section 2.5).", "Our entity memory E ∈ R^{|E| × d_e} is a matrix containing a vector for each entity in E, trained as an entity-masked LM.", "The model input is a text span containing unlinked entity mentions with known boundaries [3].", "Mentions are masked with some probability.", "Our entity memory follows Entities as Experts (EaE) (Févry et al., 2020), which interleaves standard Transformer (Vaswani et al., 2017) layers with layers that access the entity memory [4].", "Given a piece of text q = {w_1, ..., w_|q|}, the contextual embedding h^(l)_i is the output at the i-th token of the l-th intermediate transformer layer.", "[2] Although we use the term paragraph here, in our experiments we use spans of 128 tokens, which need not follow paragraph boundaries.", "[3] Févry et al. (2020) also showed the model is capable of learning to predict these boundaries.", "For simplicity, in this work we assume they are given.", "[4] We follow the implementation of Févry et al. (2020) and have a single entity memory access between the fourth and fifth transformer layers.",
"These contextual embeddings are used to compute query vectors that interface with the entity memory.", "For each context mention m_i = (e_{m_i}, s_{m_i}, t_{m_i}) in q, we form a query vector to access the entity memory by concatenating the contextual embeddings for the mention m_i's start and end tokens, h^(l)_{s_{m_i}} and h^(l)_{t_{m_i}}, and projecting them into the entity embedding space.", "We use this query to compute attention weights over the full entity vocabulary and produce an attention-weighted sum of entity embeddings u^(l)_{m_i}.", "The result is then projected back to the dimension of the j-indexed contextual token embeddings, and added to what would have been the input to the next layer of the Transformer: h^(l)_{m_i} = W_e^T [h^(l)_{s_{m_i}}; h^(l)_{t_{m_i}}] (1); u^(l)_{m_i} = softmax(h^(l)_{m_i}, E) E (2); h^(l+1)_j = h^(l)_j + W_2^T u^(l)_{m_i}, for s_{m_i} < j < t_{m_i} (3).", "After the final transformer layer T, h^(T)_{m_i} is used to predict the context entity e_{m_i} and produce a loss with I_{e_{m_i}}, the one-hot label of entity e_{m_i}.", "Following Févry et al. (2020), we supervise the entity access for the intermediate query vector in Eq. (1): ê_{m_i} = argmax_{e_i ∈ E} (c^T_{m_i} e_i); loss_ctx = cross_entropy(softmax(c_{m_i}, E), I_{e_{m_i}}); loss_ent = cross_entropy(softmax(h^(l)_{m_i}, E), I_{e_{m_i}}).", "2.4 Fact Memory.", "FILM contains a second fact memory, populated by triples from the knowledge base K, as shown on the right side of Figure 1 [5].", "The fact memory shares its own entity representations with the entity memory embeddings in E, but each element of the fact memory corresponds to a symbolic substructure, namely a key-value pair ((s, r), {o_1, ..., o_n}).", "The key (s, r) is a (subject entity, relation) pair, and the corresponding value {o_1, ..., o_n} is the list of object entities associated with s and r, i.e., (s, r, o_i) ∈ K for i ∈ {1, ..., n}.", "Conceptually, KB triples with the same subject entity and relation are grouped into a single element.", "We call the subject and relation pair a_j = (s, r) ∈ A a head pair and the list of objects b_j = {o_1, ..., o_n} ∈ B a tail set [6].",
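The entity memory access of Eqs. (1)-(3) can be summarized with the following PyTorch-style sketch (an illustration under stated assumptions, not the paper's implementation; the function name `entity_memory_access` and the exact span update are simplifications):

```python
import torch
import torch.nn.functional as F

def entity_memory_access(h, spans, E, W_e, W_2):
    """Entity memory access, roughly Eqs. (1)-(3).

    h:     (N, d) contextual token embeddings at layer l
    spans: list of (start, end) token positions of entity mentions
    E:     (|E|, d_e) entity embedding matrix
    W_e:   (2 * d, d_e) projection into the entity embedding space
    W_2:   (d_e, d) projection back to the token dimension
    """
    h_next = h.clone()
    for start, end in spans:
        # Eq. (1): mention query from the start / end token embeddings.
        query = torch.cat([h[start], h[end]]) @ W_e          # (d_e,)
        # Eq. (2): attention-weighted sum over the full entity vocabulary.
        att = F.softmax(query @ E.T, dim=-1)                  # (|E|,)
        u = att @ E                                           # (d_e,)
        # Eq. (3): add the retrieved entity vector back into the mention span.
        h_next[start:end + 1] += u @ W_2
    return h_next
```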
"[5] In our experiments we use a single fact memory access, after the final (12th) transformer layer.", "[6] The size of the tail set b_j can be large for a popular head pair (s, r).", "In such cases, we randomly select a few tails and drop the rest.", "The maximum size of the tail set is 32 in the experiments in this paper.", "In more detail, we encode a head pair a_j = (s, r) ∈ A by concatenating embeddings for the subject entity and relation, and then projecting them linearly to a new head-pair embedding space.", "More precisely, let E ∈ R^{|E| × d_e} be the entity embeddings trained in Section 2.3, and R ∈ R^{|R| × d_r} be embeddings of the relations R in the knowledge base K.", "We encode a head pair a_j as: a_j = W_a^T [s; r] ∈ R^{d_a}, where s ∈ E and r ∈ R are the embeddings of subject s and relation r, and W_a is a learned linear transformation matrix.", "We let A ∈ R^{|A| × d_a} denote the embedding matrix of all head pairs.", "Let the answer for q be denoted e_ans, and its masked mention m_ans = (e_ans, s_ans, t_ans).", "For a masked mention m_ans, we define a query vector to access the fact memory as: v_{m_ans} = W_f^T [h^(T)_{s_ans}; h^(T)_{t_ans}] (4), where h^(T)_{s_ans} and h^(T)_{t_ans} are the contextual embeddings for the start and end tokens of the mention m_ans, and W_f is the linear transformation matrix into the embedding space of head pairs A.", "Head pairs in A are scored by the query vector v_{m_ans}, and the top k head pairs with the largest inner product are retrieved.", "This retrieval process on the fact memory is distantly supervised.", "We define a head pair to be a distantly supervised positive example a_ds = (s, r) for a passage if its subject entity s is named by a context mention m_i and the masked entity e_ans is an element of the corresponding tail set, i.e., e_ans ∈ b_ds.", "When no distantly supervised positive example exists for a passage, the model is trained to retrieve a special null fact comprised of the s_null head entity and r_null relation, i.e., a_ds = (s_null, r_null), whose tail set is empty.", "This distant supervision is encoded by a loss function: TOP_k(v_{m_ans}, A) = argmax^k_{j ∈ {1, ..., |A|}} a_j^T v_{m_ans}; loss_fact = cross_entropy(softmax(v_{m_ans}, A), I_{a_ds}).", "The result of this query is that the tail sets associated with the top k scored head pairs, i.e., {b_j | j ∈ TOP_k(v_{m_ans}, A)}, are retrieved from the fact memory.", "Next, the tail sets retrieved from the fact memory are aggregated.", "Recall that a tail set b_j returned from the fact memory is the set of entities {o_1, ..., o_n} such that (s, r, o_i) ∈ K for i ∈ {1, ..., n}, with the associated head pair a_j = (s, r).", "Let o_i denote the embedding of entity o_i ∈ E.", "We encode the returned tail set b_j as a weighted centroid of the embeddings of the entities in the tail set b_j, where α_i is a context-dependent weight of the object entity o_i.", "To compute the weights α_i, we use a process similar to Eq. (4): we compute a second query vector z_{m_ans} to score the entities inside the tail set b_j, and the weights α_i are the softmax of the inner products between the query vector z_{m_ans} and the embeddings of the entities in the tail set b_j, where W_b is a transformation matrix distinct from W_e in Eq. (1) and W_f in Eq. (4).",
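A hedged sketch of the fact-memory read just described, covering the query of Eq. (4), top-k head-pair retrieval, the α-weighted tail-set centroids, and the β-weighted aggregation discussed immediately below (illustrative only; the parameterization of the second query z via W_b is an assumption, since the corresponding equation is not shown in the text):

```python
import torch
import torch.nn.functional as F

def fact_memory_read(h_start, h_end, A, tail_embeds, W_f, W_b, k=5):
    """Top-k fact retrieval (Eq. 4) plus tail-set aggregation.

    h_start, h_end: (d,) contextual embeddings of the masked mention boundaries
    A:              (|A|, d_a) head-pair embedding matrix
    tail_embeds:    list of (n_j, d_e) entity-embedding tensors, one per head pair
    W_f, W_b:       (2 * d, d_a) and (2 * d, d_e) projections (W_b is assumed)
    """
    span = torch.cat([h_start, h_end])
    v = span @ W_f                                   # query over head pairs, Eq. (4)
    scores = A @ v                                   # inner-product retrieval scores
    top_scores, top_idx = scores.topk(k)
    beta = F.softmax(top_scores, dim=-1)             # weights over retrieved tail sets

    z = span @ W_b                                   # second query, scores tail entities
    pooled = []
    for j in top_idx.tolist():
        alpha = F.softmax(tail_embeds[j] @ z, dim=-1)   # context-dependent weights
        pooled.append(alpha @ tail_embeds[j])           # weighted centroid of tail set
    f = (beta.unsqueeze(-1) * torch.stack(pooled)).sum(0)  # knowledge embedding
    return f
```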
"The top k tail sets b_j are further aggregated using weights β_j, which are the softmax of the retrieval (inner product) scores of the top k head pairs a_j.", "This leads to a single vector f_{m_ans} that we call the knowledge embedding for the masked mention m_ans.", "Intuitively, f_{m_ans} is the result of retrieving a set of entities from the fact memory.", "The last step is to integrate this retrieved set into the Transformer's contextual embeddings.", "Of course, KBs are often incomplete, and, especially during pre-training, it might be necessary for the model to ignore the result of retrieval if no suitable triple appears in the KB.", "To model this, the final step in the integration process is to construct an integrated query q_{m_ans} with a learnable mixing weight λ.", "Algorithmically, λ is computed as the probability of retrieving a special null head a_null from the fact memory, i.e., whether an oracle head pair exists in the knowledge base.", "q_{m_ans} is used to predict the masked entity.", "However, we first validate the efficacy of our model on standard splits of widely used knowledge-intensive benchmarks against many state-of-the-art systems (Section 3.3), as well as on two subsets of these benchmarks restricted to examples answerable with Wikidata (Section 3.4) and examples filtered for train/test overlap (Section 3.5).", "We evaluate on four knowledge-intensive tasks [7].", "WebQuestionsSP is an open-domain question answering dataset containing 4737 natural language questions linked to corresponding Freebase entities and relations (Yih et al., 2015), derived from WebQuestions (Berant et al., 2013).", "LAMA TREx is a set of fact-related cloze questions.", "Since we are interested in entity prediction models, we restrict our LAMA investigations to TREx, which has answers linked to Wikidata.", "TriviaQA (open) contains questions scraped from quiz-league websites (Joshi et al., 2017).", "We use the open splits following Lee et al. (2019).", "FreebaseQA is an open-domain QA dataset derived from TriviaQA and other trivia resources (see Jiang et al. (2019) for full details).",
"Every answer can be resolved to at least one Freebase entity and each question contains at least one entity.", "T5 (Raffel et al., 2019) and BART (Lewis et al., 2019) are large text-to-text transformers.", "Dense Passage Retrieval (DPR) (Karpukhin et al., 2020) is a two-stage retrieve-and-read model.", "Retrieval Augmented Generation (RAG) (Lewis et al., 2020a) and Fusion in Decoder (FID) (Izacard and Grave, 2020) use DPR retrieval, followed by generative decoders based on BART and T5, respectively.", "[7] All data is English.", "See Appendix A.1 for additional details.", "FID is the current state-of-the-art on the open-domain setting of TriviaQA.", "K-Adapter (Wang et al., 2020a) and Bert-KNN (Kassner and Schütze, 2020) are recent BERT extensions that perform at or near state-of-the-art on the LAMA benchmark.", "Entities-as-Experts (EaE) (Févry et al., 2020) is discussed in Section 2.3.", "Our EaE models are trained using the same hyperparameters and optimization settings as FILM.", "Generally, open-book models refer to 'retrieve and read' pipelines (Chen et al., 2017) which, given a query, (1) retrieve relevant passages from a corpus, (2) separately re-encode the passages conditioned on the question, and then (3) produce an answer.", "Conversely, closed-book models answer questions directly from their parameters, without additional processing of source materials.", "We consider FILM and EaE closed-book models, as they do not retrieve and re-encode any source text, and instead attend to parameterized, query-independent memories.", "LAMA TREx.", "In Table 1, we can see that FILM outperforms several recently proposed models on the LAMA TREx task.", "FILM outperforms the next best performing model, BERT-KNN, by 5.5 points.", "Question Answering.", "In Table 2, we compare FILM to five closed-book and three open-book QA models on WebQuestionsSP and TriviaQA.", "The columns denoted Full Dataset-Total show results for the standard evaluation.", "For WebQuestionsSP, despite using far fewer parameters (see Table 3 and Appendix A.3 for details), FILM outperforms all other models, including the top open-book model RAG.", "On TriviaQA, FILM outperforms all other closed-book models, though the open-book models are substantially more accurate on this task, likely because of the enormous size of the models and their access to all of Wikipedia, which contains all (or nearly all) of the answers in TriviaQA.", "WebQuestionsSP (and similarly FreebaseQA, discussed in Section 4) was constructed such that all questions are answerable using the Freebase KB, which was last updated in 2016.", "Because our pretraining corpus is derived from larger and more recent versions of Wikipedia, we elected to use a KB constructed from Wikidata.", "Table 2: Open Domain QA Results. Each dataset reports Full Dataset (Total / No Overlap) and Wikidata Answer (Total / No Overlap) accuracy. WebQuestionsSP, closed-book: FILM 54.7 / 36.4 and 78.1 / 72.2; EaE 47.4 / 25.1 and 62.4 / 42.9; T5-11B 49.7 / 31.8 and 61.0 / 48.5; BART-Large 30.4 / 5.6 and 36.7 / 8.3. WebQuestionsSP, open-book: RAG 50.1 / 30.7 and 62.5 / 45.1; DPR 48.6 / 34.1 and 56.9 / 45.1; EmQL 75.5 / - and 74.6 / -. TriviaQA, closed-book: FILM 29.1 / 15.6 and 37.3 / 28.4; EaE 19.0 / 9.1 and 24.4 / 17.1; BART-Large 26.7 / 0.8 and 30.6 / 1.0. TriviaQA, open-book: RAG 56.8 / 29.2 and 64.9 / 45.2; DPR 57.9 / 31.6 and 66.3 / 48.8; FID 67.6 / 42.8 and 76.5 / 64.5.", "Many entities in Freebase are unmappable to the more recent Wikidata KB, which means that some questions are no longer answerable using the KB.", "Because of this, we created reduced versions of these datasets which are Wikidata answerable, i.e., containing only questions answerable by triples from our Wikidata-based KB.",
"The model should learn to rely on the KB to answer the questions.", "We do the same for TriviaQA [8].", "[8] TriviaQA does not have linked entities in its questions, so for those results we relax this restriction to include all examples where the answer resolves to a Wikidata entity.", "As seen in Table 2 in the Wikidata Answer-Total column, FILM does much better on Wikidata-answerable questions on WebQuestionsSP.", "EmQL (Sun et al., 2020), the state-of-the-art dataset-specific model, gets 75.5% accuracy on the full dataset.", "Not surprisingly, this is because EmQL operates over the Freebase knowledge base, giving it full upper-bound recall.", "However, when we restrict to Wikidata-answerable questions, thus giving both EmQL and FILM potential for full recall, FILM outperforms EmQL by 3.5 points and the next best model (RAG) by over 15 points.", "We are interested in the ability of models to use external knowledge to answer questions, rather than learning to recognize paraphrases of semantically identical questions.", "Unfortunately, analysis showed that many of the test answers also appear as answers to some training-set question: this is the case for 57.5% of the answers in WebQuestionsSP and 75.0% in FreebaseQA.", "This raises the possibility that some of the performance can be attributed to simply memorizing specific question/answer pairs, perhaps in addition to recognizing paraphrases of the question from the pretraining data.", "Overlap in fine-tuning train/test splits was concurrently observed by Lewis et al. (2020b), who created human-verified filtered splits for TriviaQA and WebQuestions.", "We evaluate our models on those splits and report results in Table 2 in the No Overlap columns.", "We see that the gap between FILM and the next best performing model, RAG, increases from 4.6 to 5.7 points on WebQuestionsSP.", "On TriviaQA, FILM is still able to answer many questions correctly after overlap is removed.", "In contrast, the majority of closed-book models, such as BART, get less than 1% of answers correct.", "The filtering procedure from Lewis et al. (2020b) addresses fine-tuning train/test overlap but does not account for overlap with the pretraining data.", "To investigate this further, we looked at FreebaseQA and WebQuestionsSP, which both contain entity-linked questions and answers.",
"We first perform a similar procedure to Lewis et al. (2020b) and discard questions in the fine-tuning training data that contain answers which overlap with answers to questions in the dev and test data.", "We end up with 9144/2308/3996 examples (train/dev/test) in FreebaseQA and 1348/151/1639 examples in WebQuestionsSP.", "This setting is referred to in the Fine-tune column of Table 4, which shows the effects of different filterings of the data.", "Next, we want to ensure that the model will be unable to simply memorize paraphrases of question-answer pairs that it observed in the text, by removing all overlap between the pretraining data and the fine-tuning test data.", "For every question-answer entity pair in our fine-tuning dataset (coming from any split), we filter every example from our Wikipedia pretraining corpus where that pair of entities co-occurs.", "Additionally, we filter every fact from our fact memory containing any of these entity pairs.", "Results for this setting are in the column labeled Pretrain.", "The All column combines both pretrain and fine-tune filtering.", "We see that the models perform substantially worse when these filterings are applied and they are forced to reason across multiple examples and, in the case of FILM, the fact memory.", "Finally, the column denoted None has no filtering and is the same as the Full Dataset setting.", "Because our model defines facts symbolically, it can in principle reason over new facts injected into its memory, without retraining any parameters of the model.", "Since existing datasets do not directly test this capability, we elected to construct variants of FreebaseQA and WebQuestionsSP where we could simulate asking questions that are answerable only from newly injected KB facts.", "The approach we used was to (1) identify pairs of entities that occur in both a question and answer of some test example; (2) filter out such pairs from the KB as well as all pre-training and fine-tuning data; and (3) test the system trained on this filtered data and then manually updated by injecting facts about those entity pairs.", "This filtering procedure is reminiscent of that used by Lewis et al. (2020b), but also addresses pretraining/test-set overlap.", "We evaluate EaE and FILM given full knowledge (the original setting); given filtered knowledge; and given filtered knowledge followed by injecting test-question-related facts into the KB.", "The gap between the filtered-knowledge setting and the injected-knowledge setting indicates how well the model incorporates newly introduced facts.",
"In more detail, we first perform a similar procedure to Lewis et al. (2020b) and discard questions in the fine-tuning training data that contain answers which overlap with answers to questions in the dev and test data.", "We end up with 9144/2308/3996 examples (train/dev/test) in FreebaseQA and 1348/151/1639 examples in WebQuestionsSP.", "Next, to ensure that the model will be unable to memorize paraphrases of question-answer pairs that it observed in the pretraining text, we remove all overlap between the pretraining data and the fine-tuning test data: specifically, for every question-answer entity pair in our fine-tuning dataset (from any split), we filter every example from our Wikipedia pretraining corpus in which that pair of entities co-occurs.", "Additionally, we filter every fact from our fact memory containing any of these entity pairs.", "In these sections we compare against EaE for two reasons: (1) we are specifically looking at closed-book, open-domain, entity-based QA, and EaE has been shown to be at or near state-of-the-art for that task (Févry et al., 2020); and (2) most importantly, we want to be able to precisely control for memorization in the training corpus, and therefore did not consider existing unconstrained pre-trained models like T5 (Raffel et al., 2019).", "For reference, the previous state-of-the-art FOFE (Jiang et al., 2019) on FreebaseQA had a score of 37.0% using the original train-test split, while FILM is at 63.3%.", "The results are shown in Table 5.", "In the Full column, we pretrain and finetune the FILM model with the full knowledge base and corpus.", "In the Filter setting, facts about the finetuning data are hidden from the model at both pretraining and finetuning time.", "In this case, the model must fall back to the language model to predict the answer, and as shown in Table 5, the accuracies of FILM and EaE are similar.", "In the Inject Facts setting, facts are hidden at pretraining time but are injected at test time.", "The results show that FILM can effectively use the newly injected facts to make predictions, obtaining an absolute improvement of 9.3% compared to the Filter setting.", "EaE does not have a natural mechanism for integrating this new information.", "One of the main motivations for our model is to provide knowledge representations that can be incrementally updated as the world changes, avoiding stale data.", "In order to accomplish this, the model must learn to utilize the fact memory even in the case where those facts have changed, such that they may no longer be consistent with the data the model was initially trained on.", "Further, it needs to accomplish this without any additional training.", "To probe this ability, we simulate an extreme version of stale facts where all answers to QA pairs in the FreebaseQA test set are 'updated' with plausible alternatives.", "For each QA pair, we replace the original answer entity e_original with another entity, e_new, from our vocabulary that (1) has been used as an object in at least one of the same relation types in which e_original was used as an object, and (2) shares at least three Wikipedia categories with e_original.", "We use the same pretrained models from our earlier experiments and fine-tune on the filtered FreebaseQA train set for 10,000 steps.", "We then modify the memory of this model without applying any additional training on the new memory.", "In addition to adding new memories which correspond to our newly created facts, we also must remove the original stale facts that we are updating.", "We look at two methods for filtering those 'stale facts' from the fact memory.", "Basic Filter deletes every modified fact (e_question, r, e_original) and replaces it with the new fact (e_question, r, e_new).", "This is a low-recall filter, as it does not account for all possible related facts.", "The Strict Filter is a high-recall filter that more aggressively removes information that may conflict with the newly added fact, additionally removing all facts that contain e_question or e_original.", "This is important for cases such as when a question contains multiple entities, or when the linking relation is one-to-many, leading to multiple plausible answers.", "Together these two settings define rough bounds on the model's ability to perform this task.", "In Table 6, we see that FILM is able to utilize the modified KB to make the correct prediction for 54.5% of questions in the Basic Filter setting and 70.3% in the Strict Filter setting.", "Symbolic KBs have been a core component of AI since the beginning of the field (Newell and Simon, 1956; Newell et al., 1959), and widely available public KBs have been invaluable in research and industry (Bollacker et al., 2008; Auer et al., 2007; Google, 2012; Dong, 2017; Vrandečić and Krötzsch, 2014).", "In machine learning, a well-studied problem is learning KB embeddings (Bordes et al., 2013; Lin et al., 2015; Trouillon et al., 2017; Dettmers et al., 2018), which enable generalization from known KB triples to novel triples that are plausibly true.", "KB embeddings can often be improved by incorporating raw text and symbolic KGs into a shared embedding space (Riedel et al., 2013; Verga et al., 2016, 2017) to be jointly reasoned over (Sun et al., 2018, 2019).", "Many prior neural-symbolic methods have attempted to unify symbolic KBs and neural methods (Pinkas, 1991; de Penning et al., 2011; Laird et al., 2017; Besold et al., 2017).", "Recently, researchers have explored query languages for embedded KBs that are similar to symbolic KB query languages (Cohen et al., 2017; Hamilton et al., 2018; Ren et al., 2020; Cohen et al., 2020).", "Our fact memory builds on this prior work and is most closely related to the memory used in EmQL (Sun et al., 2020), a KB embedding model that supports a compositional query language.", "EmQL implements projection using neural retrieval over vectorized KB triples.", "Unlike this work, however, EmQL did not embed its fact memory into a LM that could be finetuned for many NLP tasks; instead it required implementing a neural module within some task-specific architecture.", "At a more abstract level, the fact memory is a key-value memory (Weston et al., 2014; Miller et al., 2016), a construct used in many neural models in the past.", "It has been shown that sufficiently large LMs trained through self-supervision (Peters et al., 2018; Devlin et al., 2019; Raffel et al., 2019; Brown et al., 2020) also encode factual information, motivating work on the extent to which a LM can serve as a KB (Roberts et al., 2020; Petroni et al., 2019; Poerner et al., 2019).", "Other work has explored techniques to improve the performance of large LMs in answering factual probes, by adding additional supervision in pre-training (Xiong et al., 2019; Wang et al., 2020b) or by adding entity embeddings into an extended LM (Peters et al., 2019; Zhang et al., 2019; Févry et al., 2020).", "Our entity memory extends the Entities-as-Experts (EaE) model (Févry et al., 2020).", "It is both the current state-of-the-art for a number of tasks and simpler to use than most prior models, because it does not require external components for entity linking or entity encoding (like Peters et al. (2019); Zhang et al. (2019); Logan et al. (2019)) and is not restricted to lexical KBs like WordNet and ConceptNet (like Weissenborn et al. (2017); Chen et al. (2018); Mihaylov and Frank (2018)).",
"In addition, FILM's fact memory scales to KBs with millions of entities, whereas prior systems that make use of KB triples have worked with only a few hundred triples in the model at any point, necessitating a separate heuristic process to retrieve candidate KB triples (Ahn et al., 2016; Henaff et al., 2016; Weissenborn et al., 2017; Chen et al., 2018; Mihaylov and Frank, 2018; Logan et al., 2019).", "There have been a few exploratory experiments on modifying the predictions of retrieval-augmented language models by changing the underlying text corpus (Guu et al., 2020; Lewis et al., 2020a).", "However, text passages are not easily interpretable, making them less inspectable and modifiable than a symbolic fact-based memory.", "We presented FILM, a neural LM with an interpretable, symbolically bound fact memory.", "We demonstrated the effectiveness of this method by outperforming many state-of-the-art methods on four benchmark knowledge-intensive datasets.", "We used the model's symbolic interface to change the output of the LM by modifying only the nonparametric memories, without any additional training.", "We showed FILM could incorporate newly injected facts unseen during training.", "Additionally, we can modify facts such that they contradict the initial pre-training text, and our model is still largely able to answer these questions correctly.", "All language models learn to exploit correlations in the data they were trained on.", "As such, they inherit all of the underlying biases within that data (Zhao et al., 2019; Bender et al., 2021).", "These models require vast amounts of data to train on and therefore tend to rely on internet corpora, which have skewed representations of particular groups, cultures, and languages, as well as variable levels of factuality.", "Our hope is that research into endowing these models with interpretable and modifiable memories will allow us to more readily identify and remedy some of these failures.", "We thank the anonymous reviewers for their helpful feedback and Patrick Lewis for providing prediction files.", "We also thank Thibault Févry, Nicholas FitzGerald, Eunsol Choi, Tom Kwiatkowski, and other members of Google Research for discussions and feedback." ]
[ "abstain", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "objective", "objective", "objective", "objective", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "other", "method", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "objective", "method", "result", "method", "abstain", "abstain", "abstain", "method", "other", "other" ]
[ "Recently, a lot of research has been carried out to improve the efficiency of Transformer.", "Among them, the sparse pattern-based method is an important branch of efficient Transformers.", "However, some existing sparse methods use fixed patterns to select words, without considering similarities between words.", "Other sparse methods use clustering patterns to select words, but the clustering process is separate from the training process of the target task, which causes a decrease in effectiveness.", "To address these limitations, we design a neural clustering method which can be seamlessly integrated into the Self-Attention Mechanism in Transformer.", "The clustering task and the target task are jointly trained and optimized to benefit each other, leading to significant effectiveness improvements.", "In addition, our method groups words with strong dependencies into the same cluster and performs the attention mechanism within each cluster independently, which improves efficiency.", "We verified our method on machine translation, text classification, natural language inference, and text matching tasks.", "Experimental results show that our method outperforms two typical sparse attention methods, Reformer and Routing Transformer, while having comparable or even better time and memory efficiency.", "Transformer (Vaswani et al., 2017) has been widely used and has achieved state-of-the-art results in a variety of NLP tasks, such as neural machine translation (Bahdanau et al., 2015) and text classification.", "Its good effectiveness benefits from its core component, the Self-Attention Mechanism, which can capture global dependencies well.", "However, the large computation and memory cost limit the further application of Transformer to long-sequence tasks, due to the O(N²d) complexity of Self-Attention.", "As a result, many research works have been carried out to improve the efficiency of Transformer (Tay et al., 2020b).", "These efficient Transformers can be roughly divided into two categories: approximation-based (Tay et al., 2020a; Katharopoulos et al., 2020) and sparse pattern-based methods (Qiu et al., 2020; Ho et al., 2019; Beltagy et al., 2020; Liu et al., 2018).", "Regarding approximation-based methods, some works are based on low-rank approximation (Tay et al., 2020a; Wang et al., 2020) while others are based on kernels (Katharopoulos et al., 2020; Choromanski et al., 2020).", "Specifically, Linformer (Wang et al., 2020) adopts a low-rank approximation idea and projects the length dimension of keys and values to a lower dimension (from N to k).", "This reduces the complexity to O(Nkd).", "However, the projection matrix in this method requires that all input sequences be padded to the same length N, which means it cannot handle variable-length sequences well.", "Linear Transformer (Katharopoulos et al., 2020) uses kernels and the associative property of matrix multiplication to linearize the softmax attention, which reduces the complexity to O(Nd²).", "However, the approximation error to the softmax matrix in Self-Attention can be large in some cases (Xiong et al., 2021).", "Sparse pattern-based methods introduce sparse patterns into the attention mechanism and limit the number of key vectors that each query vector should attend to.", "Prior work (Child et al., 2019; Qiu et al., 2020; Ho et al., 2019; Beltagy et al., 2020; Liu et al., 2018) proposed to use fixed sparse patterns to improve the efficiency of Self-Attention.",
"For example, Sparse Transformer (Child et al., 2019) restricts each query to focus only on keys that are nearby or at fixed intervals.", "Such fixed sparse patterns do not consider the similarity between the query and different keys, and directly filter keys according to their location, which results in a degradation of model effectiveness.", "More recently, clustering-based sparse patterns have been proposed.", "Reformer (Kitaev et al., 2020) and Routing Transformer (Roy et al., 2021) use Locality Sensitive Hashing (LSH) and K-Means algorithms, respectively, to divide the words in the sequence into different clusters, and then the attention operation is restricted within each cluster independently.", "In this way, they reduce the complexity to O(N log N · d) and O(N√N · d), respectively.", "However, in Reformer and Routing Transformer, both LSH and K-Means only play the role of cluster partitioning, and run separately from the attention network training.", "In addition, these two methods have the problem of inconsistency in the similarity measure between the clustering and the attention operation.", "LSH and K-Means respectively use the hash value obtained by random projections and the negative Euclidean distance as the similarity measure between the input vectors, while the attention operations use the inner product.", "Therefore, such a sparse pattern idea often results in reduced effectiveness.", "To address the reduced-effectiveness issue of many efficient Transformers, especially sparse pattern-based methods, we propose the Neural Clustering Method to learn the sparse pattern of the attention.", "It can be seamlessly integrated into neural networks for joint training and optimization.", "In our method, each cluster center (centroid) is updated by a weighted sum of all word hidden states.", "At the same time, the members of clusters are divided according to the subordinate matrix of the centroids and word hidden states.", "The optimization of the clustering loss can guide the representation of word hidden states while learning cluster centroids.", "The integration of the neural clustering method and attention training enables our Neural Clustering Attention to perform better than previous clustering-based sparse attention mechanisms.", "Our Neural Clustering Method is a general clustering method, in the sense that, in addition to being integrated into the network of a specific task, it can handle clustering tasks alone.", "Our overall model is called ClusterFormer, and it is obtained by replacing the Self-Attention Mechanism in Transformer with the Neural Clustering Attention Mechanism.", "In order to validate the benefits of ClusterFormer, we have carried out comparison experiments on efficiency and effectiveness.", "For efficiency, we provide a detailed analysis of time and memory on the 20NEWS dataset of the text classification task.", "Results show that our model has comparable or even better efficiency compared with two typical sparse attention models, Reformer and Routing Transformer.", "In particular, when the sequence length exceeds 2000, our model, Routing Transformer, and Reformer reduce the memory of Transformer by 53.8%, 60.8%, and 31.8%, while reducing the training time by 51.4%, 41.8%, and 14.4%, respectively, on GPU.", "For effectiveness, we test it on machine translation, text classification, natural language inference, and text matching tasks.", "Experimental results show that on all tasks our model consistently outperforms Reformer and Routing Transformer.",
"In particular, our method improves the accuracy by 15.6% and 7.2% on the SciTail dataset of the natural language inference task compared with Reformer and Routing Transformer, respectively.", "The major contributions of our work are as follows: We propose a general, end-to-end fuzzy clustering method based on neural networks, named the Neural Clustering Method, which can dynamically learn the weight of each word and then update centroids by weighting all the input words along with the training of specific tasks.", "We design the Neural Clustering Attention Mechanism based on our proposed clustering method to refactor the Self-Attention Mechanism.", "The experimental results show that our method has comparable efficiency and better effectiveness than typical sparse attention models.", "The Self-Attention is the core component of Transformer (Vaswani et al., 2017).", "It extracts sequence features by processing the interaction of three sequence matrices Q, K, and V.", "Referring to the standard Transformer, its function can be written as follows: Attention(Q, K, V) = Softmax(QK^T / √d) V, with Q, K, V = XW^Q, XW^K, XW^V (1), where X ∈ R^{N × d_model}, Q, K, V ∈ R^{N × d}, and W^Q, W^K, W^V ∈ R^{d_model × d}; N is the length of the sequence, d_model is the dimensionality of the model, and d is the dimensionality of the attention head.", "Figure 1: (a) Neural Clustering Method; (b) Sorting and Chunking Operation.", "In the Self-Attention Mechanism, the interaction of Q and K gives the N×N attention (weight) matrix, which leads to the complexity of O(N²d); this has been one of the crucial limitations of Transformer.", "Transformer has been developed into many variants to reduce the complexity of the attention mechanism.", "In these works, one of the main research directions is to use a sparse attention to substitute the quadratic-cost attention.", "Some early works (Qiu et al., 2020; Ho et al., 2019; Beltagy et al., 2020) proposed to reduce the time complexity by restricting every query to focus only on keys that are nearby or at fixed intervals.", "This method fixes the sparsity pattern without considering the similarity between queries and keys, limiting its ability to assemble critical information from large contexts.", "Different from these works, our method attempts to automatically aggregate critical keys for each query based on dependency relationships.", "Moreover, clustering-pattern methods have been used in Self-Attention to implement a sparse attention.", "For example, Reformer (Kitaev et al., 2020) and Routing Transformer (Roy et al., 2021) introduce Locality-Sensitive Hashing and K-Means algorithms, respectively, to reduce the complexity to O(N log N · d) and O(N√N · d).", "However, in this kind of method, the clustering process and the training process are separate, which limits improvements in effectiveness.", "Based on previous research, we propose a novel Neural Clustering Method which can be seamlessly integrated into the network of specific tasks for joint training and optimization to improve effectiveness.",
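As a reference point before the clustered variant of Section 3, here is a minimal sketch of the standard scaled dot-product attention of Eq. (1) (illustrative, not the paper's code):

```python
import torch
import torch.nn.functional as F

def self_attention(X, W_q, W_k, W_v):
    """Standard Self-Attention, Eq. (1): O(N^2 * d) in the sequence length N."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v          # each (N, d)
    d = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d ** 0.5  # (N, N) attention matrix
    return F.softmax(scores, dim=-1) @ V         # (N, d)
```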
"In this section, we first introduce our Neural Clustering Method.", "Then, we introduce our Neural Clustering Attention Mechanism, which combines our clustering method and the Self-Attention Mechanism.", "As shown in Figure 1 (a), our clustering method takes word hidden states X ∈ R^{N × d} and centroid hidden states C ∈ R^{k × d} as inputs.", "C is initialized randomly in the first layer.", "Then, we can get the subordinate (similarity) matrix U between word vectors and centroid vectors.", "It can be defined as: U_{ij} = exp(δ(C_i, X_j W_C)) / Σ_{j'=1}^{N} exp(δ(C_i, X_{j'} W_C)), for 1 ≤ i ≤ k and 1 ≤ j ≤ N (2), where k is the number of clusters and N is the length of the sequence.", "W_C ∈ R^{d_model × d_model} is a parameter matrix.", "δ(·,·) is a similarity measure function; here it is the inner product.", "X_j is the j-th row of matrix X and C_i is the i-th row of matrix C.", "The subordinate value U_{ij} ∈ [0, 1] is the normalized similarity between the i-th centroid vector and the j-th word vector, and it represents the degree to which the word X_j belongs to the centroid C_i.", "Each new centroid is then reconstructed as the subordinate-weighted sum of the (projected) word hidden states, Ĉ_i = Σ_{j=1}^{N} U_{ij} X_j W_C (3), where i and j represent the index of the centroid and word, respectively.", "Then we group the word vectors according to the subordinate matrix U, as follows: I_j = Argmax(U_{:j}), 1 ≤ j ≤ N (4), where U_{:j} is the j-th column of the matrix U and the function Argmax(·) assigns word hidden states to the corresponding cluster according to the maximum subordinate value.", "Therefore, I ∈ R^N represents the cluster indexes of all the word hidden states.", "Then, we sort the word vectors according to the cluster indexes I, as follows: X^S, I' = Sort(X, I) (5), where the function Sort(·) arranges word hidden states belonging to the same cluster into adjacent positions, in ascending order of cluster index.", "X^S ∈ R^{N × d_model} is the matrix of sorted word vectors.", "I' ∈ R^N records the original positions of the shuffled word hidden states in the sequence and will be used in Eq. (10).", "Through the above process, we get the grouped and sorted word hidden states X^S, as shown in Figure 1 (a).", "Clustering Loss: The Clustering Loss (L_1) is the mean of the negative similarity scores of word hidden states and their assigned centroids, and it gives guidance to learn the optimal clustering scheme.", "It is defined as follows: L_1 = -(1/N) Σ_{j=1}^{N} δ(X_j, Ĉ_{I_j}) (6), where X_j and Ĉ_{I_j} represent the j-th word hidden state in the sequence and its updated centroid.", "The function δ(·,·) is a similarity measure and needs to be consistent with Eq. (2).", "From the above analysis, our Neural Clustering Method is based on soft clustering.", "There is a subordinate value between each pair of word vectors and centroid vectors, which can quantitatively describe the fuzzy relationship, so that the clustering can be carried out objectively and accurately.", "In addition, the Neural Clustering Method is based on a neural network, which makes it easy to integrate into the network corresponding to the target task.", "The reconstruction of centroid vectors depends on all the word vectors and is based on the continuous optimization of the clustering objective function (as shown in Eq. (6)) and the task-specific objective function, to obtain better effectiveness.",
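A minimal sketch of the clustering step in Eqs. (2)-(6) (an illustration under the notation above, not the released code; the similarity δ is instantiated as the inner product as stated in the text, and the centroid update follows the weighted-sum description, whose exact equation is garbled in the source):

```python
import torch
import torch.nn.functional as F

def neural_clustering(X, C, W_c):
    """One neural clustering step: subordinate matrix, centroid update,
    cluster assignment, and sorting (Eqs. 2-5), plus the clustering loss (Eq. 6).

    X: (N, d_model) word hidden states;  C: (k, d_model) centroids.
    """
    Xp = X @ W_c                                  # projected words
    sim = C @ Xp.T                                # delta = inner product, (k, N)
    U = F.softmax(sim, dim=1)                     # normalize over words, Eq. (2)
    C_new = U @ Xp                                # weighted-sum centroid update
    I = U.argmax(dim=0)                           # cluster index per word, Eq. (4)
    _, I_prime = torch.sort(I, stable=True)       # original positions after sorting
    X_sorted = X[I_prime]                         # Eq. (5)
    # Eq. (6): negative mean similarity between words and assigned centroids.
    loss_1 = -(X * C_new[I]).sum(-1).mean()
    return X_sorted, I_prime, C_new, loss_1
```

Stable sorting keeps words of the same cluster in their original relative order, which is what allows the Resort operation of Eq. (10) to undo the shuffle exactly.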
"In addition, we carried out clustering comparison experiments between our method and traditional clustering methods and observed improvements of our method in effectiveness.", "See Appendix A for more details.", "As described in Section 3.1, our Neural Clustering Method groups word vectors with strong dependencies into the same cluster and outputs the sorted word vectors X^S.", "Then, we use different matrices to project X^S into the matrices Q^S, K^S, and V^S, as follows: Q^S, K^S, V^S = X^S W^Q, X^S W^K, X^S W^V (7), where W^Q, W^K, and W^V ∈ R^{d_model × d} are weight matrices.", "Q^S, K^S, and V^S are the Query, Key, and Value matrices, respectively.", "The number of members in each cluster may not be uniform, which makes it difficult for all clusters to perform the attention mechanism in parallel.", "For parallel computing, after arranging the word hidden states of the same cluster into adjacent positions, we chunk them into equal blocks in order, as shown in Figure 1 (b) (essentially similar to the masking of Reformer).", "The process can be written as follows: Q^O_i = Q^S[(i−1)⌈N/k⌉ : i⌈N/k⌉], K^O_i = K^S[(i−2)⌈N/k⌉ : i⌈N/k⌉], for 1 ≤ i ≤ k (8), where Q^O_i ∈ R^{w × d} and K^O_i ∈ R^{2w × d} are the i-th Query block and Key block, respectively.", "w (w = ⌈N/k⌉) is the number of members in each block.", "The matrix V^O_i is chunked in the same way as K^O_i.", "After chunking, each Query block contains one sequence block, while Key and Value consist of two contiguous blocks, which corresponds to L_2 mentioned in Eq. (11).", "Each token in a Query block focuses on two blocks of tokens so that the query can cover the words in the same cluster as much as possible.", "Of course, the number of blocks does not have to be 2 and can be adjusted.", "Then, we perform the attention operation within each sequence block in parallel and concatenate the outputs of the blocks: Z^O_i = Attention(Q^O_i, K^O_i, V^O_i), Z^O = Concat(Z^O_1, ..., Z^O_k) (9), where Z^O_i ∈ R^{w × d} and Z^O ∈ R^{N × d}.", "Z^O_i is the output of the i-th sequence block after the attention operation.", "Finally, we recover the shuffled sequence (output) to obtain the final result: Z = Resort(Z^O, I') (10), where the function Resort(·) recovers the shuffled sequence according to the original position record vector I' obtained from Eq. (5).", "Z ∈ R^{N × d} is the output of the Neural Clustering Attention.", "For autoregressive modeling, we provide a Masked Neural Clustering Attention Mechanism to prevent the leftward information flow.", "More details can be found in Appendix B.", "Centroid Sorting Loss: The Centroid Sorting Loss (L_2) is the mean of the negative similarity scores of adjacent centroid pairs.", "In Eq. (8), each token in a Query block is expected to focus on two continuous blocks of tokens.", "L_2 makes word hidden states belonging to adjacent clusters also close to each other.", "It is defined as follows: L_2 = -(1/k) ((Σ_{i=2}^{k} δ(Ĉ_i, Ĉ_{i−1})) + δ(Ĉ_1, Ĉ_k)) (11), where k is the number of centroids, Ĉ_i is the i-th updated centroid, and the meaning of δ(·,·) is consistent with Eq. (6).", "In our method, the Clustering Loss, the Centroid Sorting Loss, and the loss of the target task are assigned different weights for joint optimization.", "More details can be found in Appendix C.",
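Continuing the hypothetical `neural_clustering` helper above, here is a sketch of the sorting-and-chunking attention of Eqs. (7)-(10) (illustrative; padding for the case where k does not divide N is elided, and the wrap-around of the first Key block mirrors the cyclic term δ(Ĉ_1, Ĉ_k) in Eq. (11)):

```python
import torch
import torch.nn.functional as F

def clustered_attention(X_sorted, I_prime, W_q, W_k, W_v, k):
    """Neural Clustering Attention over equal-sized blocks, Eqs. (7)-(10)."""
    N, d = X_sorted.size(0), W_q.size(1)
    w = N // k                                       # assumes k divides N
    Q = (X_sorted @ W_q).view(k, w, d)               # Eq. (7) + Query chunks, Eq. (8)
    K = (X_sorted @ W_k).view(k, w, d)
    V = (X_sorted @ W_v).view(k, w, d)
    # Each Key/Value block also covers the preceding block (two contiguous blocks).
    K2 = torch.cat([K.roll(1, dims=0), K], dim=1)    # (k, 2w, d)
    V2 = torch.cat([V.roll(1, dims=0), V], dim=1)
    att = F.softmax(Q @ K2.transpose(1, 2) / d ** 0.5, dim=-1)   # (k, w, 2w)
    Z_blocks = (att @ V2).reshape(N, d)              # Eq. (9)
    Z = torch.empty_like(Z_blocks)
    Z[I_prime] = Z_blocks                            # Resort, Eq. (10)
    return Z
```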
"3.3 Analysis of Complexity.", "The complexity of the Neural Clustering Attention Mechanism comes from two parts:", "(i) Neural Clustering Method.", "In this part, we need to calculate the subordinate matrix between the centroid hidden states C ∈ R^{k × d} and the word hidden states X ∈ R^{N × d}, referring to Eq. (2), which leads to a complexity of O(Nkd).", "(ii) Attention Mechanism.", "For this part, we compute attention within the Query blocks (∈ R^{k × w × d}) and Key blocks (∈ R^{k × 2w × d}), referring to Eq. (9), which leads to a complexity of O(kw²d), where w = ⌈N/k⌉.", "In summary, the overall complexity is O(Nkd + kw²d).", "Since w = ⌈N/k⌉, the attention term is O((N²/k)d), and the sum Nkd + (N²/k)d is minimized when the two terms balance, i.e., at k = √N.", "When k is set to √N, the complexity is therefore approximately O(N√N · d).", "In order to verify the effectiveness and efficiency of our method, we carried out the following tasks.", "We choose Transformer and its clustering-pattern variants (Reformer, Routing Transformer) as baseline models.", "The implementations of the attention layers of Reformer and Routing Transformer follow the open-source code [1][2].", "For a fair comparison, our proposed method and the baseline models have the same architecture, except for the attention layer.", "We validate our model on the IWSLT14 German-English and WMT14 English-German benchmarks, which have been widely used for machine translation tasks.", "IWSLT14 De-En contains about 160K training sentence pairs and is pre-processed using prepare-iwslt14en2de.sh [3].", "WMT14 En-De contains about 4.5 million training sentence pairs and is pre-processed using prepare-wmt14en2de.sh [4].", "We use the BLEU score as the effectiveness evaluation metric.", "[1] https://github.com/lucidrains/reformer-pytorch [2] https://github.com/lucidrains/routing-transformer [3] https://github.com/pytorch/fairseq/blob/master/examples/translation/prepare-iwslt14.sh [4] https://github.com/pytorch/fairseq/blob/master/examples/translation/prepare-wmt14en2de.sh", "Table 1: Test BLEU on IWSLT14 (De-En) and WMT14 (En-De). Transformer (Vaswani et al., 2017): 34.4 and 27.3 / 26.4; Reformer (Kitaev et al., 2020): 34.0 and 26.3 / 25.4; Routing Transformer (Roy et al., 2021): 32.5 and 24.3 / 23.6; ClusterFormer: 34.9 and 27.4 / 26.5.", "Some hyperparameters are set as follows: the number of encoder and decoder layers L = 6, the number of centroids k = 3.", "The dimension of the word embeddings and the model is d_model = 512.", "Specifically, for IWSLT14, the number of heads is set to 4 and d_ff = 1024.", "For WMT14, the number of heads is set to 8 and d_ff = 2048.", "As shown in Table 1, our method boosts effectiveness on both datasets.", "Specifically, the tokenized BLEU score is improved by at least 1.5% compared with the other models on the IWSLT14 dataset.", "Compared with the latest models Reformer and Routing Transformer, ClusterFormer has 2.6% and 7.4% improvements, respectively.", "Our method shows the same trend on the WMT14 dataset.", "In particular, compared with Reformer and Routing Transformer, the tokenized BLEU score of ClusterFormer has 4.2% and 12.8% improvements, respectively, and the sacreBLEU score has 4.3% and 12.3% improvements.", "We validate our model on five text classification tasks.", "CR (Hu and Liu, 2004): customer reviews composed of positive or negative product reviews; MR (Pang and Lee, 2004): movie reviews divided into positive and negative categories; SUBJ: a subjectivity dataset where the target is to classify a text as being subjective or objective; MPQA (Wiebe et al., 2005): an opinion polarity detection subtask.", "20NEWS: an international standard dataset for text classification, text mining, and information retrieval research.",
"The dataset collects about 20,000 newsgroup documents, divided into a collection of newsgroups on 20 different topics.", "Accuracy is used as the evaluation metric for these datasets.", "In addition, for all datasets, word embeddings are initialized by GloVe (Pennington et al., 2014) with 300 dimensions.", "Some hyperparameters are set as follows: the number of encoder layers L = 2, the dimension of the model d = 300, the number of heads h = 4, and the number of centroids k is adjusted near the square root of the maximum length.", "As shown in Table 2, ClusterFormer outperforms all baseline models and improves the test accuracy by at least 3.16% and 1.70% on the CR and SUBJ datasets, respectively.", "In addition, on the MPQA dataset, ClusterFormer achieves a comparable result with MPSAN.", "We also carry out the text classification task on the long-text dataset 20NEWS.", "The accuracy on the 20NEWS dataset increases by at least 0.24% compared with the other models.", "In addition, compared with the latest models Reformer and Routing Transformer, our model respectively has 6.1%, 3.8%, 1.6%, 2.0%, 2.6% and 10.0%, 4.9%, 2.0%, 11.3%, 3.1% improvements on the CR, MR, SUBJ, MPQA, and 20NEWS datasets.", "In this section, we conduct natural language inference tasks on the SNLI and SciTail datasets, and text matching tasks on the Quora and WikiQA datasets.", "SNLI (Bowman et al., 2015) is a benchmark dataset for natural language inference.", "Table 3: Experimental results on Natural Language Inference (NLI) and Text Matching tasks (SNLI / SciTail / Quora accuracy; WikiQA MAP and MRR). DELTA (Han et al., 2019): 80.7 on SNLI. Bigram-CNN (Yu et al., 2014): 0.619 / 0.628 on WikiQA. Transformer (Vaswani et al., 2017): 83.7 / 76.6 / 85.4 and 0.601 / 0.613. Reformer (Kitaev et al., 2020): 78.6 / 67.3 / 74.3 and 0.587 / 0.603. Routing Transformer (Roy et al., 2021): 76.3 / 72.6 / 81.5 and 0.560 / 0.574. ClusterFormer: 83.9 / 77.8 / 85.4 and 0.630 / 0.648.",
Reformer.", "The score increases 12.5% and 13.0% compared to Routing Transformer.", "In this section, we test the effect of different clustering numbers ( k ) on the effectiveness and efficiency.", "We test our model on the 20NEWS dataset of text classification tasks with a NVIDIA V100 (16GB) GPU.", "Some hyperparameters are set: the number of encoder layers L is 2, the dimension of the model d is 300, the batch size is 64, and the max sequence length N is 1500.", "From Table 4, we can draw the following conclusions:", "(i) Accuracy of our model: In general, within a certain range during the growth of k , the performance of our model is relatively stable.", "When the value of k goes beyond a certain threshold, the performance of our model degrades;", "(ii) Memory cost of our model: As the number of centroids k increases, the memory cost of the model decreases first and then increases;", "(iii) Training time of our model: As the number of centroids k increases, the training time of the model also decreases first and then increases.", "Therefore, according to this law, our method can simultaneously gain both the effectiveness and efficiency of the model by determining an appropriate k value through finite experiments.", "In this section, we provide an ablation experiment about the two kinds of clustering losses.", "We verify the effectiveness of the two loss modules by assigning different weight.", "Some hyperparameters 2396", "are set: the number of encoder layers L is 1, the dimension of model d is 300, the batch size is 128 and the max sequence length N is 500.", "From Table 5, the experimental result shows that both L 1 and L 2 contribute to the performance.", "For example, on dataset SciTail, the accuracy with the best result is improved by 1.46% (acc) compared with the result without the two losses.", "On dataset WikiQA, the accuracy with the best result is improved by 3.10% (map), 1.89% (mrr) compared with the result without the two losses.", "In this section, we provide a comparison experiment on dataset 20NEWS about the time and memory cost for different models.", "About the dataset, its average sequence length is approximately 280 and the maximum sequence length exceeds 10,000.", "To compare time and memory cost, we set the range of sequence length N as (0, 2000] and batch size to 20.", "We test the memory and time cost on a NVIDIA V100 GPU.", "We take the time of 1000 steps forward propagation of the model as the inference time, and the time of 1000 steps forward and back propagation as the training time.", "As shown in Figure 4, as the sentence length increases, both Routing Transformer and our model can significantly reduce memory cost compared to Transformer.", "When N exceeds 2000, our model, Routing Transfomer and Reformer reduce the memory by 53.8%, 60.8%, and 31.8%, respectively.", "TransFigure 4: Memory cost versus the sequence length.", "former increases significantly with increasing sequence length, while our model and Routing Transformer have a relatively small increase on GPU devices.", "When N is 2000, our model, Routing Transfomer and Reformer reduce the training time by 51.4%, 41.8%, and 14.4%.", "However, the inference speed of these improvements is inferior compared with Transformer, which may be caused by the decrease of the model parallelism.", "The above analysis fully demonstrates the efficiency and effectiveness of our proposed Neural Clustering Mechanism.", "In this paper, we propose a Neural Clustering Attention Mechanism to address the reduced 
effectiveness issue in sparse attention methods.", "This issue is mainly caused by the introduction of a sparse pattern that is separated from the target task or does not consider the similarity between words.", "In our method, we design a neural clustering algorithm to better capture critical pairs of dependencies.", "We integrate this clustering algorithm and the neural network to jointly train and optimize with specific tasks together to further contribute to the effectiveness and efficiency.", "The experimental results show that our model can achieve better effectiveness and a comparable or even better efficiency, compared with the latest typical sparse attention models, Reformer and Routing Transformer.", "This work is supported in part by the state key development program of China (grant No.2017YFE0111900), Natural Science Foundation of China (grant No.61772363), PolyU internal fund (grant No.1-BD47) under the research project (P0039657) of the Hong Kong Polytechnic University, and the Beijing Academy of Artificial Intelli-gence(BAAI)." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "result", "result", "abstain", "result", "result", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "objective", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "result", "other" ]
[ "The rapid development of conversational assistants accelerates the study on conversational question answering (QA).", "However, the existing conversational QA systems usually answer users' questions with a single knowledge source, e.g., paragraphs or a knowledge graph, but overlook the important visual cues, let alone multiple knowledge sources of different modalities.", "In this paper, we hence define a novel research task, i.e., multimodal conversational question answering (MMCoQA), aiming to answer users' questions with multimodal knowledge sources via multi-turn conversations.", "This new task brings a series of research challenges, including but not limited to priority, consistency, and complementarity of multimodal knowledge.", "To facilitate the data-driven approaches in this area, we construct the first multimodal conversational QA dataset, named MMConvQA.", "Questions are fully annotated with not only natural language answers but also the corresponding evidence and valuable decontextualized self-contained questions.", "Meanwhile, we introduce an end-to-end baseline model, which divides this complex research task into question understanding, multi-modal evidence retrieval, and answer extraction.", "Moreover, we report a set of benchmarking results, and the results indicate that there is ample room for improvement.", "The ever-increasing variety of information leads to the current information explosion.", "Question answering (QA) systems play an important role in alleviating information overload by providing users brief and accurate answers.", "Towards this end, a great many QA systems have been developed by utilizing external knowledge sources to obtain the correct answer, including knowledge-based QA (Deng et al., 2019), document-based QA (Wang et al., 2018), and community-based QA (Fang et al., 2016).", "Recently, as the rapid Which city features a green copper statue of a woman holding a torch?", "development of conversational assistants, there is growing interest in all matters conversational.", "Conversational QA, aiming to satisfy users' complex information needs via multi-turn conversations, attracts a lot of attention.", "The existing conversational QA systems usually rely on a single knowledge source, e.g., paragraphs or a knowledge graph, and assume it contains enough evidence to extract answers to users' questions.", "However, these conversational QA systems are limited in real-world QA scenarios due to the following reasons.", "On the one hand, the important visual cues are overlooked in the existing conversational QA systems.", "As an old saying goes, a picture is worth a thousand words, namely a picture can often vividly express a lot of information.", "For example, as shown in Figure 1, the question Which city features a green copper statue of a woman holding a torch? 
can be naturally answered by looking at the related picture.", "On the other hand, the series of questions in a conversation may dynamically require multiple knowledge sources that encompass different modalities rather than only one constant knowledge source.", "As shown in Figure 1, three questions in the conversation involve images, passages, and structured tables respectively to extract the correct answers.", "In fact, although QA systems have been well studied thus far, conversational question answering with multiple knowledge sources of multi-modalities is still untapped.", "In this paper, we hence define this novel research task, i.e., multimodal conversational question answering (MMCoQA), aiming to answer users' questions with multimodal knowledge sources via multiturn conversations.", "MMCoQA is indeed non-trivial due to the following research challenges.", "1) Priority of multimodal knowledge.", "For a specific question, one modality may be more suitable for locating its corresponding answer than the others.", "For example, questions about numerical inquiries like date or statistics are better answered by utilizing tables.", "Different from the previous conversational QA tasks, the most appropriate modality that can be used to answer the current question is not given in MMCoQA.", "Given the conversation context, how to correctly determine the appropriate modality for the current question is a challenge.", "2) Consistency of multimodal knowledge.", "Different modalities may provide consistent evidence to extract the correct answer for a question.", "For example, for the first question in Figure 1, the visual modality provides intuitional and direct information, while the related paragraph The Statue of Liberty ..., off the coast of New York City. She holds a torch in her raised right hand ... also reveals a certain of cues to indicate the correct answer.", "How to utilize the consistency among different modalities to verify the answer is another challenge.", "3) Complementarity of multimodal knowledge.", "Some questions may require evidences of different modalities to reason the final answer.", "For example, the question Billy Slater played for the NRL team in 2006 with a character holding what on the logo? 
"Therefore, to answer these questions, the system is required to have the ability to reason across multiple modalities.", "More importantly, the aforementioned three issues are not standalone but interwoven as the conversation goes on.", "Thus, MMCoQA is not a simple combination of multimodal QA and conversational QA but requires deep multimodal understanding and reasoning abilities across multi-turn conversations, which leaves ample room for study.", "To advance the progress of building MMCoQA systems using data-driven approaches, we construct the MMConvQA dataset, the first dataset for MMCoQA (see Table 1).", "Each question is fully annotated with not only the natural language answer but also the related evidence.", "Besides, valuable decontextualized self-contained questions are also annotated for all questions.", "Hence, MMConvQA can be used to develop individual system modules for multimodal conversational search, conversational question rewriting, and multimodal QA systems.", "Accordingly, we introduce an end-to-end baseline model and provide a set of benchmarking results, which may facilitate many exciting ongoing research efforts in this area.", "The contributions of this work are threefold: To the best of our knowledge, this is the first work towards the multimodal conversational question answering problem.", "We clearly define the research scope of this task and identify its potential research challenges.", "We construct the first dataset, MMConvQA, for the multimodal conversational QA task.", "MMConvQA contains multiple supervised labels, including related evidence, answers, and decontextualized questions, which facilitates data-driven approaches in this community.", "We introduce an end-to-end model as the baseline and report a set of results.", "The experimental results indicate that there is significant room for future improvement.", "Besides, the data and code of this work are released at https://github.com/liyongqi67/MMCoQA.", "Conversational QA is a relatively new topic in the QA community.", "Benefiting from released datasets (Reddy et al., 2019; Choi et al., 2018), text-based conversational QA has been greatly developed.", "Researchers have proposed to model and filter the conversation context via binary term classification (Voskarides et al., 2020) and question rewriting (Elgohary et al., 2019; Vakulenko et al., 2021; Yu et al., 2020).", "Recently, some efforts (Qu et al., 2020; Li et al., 2022; Anantha et al., 2021b) expanded conversational QA to the open-domain setting, where the related passages must be retrieved rather than given directly.", "In addition to text-based conversational QA, knowledge-based conversational QA (Christmann et al., 2019) has also been developed to answer conversational questions over a knowledge base.", "Saha et al. (2018a) created a large-scale dataset, and Shen et al. (2019) proposed a multi-task learning framework to resolve coreference in conversations and detect entities simultaneously.", "However, these existing methods involve only one knowledge source, and the important visual cues are overlooked.", "Our work is also closely related to multimodal QA.", "Essentially, the task of VQA (Jing et al., 2020; Shah et al., 2019) is multimodal and involves images and textual questions.", "However, in this work, we are more interested in question answering with multimodal knowledge sources.", "In fact, QA with multiple media has been studied for a long time.", "For example, as early as 2011, Nie et al. proposed to enrich textual question answering with image and video data.", "Besides, Textbook QA (Kembhavi et al., 2017) and TVQA (Lei et al., 2018) have also been explored under specific scenes.", "Recently, Hannan et al. introduced the ManyModalQA challenge, where the questions are ambiguous and the modality cannot easily be determined based solely upon the question.", "Talmor et al. innovatively introduced the complex question scenario to multimodal QA, where a complex question requires several modalities to answer.", "In this work, we believe that conversational QA is a natural scenario for combining multimodal knowledge sources, where different modalities are dynamically required as the conversation moves on.", "In fact, there are few websites or applications where we can directly obtain a huge number of questions that are answered with multimodal knowledge sources, let alone in conversational form.", "Fortunately, we notice that the MMQA (Talmor et al., 2021) dataset contains a number of complex questions answered with multiple modalities of knowledge.", "Considering that an important intention of developing conversational QA systems is to gradually satisfy users' complex information needs via multi-turn conversations (Dalton et al., 2020), we intuitively propose to decompose these complex questions into conversational questions (Saha et al., 2018c).", "For example, as shown in Figure 2, the complex question The player not wearing a helmet in 2019-20 Buffalo Sabres season free agents was on what team? can be better presented as a conversation, Q1: Which player did not wear a helmet in 2019-20 Buffalo Sabres season free agents? and Q2: He was on what team in that season?.",
"However, if we obtain conversational questions only by decomposing complex questions, the number of questions in a conversation is rather limited, since a complex question can only be decomposed into two or three questions.", "Therefore, we automatically generate potential conversations as references for annotators to refine.", "Generate potential conversations.", "Observing that the follow-up questions in a conversation are usually related to the topics that have occurred in the previous questions or answers, we add questions that contain the same entities into one potential conversation.", "Specifically, for all questions in the question pool, i.e., the MMQA dataset, we identify the entities in the question text and answer text.", "We then randomly select a question from the question pool as the seed of a conversation.", "We argue that a user may be interested in the entities that he/she has asked about and in the new entities that occur in the system's responses.", "We hence randomly select one of the entities identified in previous questions and answers as the user's next point of interest.", "Then we randomly select a question from the question pool that contains the selected entity as the follow-up question in the conversation.", "Once a question is selected for a conversation, it is removed from the question pool to keep the constructed conversations diverse.", "We continually add follow-up questions until the number of conversation turns exceeds a certain number or there is no corresponding question in the question pool.", "We repeat the above process, and finally we obtain a number of artificial conversations.", "The automatically generated conversations are unnatural:", "1) there are a lot of complex questions requiring multi-hop logical reasoning, which are not common in daily conversations (Reddy et al., 2019).", "2) The sequential questions in the potential conversations lack the coherence of dialogues, such as coreference and ellipsis.", "And 3) some questions in a conversation may not be consistent with the whole conversation.", "Therefore, annotators are involved to manually decompose complex questions and refine (including rewriting, deleting, and rearranging) the conversational questions towards the real conversation scenario.", "Decompose complex questions.", "To facilitate the decomposition of complex questions, the types and intermediate answers of complex questions provided in the MMQA dataset are also shown to the annotators.", "The type of a question indicates the logic and the target number of decomposed questions for a complex question.", "For example, Q1 of the potential conversation in Figure 2 is a complex question and its type is Compose(TableQ, ImageQ).", "Compose(A, B) means that question A contains an entity that is the answer of question B, while TableQ and ImageQ indicate that questions A and B can be answered with tables and images, respectively.", "Therefore, the annotator can easily decompose this complex question into two sequential questions according to its type.", "Notably, each annotator is required to decompose a complex question into self-contained questions that can be answered without the conversation context.", "Refine conversational questions.", "After the decomposition, the same annotator refines the conversational questions for an artificial conversation.", "Each annotator is shown some typical examples from existing conversational QA datasets before the annotation.", "After fully understanding the linguistic phenomena in conversational QA, such as coreference and ellipsis, annotators write conversational questions for the artificial conversations.", "It is worth mentioning that they have the right to delete and rearrange questions in a conversation to guarantee a smooth conversation flow.", "They can also flag a whole conversation for deletion if they think it is of poor quality.", "Data quality.", "Four students who majored in computer science and have NLP research experience are invited to make the annotations.", "To ensure the quality of the collected conversation data, we apply a 5-step scheme of training, annotation, checking, modification, and re-checking.", "Before the collection of data, we carry out training for all participants to explain the annotation guidelines (see Appendix A) for about two hours.", "Table 2: Statistics for the MMConvQA dataset.", "Each conversation is checked by another annotator, and unqualified ones are returned for modification.", "It is worth mentioning that since we only write conversational questions rather than give answers to questions, we do not need to calculate the annotation agreement on answers.", "MMConvQA contains 1,179 conversations and 5,753 QA pairs.", "There are 4.88 QA pairs on average in each conversation, as summarized in Table 2.", "The multimodal knowledge collection consists of 218,285 passages, 10,042 tables, and 57,058 images.", "Each question is annotated with the related evidence (a table, an image, or a passage in the knowledge collection) and a natural language answer.", "Besides, each question is also accompanied by a corresponding self-contained question.", "Question Analysis.", "Figure 3 shows sunburst plots of question types in MMConvQA.", "We can see that most of the first words are similar to those of questions in other conversational QA datasets (Choi et al., 2018; Reddy et al., 2019; Saha et al., 2018b).", "The and In are frequently used because they usually relate to the coherence of conversations, such as The actor.", "There are also some special patterns in MMConvQA featuring multi-modalities.", "For example, the What Color pattern is related to the visual modality, and How Many may refer to tables.", "On average, each question contains 14.4 words, while this number in the MMQA dataset is 19.2.", "This illustrates that we decompose the complex questions well.", "Figure 3: Distribution of the digram prefixes of questions in the MMConvQA dataset.", "The average number of words in the gold questions is 15.5, which is slightly larger than that of the conversational questions.", "This is because conversational questions embody the linguistic phenomena of dialogues, such as coreference and ellipsis, and thus have fewer words than the gold questions.", "It is worth mentioning that two different complex questions may produce the same single question.", "Therefore, there are some duplicated questions in a conversation.", "Answer Analysis.", "The types of answers in MMConvQA are diverse.", "Most answers are text spans of passages, cells of tables, or titles of images, whereas some answers do not exactly overlap with the evidence.",
"For example, the answer to the question The singer of 'Take Me As I Am' is shown wearing what item on her neck? is scarf, which needs to be detected from the related image rather than taken from the title of the image.", "Apart from single answers, 9.9% of the questions require a list of answers.", "For example, the answer to the question Who is the owner of Cape Town Knight Riders? is Shah Rukh Khan and Juhi Chawla.", "On average, each answer contains 2.11 words.", "Modality Analysis.", "As summarized in Table 2, 45.6% of the questions can be answered with textual passages.", "Besides, 29.8% and 24.6% of the questions must be answered based on images and tables, respectively.", "Among the 1,179 conversations in this dataset, 57.7% of the conversations involve two different modalities of knowledge and 24.4% involve three modalities.", "This indicates that as conversations proceed, questions dynamically require different modalities of knowledge to answer.", "To better illustrate the conversation flow, we visualize the transitions of modalities as the conversation progresses in Figure 4.", "It is observed that the transitions between modalities are frequent.", "For example, about 70% of the table questions at the first turn transform into text and image questions at the second turn.", "And as the turn number increases, the bands become cluttered, which indicates that more conversations involve multiple modalities.", "Linguistic Phenomena.", "To measure the quality of the conversational questions and analyze their linguistic phenomena, we sample 100 follow-up questions from the development set and annotate various phenomena.", "Our analysis shows that around 33% of the questions do not rely on coreference with the conversational history and are answerable on their own.", "Around 57% of the questions contain explicit coreference markers such as he, she, or it.", "The remaining 10% do not have explicit coreference markers but refer to an entity or event implicitly.", "Another feature of open-retrieval conversational QA is the topic switch.", "Among the questions, 24% change the conversation topic (WikiEntity).", "We introduce a Multimodal conversational QA system with Adaptive Extractors, MAE for short, as a baseline model.", "As Figure 5 illustrates, MAE divides the MMCoQA task into three steps: conversational question understanding, multimodal evidence retrieval, and adaptive answer extraction.", "Assume that the current turn in a conversation is k and the current question is q_k.", "The conversation context for the current question q_k is denoted as H_k = {q_1, a_1, ..., q_{k-1}, a_{k-1}}.", "A multimodal knowledge collection that contains items of different modalities is given, denoted as C = C_p ∪ C_t ∪ C_i, where C_p, C_t, and C_i are the sets of passages, tables, and images, respectively.", "The system is required to retrieve the related evidence from the knowledge collection C and extract a natural language span a_k to answer the question q_k.", "To understand the current question with the conversation context H_k, we apply the sliding window mechanism (Qu et al., 2020) to filter the previous questions.", "We feed the reformatted question q_k into the BERT network (Devlin et al., 2019) to obtain the question representation, which is formulated as v_q = W_q F_q(q_k), (1) where F_q is the BERT-based question encoder, W_q is the question projection matrix, and v_q ∈ R^{d_q}.", "For the different modalities of items in C, we pass them to different knowledge encoders.", "For each passage p_j in C_p, we obtain its representation v_p^j as follows: v_p^j = W_p F_p(p_j), (2) where F_p is the BERT-based passage encoder, W_p is the passage projection matrix, and v_p^j ∈ R^{d_p}.", "Following prior work (Herzig et al., 2020; Talmor et al., 2021), we linearize tables by rows as t_j to obtain their representations.", "The representation of a table is computed via v_t^j = W_t F_t(t_j), (3) where F_t is the BERT-based table encoder and v_t^j ∈ R^{d_t}.", "For each image i_j in C_i, we compute v_i^j = W_i F_i(i_j), (4) where F_i is a ResNet (He et al., 2016) pretrained on ImageNet and v_i^j ∈ R^{d_i}.", "Note that d_q, d_p, d_t, and d_i are of the same dimension.", "To facilitate large-scale retrieval, we apply the dense retriever mechanism inspired by open-domain QA (Karpukhin et al., 2020).", "Figure 5: Overview of MAE: a question encoder and multimodal knowledge encoders map questions and the knowledge collection (documents, tables, and images) into a multimodal knowledge space; modality detection routes to text, table, and image extractors, followed by answer ranking.", "Differently, we have three knowledge encoders F_p, F_t, and F_i, and they are independent of the questions in order to enable strong precomputed multimodal encodings and to execute efficient maximum inner product search (Lee et al., 2019).", "We first pretrain the question encoder and the knowledge encoders and then input all items in C into the knowledge encoders to obtain their representations.", "The parameters of the knowledge encoders are frozen in the following training phases.", "Benefiting from this, we can efficiently calculate the similarity s_a between a given question embedding v_q and all knowledge item embeddings via the inner product, and select the top-N_r items I_r as evidence, where N_r is the number of retrieved items.", "The retrieved evidence list I_r contains items of different modalities, and different modalities need different answer extractors.", "We hence first detect the most appropriate modality for the question.", "We regard modality detection as a multi-class classification task, where the network takes a question as input to predict the probabilities of the three modalities.", "The classifier is formulated as s_b = f(W_c F_c(q_k)), (5) where f(·) denotes the softmax function, F_c is the question encoder, and s_b ∈ R^3.", "TextExtractor.", "It is basically a machine reading comprehension model.", "Given the reformulated question and a passage in I_r as input, TextExtractor predicts an answer span by computing two scores for each token in the passage, namely its scores for being the start token and the end token, respectively.", "TableExtractor.", "Following previous work (Herzig et al., 2020), we concatenate the question text with the linearized table sequence and encode them using BERT.", "Two linear classifiers are then applied to compute the probability of each token being the start token and the end token of the answer span, respectively.", "ImageExtractor.", "We collect the answers in the training set as the answer set for testing (Talmor et al., 2021).", "We extract the visual feature v_i for an image with the ResNet, and append all the answers in the answer set to the question text to form a text sequence.",
"Then we input the text sequence into BERT to obtain the representations of all tokens, which are then simply combined with the visual feature v_i.", "Similarly, two linear classifiers are then applied to compute the probability of each token in the text sequence being the start token and the end token.", "The answer extraction score s_c for a candidate answer predicted by the above three extractors is defined as the average of the probabilities of the start and the end token.", "For each candidate answer, we compute its final score as the sum of the retrieval score s_a, the modality score s_b, and the answer extraction score s_c.", "The training details are illustrated in Appendix B.", "We comprehensively evaluated the baseline models based on their performance in evidence retrieval and answer extraction.", "We adopted Recall and NDCG to evaluate the coverage and the rank positions of the retrieval list.", "Following previous conversational QA tasks (Reddy et al., 2019), we reported word-level macro-average F1 and Exact Match (EM) to estimate the performance of answer extraction.", "We evaluated the open-retrieval conversational QA system ORConvQA and the multimodal QA model ManyModalQA on our MMCoQA dataset.", "Table 3: Performance of various methods on the test set.", "And to better illustrate the characteristics of the dataset, we developed several variants of MAE, including w/o conversation context, gold question, gold answer, evidence given, QR (question rewrite), and pretrain.", "Please see Appendix C for the implementation details.", "The results are summarized in Table 3.", "By analyzing the results, we gained the following insights.", "(1) The existing open-retrieval conversational QA and multimodal QA methods cannot handle the MMCoQA problem well, since they are either single-modal or single-turn.", "(2) The results of the MAE variants partly reflect the quality of the MMConvQA dataset.", "When the conversation context is removed, the performance drops, which verifies the dependency on the conversation context.", "Appending the previous gold answers or directly using the gold questions improves the performance, which is consistent with the dataset construction strategy.", "(3) Using extra data benefits the model, which illustrates that the size of the dataset is somewhat small and that pretraining can alleviate this problem.", "(4) When we manually added the relevant evidence to the retrieval list, the model outperformed the normal MAE model by a large margin.", "It seems that evidence retrieval is a bottleneck for the current model, because the relations among multimodal knowledge are complex, as claimed before.", "Modality analysis.", "We summarized the performance of MAE-EG on the three different modalities of questions in Figure 6(a).", "It can be seen that the performance on ImageQ is the worst.", "It may be because our ImageExtractor is a little coarse and more fine-grained interactions are expected.", "Figure 6(a): F1 versus the turn number for TextQ, TableQ, and ImageQ questions.", "Besides, we selected some items that are associated with the same entities and visualized their embeddings in Figure 6(b).", "It is observed that the embeddings of the images are isolated, which illustrates that the visual and semantic meanings are not well aligned.", "The embeddings of some texts and tables are partly syncretic, but this is still far away from an ideal common space where the embeddings of items of different modalities are evenly distributed according to their meanings.", "It seems that the dense retrieval scheme that is successful for document retrieval needs to be further modified for multimodal retrieval.", "We define a novel and practical task, i.e., MMCoQA, and identify its research challenges, including the priority, consistency, and complementarity of multimodal knowledge.", "We construct the MMConvQA dataset, which contains multiple supervised labels to facilitate related research in this community.", "We also report a set of results and analyze the current bottleneck.", "The work described in this paper was supported by the Research Grants Council of Hong Kong (PolyU/5210919, PolyU/15207821), the National Natural Science Foundation of China (62076212), and PolyU internal grants (ZVQ0)." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "objective", "method", "objective", "objective", "result", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "method", "result", "abstain", "abstain", "abstain", "result", "method", "abstain", "result", "abstain", "method", "abstain", "objective", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "other" ]
[ "This paper investigates contextual word representation models from the lens of similarity analysis.", "Given a collection of trained models, we measure the similarity of their internal representations and attention.", "Critically, these models come from vastly different architectures.", "We use existing and novel similarity measures that aim to gauge the level of localization of information in the deep models, and facilitate the investigation of which design factors affect model similarity, without requiring any external linguistic annotation.", "The analysis reveals that models within the same family are more similar to one another, as may be expected.", "Surprisingly, different architectures have rather similar representations, but different individual neurons.", "We also observed differences in information localization in lower and higher layers and found that higher layers are more affected by fine-tuning on downstream tasks.", "1 1 Introduction Contextual word representations such as ELMo (Pe-ters et al., 2018a) and BERT (Devlin et al., 2019) have led to impressive improvements in a variety of tasks.", "With this progress in breaking the state of the art, interest in the community has expanded to analyzing such models in an effort to illuminate their inner workings.", "A number of studies have analyzed the internal representations in such models and attempted to assess what linguistic properties they capture.", "A prominent methodology for this is to train supervised classifiers based on the models' learned representations, and predict various linguistic properties.", "For instance, Liu et al. (2019a) train such classifiers on 16 linguistic tasks, including part-of-speech tagging, chunking, named Equal contribution 1 The code is available at https://github.com/ johnmwu/contextual-corr-analysis .", "entity recognition, and others.", "Such an approach may reveal how well representations from different models, and model layers, capture different properties.", "This approach, known as analysis by probing classifiers, has been used in numerous other studies (Belinkov and Glass, 2019).", "While the above approach yields compelling insights, its applicability is constrained by the availability of linguistic annotations.", "In addition, comparisons of different models are indirect, via the probing accuracy, making it difficult to comment on the similarities and differences of different models.", "In this paper, we develop complementary methods for analyzing contextual word representations based on their interand intra-similarity.", "While this similarity analysis does not tell us absolute facts about a model, it allows comparing representations without subscribing to one type of information.", "We consider several kinds of similarity measures based on different levels of localization/distributivity of information: from neuron-level pairwise comparisons of individual neurons to representation-level comparisons of full word representations.", "We also explore similarity measures based on models' attention weights, in the case of Transformer models (Vaswani et al., 2017).", "This approach enables us to ask questions such as: Do different models behave similarly on the same inputs?", "Which design choices determine whether models behave similarly or differently?", "Are certain model components more similar than others across architectures?", "Is the information in a given model more or less localized (encoded in individual components) compared to other models?", "2 2 Hinton (1984) defines a 
localist representation as one using one computing element for each represented entity.", "In a language model, this definition would depend on what linguistic concepts we deem important, and is thus somewhat arbitrary.", "We develop a measure that aims to capture this notion of localization without recourse to a specific set of linguistic properties.", "We choose a collection of pre-trained models that aim to capture diverse aspects of modeling choices, including the building blocks (Recur-rent Networks, Transformers), language modeling objective (unidirectional, bidirectional, masked, permutation-based), and model depth (from 3 to 24 layers).", "More specifically, we experiment with variants of ELMo, BERT, GPT (Radford et al., 2018), GPT2 (Radford et al., 2019), and XLNet (Yang et al., 2019).", "Notably, we use the same methods to investigate the effect that fine-tuning on downstream tasks has on the model similarities.", "Our analysis yields the following insights: Different architectures may have similar representations, but different individual neurons.", "Models within the same family are more similar to one another in terms of both their neurons and full representations.", "Lower layers are more similar than higher layers across architectures.", "Higher layers have more localized representations than lower layers.", "Higher layers are more affected by fine-tuning than lower layers, in terms of their representations and attentions, and thus are less similar to the higher layers of pre-trained models.", "Finally, we show how the similarity analysis can motivate a simple technique for efficient fine-tuning, where freezing the bottom layers of models still maintains comparable performance to fine-tuning the full network, while reducing the fine-tuning time.", "The most common approach for analyzing neural network models in general, and contextual word representations in particular, is by probing classifiers (Ettinger et al., 2016; Belinkov et al., 2017; Adi et al., 2017; Conneau et al., 2018; Hupkes et al., 2018), where a classifier is trained on a corpus of linguistic annotations using representations from the model under investigation.", "For example, Liu et al. (2019a) used this methodology for investigating the representations of contextual word representations on 16 linguistic tasks.", "One limitation of this approach is that it requires specifying linguistic tasks of interest and obtaining suitable annotations.", "This potentially limits the applicability of the approach.", "An orthogonal analysis method relies on similarities between model representations.", "Bau et al. 
"They found that individual neurons are important and interpretable.", "However, their work was limited to a certain kind of architecture (specifically, a recurrent one).", "In contrast, we compare models of various architectures and objective functions.", "Other work used similarity measures to study learning dynamics in language models, by comparing checkpoints of recurrent language models (Morcos et al., 2018), or a language model and a part-of-speech tagger (Saphra and Lopez, 2019).", "Our work adopts a similar approach, but explores a range of similarity measures over different contextual word representation models.", "Questions of localization and distributivity of information have been under investigation for a long time in the connectionist cognitive science literature (Page, 2000; Bowers, 2002; Gayler and Levy, 2011).", "While neural language representations are thought to be densely distributed, several recent studies have pointed out the importance of individual neurons (Qian et al., 2016; Shi et al., 2016; Radford et al., 2017; Lakretz et al., 2019; Bau et al., 2019; Dalvi et al., 2019; Baan et al., 2019).", "Our study contributes to this line of work by designing measures of localization and distributivity of information in a collection of models.", "Such measures may facilitate incorporating neuron interactions in new training objectives (Li et al., 2020).", "We present five groups of similarity measures, each capturing a different similarity notion.", "Consider a collection of M models {f^(m)}, m = 1, ..., M, yielding word representations h_l^(m) and potentially attention weights α_l^(m) at each layer l.", "Let k index neurons h_l^(m)[k] or attention heads α_l^(m)[k].", "h_l^(m)[k] and α_l^(m)[k] are real-valued (resp. matrix-valued), ranging over words (resp. sentences) in a corpus.", "Our similarity measures are of the form sim(h_l^(m), h_{l'}^(m')) or sim(α_l^(m), α_{l'}^(m')); that is, they find similarities between layers.", "We present the full mathematical details in Appendix A.", "A neuron-level similarity measure captures similarity between pairs of individual neurons.", "We consider one such measure, neuronsim, following Bau et al. (2019).", "For every neuron k in layer l, neuronsim finds the maximum correlation between it and another neuron in another layer l'.", "Then, it averages over the neurons in layer l.", "In this and other measures that allowed it, we also experimented with averaging just the top k neurons (or canonical correlations, for the measures in Section 3.3) in case most of the layer is noise.", "Heatmaps are in the online repository.", "We did not notice major differences.", "This measure aims to capture localization of information.", "It is high when two layers have pairs of neurons with similar behavior.", "This is far more likely when the models have local, rather than distributed, representations, because for distributed representations to have similar pairs of neurons, the information must be distributed similarly.", "A mixed neuron-representation similarity measure captures a similarity between a neuron in one model and a layer in another.", "We consider one such measure, mixedsim: for every neuron k in layer l, we regress to it from all neurons in layer l' and measure the quality of fit.", "Then, we average over the neurons in l.", "It is possible that some information is localized in one layer but distributed in another layer.", "mixedsim captures such a phenomenon.", "A representation-level measure finds correlations between full layers simultaneously.", "We consider three such measures: two based on canonical correlation analysis (CCA), namely singular vector CCA (svsim; Raghu et al. 2017) and projection-weighted CCA (pwsim; Morcos et al. 2018), in addition to linear centered kernel alignment (ckasim; Kornblith et al. 2019).", "We also experimented with the RBF variant of CKA, which is computationally demanding.", "We found similar patterns in preliminary experiments, so we focus on the linear variant.", "These measures emphasize distributivity of information: if two layers behave similarly over all of their neurons, the similarity will be higher, even if no individual neuron has a similar matching pair or is represented well by all neurons in the other layer.",
"Other representation-level similarity measures may be useful, such as representation similarity analysis (RSA; Kriegeskorte et al. 2008), which has been used to analyze neural network representations (Bouchacourt and Baroni, 2018; Chrupała and Alishahi, 2019; Chrupała, 2019), or other variants of CCA, such as deep CCA (Andrew et al., 2013).", "We leave the exploration of such measures to future work.", "Previous work analyzing network similarity has mostly focused on representation-based similarities (Morcos et al., 2018; Saphra and Lopez, 2019; Voita et al., 2019a).", "Here we consider similarity based on attention weights in Transformer models.", "Analogous to a neuron-level similarity measure, an attention-level similarity measure finds the most correlated other attention head.", "We consider three methods to correlate heads, based on the norm of two attention matrices α_l^(m)[k] and α_{l'}^(m')[k'], their Pearson correlation, and their Jensen-Shannon divergence.", "Other recent work has used the Jensen-Shannon divergence to measure distances between attention heads (Clark et al., 2019; Jain and Wallace, 2019).", "We then average over the heads k in layer l, as before.", "These measures are similar to neuronsim in that they emphasize localization of information: if two layers have pairs of heads that are very similar in their behavior, the similarity will be higher.", "We consider parallels of the representation-level similarity.", "To compare the entire sets of attention heads in two layers, we concatenate all weights from all heads in one layer to get an attention representation.", "That is, we obtain attention representations α_l^(m)[h], a random variable ranging over pairs of words in the same sentence, such that α_{l,(i,j)}^(m)[h] is a scalar value.", "It is a matrix where the first axis is indexed by word pairs and the second by heads.", "We flatten these matrices and use svsim, pwsim, and ckasim as above for comparing these attention representations.", "These measures should be high when the entire set of heads in one layer is similar to the set of heads in another layer.", "Models: We choose a collection of pre-trained models that aim to capture diverse aspects of modeling choices, including the building blocks (RNNs, Transformers), the language modeling objective (unidirectional, bidirectional, masked, permutation-based), and the model depth (from 3 to 24 layers).", "ELMo variants: We use the original ELMo (Peters et al., 2018a), a bidirectional RNN model with two hidden layers, as well as two variants: a deeper and larger 4-layer model and a Transformer-equivalent variant (Peters et al., 2018b).", "GPT variants: We use both the original OpenAI Transformer (GPT; Radford et al. 2018) and its successor GPT2 (Radford et al., 2019), in the small and medium model sizes.", "These are all unidirectional Transformer LMs.", "XLNet: We use XLNet-base/large (12/24 layers; Yang et al. 2019).", "Both are Transformer LMs with a permutation-based objective function.", "BERT is also trained with a next sentence prediction objective, although this may be redundant (Liu et al., 2019b).", "Data: For analyzing the models, we run them on the Penn Treebank development set (Marcus et al., 1993), following the setup taken by Liu et al. (2019a) in their probing classifier experiments.", "As suggested by a reviewer, we verified that the results are consistent when using another dataset (Appendix B.1).", "We collect representations and attention weights from each layer in each model for computing the similarity measures.", "We obtain representations for the models used in Liu et al. (2019a) from their implementation and use the transformers library (Wolf et al., 2019) to extract the other representations.", "We aggregate sub-word representations by taking the representation of the last sub-word, following Liu et al. (2019a), and sub-word attentions by summing up attention to sub-words and averaging attention from sub-words, following Clark et al. (2019), which guarantees that the attention from each word sums to one.", "Figure 1 shows heatmaps of similarities between layers of different models, according to neuronsim and ckasim.", "Heatmaps for the other measures are provided in Appendix B.", "The heatmaps reveal the following insights.", "Different architectures may have similar representations, but different individual neurons: comparing the heatmaps, the most striking distinction is that neuronsim induces a distinctly block-diagonal heatmap, reflecting high intra-model similarities and low inter-model similarities.", "As neuronsim is computed by finding pairs of very similar neurons, this means that within a model, different layers have similar individual neurons, but across models, neurons are very different.", "In contrast, ckasim shows fairly significant similarities across models (high values off the main diagonal), indicating that different models generate similar representations.", "The strongest cross-model similarities are found by mixedsim (Figure 8d in Appendix B), which suggests that individual neurons in one model may be well represented by a linear combination of neurons in another layer.", "The other representation-level similarities (ckasim, svsim, and pwsim) also show cross-model similarities, albeit to a lesser extent.", "Models within the same family are more similar: the heatmaps show greater similarity within a model than across models (bright diagonal).", "Different models sharing the same architecture and objective function, but with different depths, also exhibit substantial representation-level similarities; for instance, compare BERT-base and BERT-large or ELMo-original and ELMo-4-layers under ckasim (Figure 1b).", "The Transformer-ELMo presents an instructive case, as it shares ELMo's bidirectional objective function but uses Transformers rather than RNNs.", "Its layers are mostly similar to themselves and to the other ELMo models, but also to GPT, more so than to BERT or XLNet, which use masked and permutation language modeling objectives, respectively.", "Thus it seems that the objective has a considerable impact on representation similarity.", "Voita et al. (2019a) found that differences in the training objective result in more different representations (according to pwsim) than differences in random initialization.", "The fact that models within the same family are more similar to each other supports the choice of Saphra and Lopez (2019) to use models of similar architecture when probing models via similarity measures across tasks.", "We thank a reviewer for pointing out this connection.", "A possible confounder is that models within the same family are trained on the same data, but cross-family models are trained on different data.", "It is difficult to control for this given the computational demands of training such models and the current practice in the community of training models on ever increasing sizes of data, rather than on a standard fixed dataset.",
"However, Figure 2 shows similarity heatmaps of layers from pre-trained and randomly initialized models using ckasim, exhibiting high intra-model similarities, as before.", "Interestingly, models within the same family (either GPT2 or XLNet) are more similar to one another than across families, even with random models, indicating that intrinsic aspects of models in a given family make them similar, regardless of the training data or process.", "Relatedly, Morcos et al. (2018) found similar CCA coefficients in representations from recurrent language models trained on different datasets.", "As may be expected, in most cases the similarity between random and pre-trained models is small.", "One exception is the vertical bands in the lower triangle, which indicate that the bottom layers of trained models are similar to many layers of random models.", "This may be due to random models merely transferring information from bottom to top, without meaningful processing.", "Still, it may explain why random models sometimes generate useful features (Wieting and Kiela, 2019).", "Meanwhile, as pointed out by a reviewer, lower layers converge faster, leaving them closer to their initial random state (Raghu et al., 2017; Shwartz-Ziv and Tishby, 2017).", "Lower layers are more similar across architectures: the representation-level heatmaps (Figure 1) all exhibit horizontal stripes at lower layers, especially with ckasim, indicating that lower layers are more similar than higher layers when comparing across models.", "This pattern can be explained by lower layers being closer to the input, which is always the same words.", "A similar observation has been made for vision networks (Raghu et al., 2017).", "Raghu et al. (2017) also used svsim to study recurrent language models, showing that lower layers converge faster.", "Although they did not look at cross-model comparisons, faster convergence may be consistent with fewer changes during training, which can explain why lower layers are more similar across architectures.", "Voita et al. (2019a) found a similar pattern when comparing Transformer models with different objective functions.", "Adjacent layers are more similar: all heatmaps in Figure 1 exhibit a very bright diagonal and bright lines slightly off the main diagonal, indicating that adjacent layers are more similar.", "This is true even when comparing layers of different models (notice the diagonal nature of BERT-base vs. BERT-large in Figure 1b), indicating that layers at the same relative depth are more similar than layers at different relative depths.", "A similar pattern was found in vision networks (Kornblith et al., 2019).", "Some patterns are unexpected.", "For instance, comparing XLNet with the BERT models, it appears that lower layers of XLNet are more similar to higher layers of BERT.", "We speculate that this is an artifact of the permutation-based objective in XLNet.", "We found corroborating evidence for this observation in ongoing parallel work, where we compare BERT and XLNet at different layers through word-level (Liu et al., 2019a) and sentence-level tasks (Wang et al., 2019): while BERT requires mostly features from higher layers to achieve state-of-the-art results, in XLNet lower and middle layers suffice.", "Higher layers are more localized than lower ones: the different similarity measures capture different levels of localization vs. distributivity of information.", "neuronsim captures cases of localized information, where pairs of neurons in different layers behave similarly.", "svsim captures cases of distributed information, where the full layer representation is similar.", "To quantify these differences, we compute the average similarity according to each measure when comparing each layer to all other layers.", "In effect, we take the column-wise mean of each heatmap.", "We do this separately for svsim as the distributed measure and neuronsim as the localized measure, and we subtract the svsim means from the neuronsim means.", "This results in a measure of localization per layer.", "Figure 3 shows the results.", "In all models, the localization score mostly increases with the layers, indicating that information tends to become more localized at higher layers.", "Recurrent models are more monotonous than Transformers, echoing results by Liu et al. (2019a) on language modeling perplexity in different layers.", "This pattern is quite consistent, but may be surprising given prior observations on lower layers capturing phenomena that operate at a local context (Tenney et al., 2019), which presumably require fewer neurons.", "However, this pattern is in line with observations made by Ethayarajh (2019), who reported that upper layers of pre-trained models produce more context-specific representations.", "There appears to be a correspondence between our localization score and Ethayarajh's context-specificity score, which is based on the cosine similarity of representations of the same word in different contexts.", "Thus, more localized representations are also more context-specific.", "A direct comparison between context-specificity and localization may be a fruitful avenue for future work.", "Some models seem less localized than others, especially the ELMo variants, although this may be confounded by their being shallower models.",
"Some models seem less localized than others, especially the ELMo variants, although this may be confounded by their being shallower models.", "BERT and XLNet models first decrease in localization and then increase.", "Interestingly, XLNet's localization score decreases towards the end, suggesting that its top layer representations are less context-specific.", "Figure 4 shows similarity heatmaps using two of the attention-level similarity measures, Jensen-Shannon and ckasim, for layers from 6 models: BERT-base/large, GPT2-small/medium, and XLNet-base/large.", "Layers within the same model or model family exhibit higher similarities (bright block diagonal), in line with results from the representation-level analysis.", "In particular, under both measures, GPT2 layers are all very similar to each other, except for the bottom ones.", "Comparing the two heatmaps, the localized Jensen-Shannon similarity (Figure 4a) shows higher similarities off the main diagonal than the distributed ckasim measure (Figure 4b), indicating that different models have pairs of attention heads that behave similarly, although the collection of heads from two different models is different in the aggregate.", "Heatmaps for the other measures are provided in Appendix C, following primarily the same patterns.", "It is difficult to identify patterns within a given model family.", "However, under the attention-based svsim (Figure 10d in Appendix C), and to a lesser extent pwsim (Figure 10e), we see bright diagonals when comparing different GPT2 (and to a lesser extent XLNet and BERT) models, such that layers at the same relative depth are similar in their attention patterns.", "We have seen such a result also in the representation-based similarities.", "Adjacent layers seem more similar in some cases, but these patterns are often swamped by the large intra-model similarity.", "This result differs from our results for representational similarity.", "GPT2 models, at all layers, are similar to the bottom layers of BERT-large, expressed in bright vertical bands.", "In contrast, GPT2 models do not seem to be especially similar to XLNet.", "Comparing XLNet and BERT, we find that lower layers of XLNet are quite similar to higher layers of BERT-base and middle layers of BERT-large.", "This parallels the findings from comparing representations of XLNet and BERT, which we conjecture is the result of the permutation-based objective in XLNet.", "In general, we find the attention-based similarities to be mostly in line with the neuron- and representation-level similarities.", "Nevertheless, they appear to be harder to interpret, as fine-grained patterns are less noticeable.", "One might mention in this context concerns regarding the reliability of attention weights for interpreting the importance of input words in a model (Jain and Wallace, 2019; Serrano and Smith, 2019; Brunner et al., 2020).", "However, characterizing the effect of such concerns on our attention-based similarity measures is beyond the current scope.", "How does fine-tuning on downstream tasks affect model similarity?", "In this section, we compare pre-trained models and their fine-tuned versions.", "We use four of the GLUE tasks (Wang et al., 2019): MNLI, a multi-genre natural language inference dataset (Williams et al., 2018), where the task is to predict whether a premise entails a hypothesis.", "QNLI, a conversion of the Stanford question answering dataset (Rajpurkar et al., 2016), where the task is to determine whether a sentence contains the answer to
a question.", "QQP A collection of question pairs from the Quora website, where the task is to determine whether two questions are semantically equivalent.", "SST-2 A binary sentiment analysis task using the Stanford sentiment treebank (Socher et al., 2013).", "Top layers are more affected by fine-tuning Figure 5 shows representation-level ckasim similarity heatmaps of pre-trained (not fine-tuned) and fine-tuned versions of BERT and XLNet.", "The most striking pattern is that the top layers are more affected by fine-tuning than the bottom layers, as evidenced by the low similarity of high layers of the pre-trained models with their fine-tuned counterparts.", "Hao et al. (2019) also observed that lower layers of BERT are less affected by fine-tuning than top layers, by visualizing the training loss surfaces.", "13 In Appendix D, we demonstrate that this observation can motivate a more efficient fine-tuning process, where some of the layers are frozen while others are fine-tuned.", "There are some task-specific differences.", "In BERT, the top layers of the SST-2-fine-tuned model 13 A reviewer commented that this pattern seems like a natural consequence of back-propagation, which we concur with, although in on-going work we found that middle layers of XLNet lead to more gains when fine-tuned.", "Future work can also explore the effect of optimization on the similarity measures.", "are affected more than other layers.", "This may be because SST-2 is a sentence classification task, while the other tasks are sentence-pair classification.", "A potential implication of this is that non-SST-2 tasks can contribute to one another in a multi-task fine-tuning setup.", "In contrast, in XLNet, fine-tuning on any task leads to top layers being very different from all layers of models fine-tuned on other tasks.", "This suggests that XLNet representations become very task-specific, and thus multi-task fine-tuning may be less effective with XLNet than with BERT.", "Observing the attnsim similarity based on JensenShannon divergence for base and fine-tuned models (Figure 6), we again see that top layers have lower similarities, implying that they undergo greater changed during fine-tuning.", "Other attention-based measures behaved similarly (not shown).", "Kovaleva et al. 
(2019) made a similar observation by comparing the cosine similarity of attention matrices in BERT, although they did not perform cross-task comparisons.", "In fact, the diagonals within each block indicate that bottom layers remain similar to one another even when fine-tuning on different tasks, while top layers diverge after fine-tuning.", "The vertical bands at layers 0 mean that many higher layers have a head that is very similar to a head from the first layer, that is, a form of redundancy, which can explain why many heads can be pruned (Michel et al., 2019; Voita et al., 2019b; Kovaleva et al., 2019).", "Comparing BERT and XLNet, the vertical bands at the top layers of BERT (especially in MNLI, QQI, and SST-2) suggest that some top layers are very similar to any other layer.", "In XLNet, top MNLI layers are quite", "different from any other layer.", "Thus different objective functions impact the attention heads differently under fine-tuning.", "Fine-tuning affects localization Figure 7 shows localization scores for different layers in pre-trained and fine-tuned models.", "In contrast to the pre-trained models, the fine-tuned ones decrease in localization at the top layers.", "This decrease may be the result of top layers learning high-level tasks, which require multiple neurons to capture properly.", "In this work, we analyzed various prominent contextual word representations from the perspective of similarity analysis.", "We compared different layers of pre-trained models using both localized and distributed measures of similarity, at neuron, representation, and attention levels.", "We found that different architectures often have similar internal representations, but differ at the level of individual neurons.", "We also observed that higher layers are more localized than lower ones.", "Comparing fine-tuned and pre-trained models, we found that higher layers are more affected by fine-tuning in their representations and attention weights, and become less localized.", "These findings motivated experimenting with layer-selective fine-tuning, where we were able to obtain good performance while freezing the lower layers and only fine-tuning the top ones.", "Our approach is complementary to the linguistic analysis of models via probing classifiers.", "An exciting direction for future work is to combine the two approaches in order to identify which linguistic properties are captured in model components that are similar to one another, or explicate how localization of information contributes to the learnability of particular properties.", "It may be insightful to compare the results of our analysis to the loss surfaces of the same models, especially before and after fine-tuning (Hao et al., 2019).", "One could also study whether a high similarity entail that two models converged to a similar solution.", "Our localization score can also be compared to other aspects of neural representations, such as gradient distributions and their relation to memoriza-tion/generalization (Arpit et al., 2017).", "Finally, the similarity analysis may also help improve model efficiency, for instance by pointing to components that do not change much during fine-tuning and can thus be pruned.", "We thank Nelson Liu for providing some of the representations analyzed in this work.", "We also thank the anonymous reviewers for their many valuable comments.", "This research was carried out in collaboration between the HBKU Qatar Computing Research Institute (QCRI) and the MIT Computer Science and Artificial 
Intelligence Laboratory (CSAIL).", "Y.B. is also supported by the Harvard Mind, Brain, and Behavior Initiative (MBB)." ]
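The ckasim comparisons in the paper above are based on centered kernel alignment (Kornblith et al., 2019). As a concrete reference, here is a minimal sketch of linear CKA between two layers' activations; the function names and the (tokens × dimensions) matrix layout are our assumptions, not the authors' code:

```python
import numpy as np

def center_gram(K):
    # Double-center a Gram matrix so that CKA is invariant to mean shifts.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def linear_cka(X, Y):
    # X: (n_tokens, d1), Y: (n_tokens, d2) activations of two layers on the same inputs.
    Kx, Ky = center_gram(X @ X.T), center_gram(Y @ Y.T)
    hsic = (Kx * Ky).sum()  # Frobenius inner product of centered Grams
    return hsic / (np.linalg.norm(Kx) * np.linalg.norm(Ky))
```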
[ "objective", "method", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "method", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "objective", "other", "other", "objective", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "result", "result", "objective", "abstain", "abstain", "result", "abstain", "method", "abstain", "other", "other", "other", "other" ]
[ "We address a challenging and practical task of labeling questions in speech in real time during telephone calls to emergency medical services in English, which embeds within a broader decision support system for emergency call-takers.", "We propose a novel multimodal approach to real-time sequence labeling in speech.", "Our model treats speech and its own textual representation as two separate modalities or views, as it jointly learns from streamed audio and its noisy transcription into text via automatic speech recognition.", "Our results show significant gains of jointly learning from the two modalities when compared to text or audio only, under adverse noise and limited volume of training data.", "The results generalize to medical symptoms detection where we observe a similar pattern of improvements with multimodal learning.", "Our paper addresses the challenge of learning to discover and label questions in telephone calls to emergency medical services in English.", "The task is demanding in two key aspects:", "1. Noise: A typical phone call to an emergency medical service differs significantly from data within most standard speech datasets.", "Most importantly, emergency calls are noisy by nature due to very stressful conversations conveyed over poor telephone lines.", "Automatic speech recognition (ASR) and subsequent text processing quickly becomes prohibitive in such noisy environments, where word error rates (WER) are significantly higher than for standard benchmark data (Han et al., 2017).", "For this reason, we propose a sequence labeler that makes use of two modalities of a phone call: audio and its transcription into text by utilizing an ASR model.", "Hereby we create a multimodal Figure 1: A speech sequence from our phone call dataset.", "2. 
Real-time processing: Our model is required to work incrementally to discover questions in real time within incoming streams of audio in order to work as a live decision support system.", "At runtime, no segmentation into sub-call utterances such as phrases or sentences is easily available.", "The lack of segmentation coupled with the real-time processing constraint makes it computationally prohibitive to discover alignments between speech and its automatic transcription.", "For these reasons, we cannot utilize standard approaches to multimodal learning which typically rely on near-perfect cross-modal alignments between short and well-defined segments (Baltrusaitis et al., 2018).", "Context and relevance.", "Learning to label sequences of text is one of the more thoroughly explored topics in natural language processing.", "In recent times, neural networks are applied not only to sequential labeling like part-of-speech tagging (Plank et al., 2016) or named entity recognition (Ma and Hovy, 2016), but also to cast into a labeling framework otherwise non-sequential tasks such as syntactic parsing (Gomez-Rodrguez and Vilares, 2018; Strzyz et al., 2019).", "By contrast, assigning labels to audio sequences of human speech is comparatively less charted out.", "When addressed, speech labeling typically adopts a solution by proxy, which is to automatically transcribe speech into text, and then apply a text-only model (Surdeanu et al., 2005; Molla et al., 2007; Ei-delman et al., 2010).", "The challenge then becomes not to natively label speech, but to adapt the model to adverse conditions of speech recognition error rates.", "Such models typically feature in end-to-end applications such as dialogue state tracking (Hen-derson et al., 2014; Ram et al., 2018).", "Recent advances in end-to-end neural network learning offer promise to directly label linguistic categories from speech alone (Ghannay et al., 2018).", "From another viewpoint, multimodal learning is successfully applied to multimedia processing where the modalities such as text, speech, and video are closely aligned.", "However, contributions there typically feature classification tasks such as sentiment analysis and not finer-grained multimedia sequence labeling (Zadeh et al., 2017).", "Our contributions.", "We propose a novel neural architecture to incrementally label questions in speech by learning from its two modalities or views: the native audio signal itself and its transcription into noisy text via ASR.", "1. Our model utilizes the online temporal alignment between the input audio signal and its raw ASR transcription.", "By taking advantage of this fortuitous real-time coupling, we avoid having to learn the multimodal alignment over the entire phone call and its transcript, which would violate the real-time processing constraint that is crucial for decision support.", "2. We achieve consistent and significant improvements from learning jointly from the two modalities compared to ASR transcriptions and audio only.", "The improvements hold across two inherently different audio sequence labeling tasks.", "3. Our evaluation framework features a challenging real-world task with noisy inputs and real-time processing requirements.", "Under this adversity, we find questions and medical symptoms in emergency phone calls with high accuracy.", "Our task is illustrated in Figure", "1. 
"Multimodal speech labeling: We define the multimodal speech labeler MultiQT as a combination of three neural networks that we apply to a number of temporal input modalities.", "In our case, we consider speech and associated machine transcripts as the separate modalities or views.", "The model is illustrated in Figure 2.", "To obtain temporal alignment between speech and text, we propose a simple approach that uses the output of an ASR system as the textual representation.", "Here, we take the ASR to be a neural network trained with the connectionist temporal classification (CTC) loss function (Graves et al., 2006).", "Given audio, it produces a temporal softmax of length $T_s$ with a feature dimension defined as a categorical distribution, typically over characters, words or subword units, per timestep.", "We refer to a sequence of input representations of the audio modality as $(x_a^{(t)})_{t \in [1..T_a]}$ and of the textual modality as $(x_s^{(t)})_{t \in [1..T_s]}$.", "From the input sequences we compute independent unimodal representations denoted by $z_a^{(t)}$ and $z_s^{(t)}$ by applying two unimodal transformations denoted by $f_a$ and $f_s$, respectively.", "Each of these transformations is parameterized by a convolutional neural network with overall temporal strides $s_a$ and $s_s$ and receptive fields $r_a$ and $r_s$.", "With $T_m$ as the length of the resulting unimodal representations: $z_a^{(t)} = f_a\big((x_a^{(i)})_{i=s_a t - r_{a,l}}^{s_a t + r_{a,r}}\big)$ and $z_s^{(t)} = f_s\big((x_s^{(i)})_{i=s_s t - r_{s,l}}^{s_s t + r_{s,r}}\big)$, (1) for $t \in [1..T_m]$, where $r_{a,l}$, $r_{a,r}$, $r_{s,l}$ and $r_{s,r}$ are the left and right half receptive fields of $f_a$ and $f_s$, respectively.", "For $f_a$, $r_{a,l} = \lfloor (r_a - 1)/2 \rfloor$ and $r_{a,r} = \lceil (r_a - 1)/2 \rceil$, and similarly for $f_s$.", "For $i < 1$ and $i > T_a$ we define $x_a^{(i)}$ and $x_s^{(i)}$ by zero padding, effectively padding with half the receptive field on the left and right sides of the input.", "This then implies that $T_m = \lfloor T_a / s_a \rfloor = \lfloor T_s / s_s \rfloor$, which constrains the strides according to $T_a$ and $T_s$ and functions as same padding.", "This lets us do convolutions without padding the internal representations for each layer in the neural networks, which in turn allows for online streaming.", "To form a joint multimodal representation from $z_a^{(t)}$ and $z_s^{(t)}$ we join the representations along the feature dimension.", "In the multimodal learning literature such an operation is sometimes called fusion (Zadeh et al., 2017).", "We denote the combined multimodal representation by $z_m^{(t)}$ and obtain it in a time-binded manner, such that for a certain timestep $z_m^{(t)} = \mathrm{fusion}(z_a^{(t)}, z_s^{(t)})$. (2)", "(Figure 2: MultiQT model illustration for two timesteps $i$ and $j$.)", "In our experiments fusion($\cdot$) either denotes a simple concatenation, $[z_a^{(t)}; z_s^{(t)}]$, or a flattened outer product, $[1; z_a^{(t)}] \otimes [1; z_s^{(t)}]$.", "The latter is similar to the fusion introduced by Zadeh et al. (2017), but we do not collapse the time dimension since our model predicts sequential labels.", "Finally, $z_m^{(t)}$ is transformed before projection into the output space: $z_y^{(t)} = g(z_m^{(t)})$, (3) $\hat{y}^{(t)} = h(z_y^{(t)})$, (4) where $g$ is a fully connected neural network and $h$ is a single dense layer followed by a softmax activation such that $\hat{y}^{(t)} \in \mathbb{R}^K$ is a vector of probabilities summing to one for each of the $K$ output categories.", "The predicted class is $\arg\max(\hat{y}^{(t)})$.", "In general, the loss is defined as a function of all learnable parameters $\theta$ and is computed as the average loss on $M$ examples in a mini-batch.", "We denote by $\{\mathcal{X}_a, \mathcal{X}_s\}$ a dataset consisting of $N$ pairs of input sequences of each of the two modalities.", "As short-hand notation, let $X_a^{(n)}$ refer to the $n$'th audio sequence example in $\mathcal{X}_a$ and similarly for $X_s^{(n)}$.", "The mini-batch loss is then $\mathcal{L}\big(\theta; \{X_a^{(n)}, X_s^{(n)}\}_{n \in B_i}\big) = \frac{1}{M} \sum_{n \in B_i} \mathcal{L}^{(n)}\big(\theta; X_a^{(n)}, X_s^{(n)}\big)$, (5) where $B_i$ is an index set uniformly sampled from $[1..N]$ which defines the $i$'th batch of size $|B_i| = M$.", "The loss for each example, $\mathcal{L}^{(n)}$, is computed as the time-average of the loss per timestep, $\mathcal{L}^{(n)}\big(\theta; X_a^{(n)}, X_s^{(n)}\big) = \frac{1}{T} \sum_{t=1}^{T} \mathcal{L}^{(n,t)}\big(\theta; X_a^{(n,\tau_a^t)}, X_s^{(n,\tau_s^t)}\big)$, (6) where $\tau_a^t = [s_a t - r_{a,l} .. s_a t + r_{a,r}]$ and similarly for $\tau_s^t$, since the dependency of the loss per timestep is only on a limited timespan of the input.", "The loss per timestep is defined as the categorical cross-entropy loss between the softmax prediction $\hat{y}^{(t)}$ and the one-hot encoded ground truth target $y^{(t)}$, $\mathcal{L}^{(n,t)}\big(\theta; X_a^{(n,\tau_a^t)}, X_s^{(n,\tau_s^t)}\big) = -\sum_{k=1}^{K} y_k^{(t)} \log \hat{y}_k^{(t)}$.", "In addition to the loss functions defined above, we also consider multitask training.", "This has been reported to improve performance in many different domains by including a suitably related auxiliary task (Bingel and Søgaard, 2017; Martínez Alonso and Plank, 2017).", "For the task of labelling segments in the input sequences as pertaining to annotations from among a set of $K - 1$ positive classes and one negative class, we propose the auxiliary task of binary labelling of segments as pertaining to either the negative class or any of the $K - 1$ positive classes.", "For question tracking, this amounts to doing binary labelling of segments that are questions of any kind.", "The hope is that this will make the training signal stronger since the sparsity of each of the classes, e.g. questions, is reduced by collapsing them into one shared class.", "We use the same loss function as above, but with the number of classes reduced to $K = 2$.", "(Table 1 lists the question labels with descriptions, examples, counts and fractions; for instance, Q1 is the question about the address of the incident.)", "The total multitask loss is a weighted sum of the $K$-class loss and the binary loss: $\mathcal{L}^{(n,t)}_{MT} = \lambda \mathcal{L}^{(n,t)}_{binary} + (1 - \lambda) \mathcal{L}^{(n,t)}$. (7)", "The tunable hyperparameter $\lambda \in [0, 1]$ interpolates the task between regular $K$-class labeling for $\lambda = 0$ and binary classification for $\lambda = 1$ (a sketch follows below).", "Our dataset consists of 525 phone calls to an English-speaking medical emergency service.", "The call audio is mono-channel, PCM-encoded and sampled at 8000 Hz.",
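A minimal PyTorch sketch of the per-timestep objective of Eqs. (6)-(7); the tensor shapes and the derivation of the binary targets from the K-class targets (label 0 = no question) are our assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def multitask_loss(logits_k, logits_2, targets, lam=0.5):
    # logits_k: (T, K) per-timestep K-class scores; logits_2: (T, 2) scores for the
    # auxiliary "any question vs. none" task; targets: (T,) labels with 0 = None.
    loss_k = F.cross_entropy(logits_k, targets)                 # time-averaged Eq. (6)
    loss_bin = F.cross_entropy(logits_2, (targets > 0).long())  # auxiliary binary loss
    return lam * loss_bin + (1.0 - lam) * loss_k                # Eq. (7)
```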
"The duration of the calls has a mean of 166 s (st. dev. 65 s, IQR 52 s).", "All calls are manually annotated for questions by trained native English speakers.", "Each question is annotated with its start and stop time and assigned with one of 13 predefined question labels or an additional label for any question that falls outside of the 13 categories.", "Figure 1 illustrates these annotations.", "We observe an initial inter-annotator agreement of $\alpha = 0.8$ (Krippendorff, 2018).", "Each call has been additionally corrected at least once by a different annotator to improve the quality of the data.", "On average it took roughly 30 minutes to annotate a single call.", "For our experiments, we choose the five most frequent question classes, which are explained in Table 1.", "Out of 24 hours of calls, the questions alone account for only 30 minutes (roughly 2%) of audio.", "For the experiments we use 5-fold cross-validation stratified by the number of questions in each call, such that calls of different lengths and contents are included in all folds.", "We test our model on an additional speech sequence labeling challenge: tracking mentions of medical symptoms in incoming audio.", "By using another task we gauge the robustness of MultiQT as a general sequence labeling model and not only a question tracker, since symptom utterances in speech carry inherently different linguistic features than questions.", "As our question-tracking data was not manually labeled for symptoms, we created silver-standard training and test sets automatically by propagating a list of textual keywords from the ground truth human transcripts back onto the audio signal as time stamps with a rule-based algorithm.", "The initial list contained over 40 medical symptoms, but in the experiment we retain the most frequent five: state of consciousness, breathing, pain, trauma, and hemorrhage.", "The utterances that we track are complex phrases with a high variance: there are many different ways to express a question or a medical symptom in conversation.", "This linguistic complexity sets our research apart from most work in speech labeling, which is much closer to exact pattern matching (Salamon and Bello, 2017).", "Inputs.", "The audio modality is encoded using 40 log-mel features computed with a window of 0.02 s and stride 0.01 s.", "The textual modality is formed by application of an ASR system to the audio modality.", "In all reported experiments, only ASR outputs are used and never human transcriptions, both in training and evaluation.", "The audio input to the ASR is encoded in the same way as described above.", "The ASR available to us has a purely convolutional architecture similar to the one in (Collobert et al., 2016) with an overall stride of 2.", "For MultiQT, this means that $T_a = 2 T_s$.", "The ASR is trained on 600 hours of phone calls to medical emergency services in English from the same emergency service provider as the question and symptoms tracking datasets.", "Both of these are contained in the ASR test set.", "The ASR is trained using the connectionist temporal classification (CTC) loss function (Graves et al., 2006) and has a character error rate of 14% and a word error rate of 31%.", "Its feature dimension is 29, which corresponds to the English alphabet including apostrophe, space and a blank token for the CTC loss.", "Systems.", "The basic version of MultiQT uses a single softmax cross-entropy loss function and forms a time-bound multimodal representation by concatenating the unimodal representations.", "We then augment this model in three ways:", "1. MultiQT-TF: tensor fusion instead of concatenation, following Zadeh et al. (2017);", "2. MultiQT-MT: auxiliary binary classification with $\lambda = 0.5$;", "3. MultiQT-TF-MT: combination of 1 and 2.", "Baselines.", "MultiQT can easily be adapted to a single modality by excluding the respective convolutional transformation $f_a$ or $f_s$.", "For example, MultiQT can be trained unimodally on audio by removing $f_s$ and then defining $z_m^{(t)} = z_a^{(t)}$ instead of concatenation or tensor fusion.", "We baseline the multimodal MultiQT models against versions trained unimodally on audio and text.", "We also compare MultiQT to two distinct baseline models: a random forest (RF-BOW) and a feed-forward neural network (FNN-BOW).", "Contrary to MultiQT, the baselines are trained to classify an input sequence into a single categorical distribution over the labels.", "At training, the models are presented with short segments of call transcripts in which all timesteps share the same label such that a single prediction can be made.", "The baselines are trained exclusively on text and both models represent the windowed transcript as a TF-IDF-normalized bag of words similar to Zhang et al. (2015).", "The bag of words uses word uni- and bigrams, and character tri-, four- and five-grams, with 500 of each selected by $\chi^2$-scoring between labels and transcripts on the training set.", "Hyperparameters.", "We use 1D convolutions for $f_a$ and $f_s$.", "For $f_a$ we use three layers with kernel sizes of 10, 20 and 40, filters of 64, 128 and 128 units, and strides of 2, 2 and 2 in the first, second and third layer, respectively.", "For $f_s$ we use two layers with kernel sizes of 20 and 40, filters of 128 and 128 units, and strides of 2 and 2.", "Before each nonlinear transformation in both $f_a$ and $f_s$ we use batch normalization (Ioffe and Szegedy, 2015) with momentum 0.99 and trainable scale and bias, and we apply dropout (Srivastava et al., 2014) with a dropout rate of 0.2.", "For $g$ we use three fully connected layers of 256 units each, and before each nonlinear transformation we use batch normalization as above and apply dropout with a dropout rate of 0.4.", "We $l_2$-regularize all learnable parameters with a weighting of 0.1.", "The FNN model uses the same classifier as is used for $g$ in MultiQT, with a dropout rate of 0.3 and an $l_2$ regularization factor of 0.05.", "All neural models are trained with the Adam optimizer (Kingma and Ba, 2015) using a learning rate of $1 \times 10^{-4}$, $\beta_1 = 0.9$ and $\beta_2 = 0.999$, and batch size 6, except for those with tensor fusion, which use a batch size of 1 due to memory constraints.", "Larger batch sizes were prohibitive since we use entire calls as single examples, but results were generally consistent across different batch sizes.", "All hyperparameters were tuned manually and heuristically.", "It takes approximately one hour to train the base MultiQT model on one NVIDIA GeForce GTX 1080 Ti GPU card.", "Evaluation.", "For each model we report two F1 scores with respective precisions and recalls macro-averaged over the classes.", "TIMESTEP: For each timestep, the model prediction is compared to the gold label.", "The metrics are computed per timestep and micro-averaged over the examples.", "This metric captures the model performance in finding and correctly classifying entire audio segments that represent questions and is sensitive to any misalignment.", "INSTANCE: A more forgiving metric which captures if sequences of the same label are found and correctly classified, with acceptance of misalignment (a sketch of the matching heuristic follows below).",
"Here, the prediction counts as correct if there are at least five consecutive correctly labeled time steps within the sequence, as a heuristic to avoid ambiguity between classes.", "This metric also excludes the non-question label.", "The baseline models are evaluated per TIMESTEP by labeling segments from the test set in a sliding window fashion.", "The size of the window varies from 3 to 9 seconds to encompass all possible lengths of a question, with the stride set to one word.", "Defining the stride in terms of words is possible because the ASR produces timestamps for the resulting transcript per word.", "Labeling accuracy.", "The results are presented in Table 2.", "They show that for any model variation, the best performance is achieved when using both audio and text.", "The model performs the worst when using only audio, which we hypothesize to be due to the increased difficulty of the task: while speech intonation may be a significant feature for detecting questions in general, discerning between specific questions is easier with access to transcribed keywords.", "Table 2: Question tracking results on audio (A) and text (T) modalities with variations of MultiQT using modality concatenation (MultiQT) or tensor fusion (MultiQT-TF) and the auxiliary task (MultiQT-MT); cells are INSTANCE P / R / F1 and then TIMESTEP P / R / F1 (mean ± st. dev.):
RF-BOW (T):        61.8±3.5 / 88.5±0.9 / 72.2±2.2 | 39.3±1.1 / 70.4±1.0 / 48.1±1.0
FNN-BOW (T):       42.2±1.4 / 92.8±0.6 / 57.5±1.3 | 38.1±0.7 / 71.0±1.7 / 46.9±0.8
MultiQT (A):       87.4±1.9 / 60.6±4.0 / 70.3±3.1 | 79.2±1.3 / 57.8±3.3 / 65.0±2.4
MultiQT (T):       84.2±1.6 / 78.5±2.8 / 81.1±2.0 | 78.8±1.2 / 69.4±2.0 / 73.5±1.3
MultiQT (A+T):     83.6±2.2 / 83.3±2.5 / 83.3±1.6 | 75.7±2.2 / 73.8±2.3 / 74.5±1.3
MultiQT-MT (A):    84.6±5.1 / 57.4±3.9 / 66.2±2.9 | 77.7±5.6 / 56.0±2.8 / 62.8±2.0
MultiQT-MT (T):    81.9±1.1 / 80.6±2.8 / 81.0±1.8 | 75.9±1.5 / 71.2±2.4 / 73.3±1.7
MultiQT-MT (A+T):  85.2±2.7 / 83.2±1.2 / 84.1±2.0 | 78.5±2.5 / 74.0±0.7 / 76.0±1.1
MultiQT-TF (A+T):  85.0±1.8 / 83.3±2.6 / 83.9±1.7 | 78.9±2.1 / 75.2±2.3 / 76.7±1.2
MultiQT-TF-MT (A+T): 85.1±3.2 / 83.1±1.6 / 83.8±1.7 | 78.7±3.7 / 75.0±1.6 / 76.5±1.4", "Including the auxiliary binary classification task (MultiQT-MT) shows no significant improvement over MultiQT.", "We hypothesize that this may be due to training on a subset of all questions, such that there are unlabelled questions in the training data which add noise to the binary task.",
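The two fusion operators compared here can be written compactly; this is an illustrative PyTorch sketch of concatenation versus flattened outer-product (tensor) fusion, not the authors' code:

```python
import torch

def fuse(z_a, z_s, mode="concat"):
    # z_a: (T, d_a) and z_s: (T, d_s) are temporally aligned unimodal representations.
    if mode == "concat":
        return torch.cat([z_a, z_s], dim=-1)                   # (T, d_a + d_s)
    # Tensor fusion: outer product of [1; z_a] and [1; z_s] per timestep, flattened,
    # which keeps the time dimension instead of collapsing it.
    ones = torch.ones(z_a.size(0), 1)
    za1 = torch.cat([ones, z_a], dim=-1)                       # (T, d_a + 1)
    zs1 = torch.cat([ones, z_s], dim=-1)                       # (T, d_s + 1)
    return torch.einsum("ti,tj->tij", za1, zs1).flatten(1)     # (T, (d_a+1)*(d_s+1))
```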
(2017).", "Since tensor-fusion subsumes the concatenated unimodal representations by definition and appends all element-wise products, we must conclude that the multimodal interactions represented by the element-wise products either already exist in the unimodal representations, by correlation, are easily learnable from them or are too difficult to learn for MultiQT.", "We believe that the interactions are most likely to be easily learnable from the unimodal representations.", "Comparing any MultiQT variant with INSTANCE and TIMESTEP F1 clearly shows that INSTANCE is more forgiving, with models generally achieving higher values in this metric.", "The difference in performance between different combinations of the modalities is generally higher when measured per INSTANCE as compared to per TIMESTEP .", "The RF and FNN baseline models clearly under-perform compared to MultiQT.", "It should be noted that both RF and FNN achieve F1-scores of around 85 when evaluated per input utterance, corresponding to the input they receive during training.", "On this metric, FNN also outperforms RF.", "However, both models suffer significantly from the discrepancy between the training and streaming settings as measured per the INSTANCE and TIMESTEP metrics; this effect is largest for the FNN model.", "Real-time tracking.", "One important use case of MultiQT is real-time labelling of streamed audio sequences and associated transcripts.", "For this reason, MultiQT must be able to process a piece of audio in a shorter time than that spanned by the audio itself.", "For instance, given a 1 s chunk of audio, MultiQT must process this in less than 1 s in order to maintain a constant latency from the time that the audio is ready to be processed to when it has been processed.", "To assess the real-time capability of MultiQT, we test it on an average emergency call using an NVIDIA GTX 1080 Ti GPU card.", "In our data, the average duration of an emergency call is 166 s .", "To simulate real-time streaming, we first process the call in 166 distinct one-second chunks using 166 sequential forward passes.", "This benchmark includes all overhead, such as the PCIe transfer of data to and from the GPU for each of the forward passes.", "The choice of 1 s chunk duration matches our production setting but is otherwise arbitrary with smaller chunks giving lower latency and larger chunks giving less computational overhead.", "In this streaming setting, the 166 s of audio are processed in 1 .", "03 s yielding a real-time factor of approximately 161 with a processing time of 6 .", "2 ms per 1 s of audio.", "This satisfies the real-time constraint by a comfortable margin, theoretically leaving room for up to 161 parallel audio streams to be processed on the same GPU before the real-time constraint is violated.", "When a single model serves multiple ongoing calls in parallel, we can batch the incoming audio chunks.", "Batching further increases the real-time factor and enables a larger number of ongoing calls to be processed in parallel on a single GPU.", "This efficiency gain comes at the cost of additional, but still constant, latency since we must wait for a batch of chunks to form.", "For any call, the expected additional latency is half the chunk duration.", "We perform the same experiment as above but with different batch sizes.", "We maintain super real-time processing for batches of up 256 one-second chunks, almost doubling the number of calls that can be handled by a single model.", "In the offline setting, for instance for 
on-demand processing of historical recordings, an entire call can be processed in one forward pass.", "Here, MultiQT can process a single average call of 166 s in 10 .", "9 ms yielding an offline real-time factor of 15,000.", "Although batched processing in this setting requires padding, batches can be constructed with calls of similar length to reduce the relative amount of padding and achieve higher efficiency yet.", "Label confusion.", "We analyze the label confusion of the basic MultiQT model using both modalities on the TIMESTEP metric.", "Less than 1% of all incorrect timestamps correspond to question-to-question confusions while the two primary sources of confusion are incorrect labelings of 1) None class for a question and 2) of a question with the None class.", "The single highest confusion is between the None class and Q4 which is the least frequent question.", "Here the model has a tendency to both over-predict and miss: ca 40% of predicted Q4 are labeled as None and 40% of Q4 are predicted as None.", "In summary, when our model makes an error, it will most likely 1) falsely predict a non-question to 0.6 0.4 0.2 0.0 0.2 0.4 0.6 start 0.6 0.4 0.2 0.0 0.2 0.4 0.6 stop error margins [s] Figure 3: Error margin distributions for start and stop timestamps of question sequences.", "be a question or 2) falsely predict a question to be a non-question; once it discovers a question, it is much less likely to assign it the wrong label.", "Model disagreement.", "We examined the inter-model agreement between MultiQT trained on the different modes.", "The highest agreement of 90% is achieved between the unimodal text and the multimodal models whereas the lowest agreement was generally between the unimodal audio and any other model at 80%.", "The lower agreement with the unimodal audio model can be attributed to the generally slightly lower performance of this model compared to the other models as per Table", "2. Question margins.", "In Figure 3, we visualize the distribution of the errors made by the model per TIMESTEP .", "For each question regarded as matching according to the INSTANCE metric we compute the number of seconds by which the model mismatched the label sequence on the left and right side of the label sequence, respectively.", "We see that the model errors are normally distributed around a center value that is shifted towards the outside of the question by slightly less than 100 ms .", "The practical consequence is that the model tends to make predictions on the safe side by extending question segments slightly into the outside of the question.", "Modality ablation.", "To evaluate the model's robustness to noise in the modalities, we remove all information from one of the modalities in turn and report the results in Table", "3. 
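The real-time figures above follow from simple arithmetic; the lines below merely restate the reported measurements, with variable names of our choosing:

```python
call_seconds = 166.0        # average call duration reported above
streaming_seconds = 1.03    # wall-clock time to process the call in 1 s chunks
rtf_streaming = call_seconds / streaming_seconds      # ~161x real time
ms_per_audio_second = 1000 * streaming_seconds / 166  # ~6.2 ms per 1 s chunk

offline_seconds = 0.0109    # one forward pass over the whole call
rtf_offline = call_seconds / offline_seconds          # ~15,000x real time
```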
"We remove the information in a modality by randomly permuting the entire temporal axis.", "This way we retain the numerical properties of the signal, which is not the case when replacing a modality by zeros or noise.", "To increase MultiQT's robustness to this modality ablation, we apply it at training so that for each batch example we permute the temporal axis of the audio or text modality with some probability $p_a$ or $p_s$ (a sketch follows below).", "Table 3: Results from the modality ablation on the MultiQT model; cells are INSTANCE P / R / F1 and then TIMESTEP P / R / F1 (mean ± st. dev.), per modality, permuted-at-training, and permuted-at-test setting:
A+T, permuted training, permuted test T:    82.2±4.9 / 60.1±5.6 / 68.6±5.7 | 79.0±4.7 / 58.4±3.7 / 64.7±3.5
A+T, permuted training, permuted test A:    82.6±3.2 / 75.9±2.9 / 78.7±1.6 | 78.3±2.4 / 68.3±2.7 / 72.3±1.1
A+T, permuted training, no test permutation: 86.3±1.6 / 83.8±2.8 / 84.8±2.0 | 80.4±1.0 / 74.1±2.2 / 76.9±1.3
A+T, no permuted training, permuted test T:  0.0±0.0 / 0.0±0.0 / 0.0±0.0 | 16.2±0.0 / 16.7±0.0 / 16.4±0.0
A+T, no permuted training, permuted test A:  89.5±3.1 / 69.2±4.4 / 77.0±2.5 | 84.3±2.6 / 63.7±3.5 / 71.0±2.0
A+T, no permuted training, no test permutation: 83.6±2.2 / 83.3±2.5 / 83.3±1.6 | 75.7±2.2 / 73.8±2.3 / 74.5±1.3
A, no permuted training, no test permutation:   87.4±1.9 / 60.6±4.0 / 70.3±3.1 | 79.2±1.3 / 57.8±3.3 / 65.0±2.4
T, no permuted training, no test permutation:   84.2±1.6 / 78.5±2.8 / 81.1±2.0 | 78.8±1.2 / 69.4±2.0 / 73.5±1.3", "We choose $p_a = 0.1$ and $p_s = 0.5$ since the model more easily develops an over-reliance on the text modality, supposedly due to its higher signal-to-noise ratio.", "The results are listed in Table 3 along with results for MultiQT from Table 2 for easy reference.", "We observe that the basic MultiQT model suffers significantly from permutation of the text modality and less so for audio, which suggests that it relies on the audio only for supportive features.", "Training MultiQT with the random temporal permutation forces learning of robustness to losing all information in a modality.", "We see that the results when removing a modality almost reach the level achieved when training exclusively on that modality, while still maintaining the same (or better) performance of the basic MultiQT model.", "We also evaluate on subsets of the test split by the maximum WER of the ASR (measured only on the call-taker utterances).", "This evaluation compares the micro-averaged model F1-score when increasing the noise on the textual input.", "We see that regardless of the modality, the performance is the highest for calls with very low WER.", "We observe that the performance improvement of using both modalities over unimodal text or unimodal audio increases as we include noisy samples.", "This implies that multimodality increases robustness.", "Training on permuted inputs additionally improves the performance on noisy data.", "The evaluation of MultiQT in our paper has thus far been only in relation to one particular ASR model with CTC loss (Graves et al., 2006), where our system displays significant gains from multimodal learning.", "Yet, do these results hold with another ASR system, and in particular, are the multimodal gains still significant if WER decreases and produced text quality increases?", "For an initial probing of these questions, we replace the fully convolutional ASR with a densely-connected recurrent architecture with convolutional heads.", "This model is similar to the one in (Amodei et al., 2015) but also uses dense bottleneck layers.", "With this model the transcription quality improves by around 4% in WER, while the F1-scores of MultiQT still strongly favor the multimodal approach, by +6.15 points absolute over text-only.", "We argue that in a real-world scenario with high WER and limited in-domain training data, the gains warrant learning from joining the text and audio views on the input speech when learning a question tracker.",
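A minimal sketch of the temporal-permutation ablation described above, assuming each modality arrives as a (T, d) tensor; the function name and its use at training time are our illustration:

```python
import torch

def maybe_permute_time(x, p):
    # With probability p, permute the temporal axis of one modality's input x (T, d),
    # destroying its information content while preserving its numerical statistics.
    if torch.rand(()).item() < p:
        return x[torch.randperm(x.size(0))]
    return x

# During training, e.g. with p_a = 0.1 for audio and p_s = 0.5 for text:
# x_audio = maybe_permute_time(x_audio, 0.1)
# x_text = maybe_permute_time(x_text, 0.5)
```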
"Alternatively, the ASR model itself could be extended into a multitask learning setup to jointly track questions and transcribe speech; we defer that line of work to future research.", "On a practical note, for this multitask approach, the data must be fully transcribed by human annotators in addition to the question annotations.", "This is generally more time consuming and expensive than exclusively annotating questions.", "Qualitative analysis.", "We analyze the model predictions on a subset of 21 calls to identify the most likely reasons for incorrect labeling.", "We find that in over half of the analysed cases the incorrect prediction is triggered either by a question-related keyword uttered in a non-question sentence or by a question asked in the background by a caller that was not assigned a label.", "We also encounter undetected questions that have a very noisy ASR transcript or are asked in an unusual way.", "Symptom labeling.", "The experiment with our silver-standard symptoms data shows a trend that is similar to question tracking: the dual-modality MultiQT scores an INSTANCE F1 of 76.9, a +1.8 absolute improvement over the best single modality.", "Text-only is the runner-up (-1.8 F1) while audio-only lags behind with a significant -23.6 decrease in F1.", "At the same time, a simple text-only keyword matching baseline scores at 73.7.", "We argue that symptom tracking strongly favors text over audio because the distinctive audio features of questions, such as changes in intonation, are not present when communicating symptoms in speech.", "The broader context of our work is to track the dialogue state in calls to emergency medical services, where conversations are typically formed as sequences of questions and answers that pertain to various medical symptoms.", "The predominant approach to dialogue state tracking (DST) in speech is to first transcribe the speech by using ASR (Henderson et al., 2014; Henderson, 2015; Mrksic et al., 2017).", "In our specific context, to entirely rely on ASR is prohibitive because of significantly higher WER in comparison to standard datasets.", "To exemplify, while WER is normally distributed with a mean of 37.6% in our data, the noisiest DST challenge datasets rarely involve WER above 30% (Jagfeld and Vu, 2017), while standard ASR benchmarks offer even lower WER (Park et al., 2019).", "None of the standard ASR scenarios thus directly applies to a real-life ASR noise scenario.", "From another viewpoint, work in audio recognition mainly involves detecting simple single-word commands or keyword spotting (de Andrade et al., 2018), recognizing acoustic events such as environmental or urban sounds (Salamon et al., 2014; Piczak, 2015; Xu et al., 2016) or music patterns, or document-level classification of entire audio sequences (Liu et al., 2017).", "McMahan and Rao (2018) provide a more extensive overview.", "While approaches in this line of work relate to ours, e.g. in the use of convolutional networks over audio (Sainath and Parada, 2015; Salamon and Bello, 2017), our challenge features questions as linguistic units of significantly greater complexity.", "Finally, research into multimodal or multi-view deep learning (Ngiam et al., 2011; Li et al., 2018) offers insights to effectively combine multiple data modalities or views on the same learning problem.", "However, most work does not directly apply to our problem:", "i) the audio-text modality is significantly under-represented,", "ii) the models are typically not required to work online, and", "iii) most tasks are cast as document-level classification and not sequence labeling (Zadeh et al., 2018).", "We proposed a novel approach to speech sequence labeling by learning a multimodal representation from the temporal binding of the audio signal and its automatic transcription.", "This way we learn a model to identify questions in real time with high accuracy while trained on a small annotated dataset.", "We show the multimodal representation to be more accurate and more robust to noise than the unimodal approaches.", "Our findings generalize to a medical symptoms labeling task, suggesting that our model is applicable as a general-purpose speech tagger wherever the speech modality is coupled in real time to ASR output.", "The authors are grateful to the anonymous reviewers and area chairs for the incisive and thoughtful treatment of our work." ]
[ "method", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "result", "objective", "abstain", "result", "abstain", "method", "method", "abstain", "objective", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "result", "method", "method", "result", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "other", "other", "other", "abstain", "other", "method", "other", "other", "other", "objective", "method", "result", "objective", "other" ]
[ "Knowledge graph (KG) embeddings learn low-dimensional representations of entities and relations to predict missing facts.", "KGs often exhibit hierarchical and logical patterns which must be preserved in the embedding space.", "For hierarchical data, hyperbolic embedding methods have shown promise for high-fidelity and parsimonious representations.", "However, existing hyperbolic embedding methods do not account for the rich logical patterns in KGs.", "In this work, we introduce a class of hyperbolic KG embedding models that simultaneously capture hierarchical and logical patterns.", "Our approach combines hyperbolic reflections and rotations with attention to model complex relational patterns.", "Experimental results on standard KG benchmarks show that our method improves over previous Euclideanand hyperbolic-based efforts by up to 6.1% in mean reciprocal rank (MRR) in low dimensions.", "Furthermore, we observe that different geometric transformations capture different types of relations while attention-based transformations generalize to multiple relations.", "In high dimensions, our approach yields new state-of-the-art MRRs of 49.6% on WN18RR and 57.7% on YAGO3-10.", "Knowledge graphs (KGs), consisting of ( head entity , relationship , tail entity ) triples, are popular data structures for representing factual knowledge to be queried and used in downstream applications such as word sense disambiguation, question answering, and information extraction.", "Real-world KGs such as Yago (Suchanek et al., 2007) or Wordnet (Miller, 1995) are usually incomplete, so a common approach to predicting missing links in KGs is via embedding into vector spaces.", "Embedding Work partially done during an internship at Google.", "methods learn representations of entities and relationships that preserve the information found in the graph, and have achieved promising results for many tasks.", "Relations found in KGs have differing properties: for example, ( Michelle Obama , married to , Barack Obama ) is symmetric, whereas hypernym relations like ( cat , specific type of , feline ), are not (Figure 1).", "These distinctions present a challenge to embedding methods: preserving each type of behavior requires producing a different geometric pattern in the embedding space.", "One popular approach is to use extremely high-dimensional embeddings, which offer more flexibility for such patterns.", "However, given the large number of entities found in KGs, doing so yields very high memory costs.", "For hierarchical data, hyperbolic geometry offers an exciting approach to learn low-dimensional embeddings while preserving latent hierarchies.", "Hyperbolic space can embed trees with arbitrarily low distortion in just two dimensions.", "Recent research has proposed embedding hierarchical graphs into these spaces instead of conventional Euclidean space (Nickel and Kiela, 2017; Sala et al., 2018).", "However, these works focus on embedding simpler graphs (e.g., weighted trees) and cannot express the diverse and complex relationships in KGs.", "We propose a new hyperbolic embedding approach that captures such patterns to achieve the best of both worlds.", "Our proposed approach produces the parsimonious representations offered by hyperbolic space, especially suitable for hierarchical relations, and is effective even with low-dimensional embeddings.", "It also uses rich transformations to encode logical patterns in KGs, previously only defined in Euclidean space.", "To accomplish this, we (1) train hyperbolic embeddings 
with relation-specific curvatures to preserve multiple hierarchies in KGs; (2) parameterize hyperbolic isometries (distance-preserving operations) and leverage their geometric properties to capture relations' logical patterns, such as symmetry or anti-symmetry; (3) and use a notion of hyperbolic attention to combine geometric operators and capture multiple logical patterns.", "We evaluate the performance of our approach, ATTH, on the KG link prediction task using the standard WN18RR (Dettmers et al., 2018; Bordes et al., 2013), FB15k-237 (Toutanova and Chen, 2015) and YAGO3-10 (Mahdisoltani et al., 2013) benchmarks.", "(1) In low (32) dimensions, we improve over Euclidean-based models by up to 6.1% in the mean reciprocical rank (MRR) metric.", "In particular, we find that hierarchical relationships, such as WordNet's hypernym and member meronym , significantly benefit from hyperbolic space; we observe a 16% to 24% relative improvement versus Euclidean baselines.", "(2) We find that geometric properties of hyperbolic isometries directly map to logical properties of relationships.", "We study symmetric and anti-symmetric patterns and find that reflections capture symmetric relations while rotations capture anti-symmetry.", "(3) We show that attention based-transformations have the ability to generalize to multiple logical patterns.", "For instance, we observe that ATTH recovers reflections for symmetric relations and rotations for the antisymmetric ones.", "In high (500) dimensions, we find that both hyperbolic and Euclidean embeddings achieve similar performance, and our approach achieves new state-of-the-art results (SotA), obtaining 49.6% MRR on WN18RR and 57.7% YAGO3-10.", "Our experiments show that trainable curvature is critical to generalize hyperbolic embedding methods to high-dimensions.", "Finally, we visualize embeddings learned in hyperbolic spaces and show that hyperbolic geometry effectively preserves hierarchies in KGs.", "Previous methods for KG embeddings also rely on geometric properties.", "Improvements have been obtained by exploiting either more sophisticated spaces (e.g., going from Euclidean to complex or hyperbolic space) or more sophisticated operations (e.g., from translations to isometries, or to learning graph neural networks).", "In contrast, our approach takes a step forward in both directions.", "Euclidean embeddings In the past decade, there has been a rich literature on Euclidean embeddings for KG representation learning.", "These include translation approaches (Bordes et al., 2013; Ji et al., 2015; Wang et al., 2014; Lin et al., 2015) or tensor factorization methods such as RESCAL (Nickel et al., 2011) or DistMult (Yang et al., 2015).", "While these methods are fairly simple and have few parameters, they fail to encode important logical properties (e.g., translations can't encode symmetry).", "Complex embeddings Recently, there has been interest in learning embeddings in complex space, as in the ComplEx (Trouillon et al., 2016) and RotatE (Sun et al., 2019) models.", "RotatE learns rotations in complex space, which are very effective in capturing logical properties such as symmetry, anti-symmetry, composition or inversion.", "The recent QuatE model (Zhang et al., 2019) learns KG embeddings using quaternions.", "However, a downside is that these embeddings require very high-dimensional spaces, leading to high memory costs.", "Deep neural networks Another family of methods uses neural networks to produce KG embeddings.", "For instance, R-GCN (Schlichtkrull et 
"ConvE and ConvKB (Dettmers et al., 2018; Nguyen et al., 2018) leverage the expressiveness of convolutional neural networks to learn entity embeddings and relation embeddings.", "More recently, the KBGAT (Nathani et al., 2019) and A2N (Bansal et al., 2019) models use graph attention networks for knowledge graph embeddings.", "A downside of these methods is that they are computationally expensive, as they usually require pre-trained KG embeddings as input for the neural network.", "Hyperbolic embeddings To the best of our knowledge, MuRP (Balazevic et al., 2019) is the only method that learns KG embeddings in hyperbolic space in order to target hierarchical data.", "MuRP minimizes hyperbolic distances between a re-scaled version of the head entity embedding and a translation of the tail entity embedding.", "It achieves promising results using hyperbolic embeddings with fewer dimensions than its Euclidean analogues.", "However, MuRP is a translation model and fails to encode some logical properties of relationships.", "Furthermore, embeddings are learned in a hyperbolic space with fixed curvature, potentially leading to insufficient precision, and training relies on cumbersome Riemannian optimization.", "Instead, our proposed method leverages expressive hyperbolic isometries to simultaneously capture logical patterns and hierarchies.", "Furthermore, embeddings are learned using tangent space (i.e., Euclidean) optimization methods and trainable hyperbolic curvatures per relationship, avoiding precision errors that might arise when using a fixed curvature, and providing flexibility to encode multiple hierarchies.", "We describe the KG embedding problem setting and give some necessary background on hyperbolic geometry.", "In the KG embedding problem, we are given a set of triples $(h, r, t) \in \mathcal{E} \subseteq \mathcal{V} \times \mathcal{R} \times \mathcal{V}$, where $\mathcal{V}$ and $\mathcal{R}$ are entity and relationship sets, respectively.", "The goal is to map entities $v \in \mathcal{V}$ to embeddings $\mathbf{e}_v \in \mathcal{U}^{d_\mathcal{V}}$ and relationships $r \in \mathcal{R}$ to embeddings $\mathbf{r}_r \in \mathcal{U}^{d_\mathcal{R}}$, for some choice of space $\mathcal{U}$ (traditionally $\mathbb{R}$), such that the KG structure is preserved.", "Concretely, the data is split into $\mathcal{E}_{\mathrm{Train}}$ and $\mathcal{E}_{\mathrm{Test}}$ triples.", "Embeddings are learned by optimizing a scoring function $s : \mathcal{V} \times \mathcal{R} \times \mathcal{V} \to \mathbb{R}$, which measures triples' likelihoods.", "$s(\cdot, \cdot, \cdot)$ is trained using triples in $\mathcal{E}_{\mathrm{Train}}$, and the learned embeddings are then used to predict scores for triples in $\mathcal{E}_{\mathrm{Test}}$.", "The goal is to learn embeddings such that the scores of triples in $\mathcal{E}_{\mathrm{Test}}$ are high compared to triples that are not present in $\mathcal{E}$.", "We briefly review key notions from hyperbolic geometry; a more in-depth treatment is available in standard texts (Robbin and Salamon).", "Hyperbolic geometry is a non-Euclidean geometry with constant negative curvature.", "[Figure 2: An illustration of the exponential map $\exp_x(\mathbf{v})$, which maps the tangent space $T_x\mathcal{M}$ at the point $x$ to the hyperbolic manifold $\mathcal{M}$.]", "In this work, we use the $d$-dimensional Poincaré ball model with negative curvature $-c$ ($c > 0$): $\mathbb{B}^{d,c} = \{\mathbf{x} \in \mathbb{R}^d : \|\mathbf{x}\|^2 < \frac{1}{c}\}$, where $\|\cdot\|$ denotes the $L^2$ norm.", "For each point $\mathbf{x} \in \mathbb{B}^{d,c}$, the tangent space $T^c_{\mathbf{x}}$ is a $d$-dimensional vector space containing all possible directions of paths in $\mathbb{B}^{d,c}$ leaving from $\mathbf{x}$.", "The tangent space $T^c_{\mathbf{x}}$ maps to $\mathbb{B}^{d,c}$ via the exponential map (Figure 2), and conversely, the logarithmic map maps $\mathbb{B}^{d,c}$ to $T^c_{\mathbf{x}}$.", "In particular, we have closed-form expressions for these maps at the origin: $\exp^c_0(\mathbf{v}) = \tanh(\sqrt{c}\,\|\mathbf{v}\|)\,\frac{\mathbf{v}}{\sqrt{c}\,\|\mathbf{v}\|}$, (1) $\log^c_0(\mathbf{y}) = \mathrm{arctanh}(\sqrt{c}\,\|\mathbf{y}\|)\,\frac{\mathbf{y}}{\sqrt{c}\,\|\mathbf{y}\|}$. (2)", "Vector addition is not well-defined in hyperbolic space (adding two points in the Poincaré ball might result in a point outside the ball).", "Instead, Möbius addition $\oplus^c$ (Ganea et al., 2018) provides an analogue to Euclidean addition for hyperbolic space.", "We give its closed-form expression in Appendix A.1.", "Finally, the hyperbolic distance on $\mathbb{B}^{d,c}$ has the explicit formula: $d^c(\mathbf{x}, \mathbf{y}) = \frac{2}{\sqrt{c}}\,\mathrm{arctanh}(\sqrt{c}\,\|{-\mathbf{x}} \oplus^c \mathbf{y}\|)$. (3)",
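To make Equations (1)-(3) concrete, here is a minimal NumPy sketch of the Poincaré-ball primitives; function names are ours, and the closed form of Möbius addition (which the paper defers to Appendix A.1) follows Ganea et al. (2018).

```python
import numpy as np

def expmap0(v, c):
    """Exponential map at the origin, Eq. (1): tangent vector -> Poincare ball."""
    norm = max(np.linalg.norm(v), 1e-10)          # guard against zero vectors
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def logmap0(y, c):
    """Logarithmic map at the origin, Eq. (2): Poincare ball -> tangent space."""
    norm = max(np.linalg.norm(y), 1e-10)
    arg = min(np.sqrt(c) * norm, 1.0 - 1e-7)      # keep arctanh in its domain
    return np.arctanh(arg) * y / (np.sqrt(c) * norm)

def mobius_add(x, y, c):
    """Mobius addition (Ganea et al., 2018): hyperbolic analogue of x + y."""
    xy, x2, y2 = x @ y, x @ x, y @ y
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    return num / (1 + 2 * c * xy + c ** 2 * x2 * y2)

def hyp_distance(x, y, c):
    """Hyperbolic distance on the Poincare ball, Eq. (3)."""
    arg = min(np.sqrt(c) * np.linalg.norm(mobius_add(-x, y, c)), 1.0 - 1e-7)
    return 2.0 / np.sqrt(c) * np.arctanh(arg)
```

Because both maps are taken at the origin, all trainable parameters can live in the tangent space and be mapped into the ball on the fly, which is how the paper later avoids Riemannian optimization.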
"The goal of this work is to learn parsimonious hyperbolic embeddings that can encode complex logical patterns such as symmetry, anti-symmetry, or inversion while preserving latent hierarchies.", "Our model, ATTH, (1) learns KG embeddings in hyperbolic space in order to preserve hierarchies (Section 4.1), (2) uses a class of hyperbolic isometries parameterized by compositions of Givens transformations to encode logical patterns (Section 4.2), and (3) combines these isometries with hyperbolic attention (Section 4.3).", "We describe the full model in Section 4.4.", "As described, hyperbolic embeddings enable us to represent hierarchies even when we limit ourselves to low-dimensional spaces.", "In fact, two-dimensional hyperbolic space can represent any tree with arbitrarily small error (Sala et al., 2018).", "It is important to set the curvature of the hyperbolic space correctly.", "This parameter provides flexibility to the model, as it determines whether to embed relations into a more curved hyperbolic space (more tree-like), or into a flatter, more Euclidean-like geometry.", "For each relation, we learn a relation-specific absolute curvature $c_r$, enabling us to represent a variety of hierarchies.", "As we show in Section 5.5, fixing, rather than learning, curvatures can lead to significant performance degradation.", "Relationships often satisfy particular properties, such as symmetry: e.g., if (Michelle Obama, married to, Barack Obama) holds, then (Barack Obama, married to, Michelle Obama) does as well.", "These rules are not universal.", "For instance, (Barack Obama, born in, Hawaii) is not symmetric.", "Creating and curating a set of deterministic rules is infeasible for large-scale KGs; instead, embedding methods represent relations as parameterized geometric operations that directly map to logical properties.", "We use two such operations in hyperbolic space: rotations, which effectively capture compositions or anti-symmetric patterns, and reflections, which naturally encode symmetric patterns.", "Rotations Rotations have been successfully used to encode compositions in complex space with the RotatE model (Sun et al., 2019); we lift these to hyperbolic space.", "Compared to translations or tensor factorization approaches, which can only infer some logical patterns, rotations can simultaneously model and infer inversion, composition, symmetric or anti-symmetric patterns.", "Reflections These isometries reflect along a fixed subspace.", "While some rotations can represent symmetric relations (more specifically $\pi$-rotations), any reflection can naturally represent symmetric relations, since their second power is the identity.", "They provide a way to fill in missing entries in symmetric triples, by applying the same operation to both the tail and the head entity.", "For instance, by modelling sibling of with a reflection, we can directly infer the reversed counterpart of any observed sibling of triple.", "Parameterization Unlike RotatE, which models rotations via unitary complex numbers, we learn relationship-specific isometries using Givens transformations, $2 \times 2$ matrices commonly used in numerical linear algebra.", "Let $\Theta_r := (\theta_{r,i})_{i \in \{1, \ldots, \frac{d}{2}\}}$ and $\Phi_r := (\phi_{r,i})_{i \in \{1, \ldots, \frac{d}{2}\}}$ denote relation-specific parameters.", "Using an even number of dimensions $d$, our model parameterizes rotations and reflections with block-diagonal matrices of the form: $\mathrm{Rot}(\Theta_r) = \mathrm{diag}(G^+(\theta_{r,1}), \ldots, G^+(\theta_{r,\frac{d}{2}}))$, (4) $\mathrm{Ref}(\Phi_r) = \mathrm{diag}(G^-(\phi_{r,1}), \ldots, G^-(\phi_{r,\frac{d}{2}}))$, (5) where $G^{\pm}(\theta) := \begin{pmatrix} \cos(\theta) & \mp\sin(\theta) \\ \sin(\theta) & \pm\cos(\theta) \end{pmatrix}$. (6)",
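The block-diagonal form in Equations (4)-(6) never needs to materialize a full $d \times d$ matrix: each $2 \times 2$ Givens block acts on one coordinate pair, so applying the operator costs O(d). A small sketch (names are ours):

```python
import numpy as np

def givens_transform(x, theta, reflection=False):
    """Apply diag(G(theta_1), ..., G(theta_{d/2})) to x in O(d) time.

    G+(t) = [[cos t, -sin t], [sin t,  cos t]]   (rotation, Eq. 4)
    G-(t) = [[cos t,  sin t], [sin t, -cos t]]   (reflection, Eq. 5)
    """
    pairs = x.reshape(-1, 2)                     # pair up coordinates
    cos, sin = np.cos(theta), np.sin(theta)      # theta has shape (d/2,)
    if reflection:
        out0 = cos * pairs[:, 0] + sin * pairs[:, 1]
        out1 = sin * pairs[:, 0] - cos * pairs[:, 1]
    else:
        out0 = cos * pairs[:, 0] - sin * pairs[:, 1]
        out1 = sin * pairs[:, 0] + cos * pairs[:, 1]
    return np.stack([out0, out1], axis=-1).reshape(-1)
```

Note that each reflection block squares to the identity, which is exactly why reflections are natural encoders of symmetric relations.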
"Rotations and reflections of this form are hyperbolic isometries (distance-preserving).", "We can therefore directly apply them to hyperbolic embeddings while preserving the underlying geometry.", "Additionally, these transformations are computationally efficient and can be computed in linear time in the dimension.", "We illustrate two-dimensional isometries in both Euclidean and hyperbolic spaces in Figure 3.", "Of our two classes of hyperbolic isometries, one or the other may better represent a particular relation.", "To handle this, we use an attention mechanism to learn the right isometry.", "Thus we can represent symmetric, anti-symmetric or mixed-behaviour relations (i.e., neither symmetric nor anti-symmetric) as a combination of rotations and reflections.", "Let $\mathbf{x}^H$ and $\mathbf{y}^H$ be hyperbolic points (e.g., reflection and rotation embeddings), and $\mathbf{a}$ be an attention vector.", "Our approach maps hyperbolic representations to tangent space representations, $\mathbf{x}^E = \log^c_0(\mathbf{x}^H)$ and $\mathbf{y}^E = \log^c_0(\mathbf{y}^H)$, and computes attention scores: $(\alpha_{\mathbf{x}}, \alpha_{\mathbf{y}}) = \mathrm{Softmax}(\mathbf{a}^T \mathbf{x}^E, \mathbf{a}^T \mathbf{y}^E)$. (7)", "We then compute a weighted average using the recently proposed tangent space average (Chami et al., 2019; Liu et al., 2019): $\mathrm{Att}(\mathbf{x}^H, \mathbf{y}^H; \mathbf{a}) := \exp^c_0(\alpha_{\mathbf{x}} \mathbf{x}^E + \alpha_{\mathbf{y}} \mathbf{y}^E)$. (8)", "We have all of the building blocks for ATTH, and can now describe the model architecture.", "Let $(\mathbf{e}^H_v)_{v \in \mathcal{V}}$ and $(\mathbf{r}^H_r)_{r \in \mathcal{R}}$ denote entity and relationship hyperbolic embeddings, respectively.", "For a triple $(h, r, t) \in \mathcal{V} \times \mathcal{R} \times \mathcal{V}$, ATTH applies relation-specific rotations (Equation 4) and reflections (Equation 5) to the head embedding, $\mathbf{q}^H_{\mathrm{Rot}} = \mathrm{Rot}(\Theta_r)\,\mathbf{e}^H_h$ and $\mathbf{q}^H_{\mathrm{Ref}} = \mathrm{Ref}(\Phi_r)\,\mathbf{e}^H_h$; the two candidates are then combined with relation-specific attention and translated by the relation embedding, $Q(h, r) = \mathrm{Att}(\mathbf{q}^H_{\mathrm{Rot}}, \mathbf{q}^H_{\mathrm{Ref}}; \mathbf{a}_r) \oplus^{c_r} \mathbf{r}^H_r$. (9)", "Intuitively, rotations and reflections encode logical patterns, while translations capture tree-like structures by moving between levels of the hierarchy.", "Finally, query embeddings are compared to target tail embeddings via the hyperbolic distance (Equation 3).", "The resulting scoring function is: $s(h, r, t) = -d^{c_r}(Q(h, r), \mathbf{e}^H_t)^2 + b_h + b_t$, (10) where $(b_v)_{v \in \mathcal{V}}$ are entity biases which act as margins in the scoring function (Tifrea et al., 2019; Balazevic et al., 2019).", "The model parameters are then $\{(\Theta_r, \Phi_r, \mathbf{r}^H_r, \mathbf{a}_r, c_r)_{r \in \mathcal{R}}, (\mathbf{e}^H_v, b_v)_{v \in \mathcal{V}}\}$.", "Note that the total number of parameters in ATTH is $O(|\mathcal{V}| d)$, similar to traditional models that do not use attention or geometric operations.", "The extra cost is proportional to the number of relations, which is usually much smaller than the number of entities.", "(1) In low dimensions, we expect hyperbolic embeddings to offer better representations for hierarchical relations than their Euclidean counterparts (Section 5.2).", "(2) We expect the performance of relation-specific geometric operations to vary based on the relation's logical patterns (Section 5.3).", "(3) In cases where the relations are neither purely symmetric nor anti-symmetric, we anticipate that hyperbolic attention outperforms the models which are based solely on reflections or rotations (Section 5.4).", "Finally, in high dimensions, we expect hyperbolic models with trainable curvature to learn the best geometry and perform similarly to their Euclidean analogues (Section 5.5).",
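Combining the pieces of Sections 4.2-4.4, here is a sketch of the full ATTH query and score for one triple, reusing `givens_transform`, `expmap0`, `logmap0`, `mobius_add` and `hyp_distance` from the sketches above; the assembly is our reading of Equations (7)-(10), not the released implementation.

```python
import numpy as np

def atth_score(e_h, e_t, theta_r, phi_r, r_emb, a_r, c_r, b_h, b_t):
    """Score one triple (h, r, t), following Eqs. (7)-(10)."""
    # Relation-specific isometries applied to the head embedding (Eqs. 4-5).
    q_rot = givens_transform(e_h, theta_r, reflection=False)
    q_ref = givens_transform(e_h, phi_r, reflection=True)

    # Hyperbolic attention: score both candidates in the tangent space (Eq. 7).
    x_e, y_e = logmap0(q_rot, c_r), logmap0(q_ref, c_r)
    logits = np.array([a_r @ x_e, a_r @ y_e])
    alpha = np.exp(logits - logits.max())
    alpha /= alpha.sum()

    # Tangent-space average mapped back to the ball (Eq. 8), then translated
    # by the relation embedding via Mobius addition (Eq. 9).
    att = expmap0(alpha[0] * x_e + alpha[1] * y_e, c_r)
    q = mobius_add(att, r_emb, c_r)

    # Negative squared hyperbolic distance to the tail, plus biases (Eq. 10).
    return -hyp_distance(q, e_t, c_r) ** 2 + b_h + b_t
```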
"Datasets We evaluate our approach on the link prediction task using three standard competition benchmarks, namely WN18RR (Bordes et al., 2013; Dettmers et al., 2018), FB15k-237 (Bordes et al., 2013; Toutanova and Chen, 2015) and YAGO3-10 (Mahdisoltani et al., 2013).", "WN18RR is a subset of WordNet containing 11 lexical relationships between 40,943 word senses, and has a natural hierarchical structure, e.g., (car, hypernym of, sedan).", "FB15k-237 is a subset of Freebase, a collaborative KB of general world knowledge.", "FB15k-237 has 14,541 entities and 237 relationships, some of which are non-hierarchical, such as born-in or nationality, while others have natural hierarchies, such as part-of (for organizations).", "YAGO3-10 is a subset of YAGO3, containing 123,182 entities and 37 relations, where most relations provide descriptions of people.", "Some relationships have a hierarchical structure such as playsFor or actedIn, while others induce logical patterns, like isMarriedTo.", "For each KG, we follow the standard data augmentation protocol by adding inverse relations (Lacroix et al., 2018) to the datasets.", "Additionally, we estimate the global graph curvature $\xi_G$ (Gu et al., 2019) (see Appendix A.2 for more details), which is a distance-based measure of how close a given graph is to being a tree.", "We summarize the datasets' statistics in Table 1.", "Table 2: Link prediction results for low-dimensional embeddings ($d = 32$) in the filtered setting. Each cell block reports MRR / H@1 / H@3 / H@10.
Space    Model      | WN18RR              | FB15k-237           | YAGO3-10
R^d      RotatE     | .387 .330 .417 .491 | .290 .208 .316 .458 | -    -    -    -
         MuRE       | .458 .421 .471 .525 | .313 .226 .340 .489 | .283 .187 .317 .478
C^d      ComplEx-N3 | .420 .390 .420 .460 | .294 .211 .322 .463 | .336 .259 .367 .484
B^{d,1}  MuRP       | .465 .420 .484 .544 | .323 .235 .353 .501 | .230 .150 .247 .392
R^d      REFE       | .455 .419 .470 .521 | .302 .216 .330 .474 | .370 .289 .403 .527
         ROTE       | .463 .426 .477 .529 | .307 .220 .337 .482 | .381 .295 .417 .548
         ATTE       | .456 .419 .471 .526 | .311 .223 .339 .488 | .374 .290 .410 .537
B^{d,c}  REFH       | .447 .408 .464 .518 | .312 .224 .342 .489 | .381 .302 .415 .530
         ROTH       | .472 .428 .490 .553 | .314 .223 .346 .497 | .393 .307 .435 .559
         ATTH       | .466 .419 .484 .551 | .324 .236 .354 .501 | .397 .310 .437 .566", "Baselines We compare our method to SotA models, including MuRP (Balazevic et al., 2019), MuRE (the Euclidean analogue of MuRP), RotatE (Sun et al., 2019), ComplEx-N3 (Lacroix et al., 2018) and TuckER (Balazevic et al., 2019).", "Baseline numbers in high dimensions (Table 5) are taken from the original papers, while baseline numbers in the low-dimensional setting (Table 2) are computed using open-source implementations of each model.", "In particular, we run hyper-parameter searches over the same parameters as the ones in the original papers to compute baseline numbers in the low-dimensional setting.", "Ablations To analyze the benefits of hyperbolic geometry, we evaluate the performance of ATTE, which is equivalent to ATTH with curvatures set to zero.", "Additionally, to better understand the role of attention, we report scores for variants of ATT E/H using only rotations (ROT E/H) or reflections (REF E/H).", "Evaluation metrics At test time, we use the scoring function in Equation 10 to rank the correct tail or head entity against all possible entities, and use inverse relations for head prediction (Lacroix et al., 2018).", "Similar to previous work, we compute two ranking-based metrics: (1) mean reciprocal rank (MRR), which measures the mean of inverse ranks assigned to correct entities, and (2) hits at K (H@K, $K \in \{1, 3, 10\}$), which measures the proportion of correct triples among the top $K$ predicted triples.", "We follow the standard evaluation protocol in the filtered setting (Bordes et al., 2013): all true triples in the KG are filtered out during evaluation, since predicting a low rank for these triples should not be penalized.",
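A compact sketch of the filtered ranking metrics just defined; `known` plays the role of the filter set of true triples, and all names are illustrative.

```python
import numpy as np

def filtered_rank(scores, gold_idx, candidates, known, h, r):
    """Rank of the gold tail after masking other true tails (filtered setting)."""
    scores = scores.copy()
    for i, t in enumerate(candidates):
        if i != gold_idx and (h, r, t) in known:
            scores[i] = -np.inf            # other true triples are not penalized
    return 1 + int(np.sum(scores > scores[gold_idx]))

def summarize(ranks, ks=(1, 3, 10)):
    """MRR and H@K over a list of per-triple ranks."""
    ranks = np.asarray(ranks, dtype=float)
    out = {"MRR": float(np.mean(1.0 / ranks))}
    out.update({f"H@{k}": float(np.mean(ranks <= k)) for k in ks})
    return out
```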
"Training procedure and implementation We train ATTH by minimizing the full cross-entropy loss with uniform negative sampling, where negative examples for a triple $(h, r, t)$ are sampled uniformly from all possible triples obtained by perturbing the tail entity: $\mathcal{L} = \sum_{t' \sim U(\mathcal{V})} \log(1 + \exp(y_{t'} \cdot s(h, r, t')))$, (11) where $y_{t'} = -1$ if $t' = t$ and $y_{t'} = 1$ otherwise.", "Since optimization in hyperbolic space is practically challenging, we instead define all parameters in the tangent space at the origin, optimize embeddings using standard Euclidean techniques, and use the exponential map to recover the hyperbolic parameters (Chami et al., 2019).", "We provide more details on tangent space optimization in Appendix A.4.",
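A sketch of the objective in Equation (11) with uniform tail perturbation; the number of negatives per triple is an assumption (the paper selects it by grid search), and `score_fn` stands for the ATTH scoring function above.

```python
import numpy as np

def triple_loss(score_fn, h, r, t, entity_ids, num_neg=50, rng=None):
    """Cross-entropy loss with uniform negative sampling, Eq. (11)."""
    rng = rng or np.random.default_rng()
    tails = [t] + list(rng.choice(entity_ids, size=num_neg))   # gold + negatives
    loss = 0.0
    for t_prime in tails:
        y = -1.0 if t_prime == t else 1.0     # y_{t'} = -1 only for the true tail
        loss += np.logaddexp(0.0, y * score_fn(h, r, t_prime))  # log(1 + exp(.))
    return loss
```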
"We conducted a grid search to select the learning rate, optimizer, negative sample size, and batch size, using the validation set to select the best hyperparameters.", "Our best model hyperparameters are detailed in Appendix A.3.", "We conducted all our experiments on NVIDIA Tesla P100 GPUs and make our implementation publicly available (code available at https://github.com/tensorflow/neural-structured-learning/tree/master/research/kg_hyp_emb).", "We first evaluate our approach in the low-dimensional setting for $d = 32$, which is approximately one order of magnitude smaller than SotA Euclidean methods.", "Table 2 compares the performance of ATTH to that of other baselines, including the recent hyperbolic (but not rotation-based) MuRP model.", "In low dimensions, hyperbolic embeddings offer much better representations for hierarchical relations, confirming our hypothesis.", "ATTH improves over previous Euclidean and hyperbolic methods by 0.7% and 6.1% points in MRR on WN18RR and YAGO3-10, respectively.", "Both datasets have multiple hierarchical relationships, suggesting that the hierarchical structure imposed by hyperbolic geometry leads to better embeddings.", "On FB15k-237, ATTH and MuRP achieve similar performance, both improving over Euclidean baselines.", "We conjecture that translations are sufficient to model relational patterns in FB15k-237.", "To understand the role of dimensionality, we also conduct experiments on WN18RR against SotA methods under varied low-dimensional settings (Figure 4).", "We include error bars for our method with average MRR and standard deviation computed over 10 runs.", "Our approach consistently outperforms all baselines, suggesting that hyperbolic embeddings still attain high accuracy across a broad range of dimensions.", "Additionally, we measure performance per relation on WN18RR in Table 3 to understand the benefits of hyperbolic geometry on hierarchical relations.", "We report the Krackhardt hierarchy score (Khs$_G$) (Balazevic et al., 2019) and the estimated curvature per relation (see Appendix A.2 for more details).", "Table 3: Comparison of H@10 for WN18RR relations.
Relation                    | Khs_G | xi_G  | ROTE | ROTH | Improvement
member meronym              | 1.00  | -2.90 | .320 | .399 | 24.7%
hypernym                    | 1.00  | -2.46 | .237 | .276 | 16.5%
has part                    | 1.00  | -1.43 | .291 | .346 | 18.9%
instance hypernym           | 1.00  | -0.82 | .488 | .520 | 6.56%
member of domain region     | 1.00  | -0.78 | .385 | .365 | -5.19%
member of domain usage      | 1.00  | -0.74 | .458 | .438 | -4.37%
synset domain topic of      | 0.99  | -0.69 | .425 | .447 | 5.17%
also see                    | 0.36  | -2.09 | .634 | .705 | 11.2%
derivationally related form | 0.07  | -3.84 | .960 | .968 | 0.83%
similar to                  | 0.07  | -1.00 | 1.00 | 1.00 | 0.00%
verb group                  | 0.07  | -0.50 | .974 | .974 | 0.00%", "We consider a relation to be hierarchical when its corresponding graph is close to tree-like (low curvature, high Khs$_G$).", "We observe that hyperbolic embeddings offer much better performance on hierarchical relations such as hypernym or has part, while Euclidean and hyperbolic embeddings have similar performance on non-hierarchical relations such as verb group.", "We also plot the learned curvature per relation versus the embedding dimension in Figure 5b.", "We note that the learned curvature in low dimensions directly correlates with the estimated graph curvature $\xi_G$ in Table 3, suggesting that the model with learned curvatures learns more curved embedding spaces for tree-like relations.", "Finally, we observe that MuRP achieves lower performance than MuRE on YAGO3-10, while ATTH improves over ATTE by 2.3% in MRR.", "This suggests that trainable curvature is critical to learn embeddings with the right amount of curvature, while fixed curvature might degrade performance.", "We elaborate further on this point in Section 5.5.", "In our experiments, we find that rotations work well on WN18RR, which contains multiple hierarchical and anti-symmetric relations, while reflections work better for YAGO3-10 (Table 5).", "To better understand the mechanisms behind these observations, we analyze two specific patterns: relation symmetry and anti-symmetry.", "We report performance per relation on a subset of YAGO3-10 relations in Table 4.", "Table 4: Comparison of geometric transformations on a subset of YAGO3-10 relations.
Relation       | Anti-symmetric | Symmetric | ROTH | REFH | ATTH
hasNeighbor    | ✗ | ✓ | .750 | 1.00 | 1.00
isMarriedTo    | ✗ | ✓ | .941 | .941 | 1.00
actedIn        | ✓ | ✗ | .145 | .110 | .150
hasMusicalRole | ✓ | ✗ | .431 | .375 | .458
directed       | ✓ | ✗ | .500 | .450 | .567
graduatedFrom  | ✓ | ✗ | .262 | .167 | .274
playsFor       | ✓ | ✗ | .671 | .642 | .664
wroteMusicFor  | ✓ | ✗ | .281 | .188 | .266
hasCapital     | ✓ | ✗ | .692 | .731 | .731
dealsWith      | ✗ | ✗ | .286 | .286 | .429
isLocatedIn    | ✗ | ✗ | .404 | .399 | .420", "We categorize relations into symmetric, anti-symmetric, or neither symmetric nor anti-symmetric categories using data statistics.", "More concretely, we consider a relation to satisfy a logical pattern when the logical condition is satisfied by most of the triples (e.g., a relation $r$ is symmetric if, for most KG triples $(h, r, t)$, $(t, r, h)$ is also in the KG).", "We observe that reflections encode symmetric relations particularly well, while rotations are well suited for anti-symmetric relations.", "This confirms our intuition, and the motivation for our approach, that particular geometric properties capture different kinds of logical properties.", "One advantage of using relation-specific transformations is that each relation can learn the right geometric operators based on the logical properties it has to satisfy.", "In particular, we observe that in both low- and high-dimensional settings, attention-based models can recover the performance of the best transformation on all datasets (Tables 2 and 5).", "Additionally, per-relationship results on YAGO3-10 in Table 4 suggest that ATTH indeed recovers the best geometric operation.",
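The data-driven categorization behind Table 4 fits in a few lines; the 90% threshold for "most of the triples" is our assumption, as the paper does not state one.

```python
def classify_relation(triples, r, threshold=0.9):
    """Categorize r as symmetric / anti-symmetric / neither from KG statistics."""
    pairs = {(h, t) for h, rel, t in triples if rel == r}
    if not pairs:
        return "unknown"
    frac = sum((t, h) in pairs for h, t in pairs) / len(pairs)
    if frac >= threshold:
        return "symmetric"        # (t, r, h) holds for most observed (h, r, t)
    if frac <= 1.0 - threshold:
        return "anti-symmetric"   # the reverse almost never holds
    return "neither"
```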
"Furthermore, for relations that are neither symmetric nor anti-symmetric, we find that ATTH can outperform rotations and reflections, suggesting that combining multiple operators with attention can learn more expressive operators to model mixed logical patterns.", "In other words, attention-based transformations alleviate the need to conduct experiments with multiple geometric transformations by simply allowing the model to choose which one is best for a given relation.", "In high dimensions (Table 5), we compare against a variety of other models and achieve new SotA results on WN18RR and YAGO3-10, and third-best results on FB15k-237.", "Table 5: Link prediction results for high-dimensional embeddings (best for $d \in \{200, 400, 500\}$) in the filtered setting. Each cell block reports MRR / H@1 / H@3 / H@10.
Space    Model      | WN18RR              | FB15k-237           | YAGO3-10
R^d      DistMult   | .430 .390 .440 .490 | .241 .155 .263 .419 | .340 .240 .380 .540
         ConvE      | .430 .400 .440 .520 | .325 .237 .356 .501 | .440 .350 .490 .620
         TuckER     | .470 .443 .482 .526 | .358 .266 .394 .544 | -    -    -    -
         MuRE       | .475 .436 .487 .554 | .336 .245 .370 .521 | .532 .444 .584 .694
C^d      ComplEx-N3 | .480 .435 .495 .572 | .357 .264 .392 .547 | .569 .498 .609 .701
         RotatE     | .476 .428 .492 .571 | .338 .241 .375 .533 | .495 .402 .550 .670
H^d      Quaternion | .488 .438 .508 .582 | .348 .248 .382 .550 | -    -    -    -
B^{d,1}  MuRP       | .481 .440 .495 .566 | .335 .243 .367 .518 | .354 .249 .400 .567
R^d      REFE       | .473 .430 .485 .561 | .351 .256 .390 .541 | .577 .503 .621 .712
         ROTE       | .494 .446 .512 .585 | .346 .251 .381 .538 | .574 .498 .621 .711
         ATTE       | .490 .443 .508 .581 | .351 .255 .386 .543 | .575 .500 .621 .709
B^{d,c}  REFH       | .461 .404 .485 .568 | .346 .252 .383 .536 | .576 .502 .619 .711
         ROTH       | .496 .449 .514 .586 | .344 .246 .380 .535 | .570 .495 .612 .706
         ATTH       | .486 .443 .499 .573 | .348 .252 .384 .540 | .568 .493 .612 .702", "As we expected, when the embedding dimension is large, Euclidean and hyperbolic embedding methods perform similarly across all datasets.", "We explain this behavior by noting that when the dimension is sufficiently large, both Euclidean and hyperbolic spaces have enough capacity to represent complex hierarchies in KGs.", "This is further supported by Figure 5b, which shows the learned absolute curvature versus the dimension.", "We observe that curvatures are close to zero in high dimensions, confirming our expectation that ROTH with trainable curvatures learns a roughly Euclidean geometry in this setting.", "In contrast, fixed curvature degrades performance in high dimensions (Figure 5a), confirming the importance of trainable curvatures and their impact on precision and capacity (previously studied by (Sala et al., 2018)).", "Additionally, we show the distribution of the embeddings' norms in the Appendix (Figure 7).", "Fixed curvature results in embeddings being clustered near the boundary of the ball, while trainable curvatures adjust the embedding space to better distribute points throughout the ball.", "Precision issues that might arise with fixed curvature could also explain MuRP's low performance in high dimensions.", "Trainable curvatures allow ROTH to perform as well as or better than previous methods in both low and high dimensions.", "In Figure 6, we visualize the embeddings learned by ROTE versus ROTH for a sub-tree of the organism entity in WN18RR.", "To better visualize the hierarchy, we apply $k$ inverse rotations for all nodes at level $k$ in the tree.", "In contrast to ROTE, ROTH preserves the tree structure in the embedding space.", "Furthermore, we note that ROTE cannot simultaneously preserve the tree structure and keep non-neighboring nodes far from each other.", "For instance, virus should be far from male, but preserving the tree structure (by going one level down in the tree) while making these two nodes far from each other is difficult in Euclidean space.",
"[Figure 6(a): ROTE embeddings.]", "In hyperbolic space, however, we observe that going one level down in the tree is achieved by translating embeddings towards the left.", "This pattern essentially illustrates the translation component in ROTH, allowing the model to simultaneously preserve hierarchies while keeping non-neighbouring nodes far from each other.", "We introduce ATTH, a hyperbolic KG embedding model that leverages the expressiveness of hyperbolic space and attention-based geometric transformations to learn improved KG representations in low dimensions.", "ATTH learns embeddings with trainable hyperbolic curvatures, allowing it to learn the right geometry for each relationship and generalize across multiple embedding dimensions.", "ATTH achieves new SotA results on WN18RR and YAGO3-10, real-world KGs which exhibit hierarchical structures.", "Future directions for this work include exploring other tasks that might benefit from hyperbolic geometry, such as hypernym detection.", "The proposed attention-based transformations can also be extended to other geometric operations.", "We thank Avner May for their helpful feedback and discussions.", "We gratefully acknowledge the support of DARPA under Nos. FA86501827865 (SDH) and FA86501827882 (ASED); NIH under No. U54EB020405 (Mobilize); NSF under Nos. CCF1763315 (Beyond Sparsity), CCF1563078 (Volume to Velocity), and 1937301 (RTML); ONR under No. N000141712266 (Unifying Weak Supervision); the Moore Foundation, NXP, Xilinx, LETI-CEA, Intel, IBM, Microsoft, NEC, Toshiba, TSMC, ARM, Hitachi, BASF, Accenture, Ericsson, Qualcomm, Analog Devices, the Okawa Foundation, American Family Insurance, Google Cloud, Swiss Re, the HAI-AWS Cloud Credits for Research program, TOTAL, and members of the Stanford DAWN project: Teradata, Facebook, Google, Ant Financial, NEC, VMWare, and Infosys.", "The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of DARPA, NIH, ONR, or the U.S. Government." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "method", "result", "result", "result", "result", "result", "result", "objective", "result", "result", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "other", "other", "other", "abstain", "other", "method", "other", "other", "other", "abstain", "other", "other", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other" ]
[ "Event Detection (ED) is a fundamental task in automatically structuring texts.", "Due to the small scale of training data, previous methods perform poorly on unseen/sparsely labeled trigger words and are prone to overfitting densely labeled trigger words.", "To address the issue, we propose a novel Enrichment Knowledge Distillation (EKD) model to leverage external open-domain trigger knowledge to reduce the in-built biases to frequent trigger words in annotations.", "Experiments on benchmark ACE2005 show that our model outperforms nine strong baselines, is especially effective for unseen/sparsely labeled trigger words.", "The source code is released on https://github.com/shuaiwa16/ekd.git.", "Event Detection (ED) aims at detecting trigger words in sentences and classifying them into pre-defined event types, which shall benefit numerous applications, such as summarization (Li et al., 2019) and reading comprehension (Huang et al., 2019).", "For instance, in S1 of Figure 1, ED aims to identify the word fire as the event trigger and classify its event type as Attack .", "Mainstream researches (Chen et al., 2015; Liu et al., 2017, 2018b; Liao and Grishman, 2010b; Zhao et al., 2018; Liu et al., 2018a) focus on the second step event type disambiguation via lexical and contextual features.", "However, it is also crucial to identify trigger words correctly as the preliminary step.", "Trigger word identification is a non-trivial task, which suffers from the long tail issue.", "Take the benchmark ACE2005 as an example: trigger words with frequency less than 5 account for 78.2% of the Corresponding author.", "total.", "The long tail issue makes supervised methods (Li et al., 2013; Yang et al., 2019) prone to overfitting and perform poorly on unseen/sparsely labeled triggers (Lu et al., 2019).", "Automatically generating more training instances seems to be a solution: expanding more instances by bootstrapping (Fer-guson et al., 2018; Zhang et al., 2019; Cao et al., 2019) and expending more data from distantly supervised methods (Chen et al., 2017; Wang et al., 2019a).", "However, the performance of these methods on unseen/sparsely labeled trigger words is still unsatisfied, as shown in Table 1.", "We argue that these methods either lead to the homogeneity of the generated corpus, or subject to the low coverage of knowledge base.", "More importantly, the expanded data itself is unevenly distributed, and we cannot expect to alleviate the long tail problem with built-in bias data.", "In the paper, we empower the model with external knowledge called Open-Domain Trigger Knowledge to provides extra semantic support on unseen/sparsely labeled trigger words and improve trigger identification.", "Open-Domain Trig-Table 1: F score on unseen/sparsely and densely labeled triggers.", "DMBERT (Chen et al., 2015) refers to a supervised-only model with dynamic multi-pooling to capture contextual features; BOOTSTRAP (He and Sun, 2017) expands training data via bootstrapping.", "DGBERT expands training data with Freebase (Chen et al., 2017).", "ger Knowledge is defined as a prior that specifies which words can trigger events without subject to pre-defined event types and the domain of texts.", "As shown in S1 of Figure 1, open-domain trigger knowledge can identify that hearing and fire as event triggers, even if hearing does not fit into any pre-defined event types in ACE2005.", "With open-domain trigger knowledge, we are able to discover unseen/sparsely triggers from the large-scale unlabeled corpus, which will 
"However, it is challenging to incorporate open-domain trigger knowledge into ED: triggers identified by open-domain trigger knowledge do not always fit well with in-domain labels, and thus cannot be directly adopted as the trigger identification result.", "For example, in S4 of Figure 1, open-domain trigger knowledge argues that exploded is the trigger word, while under the labeling rules of ACE2005, intifada is the trigger word.", "Specifically, we propose an Enrichment Knowledge Distillation (EKD) model to efficiently distill open-domain trigger knowledge from both labeled and abundant unlabeled corpora.", "We first apply a light-weight pipeline to equip unlabeled sentences with trigger knowledge from WordNet.", "The method is not limited to specific domains, and thus can guarantee the coverage of trigger words.", "Then, given the knowledge-enhanced data as well as ED annotations, we train a teacher model for better performance; meanwhile, a student model is trained to mimic the teacher's outputs using data without knowledge enhancement, which conforms to the distribution during inference.", "We further promote the generalization of the model by adding noise to the inputs of the student model.", "We evaluate our model on the ACE2005 ED benchmark.", "Our method surpasses nine strong baselines, and is especially effective for unseen/sparsely labeled trigger words.", "Experiments also show that the proposed EKD architecture is very flexible, and can be conveniently adapted to distill other knowledge, such as entity, syntactic and argument knowledge.", "Our contributions can be summarized as: To the best of our knowledge, we are the first to leverage the wealth of open-domain trigger knowledge to improve ED.", "We propose a novel teacher-student model (EKD) that can learn from both labeled and unlabeled data, so as to improve ED performance by reducing the in-built biases in annotations.", "Experiments on the benchmark ACE2005 show that our method surpasses nine strong baselines which are also enhanced with knowledge.", "Detailed studies show that our method can be conveniently adapted to distill other knowledge, such as entities.", "Traditional feature-based methods exploit both lexical and global features to detect events (Li et al., 2013).", "As neural networks become popular in NLP (Cao et al., 2018), data-driven methods use various superior models such as DMCNN, DLRNN and PLMEE (Duan et al., 2017; Nguyen and Grishman, 2018; Yang et al., 2019) for end-to-end event detection.", "Recently, weakly-supervised methods (Judea and Strube, 2016; Huang et al., 2017; Zeng et al., 2018; Yang et al., 2018) have been proposed to generate more labeled data.", "(Gabbard et al., 2018) identifies informative snippets of text for expanding annotated data via curated training.", "(Liao and Grishman, 2010a; Ferguson et al., 2018) rely on sophisticated pre-defined rules to bootstrap from parallel news streams.", "(Wang et al., 2019a) limits the data range of adversarial learning to trigger words appearing in labeled data.", "Due to the long tail issue of labeled data and the homogeneity of the generated data, previous methods perform badly on unseen/sparsely labeled data and tend to overfit densely labeled data.", "With open-domain trigger knowledge, our model is able to perceive unseen/sparsely labeled trigger words from abundant unlabeled data, and thus successfully improves the recall of trigger words.",
"Knowledge Distillation, initially proposed by (Hinton et al., 2015), has been widely adopted in NLP to distill external knowledge into the model (Laine and Aila, 2016; Saito et al., 2017; Ruder and Plank, 2018).", "The main idea is to adopt a student model to learn from a robust pre-trained teacher model.", "(Lee et al., 2018; Gong et al., 2018) reinforce the connection between the teacher and student model by singular value decomposition and Laplacian-regularized least squares.", "(Tarvainen and Valpola, 2017; Huang et al., 2018) stabilize the teacher model with a lazily updated mechanism so that the student model is not susceptible to external disturbances.", "(Liu et al., 2019) uses an adversarial imitation approach to enhance the learning procedure.", "Unlike previous methods that relied on golden annotations, our method is able to learn from pseudo labels and effectively extract knowledge from both labeled and unlabeled corpora.", "In this section, we introduce the proposed Enrichment Knowledge Distillation (EKD) model, which leverages open-domain trigger knowledge to improve ED.", "In general, we have a teacher model and a student model.", "The teacher is fully aware of open-domain trigger knowledge, while the student is not equipped with it.", "We make the student model imitate the teacher's predictions to distill the open-domain trigger knowledge into our model.", "Figure 2 illustrates the architecture of the proposed EKD model.", "During training, we first pre-train the teacher model on labeled data, and then force the student model, under the knowledge-absent situation, to generate pseudo labels as good as the teacher model's on both labeled and unlabeled data.", "By increasing the cognitive gap between the teacher and student model, the student model has to learn harder.", "We first introduce how to collect the open-domain trigger knowledge in Knowledge Collection.", "We then illustrate how to exploit the labeled data to pre-train the teacher model in Feature Extraction and Event Prediction.", "Finally, we elaborate on how to force the student model to learn from the teacher model in Knowledge Distillation.", "Given the labeled corpus $L = \{(S_i, Y_i)\}_{i=1}^{N_L}$ and the abundant unlabeled corpus $U = \{S_k\}_{k=N_L+1}^{N_T}$, our goal is to jointly optimize two objectives: 1) maximize the prediction probability $P(Y_i|S_i)$ on the labeled corpus $L$; 2) minimize the discrepancy between the prediction probabilities of the teacher model, $P(Y'_k|S^+_k)$, and the student model, $P(Y'_k|S^-_k)$, on both $L$ and $U$, where $N_T$ stands for the total number of sentences in both labeled and unlabeled data.", "$S^+$ and $S^-$ stand for the enhanced and weakened variants of the raw sentence $S$; we explain them in detail in Section 3.5.",
"$Y = \{y_1, y_2, \ldots, y_n\}$ stands for the golden event type label, where each $y \in Y$ belongs to the 33 event types pre-defined in ACE or a NEGATIVE event type (Chen et al., 2015; Nguyen et al., 2016; Feng et al., 2018).", "$Y'$ is the pseudo label proposed by the pre-trained teacher model.", "Open-domain trigger knowledge elaborates whether a word triggers an event from the perspective of word sense.", "Whether the trigger is densely labeled or unseen/sparsely labeled, open-domain trigger knowledge will identify it without distinction.", "For instance, in S3 of Figure 1, although hacked is a rare word and has not been labeled, judging from word sense, open-domain trigger knowledge successfully identifies hacked as a trigger word.", "We adopt a light-weight pipeline method, called Trigger From WordNet (TFW), to collect open-domain trigger knowledge (Araki and Mitamura, 2018): $S^+ = TFW(S)$. (1)", "TFW uses WordNet as the intermediary.", "It has two steps: 1) disambiguate each word into a WordNet sense, and 2) determine whether that sense triggers an event.", "For the first step, we adopt IMS (Zhong and Ng, 2010) to disambiguate words into word senses in WordNet (Miller et al., 1990).", "We obtain the input features with the POS tagger and dependency parser in Stanford CoreNLP (Manning et al., 2014).", "For the second step, we adopt the simple dictionary-lookup approach proposed in (Araki and Mitamura, 2018) to determine whether a sense triggers an event.", "TFW is not limited to particular domains and is able to provide unlimited candidate triggers.", "With the support of the lexical database, TFW has high efficiency and can be applied to large-scale knowledge collection.", "[Example S5: Troops were trying to break up stone-throwing protests, but not use live fire.]", "Finally, we obtain a total of 733,848 annotated sentences from the New York Times corpus (Sandhaus, 2008) in the first half of 2007.", "The total number of triggers is 2.65 million, with an average of 3.6 triggers per sentence.", "We adopt BERT to obtain the hidden representations for both labeled and unlabeled sentences.", "BERT is a pre-trained language representation model and has achieved SOTA performance on a wide range of tasks, such as question answering and language inference.", "The powerful capability of BERT has also been demonstrated in the ED scenario (Wang et al., 2019a).", "Formally, given the raw sentence $S$ and the knowledge-attending sentence $S^+$, we feed them into BERT respectively, and adopt the sequence output of the last layer as the hidden representation of each word in $S$ and $S^+$.", "After obtaining the hidden representation of sentence $S$, we adopt a fully-connected layer to determine the event type for each word in sentence $S$.", "We use $S^{(i)}$ and $Y^{(i)}$ to denote the $i$-th training sentence and its event types in the labeled corpus $L$.", "We first transform the hidden representation $H$ obtained from Section 3.3 to a result vector $O$, where $O_{ijc}$ represents the probability that the $j$-th word in $S_i$ belongs to the $c$-th event class.", "And then we normalize $O$ by the softmax function to obtain the conditional probability.", "Given the labeled corpus $L = \{(S_i, Y_i)\}_{i=1}^{N_L}$, the optimization objective is the cross-entropy loss: $J_L(\theta) = -\sum_{i=1}^{N_L} \log p(Y^{(i)}|S^{(i)}, \theta)$. (4)",
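A minimal sketch of the encoder and event-prediction head just described: BERT's last-layer token representations fed to a fully-connected classifier over the 33 ACE types plus NEGATIVE. Module and variable names are ours, and `bert-large-cased` is an assumption matching the 24-layer, 1024-dimensional backbone reported in the hyperparameters.

```python
import torch.nn as nn
from transformers import BertModel

class EventTypeHead(nn.Module):
    """BERT encoder + fully-connected layer over 33 ACE types + NEGATIVE."""
    def __init__(self, num_types=34, bert_name="bert-large-cased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_types)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)   # O: (batch, seq_len, num_types)

# J_L (Eq. 4): token-level cross-entropy against the gold event types.
xent = nn.CrossEntropyLoss(ignore_index=-100)   # -100 marks padding positions
```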
"In this section, we distill open-domain trigger knowledge into our model.", "The main idea is to force the student model, with only raw text as input, to generate pseudo labels as good as the teacher model's on both labeled and unlabeled data.", "Formally, given the golden event type $Y$, the objective is: $p(Y|S^+, \theta) = p(Y|S^-, \theta)$, (5) where $p(Y|S^+, \theta)$ and $p(Y|S^-, \theta)$ are the predictions from the teacher and student model, respectively.", "We share the parameters of the teacher and student model.", "The input of the teacher model, $S^+$, is aware of the open-domain trigger knowledge, while the input of the student model, $S^-$, is not.", "We give the detailed construction process of $S^+$ and $S^-$ below.", "Knowledge-attending Sentences ($S^+$) We embed the open-domain trigger knowledge into the sentence via a marking mechanism.", "Specifically, we introduce two symbols, named B-TRI and E-TRI, to mark the beginning and ending boundaries of triggers identified by open-domain trigger knowledge.", "Formally, given the raw sentence $S = \{w_1, w_2, \ldots, w_i, \ldots, w_n\}$ and a trigger $w_i$ identified by open-domain trigger knowledge, the knowledge-attending sentence is $S^+ = \{w_1, w_2, \ldots, \text{B-TRI}, w_i, \text{E-TRI}, \ldots, w_n\}$.", "The marking mechanism works well with our feature extractor BERT (Soares et al., 2019), which is very flexible in embedding knowledge, and can be conveniently adapted to other types of knowledge without heavily-engineered work.", "Note that the newly added symbols lack pre-trained embeddings in BERT.", "Random initialization undermines the semantic meaning of the introduced symbols, where B-TRI indicates the beginning of a trigger and E-TRI the ending.", "We address the issue by fine-tuning BERT on the annotated sentences from Section 3.2.", "Specifically, we adopt the Masked LM task (Devlin et al., 2018) to exploit surrounding words to learn the semantic representations of the introduced symbols (B-TRI and E-TRI), based on the Harris distributional hypothesis (Harris, 1954).", "The mask rate is set to 0.15, and the accuracy on masked words reaches 92.3% after fine-tuning.", "Knowledge-absent Sentences ($S^-$) To make the student model learn harder from the teacher model, we further disturb the input of the student model by randomly masking out triggers identified by open-domain trigger knowledge.", "In this way, the student model has to judge the event type of a trigger word solely based on the surrounding context.", "Formally, given the raw sentence $S = \{w_1, w_2, \ldots, w_i, \ldots, w_n\}$ and a trigger $w_i$ identified by open-domain trigger knowledge, the knowledge-absent sentence is $S^- = \{w_1, w_2, \ldots, \text{[MASK]}, \ldots, w_n\}$.",
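A sketch of the S+/S- construction, assuming `trigger_idx` holds the token positions flagged by the open-domain trigger knowledge; helper names and the masking rate are ours.

```python
import random

def make_teacher_input(tokens, trigger_idx):
    """S+: wrap every knowledge-identified trigger in B-TRI ... E-TRI markers."""
    out = []
    for i, tok in enumerate(tokens):
        out += (["B-TRI", tok, "E-TRI"] if i in trigger_idx else [tok])
    return out

def make_student_input(tokens, trigger_idx, mask_rate=0.5):
    """S-: mask a random subset of the knowledge-identified triggers only."""
    out = list(tokens)
    for i in trigger_idx:
        if random.random() < mask_rate:   # the masking rate is our assumption
            out[i] = "[MASK]"
    return out
```

The paper additionally moves the inserted symbols to the end of the sentence so that S+ and S- stay token-aligned for the KL loss; the sketch keeps them inline only for readability.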
"The masked words are not randomly selected but are chosen from the triggers determined by open-domain trigger knowledge, which avoids optimizing the model only for the non-trigger negative class.", "KL-divergence Loss We move the added symbols to the end of the sentence to ensure strict alignment of the words in $S^+$ and $S^-$, and then we minimize the discrepancy between the conditional probabilities $p(Y|S^-, \theta)$ and $p(Y|S^+, \theta)$ with a KL-divergence loss.", "Given the collection of labeled and unlabeled corpora $T = \{S_k\}_{k=1}^{N_L+N_U}$, the KL-divergence loss is: $J_T(\theta) = KL\big(p(Y|S^+, \theta)\,\|\,p(Y|S^-, \theta)\big) = \sum_{k=1}^{N_L+N_U} p(Y^{(k)}|S^{+(k)}, \theta) \log \frac{p(Y^{(k)}|S^{+(k)}, \theta)}{p(Y^{(k)}|S^{-(k)}, \theta)}$. (6)", "KL divergence is asymmetric in the two distributions.", "We take predictions from knowledge-absent inputs as the approximating distributions and predictions from knowledge-attending inputs as the approximated distributions.", "If we reverse the direction of approximation, the experimental results decline significantly.", "The reason may be that we should ensure the low-confidence predictions approximate the high-confidence predictions.", "The final optimization objective is the integration of the supervised loss on the labeled dataset and the KL-divergence loss on the unlabeled dataset, defined in Equations 4 and 6.", "We stop the gradient of the teacher model when calculating $J_T$ to ensure that the learning flows from teacher to student.", "Since the unlabeled data is much larger than the labeled data, joint training leads the model to quickly overfit the limited labeled data while still underfitting the unlabeled data.", "To handle the issue, we adopt the Training Signal Annealing (TSA) technique proposed in (Xie et al., 2019) to linearly release the training signals of the labeled examples as training progresses.",
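A sketch of the joint training step (Equations 4 and 6): the teacher and student share parameters, the teacher's logits are computed on S+ with gradients stopped, and the student is matched on S- via KL divergence. The batch-dictionary keys are ours, and the simple additive weighting (with weight 1) is our reading of the hyperparameter section.

```python
import torch
import torch.nn.functional as F

def ekd_step(model, labeled, unlabeled, lam=1.0):
    """One joint-training step: J_L + lam * J_T (Eqs. 4 and 6)."""
    # Supervised term on labeled data.
    logits = model(labeled["input_ids"], labeled["attention_mask"])
    j_l = F.cross_entropy(logits.transpose(1, 2), labeled["labels"],
                          ignore_index=-100)

    # Teacher: knowledge-attending input S+, gradient stopped.
    with torch.no_grad():
        t_logits = model(unlabeled["plus_ids"], unlabeled["plus_mask"])
    # Student: knowledge-absent input S-.
    s_logits = model(unlabeled["minus_ids"], unlabeled["minus_mask"])

    # KL(teacher || student), Eq. (6): student log-probs approximate teacher probs.
    j_t = F.kl_div(F.log_softmax(s_logits, dim=-1),
                   F.softmax(t_logits, dim=-1), reduction="batchmean")
    return j_l + lam * j_t
```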
"Datasets For the labeled corpus, we adopt the ACE2005 dataset to evaluate the overall performance.", "ACE2005 contains 13,672 labeled sentences distributed over 599 articles.", "Besides the pre-defined 33 event types, we incorporate an extra Negative event type for non-trigger words.", "Following (Chen et al., 2015), we split ACE2005 into 529/30/40 articles for train/dev/test, respectively.", "Evaluation We report the Precision, Recall and micro-averaged F1 scores as percentages over all 33 events.", "A trigger is considered correct if both its type and offsets match the annotation.", "Hyperparameters For feature extraction, we adopt BERT as our backbone, which has 24 16-head attention layers and a hidden embedding dimension of 1024.", "The batch size of labeled data is 32, and we set the proportion of labeled to unlabeled data to 1:6.", "For most of our experiments, we set the learning rate to 3e-5, the maximum sequence length to 128, and the weight of the KL term in joint training to 1.", "Our model trains on one V100 for half a day.", "The best result appears around 12,500 epochs.", "Balancing performance and training efficiency, we actually use 40,236 unlabeled sentences for knowledge distillation unless otherwise stated.", "All reported results are the averages of ten runs.", "We use Adam as the gradient descent optimizer.", "Baselines As our method incorporates open-domain trigger knowledge, for a fair comparison, we compare our method with two data-driven methods and five state-of-the-art knowledge-enhanced methods, including: DMCNN, which proposes a dynamic multi-pooling layer above a CNN model to improve event detection (Chen et al., 2015).", "DLRNN exploits document information via recurrent neural networks (Duan et al., 2017).", "ANN-S2 exploits argument information to improve ED via supervised attention mechanisms (Liu et al., 2017).", "GMLATT adopts a gated cross-lingual attention to exploit the complementary information conveyed by multilingual data (Liu et al., 2018a).", "GCN-ED exploits structural dependency tree information via graph convolutional networks and entity mention-guided pooling (Nguyen and Grishman, 2018).", "Lu's DISTILL proposes a Δ-learning approach to distill generalization knowledge to handle overfitting (Lu et al., 2019).", "TS-DISTILL exploits the entity ground-truth and uses an adversarial-imitation-based knowledge distillation approach for ED (Liu et al., 2019).", "AD-DMBERT adopts an adversarial imitation model to expand training data (Wang et al., 2019b).", "DRMM employs an alternative dual attention mechanism to effectively integrate image information into ED (Tong et al., 2020).", "The last two baselines both use BERT as the feature extractor.", "Table 2 presents the overall performance of the proposed approach on ACE2005.", "As shown in Table 2, EKD (ours) outperforms various state-of-the-art models, showing the superiority of open-domain trigger knowledge and the effectiveness of the proposed teacher-student model.", "The BERT-based models AD-DMBERT, DRMM and EKD (ours) significantly outperform the CNN-based and LSTM-based models, which is due to the ability to capture contextual information as well as the large-scale pretraining of BERT.", "Compared to these BERT-based models, our method consistently improves the F1 score by 3.5% and 2.3%, which shows the superiority of our method even when the encoder is powerful enough.", "Compared to the data-driven methods DMCNN and DLRNN, the knowledge-enhanced methods Lu's DISTILL, TS-DISTILL and EKD (ours) improve the recall by a large margin.", "Due to the small scale of ACE2005, it is quite tricky to disambiguate triggers solely based on the surrounding context.", "Enhanced by external knowledge, these methods have standby common-sense knowledge to depend on, which prevents overfitting to densely labeled trigger words and thus helps discover more trigger words.", "Among them, our model achieves the best performance, which may be due to two reasons:", "1) The superiority of open-domain trigger knowledge.", "Compared to the general linguistic knowledge used in Lu's DISTILL and the entity type knowledge used in TS-DISTILL, open-domain trigger knowledge is more task-related, as it directly provides trigger candidates for trigger identification, and is thus more informative.", "2) The superiority of the proposed teacher-student model.", "Our method is able to learn open-domain trigger knowledge from unlimited unlabeled data, while Lu's DISTILL and TS-DISTILL can only learn from labeled data.", "It is worth noting that our model simultaneously improves precision.", "Unseen/sparsely labeled trigger words are usually rare words, which are typically monosemous, exhibiting a single clearly defined meaning.", "These words are easier for the model to distinguish, thereby improving the overall precision.", "To evaluate whether EKD has distilled knowledge into the model, we report the performance of EKD on the test set with and without knowledge.", "As illustrated in Table 3, whether or not the input data carries the open-domain knowledge, the performance makes no big difference (78.4% vs. 78.6%), which shows that EKD has already distilled the knowledge into the model.",
"ACE2005 is a multi-domain dataset with six domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and weblogs (wl).", "Following common practice (Plank and Moschitti, 2013; Nguyen and Grishman, 2014), we adopt the union of bn and nw as the source domain, and bc, cts and wl as three target domains.", "The event types and vocabulary distributions are quite different between the source and target domains (Plank and Moschitti, 2013).", "For evaluation, we split the source-domain data into train/test at a 4:1 ratio and report the average results over ten runs as the final result.", "For baselines, MaxEnt and Joint (Li et al., 2013) are two feature-enriched methods, exploiting both lexical and global features to enhance the domain adaptation ability.", "Nguyen's CNN (Nguyen and Grishman, 2015) integrates the feature and neural approaches and proposes a joint CNN for domain adaptation.", "We also compare with the supervised SOTA PLMEE (Yang et al., 2019), which exploits the pre-trained language model BERT for event extraction.", "[Table 4: Performance on domain adaptation.]", "As illustrated in Table 4, our method achieves the best adaptation performance on both the bc and wl target domains and comparable performance on the cts target domain.", "The superior domain adaptation may come from the open-domain trigger knowledge.", "The open-domain trigger knowledge is not subject to specific domains; it will detect all the event-oriented trigger words and cover the event types from both the source and the target domains.", "Armed with open-domain trigger knowledge, our model reinforces associations between source and target data, and thus has superior performance in domain adaptation.", "In this section, we answer the question of whether our model can address the long tail problem.", "According to the frequency in the training set, we divide trigger words into three categories: Unseen, Sparsely-Labeled and Densely-Labeled.", "The frequency of Sparsely-Labeled is less than 5 and the frequency of Densely-Labeled is more than 30.", "The baselines are 1) the supervised-only method DMBERT (Chen et al., 2015), 2) the distantly supervised method DGBERT (Chen et al., 2017) and 3) the semi-supervised method BOOTSTRAP (He and Sun, 2017).", "We replace the encoders in the three baselines with the more powerful BERT to make the baselines stronger.", "As illustrated in Table 5, all three baselines show a significant performance degradation in the unseen/sparsely labeled scenarios due to the limited training data.", "Our method surpasses the baselines in all three settings.", "In particular, our method gains more improvement in the unseen (+6.1%) and sparsely-labeled (+2.8%) settings.", "Open-domain trigger knowledge allows us to discover unseen/sparsely labeled triggers from the large-scale unlabeled corpus, which increases the frequency at which the model sees unseen/sparsely labeled triggers.", "Then, to evaluate whether EKD (ours) can distill other knowledge types, we conduct experiments on the three most commonly used knowledge types in the ED scenario:", "1) Entity knowledge.", "Entity type is an important feature for trigger disambiguation in ED (Zhang et al., 2007).", "We compare with (Liu et al., 2019), which distills ground-truth entity type knowledge via an adversarial teacher-student model.", "2) Syntactic knowledge.", "Syntactic knowledge is implied in the dependency parse tree.", "The closer a word is in the tree, the more important it is for the trigger (McClosky et al., 2011).", "Our baseline (Nguyen and Grishman, 2018) is the best syntactic-knowledge-enhanced model, which exploits structural dependency tree information via graph convolutional networks.",
knowledge.", "Event arguments play an important role in ED.", "Our baseline ANN-S2 (Liu et al., 2017) designs a supervised attention to leverage the event argument knowledge.", "For the adaption of our model, we obtain entity annotations by Stanford CoreNLP, syntactic by NLP-Cube(Boro et al., 2018) and argument by CAMR (Wang et al., 2015).", "The marking contents are:", "1) For entity, we tag three basic entity types, People , Location and Organization .", "2) For Table 4: Performance on domain adaption.", "syntactic, we take the first-order neighbor of trigger word on dependency parse tree.", "We consider neighbors in both directions.", "3) For argument, we focus on the words played as the ARG0-4 roles of the trigger in AMR parser following (Huang et al., 2017).", "As we do not know trigger words on unlabeled data, we use pseudo labels generated by pre-trained BERT instead.", "We encode the entity, syntactic and argument knowledge into sentences with the same Marking Mechanism in Section 3.2.", "To prevent information leakage, we only use that knowledge in the training procedure.", "As illustrated in Table 7, Our three adaption models, EKD-Ent, EKD-Syn and EKD-Arg, consistently outperform baselines on the F score, proving that the effectiveness of EKD is independent to specific knowledge type.", "EKD increases the cognitive gap between teacher model and student model to maximize knowledge utilization, and the idea universally works for all types of knowledge distillation.", "If we compare the performances from the perspective of knowledge type, the results show that open-domain trigger knowledge (EKD) is better than the argument knowledge (EKD-Arg), and they are both superior to the entity knowledge (EKD-Ent) and syntactic knowledge (EKD-Syn).", "The reason might be the more task-related of the knowledge, the more informative of the knowledge.", "Since open-domain trigger knowledge and event argument knowledge consider the important words directly from the event sides, they are more valuable than the entity and syntactic knowledge in ED.", "We answer the question of how and when the open-domain trigger knowledge enhances the understanding of event triggers.", "Table 6 gives examples about how open-domain trigger knowledge affects predictions of ED.", "In S1, since trek is a rare word that never shows up in the training procedure, supervised-only method fails to recognize it.", "Open-domain trigger knowledge provides the priory that trek should be an event trigger.", "Coupled with pre-trained information that trek is similar to densely-labeled trigger words such as move , our model successfully recalls it.", "In S3, be is a very ambiguous word, and in most cases, be is not used as a trigger word in the labeled data.", "Supervised-only method is prone to overfitting the labeled data and fails to recognize it.", "Open-domain trigger knowledge owns word sense disambiguation ability, which knows that be here belongs to the word sense oc-cupy a certain position' instead of the common word sense have the quality of being', and thus can successfully identify be as the trigger for event Start-Position .", "We leverage the wealth of the open-domain trigger knowledge to address the long-tail issue in ACE2005.", "Specifically, we adopt a WordNet-based pipeline for efficient knowledge collection, and then we propose a teacher-student model, EKD, to distill open-domain trigger knowledge from both labeled and abundant unlabeled data.", "EKD forces the student model to learn open-domain trigger knowledge 
from the teacher model by mimicking the predicted results of the teacher model.", "(Table 6: Error analysis: how and when does open-domain trigger knowledge improve ED?)", "Experiments show that our method surpasses seven strong knowledge-enhanced baselines, and is especially effective for identifying unseen or sparsely labeled triggers.", "This work is supported by the National Key Research and Development Program of China (2018YFB1005100 and 2018YFB1005101) and NSFC Key Projects (U1736204, 61533018).", "It also received partial support from the National Engineering Laboratory for Cyberlearning and Intelligent Technology, and the Beijing Key Lab of Networked Multimedia.", "This research is supported by the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative." ]
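The teacher-student mimicry described above can be sketched concretely. The snippet below is a minimal illustration, not the paper's exact formulation: it assumes both models emit per-token trigger logits and uses a temperature-scaled KL divergence, a standard soft-label distillation objective; the function name and temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Force the student to mimic the teacher's predicted trigger
    distribution (an assumed soft-label KL objective).

    student_logits, teacher_logits: (num_tokens, num_classes) tensors.
    """
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL(teacher || student), scaled by t^2 as is conventional in distillation.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t * t
```

On unlabeled data, the pseudo labels generated by pre-trained BERT would play the role of `teacher_logits` in this sketch.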
[ "abstain", "abstain", "objective", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "abstain", "method", "method", "method", "result", "abstain", "objective", "objective", "result", "result", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "other", "other", "other" ]
[ "Biomedical named entities often play important roles in many biomedical text mining tools.", "However, due to the incompleteness of provided synonyms and numerous variations in their surface forms, normalization of biomedical entities is very challenging.", "In this paper, we focus on learning representations of biomedical entities solely based on the synonyms of entities .", "To learn from the incomplete synonyms, we use a model-based candidate selection and maximize the marginal likelihood of the synonyms present in top candidates.", "Our model-based candidates are iteratively updated to contain more difficult negative samples as our model evolves.", "In this way, we avoid the explicit pre-selection of negative samples from more than 400K candidates.", "On four biomedical entity normalization datasets having three different entity types (disease, chemical, adverse reaction), our model BIOSYN consistently outperforms previous state-of-the-art models almost reaching the upper bound on each dataset.", "Biomedical named entities are frequently used as key features in biomedical text mining.", "From biomedical relation extraction (Xu et al., 2016; Li et al., 2017a) to literature search engines (Lee et al., 2016), many studies are utilizing biomedical named entities as a basic building block of their methodologies.", "While the extraction of the biomedical named entities is studied extensively (Sahu and Anand, 2016; Habibi et al., 2017), the normalization of extracted named entities is also crucial for improving the precision of downstream tasks (Leaman et al., 2013; Wei et al., 2015).", "Unlike named entities from general domain text, typical biomedical entities have several different surface forms, making the normalization of biomedical entities very challenging.", "For instance, while two chemical entities motrin ' and ibuprofen ' belong to the same concept ID (MeSH:D007052), they have completely different surface forms.", "On the other hand, mentions having similar surface forms could also have different meanings (e.g. 
'dystrophinopathy' (MeSH:D009136) and 'bestrophinopathy' (MeSH:C567518)).", "These examples show a strong need for building latent representations of biomedical entities that capture semantic information of the mentions.", "In this paper, we propose a novel framework for learning biomedical entity representations based on the synonyms of entities.", "Previous works on entity normalization mostly train binary classifiers that decide whether two input entities are the same (positive) or different (negative) (Leaman et al., 2013; Li et al., 2017b; Fakhraei et al., 2019; Phan et al., 2019).", "Our framework, called BIOSYN, uses the synonym marginalization technique, which maximizes the probability of all synonym representations in the top candidates.", "We represent each biomedical entity using both sparse and dense representations to capture morphological and semantic information, respectively.", "The candidates are iteratively updated based on our model's representations, removing the need for explicit negative sampling from a large number of candidates.", "Also, the model-based candidates help our model learn from more difficult negative samples.", "Through extensive experiments on four biomedical entity normalization datasets, we show that BIOSYN achieves new state-of-the-art performance on all datasets, outperforming previous models by 0.8%-2.6% top-1 accuracy.", "Further analysis shows that our model's performance has almost reached the performance upper bound of each dataset.", "The contributions of our paper are as follows: First, we introduce BIOSYN for biomedical entity representation learning, which uses synonym marginalization, dispensing with the explicit need for negative training pairs.", "Second, we show that the iterative candidate selection based on our model's representations is crucial for improving the performance together with synonym marginalization.", "Finally, our model outperforms strong state-of-the-art models by up to 2.6% on four biomedical normalization datasets.", "2 Related Works: Biomedical entity representations have largely relied on biomedical word representations.", "Right after the introduction of Word2Vec (Mikolov et al., 2013), Pyysalo et al. 
(2013) trained Word2Vec on biomedical corpora such as PubMed.", "Their biomedical version of Word2Vec has been widely used for various biomedical natural language processing tasks (Habibi et al., 2017; Wang et al., 2018; Giorgi and Bader, 2018; Li et al., 2017a), including the biomedical normalization task (Mondal et al., 2019).", "Most recently, BioBERT (Lee et al., 2019) has been introduced for contextualized biomedical word representations.", "BioBERT is pre-trained on biomedical corpora using BERT (Devlin et al., 2019), and numerous studies are utilizing BioBERT for building state-of-the-art biomedical NLP models (Lin et al., 2019; Jin et al., 2019; Alsentzer et al., 2019; Sousa et al., 2019).", "Our model also uses pre-trained BioBERT for learning biomedical entity representations.", "The intrinsic evaluation of the quality of biomedical entity representations is often verified by the biomedical entity normalization task (Leaman et al., 2013; Phan et al., 2019).", "The goal of the biomedical entity normalization task is to map an input mention from a biomedical text to its associated CUI (Concept Unique ID) in a dictionary.", "The task is also referred to as entity linking or entity grounding (D'Souza and Ng, 2015; Leaman and Lu, 2016).", "However, the normalization of biomedical entities is more challenging than the normalization of general domain entities due to a large number of synonyms.", "Also, the variations of synonyms depend on their entity types, which makes building a type-agnostic normalization model difficult (Leaman et al., 2013; Li et al., 2017b; Mondal et al., 2019).", "(Code available at https://github.com/dmis-lab/BioSyn.)", "(Table 1: Statistics of the four biomedical entity normalization datasets, having three different biomedical entity types.)", "While traditional biomedical entity normalization models are based on hand-crafted rules (D'Souza and Ng, 2015; Leaman et al., 2015), recent approaches to biomedical entity normalization have been significantly improved with various machine learning techniques.", "DNorm (Leaman et al., 2013) is one of the first machine learning-based entity normalization models, which learns pair-wise similarity using tf-idf vectors.", "Another machine learning-based study is the CNN-based ranking method (Li et al., 2017b), which learns entity representations using a convolutional neural network.", "The most similar works to ours are NSEEN (Fakhraei et al., 2019) and BNE (Phan et al., 2019), which map mentions and concept names in dictionaries to a latent space using LSTM models and refine the embeddings using the negative sampling technique.", "However, most previous works adopt a pair-wise training procedure that explicitly requires making negative pairs.", "Our work is based on marginalizing positive samples (i.e., synonyms) from iteratively updated candidates and avoids the problem of choosing a single negative sample.", "In our framework, we represent each entity with sparse and dense vectors, which is largely motivated by techniques used in information retrieval.", "Models in information retrieval often utilize both sparse and dense representations (Ramos et al., 2003; Palangi et al., 2016; Mitra et al., 2017) to retrieve relevant documents given a query.", "Similarly, we can think of the biomedical entity normalization task as retrieving relevant concepts given a mention (Li et al., 2017b; Mondal et al., 2019).", "In our work, we use maximum inner product search (MIPS) for retrieving the concepts represented as sparse and dense vectors, whereas previous models could suffer from error propagation of 
the pipeline approach.", "We define an input mention m as an entity string in a biomedical corpus.", "Each input mention has its own CUI c and each CUI has one or more synonyms defined in the dictionary.", "The set of synonyms for a CUI is also called a synset.", "We denote the union of all synonyms in a dictionary as N = [n_1, n_2, ...], where n ∈ N is a single synonym string.", "(Figure 1: The overview of BIOSYN: a weight-sharing encoder embeds the mention and the dictionary synonyms, candidates are sorted by inner-product similarity scores into a top-k list, the embeddings are iteratively updated, and training applies MML over the synonyms among the top candidates; at inference the top-1 synonym is returned.)", "Our goal is to predict the gold CUI c* of the input mention m as follows: c* = CUI(argmax_{n ∈ N} P(n|m; θ)) (1), where CUI(·) returns the CUI of the synonym n and θ denotes the trainable parameters of our model.", "The overview of our framework is illustrated in Figure 1. We first represent each input mention m and each synonym n in a dictionary using sparse and dense representations.", "We treat m and n equally and use a shared encoder for both strings.", "During training, we iteratively update the top candidates and calculate the marginal probability of the synonyms based on their representations.", "At inference time, we find the nearest synonym by performing MIPS over all synonym representations.", "Sparse Entity Representation: We use tf-idf to obtain sparse representations of m and n.", "We denote the sparse representations as e^s_m and e^s_n for the input mention and the synonym, respectively.", "tf-idf is calculated based on character-level n-gram statistics computed over all synonyms n ∈ N.", "We define the sparse scoring function of a mention-synonym pair (m, n) as follows: S_sparse(m, n) = f(e^s_m, e^s_n) ∈ ℝ (2), where f denotes a similarity function.", "Dense Entity Representation: While the sparse representation encodes the morphological information of the given strings, the dense representation encodes their semantic information.", "Learning effective dense representations is the key challenge in the biomedical entity normalization task (Li et al., 2017b; Mondal et al., 2019; Phan et al., 2019; Fakhraei et al., 2019).", "We use pre-trained BioBERT (Lee et al., 2019) to encode dense representations and fine-tune BioBERT with our synonym marginalization algorithm.", "We share the same BioBERT model for encoding mention and synonym representations.", "We compute the dense representation of the mention m as follows: e^d_m = BioBERT(m)[CLS] ∈ ℝ^h (3), where m = {m_1, ..., m_l} is the sequence of sub-tokens of the mention m segmented by the WordPiece tokenizer (Wu et al., 2016) and h denotes the hidden dimension of BioBERT (i.e., h = 768).", "[CLS] denotes the special token that BERT-style models use to compute a single representative vector of an input.", "The synonym representation e^d_n ∈ ℝ^h is computed similarly.", "We denote the dense scoring function of a mention-synonym pair (m, n) using the dense representations as follows: S_dense(m, n) = f(e^d_m, e^d_n) ∈ ℝ (4), where we again use the inner product for f.", "Similarity Function: Based on the two similarity functions S_sparse(m, n) and S_dense(m, n), we now define the final similarity function S(m, n) indicating the similarity between an input mention m and a synonym n: S(m, n) = S_dense(m, n) + λ · S_sparse(m, n) ∈ ℝ (5), where λ is a trainable scalar weight for the sparse score.", 
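A minimal sketch of the scoring functions in Eqs. (2)-(5), assuming scikit-learn's TfidfVectorizer for the character-level tf-idf and a Hugging Face transformers BioBERT checkpoint; the function names and the model identifier are illustrative assumptions, not the authors' exact code.

```python
import torch
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import AutoModel, AutoTokenizer

# Character-level uni-/bi-gram tf-idf over all dictionary synonyms (Eq. 2);
# call tfidf.fit(all_synonyms) once before scoring.
tfidf = TfidfVectorizer(analyzer="char", ngram_range=(1, 2))

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
encoder = AutoModel.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
lam = torch.nn.Parameter(torch.tensor(1.0))  # trainable sparse weight (Eq. 5)

def dense_embed(strings):
    """Shared-encoder [CLS] vectors, e^d in Eq. (3)."""
    batch = tokenizer(strings, padding=True, truncation=True,
                      max_length=25, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]  # (batch, 768)

def score(mentions, synonyms):
    """S(m, n) = S_dense + lambda * S_sparse with inner products (Eqs. 2-5)."""
    e_s_m = torch.tensor(tfidf.transform(mentions).toarray(), dtype=torch.float)
    e_s_n = torch.tensor(tfidf.transform(synonyms).toarray(), dtype=torch.float)
    s_sparse = e_s_m @ e_s_n.T                                 # Eq. 2
    s_dense = dense_embed(mentions) @ dense_embed(synonyms).T  # Eq. 4
    return s_dense + lam * s_sparse                            # Eq. 5
```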
"Using , our model learns to balance the importance between the sparse similarity and the dense similarity.", "The most common way to train the entity representation model is to build a pair-wise training dataset.", "While it is relatively convenient to sample positive pairs using synonyms, sampling negative pairs are trickier than sampling positive pairs as there are a vast number of negative candidates.", "For instance, the mention alpha conotoxin ' (MeSH:D020916) has 6 positive synonyms while its dictionary has 407,247 synonyms each of which can be a negative sampling candidate.", "Models trained on these pair-wise datasets often rely on the quality of the negative sampling (Leaman et al., 2013; Li et al., 2017b; Phan et al., 2019; Fakhraei et al., 2019).", "On the other hand, we use a model-based candidate retrieval and maximize the marginal probability of positive synonyms in the candidates.", "Iterative Candidate Retrieval Due to a large number of candidates present in the dictionary, we need to retrieve a smaller number of candidates for training.", "In our framework, we use our entity encoder to update the top candidates iteratively.", "Let k be the number of top candidates to be retrieved for training and ( 0 1 ) be the ratio of candidates retrieved from S dense .", "We call as the dense ratio and = 1 means consisting the candidates with S dense only.", "First, we compute the sparse scores S sparse and the dense scores S dense for all n N .", "Then we retrieve the k (cid:98) k (cid:99) highest candidates using S sparse , which we call as sparse candidates.", "Likewise, we retrieve the (cid:98) k (cid:99) highest candidates using S dense , which we call as dense candidates.", "Whenever the dense and sparse candidates overlap, we add more dense candidates to match the number of candidates as k .", "While the sparse candidates for a mention will always be the same as they are based on the static tf-idf representation, the dense candidates change every epoch as our model learns better dense representations.", "Our iterative candidate retrieval method has the following benefits.", "First, it makes top candidates to have more difficult negative samples as our model is trained, hence helping our model represent a more accurate dense representation of each entity.", "Also, it increases the chances of retrieving previously unseen positive samples in the top candidates.", "As we will see, comprising the candidates purely with sparse candidates have a strict upper bound while ours with dense candidates can maximize the upper bound.", "Synonym Marginalization Given the top candidates from iterative candidate retrieval, we maximize the marginal probability of positive synonyms, which we call as synonym marginalization.", "Given the top candidates N 1: k computed from our model, the probability of each synonym is obtained as: P ( n | m ; ) = exp( S ( n, m )) (cid:80) n (cid:48) N 1: k exp( S ( n (cid:48) , m )) (6) where the summation in the denominator is over the top candidates N 1: k .", "Then, the marginal probability of the positive synonyms of a mention m is defined as follows: P (cid:48) ( m, N 1: k ) = (cid:88) n N 1: k EQUAL ( m,n )=1 P ( n | m ; ) (7) where EQUAL ( m, n ) is 1 when CUI ( m ) is equivalent to CUI ( n ) and 0 otherwise.", "Finally, we minimize the negative marginal log-likelihood of synonyms.", "We define the loss function of our model as follows: L = 1 MM (cid:88) i =1 log P (cid:48) ( m i , N 1: k ) (8) where M is the number of training mentions in our dataset.", "We 
"We use mini-batches for the training and the Adam optimizer (Kingma and Ba, 2015) to minimize the loss.", "At inference time, we retrieve the nearest synonym of a mention representation using MIPS.", "We compute the similarity score S(m, n) between the input mention m and all synonyms n ∈ N using the inner product and return the CUI of the nearest candidate.", "Note that it is computationally cheap to find the nearest neighbors once we pre-compute the dense and sparse representations of all synonyms.", "We perform basic pre-processing such as lower-casing all characters and removing punctuation for both mentions and synonyms.", "To resolve the typo issues in mentions from NCBI Disease, we apply a spelling check algorithm following previous work (D'Souza and Ng, 2015).", "Abbreviations are widely used in biomedical entities as an efficient notation, which makes the normalization task more challenging.", "Therefore, we use the abbreviation resolution module Ab3P to detect local abbreviations and expand them to their definitions from the context (Sohn et al., 2008).", "We also split composite mentions (e.g. 'breast and ovarian cancer') into separate mentions (e.g. 'breast cancer' and 'ovarian cancer') using heuristic rules described in previous work (D'Souza and Ng, 2015).", "We also merge the mentions in the training set into the dictionary to increase the coverage, following previous work (D'Souza and Ng, 2015).", "For sparse representations, we use character-level uni- and bi-grams for tf-idf.", "The maximum sequence length of BioBERT is set to 25, and any string over the maximum length is truncated to 25.", "The number of top candidates k is 20, and the dense ratio α for the candidate retrieval is set to 0.5.", "We set the learning rate to 1e-5, the weight decay to 1e-2, and the mini-batch size to 16.", "We found that the trainable scalar λ converges to different values between 2 and 4 on each dataset.", "We train BIOSYN for 10 epochs for NCBI Disease, BC5CDR Disease, and TAC2017ADR, and for 5 epochs for BC5CDR Chemical due to its large dictionary size.", "Except for the number of epochs, we use the same hyperparameters for all datasets and experiments.", "We use top-k accuracy as the evaluation metric, following previous works on biomedical entity normalization (D'Souza and Ng, 2015; Li et al., 2017b; Wright, 2019; Phan et al., 2019; Ji et al., 2019; Mondal et al., 2019).", "We define Acc@k as 1 if the correct CUI is included in the top k predictions, and 0 otherwise; we evaluate our models using Acc@1 and Acc@5.", "Note that we treat predictions for composite entities as correct if every prediction for each separate mention is correct.", "We use four biomedical entity normalization datasets having three different biomedical entity types (disease, chemical, adverse reaction).", "The statistics of each dataset are described in Table 1.", 
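The Acc@k metric defined above can be computed directly from the ranked candidate CUIs; a minimal sketch with illustrative names:

```python
def acc_at_k(ranked_cuis, gold_cui, k):
    """Acc@k = 1 if the gold CUI appears among the top-k predicted CUIs,
    where ranked_cuis is sorted by decreasing similarity S(m, n)."""
    return int(gold_cui in ranked_cuis[:k])

# Averaging over a test set of (ranked predictions, gold CUI) pairs:
# acc1 = sum(acc_at_k(r, g, 1) for r, g in pairs) / len(pairs)
# acc5 = sum(acc_at_k(r, g, 5) for r, g in pairs) / len(pairs)
```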
"NCBI Disease Corpus: The NCBI Disease Corpus (Dogan et al., 2014) provides manually annotated disease mentions in each document, with each CUI mapped into the MEDIC dictionary (Davis et al., 2012) (https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE).", "In this work, we use the July 6, 2012 version of MEDIC, containing 11,915 CUIs and 71,923 synonyms included in MeSH and/or OMIM ontologies.", "BioCreative V CDR: BioCreative V CDR (Li et al., 2016) is a challenge on the task of chemical-induced disease (CID) relation extraction (https://biocreative.bioinformatics.udel.edu/tasks/biocreative-v/track-3-cdr).", "It provides disease and chemical type entities.", "The annotated disease mentions in the dataset are mapped into the MEDIC dictionary, like the NCBI Disease Corpus.", "The annotated chemical mentions in the dataset are mapped into the Comparative Toxicogenomics Database (CTD) (Davis et al., 2018) chemical dictionary.", "In this work, we use the November 4, 2019 version of the CTD chemical dictionary, containing 171,203 CUIs and 407,247 synonyms included in MeSH ontologies.", "Following previous work (Phan et al., 2019), we filter out mentions whose CUIs do not exist in the dictionary.", "TAC2017ADR: TAC2017ADR (Roberts et al., 2017) is a challenge whose purpose is to extract information on adverse reactions found in structured product labels (https://bionlp.nlm.nih.gov/tac2017adversereactions).", "It provides manually annotated mentions of adverse reactions that are mapped into the MedDRA dictionary (Brown et al., 1999).", "In this work, we use MedDRA v18.1, which contains 23,668 CUIs and 76,817 synonyms.", "(Table 2: Main results, reported as Acc@1/Acc@5 on NCBI Disease, BC5CDR Disease, BC5CDR Chemical, and TAC2017ADR. Sieve-Based (D'Souza and Ng, 2015): 84.7/-, 84.1/-, 90.7/-, 84.3/-. TaggerOne (Leaman and Lu, 2016): 87.7/-, 88.9/-, 94.1/-, -/-. CNN Ranking (Li et al., 2017b): 86.1/-, -/-, -/-, -/-. NormCo (Wright, 2019): 87.8/-, 88.0/-, -/-, -/-. BNE (Phan et al., 2019): 87.7/-, 90.6/-, 95.8/-, -/-. BERT Ranking (Ji et al., 2019): 89.1/-, -/-, -/-, 93.2/-. TripletNet (Mondal et al., 2019): 90.0/-, -/-, -/-, -/-. BIOSYN (S-SCORE): 87.6/90.5, 92.4/95.7, 95.9/96.8, 91.4/94.5. BIOSYN (D-SCORE): 90.7/93.5, 92.9/96.5, 96.6/97.2, 95.5/97.5. BIOSYN (α = 0.0): 89.9/93.3, 92.2/94.9, 96.3/97.2, 95.3/97.6. BIOSYN (α = 1.0): 90.5/94.5, 92.8/96.0, 96.4/97.3, 95.8/97.9. BIOSYN (Ours): 91.1/93.9, 93.2/96.0, 96.6/97.2, 95.6/97.5.)", "We used the authors' provided implementations to evaluate these models on the datasets.", "We use five different versions of our model to see the effect of each module in our framework.", "First, BIOSYN denotes our proposed model with the default hyperparameters described in Section 4.1.", "BIOSYN (S-SCORE) and BIOSYN (D-SCORE) use only sparse scores or only dense scores for the predictions at inference time, respectively.", "To see the effect of different dense ratios, BIOSYN (α = 0.0) uses only sparse candidates and BIOSYN (α = 1.0) uses only dense candidates during training.", "Table 2 shows our main results on the four datasets.", "Our model outperforms all previous models on the four datasets and achieves new state-of-the-art performance.", "The Acc@1 improvements on NCBI Disease, BC5CDR Disease, BC5CDR Chemical, and TAC2017ADR are 1.1%, 2.6%, 0.8%, and 2.4%, respectively.", "Training with only dense candidates (α = 1.0) often achieves higher Acc@5 than BIOSYN, showing the effectiveness of dense candidates.", 
"In Figure 2, we show the effect of the iterative candidate retrieval method.", "We plot the recall of the top candidates used in each model on the development sets.", "The recall is 1 if any top candidate has the gold CUI.", "BIOSYN (α = 1) uses only dense candidates, while BIOSYN (α = 0) uses sparse candidates.", "BIOSYN utilizes both dense and sparse candidates.", "Compared to the fixed recall of BIOSYN (α = 0), we observe a consistent improvement in BIOSYN (α = 1) and BIOSYN.", "This shows that our proposed model can increase the upper bound of candidate retrieval using dense representations.", "We perform experiments by varying the number of top candidates used for training.", "Figure 3 shows that a model with 20 candidates performs reasonably well in terms of both Acc@1 and Acc@5.", "It shows that more candidates do not guarantee higher performance, and considering the training complexity, we choose k = 20 for all experiments.", "Our synonym marginalization method uses marginal maximum likelihood (MML) as the objective function.", "To verify the effectiveness of our proposed method, we compare it with two different strategies: hard EM (Liang et al., 2018) and standard pair-wise training (Leaman et al., 2013).", "The difference between hard EM and MML is that hard EM maximizes the probability of only the single positive candidate having the highest probability.", "In contrast, MML maximizes the marginalized probability of all synonyms in the top candidates.", "For hard EM, we first obtain a target n̄ as follows: n̄ = argmax_{n ∈ N_{1:k}} P(n|m; θ) (9), where most notations are the same as in Equation 1.", "The loss function of hard EM is computed as follows: L = −(1/M) Σ_{i=1}^{M} log P(n̄|m_i; θ) (10).", "Pair-wise training requires a binary classification model.", "For pair-wise training, we minimize the binary cross-entropy loss using samples created by pairing each positive and negative candidate in the top candidates with the input mention.", "Table 3 shows the results of applying the three different loss functions on BC5CDR Disease and BC5CDR Chemical.", "The results show that MML used in our framework learns better semantic representations than the other methods.", "In Table 4, we list the top candidates of BIOSYN from the NCBI Disease development set.", "(Table 3: Comparison of training methods on the development sets of BC5CDR Disease and BC5CDR Chemical, reported as Acc@1/Acc@5. MML: 91.1/95.4 and 96.7/97.7; Hard EM: 91.0/95.8 and 96.5/97.5; Pair-Wise Training: 90.7/94.4 and 96.3/97.2.)", 
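For contrast with the MML loss sketched earlier, here is a sketch of the hard-EM objective of Eqs. (9)-(10), assuming, as the surrounding description suggests, that the argmax ranges over the positive candidates in the top-k:

```python
import torch

def hard_em_nll(scores, positive_mask):
    """Hard-EM alternative (Eqs. 9-10): back-propagate only through the
    single highest-probability positive candidate instead of marginalizing."""
    log_probs = torch.log_softmax(scores, dim=-1)
    # Mask out negatives, then pick the best positive per mention (Eq. 9).
    masked = log_probs.masked_fill(positive_mask == 0, float("-inf"))
    return -masked.max(dim=-1).values.mean()  # Eq. 10
```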
Acc@1 Acc@5 Acc@1 Acc@5 MML 91.1 95.4 96.7 97.7 Hard EM 91.0 95.8 96.5 97.5 Pair-Wise Training 90.7 94.4 96.3 97.2 Table 3: Comparison of two different training methods on the development sets of BC5CDR Disease, BC5CDR Chemical initial candidates did not have positive samples due to the limitation of sparse representations, candidates at epoch 1 begin to include more positive candidates.", "Candidates at epoch 5 include many positive samples, while negative samples are also closely related to each mention.", "In Table 5, we analyze the error cases of our model on the test set of NCBI Disease.", "We manually inspected all failure cases and defined the following error cases in the biomedical entity normalization task: Incomplete Synset, Contextual Entity, Overlapped Entity, Abbreviation, Hypernym, and Hyponym.", "Remaining failures that are difficult to categorize are grouped as Others.", "Incomplete Synset is the case when the surface form of an input mention is very different from the provided synonyms of a gold CUI and requires the external knowledge for the normalization.", "Contextual Entity denotes an error case where an input mention and the predicted synonym are exactly the same but have different CUIs.", "This type of error could be due to an annotation error or happen when the same mention can be interpreted differently depending on its context.", "Overlapped Entity is an error where there is an overlap between the words of input mention and the predicted candidate.", "This includes nested entities.", "Abbrevation is an error where an input mention is in an abbreviated form but the resolution has failed even with the external module Ab3P.", "Hypernym and Hyponym are the cases when an input mention is a hypernym or a hyponym of the annotated entity.", "Based on our analyses, errors are mostly due to ambiguous annotations (Contextual Entity, Overlapped Entity, Hypernym, Hyponym) or failure of pre-processings (Abbreviation).", "Incomplete Synset can be resolved with a better dictionary having richer synonym sets.", "Given the limitations in annotations, we conclude that the performance of BIOSYN has almost reached the upper bound.", "In this study, we introduce BIOSYN that utilizes the synonym marginalization technique and the iterative candidate retrieval for learning biomedical entity representations.", "On four biomedical entity normalization datasets, our experiment shows that our model achieves state-of-the-art performance on all datasets, improving previous scores up to 2.6%.", "Although the datasets used in our experiments are in English, we expect that our methodology would work in any language as long as there is a synonym dictionary for the language.", "For future work, an extrinsic evaluation of our methods is needed to prove the effectiveness of learned biomedical entity representations and to prove the quality of the entity normalization in downstream tasks.", "This research was supported by National Research Foundation of Korea (NRF-2016M3A9A7916996, NRF-2014M3C9A3063541).", "We thank the members of Korea University, and the anonymous reviewers for their insightful comments." ]
[ "abstain", "abstain", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "method", "abstain", "objective", "result", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "other", "other", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "abstain", "other", "other" ]
[ "In recent years neural language models (LMs) have set state-of-the-art performance for several benchmarking datasets.", "While the reasons for their success and their computational demand are well-documented, a comparison between neural models and more recent developments in -gram models is neglected.", "In this paper, we examine the recent progress in -gram literature, running experiments on 50 languages covering all morphological language families.", "Experimental results illustrate that a simple extension of Modified Kneser-Ney outperforms an LSTM language model on 42 languages while a word-level Bayesian gram LM (Shareghi et al., 2017) outperforms the character-aware neural model (Kim et al., 2016) on average across all languages, and its extension which explicitly injects linguistic knowledge (Gerz et al., 2018a) on 8 languages.", "Further experiments on larger Europarl datasets for 3 languages indicate that neural architectures are able to outperform computationally much cheaper -gram models: -gram training is up to 15 , 000 quicker.", "Our experiments illustrate that standalone -gram models lend themselves as natural choices for resource-lean or morphologically rich languages, while the recent progress has signifi-cantly improved their accuracy.", "Statistical language models (LMs) are the pivot for several natural language processing tasks where a model trained on a text corpus is required to assign a probability to a given sequence 1 2 ... (de-noted by 1 ).", "This probability indicates how likely is for 1 to belong to the corpus and is decomposed into conditional probabilities of words given their preceding contexts as ( 1 ) = =1 ( | 11 ) .", "In -gram LMs the unbounded conditional probabilities ( | 11 ) are approximated by imposing a finite-order Markov assumption, ( | 11 ) ( | 1 +1 ) .", "Several smoothing techniques address the statistical sparsity issue for computing the conditional probabilities (Kneser and Ney, 1995; Chen and Goodman, 1999; Teh, 2006; Shareghi et al., 2016a), while others avoided the above approximation with unbounded hierarchical nonparametric Bayesian frameworks (Wood et al., 2011; Shareghi et al., 2017).", "Alternatively, neural LMs compute ( | 11 ) via recurrent neural units which, in theory, are capable of encoding an unbounded context 11 .", "In recent years, neural LMs have become the prominent class of language modeling and have established state-of-the-art results on almost all su-ciently large benchmarks (Melis et al., 2018; Yang et al., 2018).", "While outperforming -grams in terms of predictive accuracy, the computational shortcomings of neural LMs are well-documented: Training neural LMs is computationally expensive to the point that running experiments on large data ( a few GiBs) is beyond the reach of academic research to this date (Chen et al., 2016; Patwary et al., 2018; Puri et al., 2018).", "1 Similarly, querying is slower for neural LMs due to the required matrix-based operations, whereas most of the widely used -gram LM toolkits rely on a few hash lookups and much cheaper scalar-based operations (Liu et al., 2018; Tang and Lin, 2018).", "Nonetheless, it has been shown that the best predictive performance is still achieved by combining the two models via a basic interpolation or a mixture model (Jozefowicz et al., 2016; Neubig and Dyer, 2016): this indicates that the progress in -gram LM should eventually be reflected in improving the 1 For instance, -gram LMs could be trained on 32GiB of data on a single CPU with 32GiB of RAM in half 
"(A ballpark estimate for neural LMs, based on Puri et al. (2018), requires 26 Tesla V100 16GB GPUs to finish within the same amount of time, while the financial cost is at least 100× higher.)", "Inspired by this, in this paper we shed new light on the most notable recent progress in n-gram statistical LMs, which improves their predictive accuracy.", "We demonstrate that, under the recent massively multilingual experimental setup of Gerz et al. (2018a), more recent extensions of the Kneser-Ney family of n-gram LMs (Shareghi et al., 2016a, 2017) are highly competitive with neural LMs.", "More specifically, we experiment on 50 languages from different morphological families, and illustrate that a word-level Bayesian n-gram LM (Shareghi et al., 2017) outperforms the character-level informed neural counterpart (Kim et al., 2016) on average, and its linguistically informed variant (Gerz et al., 2018a) on 8 languages.", "On larger Europarl datasets we find that n-gram models cannot reach the performance peaks of the computationally much more expensive neural models, but a 2× decrease in perplexity comes at the cost of a 15,000× longer training time.", "Our work reveals that recent n-gram LMs should be used as strong baselines, especially for resource-lean LM data and morphologically rich languages.", "Additionally, n-gram LMs offer a stringent way of dealing with Out-of-Vocabulary (OOV) and rare words in a full vocabulary setting without relying on any pruning (Heafield, 2013).", "However, in neural LMs this remains an open question (Kawakami et al., 2017; Kim et al., 2016; Cotterell et al., 2018), while a common practice is pruning the training corpus and imposing a closed vocabulary assumption (Mikolov et al., 2010), where rare words at training and unseen words at test time are treated as an UNK token.", "We provide the mathematical underpinnings of n-gram models and highlight how this popular treatment works in favor of neural LMs (in comparative studies) and forces n-gram LMs to perform much worse than their full potential.", "We now provide an overview of established smoothing techniques for n-gram LMs and their recent extensions.", "Smoothing is typically achieved by interpolation, where the probability P(w_i | w_{i-n+1}^{i-1}, Θ) of seeing a word after a context w_{i-n+1}^{i-1} is smoothed by its probability after the shorter context w_{i-n+2}^{i-1}, following the general form: ρ(w_i | w_{i-n+1}^{i-1}, Θ) + γ(w_{i-n+1}^{i-1}, Θ) · P(w_i | w_{i-n+2}^{i-1}, Θ). (1)", "The term ρ(·) represents the existing mass for the n-gram (e.g. via maximum likelihood estimation), γ(·) is the weight used for redistributing the preserved mass (e.g. via discounting), and Θ are the parameters of the smoothing technique.", "(Table 1: Top-level interpolation term ρ(w_i | w_{i-n+1}^{i-1}, Θ) of the n-gram LM smoothings and their parameters Θ, where [x]_+ := max{x, 0}. KN: [c(w_{i-n+1}^i) - D_n]_+ / c(w_{i-n+1}^{i-1}), Θ = {D_n}. MKN: [c(w_{i-n+1}^i) - D_n(c(w_{i-n+1}^i))]_+ / c(w_{i-n+1}^{i-1}), Θ = {D_n(i)}, i ∈ {1, 2, 3+}. GKN: [c(w_{i-n+1}^i) - D_n(c(w_{i-n+1}^i))]_+ / c(w_{i-n+1}^{i-1}), Θ = {D_n(i)}, i ∈ {1, ..., 10+}. BKN: [c(w_{i-n+1}^i) - d_n · t_{w_{i-n+1}^i}]_+ / (c(w_{i-n+1}^{i-1}) + θ_n), Θ = {d_n, θ_n, t}.)", "The recursion stops at the unigram level where the conditioning context is empty.", "The recursion at lower levels relies on different quantities (e.g. 
pseudo-counts), but for brevity we focus on the top level of recursion and only on the first term, ρ(w_i | w_{i-n+1}^{i-1}, Θ), which suffices to highlight the key differences between smoothing techniques (see Table 1).", "Kneser-Ney (KN).", "The key parameters of KN (Kneser and Ney, 1995) are the n-gram specific discounts D_n which control the amount of preserved and redistributed mass at the n-th level of recursion.", "While learning the discounts (on held-out data) is a possibility, the following estimation is shown to work well in practice: D_n = 1 - 2 · n_2(n) / n_1(n). (2)", "It captures the characteristics of different n-gram orders by looking at the number of unique n-grams which occurred once, n_1(n), or twice, n_2(n), in the training data, defined as follows: n_k(n) = |{w_{i-n+1}^i : c(w_{i-n+1}^i) = k}|, (3)", "where c_{1+}(·u) = |{w′ : c(w′u) > 0}| is referred to as a form of pseudo-count used at the lower levels of recursion, and c(u) denotes the frequency of a sequence u.", "KN considers one discount value at each level of recursion, and the discounts are bounded: 0 < D_n ≤ 1.", "Modified Kneser-Ney (MKN).", "MKN (Chen and Goodman, 1999) is defined with modifications applied to the discounting mechanism in order to make the discounts sensitive to the existing mass.", "The discounts are estimated as: D_n(i) = 0 if i = 0; D_n(i) = i - (i+1) · (n_{i+1}(n) / n_i(n)) · (n_1(n) / (n_1(n) + 2·n_2(n))) if 0 < i < φ; D_n(i) = φ - (φ+1) · (n_{φ+1}(n) / n_φ(n)) · (n_1(n) / (n_1(n) + 2·n_2(n))) if i ≥ φ, (4)", "where n_k(n) is defined in Eq. (3), and φ = 3.", "This leads to three discount parameters {D_n(1), D_n(2), D_n(3+)} for each recursion level and, unlike KN, allows for discount values above 1.", "MKN is widely accepted as the state-of-the-art n-gram LM and is very frequently confused with KN in the literature.", "Generalized Modified Kneser-Ney (GKN).", "In conditions where statistical sparsity is more severe, a more refined approach to modeling the distributions is necessary.", "Motivated by this, Shareghi et al. (2016a) provided the mathematical proof (based on leave-one-out log-likelihood) of the discount bounds used in MKN and proposed a natural extension of its discount binning to φ = 10.", "This was shown to be effective for further perplexity reduction in out-of-domain settings where the OOV ratio is naturally high.", "Bayesian Kneser-Ney (BKN).", "The Bayesian generalization of KN is the Hierarchical Pitman-Yor Process (HPYP) language model (Teh, 2006).", "This can be interpreted as a richer parameterization of KN and MKN, where the additional parameters (introduced shortly) allow the model more flexibility in capturing the desired distribution.", "The Pitman-Yor Process (Pitman et al., 1997), PYP(d, θ, G), is a distribution defined over probability vectors (each draw from a PYP is a multinomial distribution) and has three parameters: a base probability vector G, which is the expected value of a draw from the PYP; the concentration parameter θ (-d < θ), which controls the variation of draws from the PYP around G; and the discount parameter d (0 ≤ d < 1), which allows the drawn vectors to capture power-law behavior.", "In the LM context, given a sequence of words u = w_{i-n+1}^i, a draw from a PYP is a multinomial distribution over the words following this sequence, denoted by G_u.", "This distribution can be captured by a vector of two counts {c_{uw}, t_{uw}} which defines a partitioning arrangement of G_u, while different partitionings correspond to different multinomial draws from the PYP.", "Here, c_{uw} is the total number of evidence for a word w after the context u (at the top level of recursion this is equal to c(w_{i-n+1}^i), hence it is not mentioned as a part of Θ in Table 1), t_{uw} is the total number of partitions dedicated to w, constrained by 0 ≤ t_{uw} ≤ c_{uw}, and w ranges over the vocabulary V.", "The HPYP ties PYP distributions through their bases and offers the statistical means to smooth over infinitely long contexts (Wood et al., 2011).", 
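The MKN discount estimates of Eq. (4) are cheap corpus statistics; a minimal sketch, assuming n_counts is indexable by k and holds n_k(n), the number of distinct n-grams of the current order occurring exactly k times (k = 1 .. φ+1):

```python
def mkn_discounts(n_counts, phi=3):
    """Modified Kneser-Ney discounts D_n(i) for i = 0 .. phi (Eq. 4);
    D(phi) applies to all counts >= phi (phi = 10 recovers GKN binning)."""
    y = n_counts[1] / (n_counts[1] + 2.0 * n_counts[2])
    discounts = [0.0]  # i = 0
    for i in range(1, phi + 1):
        discounts.append(i - (i + 1) * y * n_counts[i + 1] / n_counts[i])
    return discounts
```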
"For instance, PYP(d_u, θ_u, PYP(d_{π(u)}, θ_{π(u)}, G)) is a two-level HPYP, where a draw from a child distribution G_u is smoothed by a draw from its parent G_{π(u)}.", "Here, π(u) is u with its earliest word dropped (e.g., u = w_1 w_2, π(u) = w_2).", "KN can be seen as a special case of BKN in which the concentration θ is 0 and the partition counts are fixed, {t_uw = 1}.", "BKN can be considered a richer parameterization of MKN: the product d · t_uw (see Table 1) allows discounts larger than 1, simulating the discount range of MKN, while the additional concentration parameter θ permits further adjustments of the distribution.", "BKN is shown to outperform MKN (Shareghi et al., 2017) but relies on expensive parameter sampling.", "Out-of-Vocabulary (OOV).", "To complete the definitions, we now explain how unseen words or contexts are handled during the computation without resorting to pruning or a closed vocabulary setting.", "This treatment is the same for both the non-Bayesian and Bayesian methods described in this paper.", "Let us consider the generic interpolation form of Eq. (1) and the maximum likelihood estimation of the ρ(.) term (hence Θ is dropped) at the top level of the interpolation, P(w_i | w_{i-n+1}^{i-1}) = c(w_{i-n+1}^i) / c(w_{i-n+1}^{i-1}).", "Regardless of the level of interpolation and the paradigm used for computing the ρ(.) term, ρ(.) and γ(.) always share the same denominator (normalizing factor).", "An unseen word can appear as the target word w_i, which results in ρ(.) = 0, as c(w_{i-n+1}^i) = 0.", "It can also appear as a part of the prediction context w_{i-n+1}^{i-1}, in which case both the ρ(.) and γ(.) terms will be undefined (and ignored), as the denominator c(w_{i-n+1}^{i-1}) = 0.", "This procedure is applied to all levels of interpolation without loss of generality, and as can be seen it relies only on the basic mathematical properties of the involved computations rather than on any presumptions about data preprocessing or vocabulary.", "(See Shareghi (2017) for a comprehensive explanation of the models covered in this section.)", "As our main large-scale experiment we use a typologically diverse set of 50 languages.", "These LM datasets cover many languages which are challenging in terms of data size as well as type-token ratio.", "In a less challenging setup, we experiment on 3 languages with larger training data from Europarl (Koehn, 2005) to compare the two classes of LMs based on perplexity reduction and training time.", "For full data statistics, the actual sampling of languages, and data curation, see Gerz et al. (2018a,b).", "The common practice of setting a frequency threshold and mapping training-data unigrams ((n = 1)-grams) to an UNK token degrades the performance of n-gram LMs by discarding a range of discount parameters: e.g., using a threshold (< 3) results in n_1(1), n_2(1) = 0, both included in Eq. (2) and Eq. (4), and increases the average perplexity score of 5-gram KN in our experiments by 11%.", 
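The OOV treatment just described follows mechanically from Eq. (1): whenever the shared denominator (the context count) is zero, both ρ(·) and γ(·) are undefined and the computation simply proceeds with the shorter context. A minimal sketch, where rho, gamma, and count stand for smoothing-specific functions (illustrative arguments, not a toolkit API):

```python
def interp_prob(word, context, rho, gamma, count):
    """Generic interpolated estimate of Eq. (1):
    P(w | u) = rho(w | u) + gamma(u) * P(w | pi(u))."""
    if not context:
        return rho(word, ())  # unigram base case: empty conditioning context
    if count(context) == 0:   # rho and gamma undefined: skip to shorter context
        return interp_prob(word, context[1:], rho, gamma, count)
    return (rho(word, context)
            + gamma(context) * interp_prob(word, context[1:], rho, gamma, count))
```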
"Motivated by its significance, we base our comparison on the reported results of Gerz et al. (2018a): they deal with the task in the full vocabulary setting (Adams et al., 2017) with word-level predictions, and follow a relatively comparable treatment of unseen words for both the n-gram and neural LM families (although not identical) at test time, without enforcing any threshold over the training data.", "The benchmarked neural LMs include three models with word-level predictions: a standard LSTM (Zaremba et al., 2014), a CharCNN-LSTM (denoted CNN) (Kim et al., 2016) which incorporates character-level information in the input, and the Attract-Preserve model (denoted AP) (Gerz et al., 2018a) which injects further subword-level information.", "All benchmarked n-gram LMs are 5-grams, with the exception of BKN, which is an ∞-gram model trained via 5 samples, following the recipe of Shareghi et al. (2017).", "(Marginal improvements were achieved with more sampling.)", "GKN results are based on φ = 5, tuned on a development set.", "Results and Discussion.", "The main results on the 50-languages benchmark are summarized in Table 2.", "The results for the more recent n-gram LMs indicate that these n-gram models are highly competitive with neural LMs in this challenging resource-lean setup.", "(The size of each dataset is 40K sentences, which is at the level of the standard Penn Treebank dataset often used for LM evaluation in English (Marcus et al., 1993).)", "For instance, for 26/50 languages all n-gram models outperform a regular LSTM, while GKN and BKN extend the lead to 42/50 
neural LM training for future work.", "In addition, motivated by these preliminary insights, we advocate investing further eorts in future work into coupling the ideas behind -gram and neural LMs towards improved language modeling.", "We provided an overview of previous work and very recent progress in -gram LMs.", "The recent developments, when tested on a challenging set of 50 languages, demonstrated superior or highly competitive performance compared with neural LMs, while being substantially cheaper to train.", "We also shed light on a common issue in the experimental setups, concerning OOV or rare words handling, when comparing -grams and neural LMs.", "While being non-trivial, investigating any correlation between cheap-to-compute heuristics (e.g., basic data statistics) and the choice of the most suitable model for a given dataset is worth exploring.", "Also, motivated by our findings, we will work on utilizing continuous space representations as side information in sampling the parameters of BKN , i.e. similar to Zhao et al. (2018), which potentially can reduce the gap between BKN and neural models.", "This work is supported by the ERC Consolidator Grant LEXICAL (648909).", "The authors would like to thank the anonymous reviewers for their helpful suggestions.", "The first author would like to also thank the members of the Language Technology Lab for their comments on the presentation of this work." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "result", "result", "result", "abstain", "abstain", "result", "method", "other", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "other", "other" ]
[ "In this paper, we deploy binary stochastic neural autoencoder networks as models of infant language learning in two typologically unrelated languages (Xitsonga and English).", "We show that the drive to model auditory percepts leads to latent clusters that partially align with theory-driven phonemic categories.", "We further evaluate the degree to which theory-driven phonological features are encoded in the latent bit patterns, finding that some (e.g. [ approximant]), are well represented by the network in both languages, while others (e.g. [ spread glottis]) are less so.", "Together, these findings suggest that many reliable cues to phonemic structure are immediately available to infants from bottom-up perceptual characteristics alone, but that these cues must eventually be supplemented by top-down lexical and phonotactic information to achieve adult-like phone discrimination.", "Our results also suggest differences in degree of perceptual availability between features, yielding testable predictions as to which features might depend more or less heavily on top-down cues during child language acquisition.", "Distinctive features like [ voice] and [ sonorant] have been a core construct of phonological theory for many decades (Trubetskoy, 1939; Jakobson et al., 1951; Chomsky and Halle, 1968; Clements, 1985).", "They have been used in automatic speech recognition (Livescu and Glass, 2004), and psycholinguistic evidence suggests that they are cognitively available during language acquisition (Kuhl, 1980; White and Morgan, 2008).", "Nonetheless, distinctive features are not directly observed by humans; they are abstractions that must be inferred from dense perceptual information (sound waves) during language acquisition and comprehension, which raises questions about how they are learned and recognized.", "In adults, phonological comprehension is aided by top-down lexical and phonotactic (i.e. sound sequencing) constraints.", "For example, the classic phonemic restoration effect (Warren, 1970) shows that adults infer missing phonemes from context with such ease that they often fail to notice when acoustic cues to phone identity are erased.", "However, infants first learning their phonemic categories have not yet acquired reliable top-down lexical and phonotactic models and must rely more heavily on bottom-up perceptual information.", "To a learner faced with the immense challenge of discovering structure in dense perceptual input, do theory-driven phonological features stand out or are they swamped by noise?", "In this paper, we address this question using an unsupervised computational acquisition model.", "Previous models of phonological category induction have emphasized the importance of top-down information (information about the contexts in which phonemes occur) (Peperkamp et al., 2006; Swingley, 2009; Feldman et al., 2009a, 2013a,b; Moreton and Pater, 2012a,b; Martin et al., 2013; Pater and Moreton, 2014; Frank et al., 2014; Doyle et al., 2014; Doyle and Levy, 2016).", "But to prevent the acquisition process from being circular, the learner cannot operate solely on top-down information the acoustic signal must provide some evidence for the phonemic categories.", "We hypothesize that the same must be true for at least some phonological features (e.g. 
[±nasal], [±lateral]), but previous work on unsupervised speech processing has inferred phonological structure from spoken utterances using either (1) discrete transition-based architectures (Varadarajan et al., 2008; Jansen and Church, 2011; Lee and Glass, 2012), which do not attempt to discover featurally-related natural classes, or (2) continuous deep neural architectures (Kamper et al., 2015, 2017a; Renshaw et al., 2015), whose internal representations are difficult to interpret.", "Furthermore, these approaches do not separate the contributions of top-down sequential information from bottom-up acoustic properties of segments, making it difficult to assess the relative importance of these information sources throughout the acquisition process.", "By contrast, our model attends exclusively to phone-internal acoustic patterns using a deep neural autoencoder with a discrete embedding space composed of binary stochastic neurons (BSNs) (Rosenblatt, 1958; Hinton, 2012; Bengio et al., 2013; Courbariaux et al., 2016).", "BSNs allow us to exploit (1) the interpretability of discrete representations, (2) the decomposability of phone segments into phonological features, and (3) the power of deep neural function approximators to relate percepts and their representations.", "Since every token is labeled with a binary latent code, it is possible to evaluate the model's recovery not only of phonological categories but also of phonological features.", "Featural representations can encode distributional facts about which processes apply to which classes of sounds in ways that cross-cut the phonological space, rather than simply grouping each segment with a set of similar neighbors (LeCun et al., 2015).", "By focusing on the acoustic properties of sounds themselves rather than their sequencing in context, our model enables exploration of two questions about the data available to young learners whose training signal must primarily be extracted from bottom-up perceptual information: (1) to what extent can phoneme categories emerge from a drive to model auditory percepts, and (2) how perceptually available are theory-driven phonological features (that is, how easily can they be extracted directly from low-level acoustic percepts)?", "Our results show", "(a) that phonemic categories emerge naturally but imperfectly from perceptual reconstruction and", "(b) that theory-driven features differ in their degree of perceptual availability.", "Together, these findings suggest that many reliable cues to phonemic structure are immediately available to infants from bottom-up perceptual characteristics alone, but that these cues may eventually need to be supplemented by top-down lexical and phonotactic information to achieve adult-like phone discrimination (Feldman et al., 2013a; Pater and Moreton, 2014).", "Our findings also suggest hypotheses as to precisely which kinds of phonological features are more or less perceptually available and therefore might depend more or less heavily on top-down cues for acquisition.", "Such differences might suggest relative timelines at which different features might be appropriated in support of phonemic, phonotactic, and lexical generalization, providing a rich set of testable hypotheses about child language acquisition.", "The present paper has a strong connection to recent work on unsupervised speech processing, especially the Zerospeech 2015 (Versteegh et al., 2015) and 2017 (Dunbar et al., 2017) shared tasks.", 
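Before turning to these systems, the discrete embedding layer described above can be made concrete. The sketch below uses the straight-through estimator of Bengio et al. (2013), one standard way to train binary stochastic neurons; it is an assumption, not necessarily the authors' exact implementation:

```python
import torch

class BinaryStochasticNeuron(torch.nn.Module):
    """Binary stochastic embedding layer: sample hard bits on the forward
    pass, let gradients flow through the sigmoid on the backward pass."""
    def forward(self, logits):
        p = torch.sigmoid(logits)
        bits = torch.bernoulli(p)  # stochastic {0, 1} latent code
        # Straight-through estimator: identity gradient w.r.t. p.
        return bits.detach() + p - p.detach()
```

Placed between an autoencoder's encoder and decoder, such a layer yields the binary latent codes that are evaluated against phonemic categories and phonological features in this paper.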
Agenbag and Niesler, 2015; Chen et al., 2015; Baljekar et al., 2015; Rasanen et al., 2015; Lyzinski et al., 2015; Zeghidour et al., 2016; Heck et al., 2016; Srivastava and Shrivastava, 2016; Kamper et al., 2017b; Chen et al., 2017; Yuan et al., 2017; Heck et al., 2017; Shibata et al., 2017; Ansari et al., 2017a,b) perform unsupervised ABX discrimination and/or spoken term discovery on the basis of unlabeled speech alone.", "The design and evaluation of these and related systems (Kamper et al., 2015, 2017a; Elsner and Shain, 2017; Rasanen et al., 2018) are oriented toward word-level modeling.", "As such, our focus on the perceptual availability of phonological features is orthogonal to, but complementary with, this line of research.", "Since distinctive features are important for indexing lexical contrasts, especially between highly confusable words (e.g. onset voicing alone distinguishes sap and zap in English), studying the perceptual availability of distinctive features to an unsupervised learner may help improve the design and analysis of low-resource speech processing systems.", "To our knowledge, the task most closely related to the current paper is unsupervised phone discovery.", "Some studies in this tradition segment speech into phone-like units without clustering them (Dusan and Rabiner, 2006; Qiao et al., 2008), while others cluster small subsets of pre-segmented sounds (usually vowels) using parametric models (mixture-of-Gaussians) (Vallabha et al., 2007; Feldman et al., 2013a; Antetomaso et al., 2017).", "Further work combines these tasks and extends the approach to cover the entire acoustic space (Lee and Glass, 2012).", "However, for a variety of reasons, the Lee and Glass (2012) model does not straightforwardly support evaluation of the perceptual availability of phonological features.", "First, they do not quantitatively evaluate the discovered phoneme clusters.", "Second, the model incorporates phonotactics through transition probabilities, making it difficult to disentangle the contributions of top-down and bottom-up information to the learning process.", "Third, the clustering model is not feature-based, but instead consists of atomic categories, each defining a distinct generative process for acoustics.", "This design is at odds with the widely held view in linguistic theory that phonemes are not inscrutable atoms of the phonological grammar, but instead labels for bundles of features that define natural classes (Clements, 1985).", "Our approach is therefore more appropriate to the question at hand.", "There is a great deal of evidence that many phonological contrasts are perceptually available from a very early stage (Eimas et al., 1971; Moffitt, 1971; Trehub, 1973; Jusczyk and Derrah, 1987; Eimas et al., 1987).", "However, studies of infant phone discrimination typically use carefully-enunciated laboratory stimuli, which have been shown to be substantially easier to discriminate than phones in naturalistic utterances (Feldman et al., 2013a; Antetomaso et al., 2017).", "It is thus likely that inferring phone categories from acoustic evidence is a persistently challenging task, and studies have found language-specific tuning of the speech perception system from fetal stages (Moon et al., 2013) through the first year (Kuhl et al., 1992; Werker and Tees, 1984) and even all the way into the preteen years (Hazan and Barrett, 2000).", "Experiments show that these contrasts are expressed, not simply as oppositions between particular categories, but as a featural system, even 
in early infancy.", "Evidence of featural effects has been found in the phone discrimination patterns of both adults (Chladkova et al., 2015) and infants (Kuhl, 1980; Hillenbrand, 1985; White and Morgan, 2008).", "Studies have also shown that infants generalize new distinctions along featural dimensions (Maye et al., 2008b; Cristia et al., 2011).", "Given infants' early detection and use of some featural contrasts, we hypothesize that there is strong evidence in the acoustic signal for these distinctions, which may then bootstrap the acquisition of phonotactic and lexical patterns (Beckman and Edwards, 2000).", "Experiments also suggest asymmetries in the perceptual availability of features.", "For example, a consonant-vowel distinction appears to be an important early foothold in phonology acquisition: vowel/consonant discrimination emerges early in infant speech processing (Dehaene-Lambertz and Dehaene, 1994), language-specificity in perception follows different timecourses for consonants (Werker and Tees, 1984) and vowels (Kuhl et al., 1992), and vowels and consonants play distinct roles in lexical access vs. rule discovery in children (Nazzi, 2005; Pons and Toro, 2010; Hochmann et al., 2011).", "Young infants have also been shown to be sensitive to voicing contrasts (Lasky et al., 1975; Aslin et al., 1981; Maye et al., 2008b).", "Features that distinguish consonant-like from vowel-like segments or voiced from unvoiced segments may thus be highly available to young learners.", "Infants struggle by comparison with other kinds of phone discrimination tasks, including certain stop-fricative contrasts (Polka et al., 2001) and certain place distinctions within nasal (Narayan et al., 2010) and sibilant (Nittrouer, 2001; Cristia et al., 2011) segments.", "Even adults struggle with fricative place discrimination from strictly acoustic cues (McGuire and Babel, 2012).", "Similar asymmetries emerge from our unsupervised learner, as shown in Section 4.2.", "Our computational acquisition model complements this experimental research in several ways.", "First, its internal representations, unlike those of human infants, are open to detailed analysis, even when exposed to naturalistic language stimuli.", "Second, we can perform cross-linguistic comparisons using readily available corpora without requiring access to a pool of human subjects in each language community.", "Third, our model provides global and graded quantification of the perceptual availability of distinctive features in natural speech, permitting us to explore relationships between features in a way that is difficult to do through experiments on infants, which are generally constrained to same-different contrasts over a small set of manipulations.", "The reconstruction objective used here is not merely a convenient supervision signal.", "There is reason to believe that people actively model their perceptual worlds (Mamassian et al., 2002; Feldman, 2012; Singer et al., 2018; Yan et al., 2018), and autoassociative structures have been found in several brain areas (Treves and Rolls, 1991; Rolls and Treves, 1998).", "There is also evidence that phonetic comprehension and production can be acquired symbiotically through a sensorimotor loop relating acoustic perception and articulator movements (Houde and Jordan, 1998; Fadiga et al., 2002; Watkins et al., 2003; Wilson et al., 2004; Pulvermuller et al., 2006; Kroger et al., 2009; Bolhuis et al., 2010; Kroger and Cao, 2015; Bekolay, 2016).", "Finally, evidence suggests that working memory 
limitations impose compression pressures on the perceptual system that favor sparse representations of dense acoustic percepts (Baddeley and Hitch, 1974) and may guide infant language acquisition (Baddeley et al., 1998; Elsner and Shain, 2017).", "It is thus reasonable to suppose that perceptual reconstruction, such as that implemented by an autoencoder architecture, is immediately available as a learning signal to infants who still lack reliable guidance from phonotactics or the lexicon.", "Our use of BSNs follows the spirit of the earliest work on artificial neural networks (Rosenblatt, 1958).", "Rosenblatt's perceptron was designed to study learning and decision-making in the brain and therefore used binary neurons to model the discrete firing behavior of their biological counterparts.", "This tradition has been replaced in deep learning research with differentiable activation functions that support supervised learning through backpropagation of error but are less biologically plausible.", "Our work takes advantage of the development of effective estimators for the gradients of discrete neurons (Williams, 1992; Hinton, 2012; Bengio et al., 2013; Courbariaux et al., 2016; Chung et al., 2017) to wed these two traditions, exploiting BSNs to encode the learner's latent representation of auditory percepts and deep networks to map between percepts and their latent representations.", "In addition to the greater similarity of BSNs to biological neurons, the use of discrete featural representations is motivated by experimental evidence that human phone perception (including that of infants) is both featural (White and Morgan, 2008; Chladkova et al., 2015) and categorical (Liberman et al., 1961; Eimas et al., 1987; Harnad, 2003; Feldman et al., 2009b).", "Experiments reported here use an 8-bit binary segment encoding.", "Eight bits is the lower bound on binary encodings that are sufficiently expressive to capture all segmental contrasts in any known language (Mielke, 2009).", "Although theory-driven taxonomies generally contain more than eight distinctive features, these taxonomies are known to be highly redundant (Cherry et al., 1953).", "For example, the phonological featurization of the Xitsonga segments analyzed in our experiments contains 26 theory-driven features (Hayes, 2011; Hall et al., 2016), yielding up to 2^26 = 67,108,864 distinct segment categories, far more than the number of known segment types in Xitsonga or even the number of training instances in our data.", "By entailment, any representation that can identify all segment types in a language can also identify all featural contrasts that discriminate those types, regardless of how the feature space is factored.", "For this reason, we consider a phonological feature to be represented if it can be detected by an arbitrary function of the latent bits (Section 4.2), without assuming that the true and discovered feature spaces will factor identically.", "Our study shares an interest in phonological features with previous work in automatic speech recognition attempting to discover mappings between acoustics and hand-labeled featural representations (Liu, 1996; Bitar and Espy-Wilson, 1996; Frankel and King, 2001; Kirchhoff et al., 2002; Livescu and Glass, 2004; Mitra et al., 2011, inter alia).", "While these results provide evidence that such a mapping is indeed learnable in an oracle setting, they rely on a supervision signal (direct annotation of the target representations) to which children do not have access.", "Our unsupervised approach measures perceptual availability of features in a more realistic learning scenario."
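To make the capacity argument above concrete, a toy check in Python (the code assignment and feature values here are hypothetical illustrations, not the model's learned codes):

    # 8 latent bits give 256 codes; 26 redundant binary features define
    # 2^26 nominal combinations, far more than attested segment types.
    print(2 ** 8, 2 ** 26)  # 256 67108864

    # Toy code assignment and toy [voice] values for three segments.
    code_of = {"p": 0b00000000, "b": 0b00000001, "m": 0b00000011}
    voiced = {"p": False, "b": True, "m": True}
    segment_of = {code: seg for seg, code in code_of.items()}

    def detects_voice(code: int) -> bool:
        # An arbitrary function of the latent bits: a lookup built from the
        # code assignment; no single bit needs to encode [voice] directly.
        return voiced[segment_of[code]]

    assert detects_voice(code_of["b"]) and not detects_voice(code_of["p"])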
"The simulated learner used in this study is a deep neural autoencoder with an 8-bit layer of BSNs as its principal information bottleneck, depicted in Figure 1.", "[Figure 1: the autoencoder architecture; encoder layers E1, E2, ... are applied to the acoustic input.]", "The model processes a given phone segment by encoding the segment's acoustic information into a bit pattern and then reconstructing the acoustic information from the encoded bit pattern.", "It is thus incentivized to use the latent bits in a systematic featural manner, encoding similar segments in similar ways.", "The encoder and decoder are both deep feed-forward residual networks (He et al., 2016).", "(Feedforward networks are used both for computational reasons and because they dramatically outperformed recurrent networks in initial experiments, especially when RNNs were used for decoding.", "We hypothesize that this is due to the lack of direct access to the encoder timesteps, such as that permitted by sequence-to-sequence models with attention (Bahdanau et al., 2015).", "Attention is not viable for our goals because it defeats the purpose of an autoencoder by allowing the decoder to bypass the encoder's latent representation.)", "To enable feedforward autoencoding of sequential data, phone segments are clipped at 50 timesteps (500ms), providing complete coverage of over 99% of the phone segments in each corpus.", "Given F-dimensional input acoustic frames and a maximum input length of M timesteps, the weight matrix of each encoder layer is in R^(FM×FM), except the final layer (R^(FM×8)).", "Given R-dimensional reconstructed acoustic frames and a maximum output length of N timesteps, the weight matrix of each decoder layer is in R^(RN×RN), except the first layer (R^(8×RN)).", "Both the encoder and decoder contain initial and final dense transformation layers, with three residual layers in between.", "Each residual layer contains two dense layers.", "All internal layers use tanh activations and are batch-normalized with a decay rate of 0.9 (Ioffe and Szegedy, 2015).", "Given that the capacity for speaker adaptation (short-term accommodation of idiosyncrasies in individuals' productions) has been shown for both adults (Clarke and Garrett, 2004; Maye et al., 2008a) and children (Kuhl, 1979; van Heugten and Johnson, 2014), we equip the models with a 16-dimensional speaker embedding, which is concatenated both to the acoustic input frames and to the latent bit vector.", "Each BSN of the latent encoding is associated with a firing probability in [0, 1] parameterized by the encoder network.", "The neural activation can be discretized either deterministically or by sampling.", "The use of BSNs to encode segments is a problem for gradient-based optimization since it introduces a non-differentiable discrete decision into the network's latent structure.", "We overcome this problem by approximating missing gradients using the straight-through estimator (Hinton, 2012; Bengio et al., 2013; Courbariaux et al., 2016) with slope annealing (Chung et al., 2017).", "Slope annealing multiplies the pre-activations a by a monotonically increasing function of the training iteration t, incrementally decreasing the bias of the straight-through estimator.", "We use the following annealing function: a ← a (1 + 0.1 t)."
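A minimal sketch of a binary stochastic layer with straight-through gradients and the slope-annealing rule above; this is an illustration in PyTorch, not the authors' TensorFlow implementation:

    import torch

    def binary_stochastic(pre_activations: torch.Tensor,
                          step: int, sample: bool = True) -> torch.Tensor:
        # Slope annealing: scale pre-activations by (1 + 0.1 * t).
        a = pre_activations * (1.0 + 0.1 * step)
        p = torch.sigmoid(a)  # firing probability in [0, 1]
        if sample:
            b = torch.bernoulli(p)   # Bernoulli sampling (training)
        else:
            b = (p > 0.5).float()    # thresholding at 0.5 (evaluation)
        # Straight-through estimator: the forward pass uses the binary
        # values b; the backward pass treats discretization as identity.
        return b.detach() + p - p.detach()

    # Example: 8-bit latent codes for a batch of two segments.
    bits = binary_stochastic(torch.randn(2, 8), step=0)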
"We discretize the latent dimensions using Bernoulli sampling during training and thresholding at 0.5 during evaluation.", "The models are implemented in TensorFlow (Abadi et al., 2015) and optimized using Adam (Kingma and Ba, 2014) for 150 training epochs with a constant learning rate of 0.001.", "The source code is available at https://github.com/coryshain/dnnseg .", "We apply our model to the Xitsonga and English speech data from the Zerospeech 2015 shared task.", "The Xitsonga data are drawn from the NCHLT corpus (De Vries et al., 2014) and contain 2h29m07s of read speech from 24 speakers.", "The English data are drawn from the Buckeye Corpus (Pitt et al., 2005) and contain 4h59m05s of conversational speech from 12 speakers.", "While neither of these corpora represents child-directed speech, they both consist of fluently produced word tokens in context, rather than isolated productions as in many previous laboratory studies with infants (Eimas et al., 1971; Werker and Tees, 1984; Kuhl et al., 1992, inter alia).", "We pre-segment the audio files using time-aligned phone transcriptions provided in the challenge repository.", "Table 1: Phone clustering scores. Model: Xitsonga (H/C/V), then English (H/C/V). Baseline: 0.023/0.013/0.016, 0.006/0.004/0.005. Sigmoid: 0.281/0.191/0.227, 0.246/0.166/0.198. Sigmoid+Speaker: 0.302/0.185/0.230, 0.205/0.180/0.192. BSN: 0.360/0.206/0.262, 0.240/0.161/0.193. Our model (BSN+Speaker): 0.462/0.268/0.339, 0.270/0.180/0.216.", "The gold segment labels are used in clustering evaluation metrics, but the unsupervised learner never has access to them.", "Data selection criteria and annotation procedures are described in more detail in Versteegh et al. (2015).", "Prior to fitting, we apply a standard spectral preprocessing pipeline from automatic speech recognition: raw acoustic signals are converted into 13-dimensional vectors of Mel frequency cepstral coefficients (MFCCs) (Mermelstein, 1976) with first and second order deltas, yielding 39-dimensional frames sequenced in time.", "Each frame covers 25ms of speech, and frames are spaced 10ms apart.", "The deltas are used by the encoder but stripped from the reconstruction targets.", "Following preceding work showing improved unsupervised clustering when segments are given fixed-dimensional acoustic representations, thus abstracting away from the variable temporal dilation in natural speech (Kamper et al., 2017a,b), we resample all reconstruction targets to a length of 25 frames.", "This pipeline instantiates some standard assumptions about the perceptual representations underlying human speech processing.", "Other representations, for instance articulatory representations (Liu, 1996; Frankel and King, 2001; Kirchhoff et al., 2002; Livescu and Glass, 2004) or other spectral transforms (Zwicker, 1961; Makhoul, 1975; Hermansky, 1990; Hermansky et al., 1991; Coifman and Wickerhauser, 1992; Shao et al., 2009), have been proposed as alternatives to MFCCs.", "Our results concerning perceptual availability are of course tied to our input representation, since phenomena that are poorly distinguished by MFCCs have less effect on our autoencoder loss function.", "Nonetheless, MFCCs are known to produce high-quality supervised speech recognizers (Zheng et al., 2001; Hinton et al., 2012), and we therefore leave optimization of the representation of speech features to future work."
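A sketch of this front end; the paper does not name its MFCC toolkit, so librosa is used here for illustration, and the 16 kHz sampling rate is an assumption:

    import librosa
    import numpy as np

    def mfcc_frames(wav_path: str) -> np.ndarray:
        # 25ms windows with a 10ms hop, 13 MFCCs plus first- and
        # second-order deltas -> 39-dimensional frames (time-major).
        y, sr = librosa.load(wav_path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                    n_fft=int(0.025 * sr),
                                    hop_length=int(0.010 * sr))
        d1 = librosa.feature.delta(mfcc, order=1)
        d2 = librosa.feature.delta(mfcc, order=2)
        return np.concatenate([mfcc, d1, d2], axis=0).T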
"The first research question posed in the introduction was to what extent theory-driven phoneme categories emerge from a drive to model auditory percepts.", "We explore this question by evaluating the degree of correspondence between the autoencoder hidden states and the gold phone labels.", "Table 1 reports learning outcomes using the information-theoretic measures homogeneity (H), completeness (C), and V-measure (V) for unsupervised cluster evaluation (Rosenberg and Hirschberg, 2007).", "All three metrics range over the interval [0, 1], with 1 indexing perfect performance.", "As shown in the table, our model yields dramatically better clustering performance than a random baseline that uniformly draws cluster IDs from a pool of 256 categories: we obtain 2118% and 4500% relative V-measure improvements in Xitsonga and English, respectively.", "At the same time, clustering performance is far from perfect.", "This result indicates that perceptual modeling, an immediately available learning signal in infant language acquisition, both (1) drives the learner a long way toward phoneme acquisition, and (2) is insufficient to fully identify phone categories in our learners.", "One likely explanation for the latter is evidence from cognitive science that phonotactic and lexical information (to which our learners do not have access) supplements perception as the acquisition process unfolds (Feldman et al., 2013a; Pater and Moreton, 2014).", "The middle rows of Table 1 show ablation results from using non-discrete sigmoid neurons rather than BSNs in the encoding layer (Sigmoid vs. BSN) and/or removing the speaker adaptation feature (i.e. removing speaker embeddings).", "(To obtain class labels from the sigmoid encoder, we rounded the activations.", "Rounding was only used for evaluation and had no impact on the fitting procedure.)", "As shown, the classification performance of our model benefits substantially from the use of BSN encodings with speaker adaptation, especially on Xitsonga.", "Note that the reconstruction losses of the sigmoid encoders are better than those of the BSN encoders despite their degraded classification performance.", "This is to be expected: sigmoid neurons have greater representational capacity than binary neurons, since they can encode information through continuous gradations.", "They are therefore more capable of memorizing idiosyncratic properties of the input and are less incentivized to discover generalizable latent classes.", "The ablation results thus suggest that speaker adaptation and categorical perception support the discovery of linguistically relevant abstractions."
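The H, C, and V scores reported in Table 1 can be computed with scikit-learn's standard implementations; a sketch with placeholder arrays (the labels and bit patterns below are hypothetical):

    import numpy as np
    from sklearn.metrics import homogeneity_completeness_v_measure

    # Placeholder gold phone labels and 8-bit latent codes per token.
    gold_phones = np.array([3, 3, 7, 7, 12])
    latent_bits = np.array([[0, 1, 0, 0, 1, 1, 0, 0],
                            [0, 1, 0, 0, 1, 1, 0, 0],
                            [1, 0, 0, 1, 0, 0, 0, 1],
                            [1, 0, 0, 1, 0, 0, 0, 1],
                            [0, 0, 1, 1, 1, 0, 1, 0]])
    # Each distinct 8-bit code acts as one of up to 256 cluster IDs.
    cluster_ids = latent_bits.dot(1 << np.arange(8))
    h, c, v = homogeneity_completeness_v_measure(gold_phones, cluster_ids)
    print(f"H={h:.3f} C={c:.3f} V={v:.3f}")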
"The second research question posed in the introduction was to what extent distinctive features differ in perceptual availability.", "We explore this question in two ways.", "First, we qualitatively assess the linguistic plausibility of the natural clustering in the latent bits.", "Figure 2 visualizes this clustering based on correlations between the average of the bit patterns across all instances of each gold phone type for both datasets.", "If the unsupervised classifier ignored phonological structure altogether, the plots would be roughly uniform in color, and if the unsupervised classifier perfectly identified phonemes, the plots would consist entirely of fully light or fully dark cells, with unique bit patterns associated with each phone type.", "As shown, the reality falls in between: while the visualized classifications are far from perfect, they nonetheless contain a great deal of structure and suggest the presence of rough natural classes in both languages, especially of affricates, nasals, sibilants, and approximants.", "Our learners also replicate infants' difficulty in discriminating some nasal and fricative place features (Polka et al., 2001; Nittrouer, 2001; Narayan et al., 2010), assigning highly similar representations to many subtypes of nasals and fricatives across places of articulation (see e.g. the similar mean bit patterns assigned to /n/ and /s/ and to their place-contrasting counterparts in both languages).", "Second, we quantitatively evaluate the degree to which theory-driven features like [±voice] are recoverable from the network's latent representations.", "To do so, we map gold phone labels into binary distinctive feature clusters from Hayes (2011) using Phonological CorpusTools (Hall et al., 2016).", "One possible form of analysis would be to search for individual correspondences between distinctive features and the model's latent dimensions.", "However, this is likely to underestimate the degree of feature learning because the deep decoder can learn arbitrary logics on the latent bit patterns, a necessary property for fitting complex non-linear mappings from latent features to acoustics.", "We instead evaluate distinctive feature discovery by fitting random forest classifiers that predict theory-driven features using the latent bit patterns as inputs.", "We can then use classifier performance to assess the degree to which a given distinctive feature can be recovered by a logical statement on the network's latent bits.", "The classifiers were fitted using 5-fold cross-validation in scikit-learn (Pedregosa et al., 2011) with 100 estimators, balanced class weighting, and an entropy-based split criterion.", "Results are given in Tables 2 and 3.", "Table 2: Perceptual availability by feature in Xitsonga (P/R/F): voice 0.9767/0.9033/0.9386; sonorant 0.9249/0.9085/0.9166; continuant 0.9492/0.7936/0.8645; consonantal 0.8314/0.8915/0.8604; approximant 0.8998/0.8192/0.8576; syllabic 0.8278/0.8523/0.8398; dorsal 0.8935/0.7703/0.8273; strident 0.6991/0.9594/0.8089; low 0.7175/0.8978/0.7976; front 0.6590/0.8101/0.7268; high 0.5875/0.7882/0.6732; back 0.5352/0.8527/0.6577; round 0.5332/0.8551/0.6568; labial 0.5669/0.7725/0.6539; coronal 0.5382/0.8301/0.6530; tense 0.5208/0.8115/0.6344; delayed release 0.5468/0.7226/0.6225; anterior 0.4078/0.8355/0.5481; nasal 0.3635/0.8796/0.5144; distributed 0.2459/0.8537/0.3819; constricted glottis 0.1762/0.9007/0.2948; lateral 0.1536/0.8062/0.2581; labiodental 0.0934/0.7980/0.1672; trill 0.0809/0.7401/0.1458; spread glottis 0.0671/0.5856/0.1204; implosive 0.0041/0.4041/0.0081.", "Table 3: Perceptual availability by feature in English (P/R/F): voice 0.9244/0.8567/0.8893; sonorant 0.8544/0.8862/0.8700; approximant 0.8005/0.8370/0.8183; continuant 0.8577/0.7669/0.8098; consonantal 0.8249/0.7357/0.7777; syllabic 0.6624/0.8426/0.7417; dorsal 0.7046/0.7114/0.7080; strident 0.5505/0.9027/0.6839; coronal 0.5758/0.7066/0.6345; anterior 0.5251/0.7280/0.6101; delayed release 0.4413/0.7374/0.5521; front 0.4322/0.7407/0.5459; high 0.3841/0.6931/0.4943; tense 0.3275/0.7101/0.4483; back 0.3128/0.7504/0.4416; nasal 0.2796/0.7544/0.4080; labial 0.2541/0.7077/0.3739; low 0.2410/0.7787/0.3680; distributed 0.2203/0.6881/0.3337; diphthong 0.2039/0.8051/0.3254; round 0.1665/0.7012/0.2692; lateral 0.1484/0.8333/0.2519; labiodental 0.0787/0.6756/0.1410; spread glottis 0.0377/0.6683/0.0714.", "As shown, (1) there are large differences in perceptual availability between features, and (2) the relative availability of features is remarkably consistent between these unrelated languages, suggesting that the models are tapping into generalized perceptual patterns."
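A sketch of the probing setup behind Tables 2 and 3 in scikit-learn; the arrays are random placeholders standing in for the latent bit patterns and one feature's gold values:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import precision_recall_fscore_support

    X = np.random.randint(0, 2, size=(1000, 8))  # latent bit patterns
    y = np.random.randint(0, 2, size=1000)       # e.g. gold [voice] values

    clf = RandomForestClassifier(n_estimators=100,
                                 class_weight="balanced",
                                 criterion="entropy")
    y_hat = cross_val_predict(clf, X, y, cv=5)   # 5-fold cross-validation
    p, r, f, _ = precision_recall_fscore_support(y, y_hat, average="binary")
    print(f"P={p:.4f} R={r:.4f} F={f:.4f}")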
"The best-learned feature in both languages is [±voice], which is consistent with early evidence of voicing sensitivity in infants (see Section 2.2).", "Below this, the features [±sonorant], [±continuant], [±consonantal], [±approximant], and [±syllabic] are faithfully recovered in both languages.", "All of these features distinguish prototypical consonants from prototypical vowels but differ in their treatment of edge cases like nasals, liquids, and glides.", "Thus, similarly to the infant subjects discussed in Section 2.2, the model finds the consonant-vowel contrast to be highly available.", "Like human infants, our computational learner finds certain consonantal place and manner features relatively more difficult, although the features [±dorsal], [±coronal], [±strident] and [±delayed release] are also fairly well recovered in both languages.", "By contrast, both models poorly capture features like [±lateral], [±labiodental], [±distributed], [±nasal], [±constricted glottis], [±spread glottis], and [±implosive], suggesting that these features are more difficult to discover bottom-up and may therefore be more dependent on phonotactic and lexical constraints for acquisition.", "(Delayed release: affricates; constricted glottis: ejectives; spread glottis: glottal frication, e.g. aspirated stops.)", "This finding aligns with the acquisition literature in suggesting that there may be substantial differences in perceptual availability between different place and manner features (see Section 2.2).", "In addition to these cross-linguistic similarities, the models also reveal important differences between Xitsonga and English.", "For example, the two languages differ in the relative availability of features that distinguish vowels vs. features that distinguish consonants.", "In English, vowel features like [±front], [±high], and [±back] are substantially less well learned than consonant features like [±coronal], [±anterior], and [±delayed release], while the opposite holds in Xitsonga.", "We hypothesize that this is due to the fact that there are more vowels and fewer consonants in English than in Xitsonga: having fewer distinctions might reduce the degree of crowding in the articulatory space, increasing perceptual contrast between phone types (Liljencrants and Lindblom, 1972).", "Note that we are not suggesting that e.g. [±spread glottis] cannot be detected in speech.", "Our claim is rather that acoustic cues to [±spread glottis] are less pronounced and/or less reliable than cues to e.g. 
[±voice], and therefore perhaps more difficult to exploit in early infancy, since our autoencoder model does not find them particularly useful for perceptual reconstruction.", "Finally, note that the cluster maps in Figure 2 and the feature recovery data in Tables 2 and 3 provide complementary perspectives on the learned representations.", "For example, it may at first seem surprising that the feature [±nasal] is recovered relatively poorly in both languages, given that nasals are well clustered in Figure 2.", "This discrepancy indicates that nasal segments are represented similarly to each other, but also similarly enough to other segments that they are not reliably differentiated as a class.", "Conversely, the voicing feature is well recovered in both languages despite the lack of a visible cluster of voiced segments.", "This indicates that voicing is reliably encoded in the latent bits, even if the representation as a whole is dominated by other kinds of information.", "In this paper, we used binary stochastic neural autoencoders to explore the perceptual availability of (1) theory-driven phonemic categories and (2) theory-driven phonological features, based only on the acoustic properties of segments.", "We found that phonemic categories exert substantial influence on a learner driven to model its auditory percepts, but that additional information, especially phonotactic and lexical (Feldman et al., 2013a), is likely necessary for full adult-like phone discrimination.", "We also found asymmetries in the perceptual availability of phonological features like [±voice] and [±nasal] and showed that these asymmetries reflect attested patterns of infant phone discrimination.", "Our model both replicates broad trends in the child acquisition literature (successful consonant-vowel and voicing discrimination, relatively less successful discrimination of various place and manner features) and sheds new light on potential relationships between auditory perception and language acquisition: the overall cline of perceptual availability revealed by the model in Tables 2 and 3 suggests a range of testable hypotheses about the role of perception in infant speech processing.", "The authors would like to thank the anonymous reviewers for their helpful comments.", "This work was supported by National Science Foundation grant #1422987 to ME.", "All views expressed are those of the authors and do not necessarily reflect the views of the National Science Foundation." ]
[ "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "objective", "other", "other", "other" ]
[ "Grammatical error correction (GEC) is one of the areas in natural language processing in which purely neural models have not yet superseded more traditional symbolic models.", "Hybrid systems combining phrase-based statistical machine translation (SMT) and neural sequence models are currently among the most effective approaches to GEC.", "However, both SMT and neural sequence-to-sequence models require large amounts of annotated data.", "Language model based GEC (LM-GEC) is a promising alternative which does not rely on annotated training data.", "We show how to improve LM-GEC by applying modelling techniques based on finite state transducers.", "We report further gains by rescoring with neural language models.", "We show that our methods developed for LM-GEC can also be used with SMT systems if annotated training data is available.", "Our best system outperforms the best published result on the CoNLL-2014 test set, and achieves far better relative improvements over the SMT baselines than previous hybrid systems.", "Grammatical error correction (GEC) is the task of automatically correcting all types of errors in text; e.g. [In a such situaction → In such a situation].", "Using neural models for GEC is becoming increasingly popular (Xie et al., 2016; Yuan and Briscoe, 2016; Ji et al., 2017; Sakaguchi et al., 2017; Schmaltz et al., 2017; Chollampatt and Ng, 2018; Ge et al., 2018a,b), possibly combined with phrase-based SMT (Chollampatt et al., 2016; Chollampatt and Ng, 2017; Grundkiewicz and Junczys-Dowmunt, 2018).", "A potential challenge for purely neural GEC models is their vast output space, since they assign non-zero probability mass to any sequence.", "GEC is, compared to machine translation, a highly constrained problem, as corrections tend to be very local and lexical choices are usually limited.", "Finite state transducers (FSTs) are an efficient way to represent large structured search spaces.", "In this paper, we propose to construct a hypothesis space using standard FST operations like composition, and then constrain the output of a neural GEC system to that space.", "We study two different scenarios: In the first scenario, we do not have access to annotated training data, and only use a small development set for tuning.", "In this scenario, we construct the hypothesis space using word-level context-independent confusion sets (Bryant and Briscoe, 2018) based on spell checkers and morphology databases, and rescore it with count-based and neural language models (NLMs).", "In the second scenario, we assume to have enough training data available to train SMT and neural machine translation (NMT) systems.", "In this case, we make additional use of the SMT lattice and rescore with an NLM-NMT ensemble.", "Our contributions are: We present an FST-based adaptation of the work of Bryant and Briscoe (2018) which allows exact inference, and does not require annotated training data.", "We report large gains from rescoring with a neural language model.", "Our technique beats the best published result with comparable amounts of training data on the CoNLL-2014 (Ng et al., 2014) test set when applied to SMT lattices.", "Our combination strategy yields larger gains over the SMT baselines than simpler rescoring or pipelining used in prior work on hybrid systems (Grundkiewicz and Junczys-Dowmunt, 2018).", "Constructing the set of hypotheses: The core idea of our approach is to first construct a (weighted) hypothesis space H which is large enough to be likely to contain good corrections, but 
constrained enough to embrace the highly structured nature of GEC.", "Then, we use H to constrain a neural beam decoder.", "We make extensive use of the FST operations available in OpenFST (Allauzen et al., 2007), like composition (denoted with the ∘-operator) and projection (denoted with input(·) and output(·)), to build H.", "The process starts with an input lattice I.", "In our experiments without annotated training data, I is an FST which simply maps the input sentence to itself, as shown in Fig. 1(a).", "If we do have access to enough annotated data, we train an SMT system on it and derive I from the SMT n-best list.", "For each hypothesis y we compute the Levenshtein distance lev(x, y) to the source sentence x.", "We construct a string z by prepending lev(x, y) many <mcorr> tokens to y, and construct I such that: z = (<mcorr>)^{lev(x,y)} y and [[I]](z) = λ_SMT · SMT(y|x). (1)", "We adapt the notation of Mohri (2003) and denote the cost I assigns to mapping a string z to itself as [[I]](z), and set [[I]](z) = ∞ if I does not accept z.", "SMT(y|x) is the SMT score.", "In other words, I represents the weighted SMT n-best list after adding lev(x, y) many <mcorr> tokens to each hypothesis, as illustrated in Fig. 1(c).", "We scale SMT scores by a factor λ_SMT for tuning.", "Bryant and Briscoe (2018) addressed substitution errors such as non-word, morphology, article, and preposition errors by creating confusion sets C(x_i) that contain possible (context-independent) 1:1 corrections for each input word x_i.", "Specifically, they relied on CyHunspell for spell checking (Rodriguez and Seal, 2014), the AGID morphology database for morphology errors (Atkinson, 2011), and manually defined confusion sets for determiner and preposition errors, hence avoiding the need for annotated training data.", "We use the same confusion sets as Bryant and Briscoe (2018) to augment our hypothesis space via the edit flower transducer E shown in Fig. 2.", "E can map any sequence to itself via its identity self-loops.", "Additionally, it allows the mapping x_i → <corr> y for each y ∈ C(x_i).", "For example, for the misspelled word x_i = 'situaction' and the confusion set C('situaction') = {'situation', 'acquisition'}, E allows mapping 'situaction' to '<corr> situation' and '<corr> acquisition', and to itself via the identity self-loop.", "The additional <corr> token will help us to keep track of the edits.", "We obtain our base lattice B, which defines the set of possible hypotheses, by composition and projection: B := output(I ∘ E).", "Scoring the hypothesis space: We apply multiple scoring strategies to the hypotheses in B.", "First, we penalize <mcorr> and <corr> tokens with two further parameters, λ_mcorr and λ_corr, by composing B with the penalization transducer P shown in Fig. 3.", "(Rather than using <mcorr> and <corr> tokens and the transducer P, we could directly incorporate the costs in the transducers I and E, respectively.", "We chose to use explicit correction tokens for clarity.)"
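To make the construction of B := output(I ∘ E) concrete, a toy sketch with pynini, a Python wrapper around OpenFst; the confusion set, penalty weight, and marker spelling are illustrative assumptions, the per-word edit options are concatenated directly rather than built as a full flower transducer, and pynini API details may differ slightly across versions:

    import pynini

    def word_edits(word, confusions, corr_cost=1.0):
        # Identity path at zero cost, plus one weighted path per
        # confusion-set entry, prefixed with an explicit <corr> marker.
        fst = pynini.accep(word)
        for alt in confusions:
            penalty = pynini.accep("", weight=corr_cost)
            fst |= pynini.cross(word, "<corr> " + alt) + penalty
        return fst

    confusion = {"a": [], "situaction": ["situation", "acquisition"]}
    lattice = pynini.accep("")
    for i, (word, alts) in enumerate(confusion.items()):
        if i:
            lattice += pynini.accep(" ")
        lattice += word_edits(word, alts)
    # Output projection and optimization give the hypothesis space, cf. B.
    lattice = lattice.project("output").optimize()
    paths = lattice.paths()
    while not paths.done():
        print(paths.ostring(), paths.weight())
        paths.next()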
"The λ_mcorr and λ_corr parameters control the trade-off between the number and quality of the proposed corrections, since high values bias towards fewer corrections.", "To incorporate word-level language model scores we train a 5-gram count-based LM with KenLM (Heafield, 2011) on the One Billion Word Benchmark dataset (Chelba et al., 2014), and convert it to an FST L using the OpenGrm NGram Library (Roark et al., 2012).", "For tuning purposes we scale the weights in L with λ_KenLM: [[L]](y) = λ_KenLM log P_KenLM(y).", "Our combined word-level scores can be expressed with the following transducer: H_word = B ∘ P ∘ L.", "Since we operate in the tropical semiring, path scores in H_word are linear combinations of correction penalties, LM scores, and, if applicable, SMT scores, weighted with the λ-parameters.", "Note that exact inference in H_word is possible using FST shortest path search.", "This is an improvement over the work of Bryant and Briscoe (2018), who selected correction options greedily.", "Our ultimate goal, however, is to rescore H_word with neural models such as an NLM and, if annotated training data is available, an NMT model.", "Since our neural models use subword units (Sennrich et al., 2016, BPEs), we compose H_word with a transducer T which maps word sequences to BPE sequences.", "Our final transducer H_BPE, which we use to constrain the neural beam decoder, can be written as: H_BPE = output(H_word ∘ T) = output(I ∘ E ∘ P ∘ L ∘ T).", "To help downstream beam decoding we apply ε-removal, determinization, minimization, and weight pushing (Mohri, 1997; Mohri and Riley, 2001) to H_BPE.", "We search for the best hypothesis y_BPE with beam search, using a combined score of word-level symbolic models (represented by H_BPE) and subword unit based neural models: y*_BPE = argmax_{y_BPE} ( [[H_BPE]](y_BPE) + λ_NLM log P_NLM(y_BPE) + λ_NMT log P_NMT(y_BPE | x_BPE) ). (7)", "The final decoding pass can be seen as an ensemble of a neural LM and an NMT model which is constrained and scored at each time step by the set of possible tokens in H_BPE.", "We have introduced three λ-parameters (λ_corr, λ_KenLM, and λ_NLM), and three additional parameters (λ_SMT, λ_mcorr, and λ_NMT) if we make use of annotated training data.", "We also use a word insertion penalty λ_wc for our SMT-based experiments.", "We tune all these parameters on the development sets using Powell search (Powell, 1964).", "(Similarly to Bryant and Briscoe (2018), even in our experiments without annotated training data, we do need a very small amount of annotated sentences for tuning.)", "3 Experiments.", "Experimental setup: In our experiments with annotated training data we use the SMT system of Junczys-Dowmunt and Grundkiewicz (2016) to create 1000-best lists from which we derive the input lattices I.", "All our LMs are trained on the One Billion Word Benchmark dataset (Chelba et al., 2014).", "Our neural LM is a Transformer decoder architecture in the transformer base configuration trained with Tensor2Tensor (Vaswani et al., 2018).", "Our NMT model is a Transformer model (transformer base) trained on the concatenation of the NUCLE corpus (Dahlmeier et al., 2013) and the Lang-8 Corpus of Learner English v1.0 (Mizumoto et al., 2012).", "We only keep sentences with at least one correction (659K sentences in total).", "Both NMT and NLM models use byte pair encoding (Sennrich et al., 2016, BPE) with 32K merge operations.", "We delay SGD updates by 2 on four physical GPUs, as suggested by Saunders et al. (2018)."
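A schematic of one step of the FST-constrained beam search implied by Eq. (7); `fst_arcs`, `nlm_score`, and `nmt_score` are hypothetical stand-ins for the H_BPE arc lookup (with its weight contribution assumed to be on the same scale as the log-probabilities) and the two neural models:

    def beam_step(beam, fst_arcs, nlm_score, nmt_score,
                  lam_nlm=1.0, lam_nmt=1.0, beam_size=12):
        # beam: list of (prefix, fst_state, score); hypotheses may only be
        # extended with BPE tokens that H_BPE allows in their current state.
        expanded = []
        for prefix, state, score in beam:
            for token, next_state, fst_weight in fst_arcs(state):
                expanded.append((prefix + [token], next_state,
                                 score + fst_weight
                                 + lam_nlm * nlm_score(prefix, token)
                                 + lam_nmt * nmt_score(prefix, token)))
        expanded.sort(key=lambda hyp: hyp[2], reverse=True)
        return expanded[:beam_size]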
"We decode with beam size 12 using the SGNMT decoder (Stahlberg et al., 2017).", "We evaluate on CoNLL-2014 (Ng et al., 2014) and JFLEG-Test (Napoles et al., 2017), using CoNLL-2013 (Ng et al., 2013) and JFLEG-Dev as development sets.", "Our evaluation metrics are GLEU (Napoles et al., 2015) and M2 (Dahlmeier and Ng, 2012).", "We generated M2 files using ERRANT (Bryant et al., 2017) for JFLEG and Tab. 1 to be comparable to Bryant and Briscoe (2018), but used the official M2 files in Tab. 2 to be comparable to Grundkiewicz and Junczys-Dowmunt (2018).", "Results: Our LM-based GEC results without using annotated training data are summarized in Tab. 1.", "Even when we use the same resources (same LM and same confusion sets) as Bryant and Briscoe (2018), we see gains on JFLEG (rows 1 vs. 2), probably because we avoid search errors in our FST-based scheme.", "Adding an NLM yields significant gains across the board.", "Tab. 2 shows that adding confusion sets to SMT lattices is effective even without neural models (rows 3 vs. 4).", "Rescoring with neural models also benefits from the confusion sets (rows 5 vs. 6).", "With our ensemble systems (rows 7 and 8) we are able to outperform prior work (row 1) on CoNLL-2014 and come within 3 GLEU on JFLEG.", "(We compare our systems to the work of Grundkiewicz and Junczys-Dowmunt (2018) as they used similar training data.", "We note, however, that Ge et al. (2018b) reported even better results with much more (non-public) training data.", "Comparing (Ge et al., 2018a) and (Ge et al., 2018b) suggests that most of their gains come from the larger training set.)",
noun number (53.43 vs 64.96), orthography (62.77 vs 74.07), spelling (67.91 vs 75.21) and subject-verb agreement (66.67 vs 68.39).", "This suggests that an untrained system is already able to capture the majority of these error types.", "Oracle experiments Our FST-based composition cascade is designed to enrich the search space to allow the neural models to find better hypotheses.", "Tab.", "5 reports the oracle sentence error rate for different configurations, i.e. the fraction of reference sentences in the test set which are not in the FSTs.", "Expanding the SMT lattice significantly reduces the oracle error rate from 55.63% to 48.17%.", "We demonstrated that our FST-based approach to GEC outperforms prior work on LM-based GEC significantly, especially when combined with a neural LM.", "We also applied our approach to SMT lattices and reported much better relative gains over the SMT baselines than previous work on hybrid systems.", "Our results suggest that FSTs provide a powerful and effective framework for constraining neural GEC systems.", "This paper reports on research supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC grant EP/L027623/1) and Cambridge Assessment, University of Cambridge." ]
[ "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "result", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "other" ]
[ "Suspense is a crucial ingredient of narrative fiction, engaging readers and making stories compelling.", "While there is a vast theoretical literature on suspense, it is computationally not well understood.", "We compare two ways for modelling suspense: surprise, a backward-looking measure of how unexpected the current state is given the story so far; and uncertainty reduction, a forward-looking measure of how unexpected the continuation of the story is.", "Both can be computed either directly over story representations or over their probability distributions.", "We propose a hierarchical language model that encodes stories and computes surprise and uncertainty reduction.", "Evaluating against short stories annotated with human suspense judgements, we find that uncertainty reduction over representations is the best predictor, resulting in near-human accuracy.", "We also show that uncertainty reduction can be used to predict suspenseful events in movie synopses.", "As current NLP research expands to include longer, fictional texts, it becomes increasingly important to understand narrative structure.", "Previous work has analyzed narratives at the level of characters and plot events (e.g., Gorinski and Lapata, 2018; Martin et al., 2018).", "However, systems that process or generate narrative texts also have to take into account what makes stories compelling and enjoyable.", "We follow a literary tradition that makes And then? (Forster, 1985; Rabkin, 1973) the primary question and regards suspense as a crucial factor of storytelling.", "Studies show that suspense is important for keeping readers' attention (Khrypko and Andreae, 2011), promotes readers' immersion and suspension of disbelief (Hsu et al., 2014), and plays a big part in making stories enjoyable and interesting (Oliver, 1993; Schraw et al., 2001).", "Computationally less well understood, suspense has only sporadically been used in story generation systems (O'Neill and Riedl, 2014; Cheong and Young, 2014).", "Suspense, intuitively, is a feeling of anticipation that something risky or dangerous will occur; this includes both the idea of uncertainty and that of jeopardy.", "Take the play Romeo and Juliet: Dramatic suspense is created throughout the initial duel, the meeting at the masquerade ball, the marriage, the fight in which Tybalt is killed, and the sleeping potions leading to the death of Romeo and Juliet.", "At each moment, the audience is invested in something being at stake and wonders how it will end.", "This paper aims to model suspense in computational terms, with the ultimate goal of making it deployable in NLP systems that analyze or generate narrative fiction.", "We start from the assumption that concepts developed in psycholinguistics to model human language processing at the word level (Hale, 2001, 2006) can be generalised to the story level to capture suspense (the Hale model).", "This assumption is supported by the fact that economists have used similar concepts to model suspense in games (Ely et al., 2015; Li et al., 2018) (the Ely model).", "Common to both approaches is the idea that suspense is a form of expectation: in games, we expect to win or lose, while in stories, we expect that the narrative will end a certain way.", "We will therefore compare two ways for modelling narrative suspense: surprise, a backward-looking measure of how unexpected the current state is given the story so far; and uncertainty reduction, a forward-looking measure of how unexpected the continuation of the story is.", "Both 
measures can be computed either directly over story representations, or indirectly over the probability distributions over such representations.", "We propose a hierarchical language model based on Generative Pre-Training (GPT, Radford et al., 2018) to encode story-level representations, and develop an inference scheme that uses these representations to compute both surprise and uncertainty reduction.", "For evaluation, we use the WritingPrompts corpus of short stories (Fan et al., 2018), part of which we annotate with human sentence-by-sentence judgements of suspense.", "We find that surprise over representations and over probability distributions both predict suspense judgements.", "However, uncertainty reduction over representations is better, resulting in near human-level accuracy.", "We also show that our models can be used to predict turning points, i.e., major narrative events, in movie synopses (Papalampidi et al., 2019).", "In narratology, uncertainty over outcomes is traditionally seen as suspenseful (e.g., O'Neill, 2013; Zillmann, 1996; Abbott, 2008).", "Other authors claim that suspense can exist without uncertainty (e.g., Smuts, 2008; Hoeken and van Vliet, 2000; Gerrig, 1989) and that readers feel suspense even when they read a story for the second time (Delatorre et al., 2018), which is unexpected if suspense is uncertainty; this is referred to as the paradox of suspense (Prieto-Pablos, 1998; Yanal, 1996).", "Considering Romeo and Juliet again, in the first view suspense is motivated primarily by uncertainty over what will happen.", "Who will be hurt or killed in the fight?", "What will happen after the marriage?", "However, at the beginning of the play we are told that 'from forth the fatal loins of these two foes, a pair of star-crossed lovers take their life', and so the suspense is more about being invested in the plot than about not knowing the outcome, aligning more with the second view: suspense can exist without uncertainty.", "We do not address the paradox of suspense directly in this paper, but we are guided by the debate to operationalise methods that encompass both views.", "The Hale model is closer to the traditional model of suspense as being about uncertainty.", "In contrast, the Ely model is more in line with the second view that uncertainty matters less than consequentially different outcomes.", "In NLP, suspense is studied most directly in natural language generation, with systems such as Dramatis (O'Neill and Riedl, 2014) and Suspenser (Cheong and Young, 2014), two planning-based story generators that use the theory of Gerrig and Bernardo (1994) that suspense is created when a protagonist faces obstacles that reduce successful outcomes.", "Our approach, in contrast, models suspense using general language models fine-tuned on stories, without planning and domain knowledge.", "The advantage is that the model can be trained on large volumes of available narrative text without requiring expensive annotations, making it more generalisable.", "Other work emphasises the role of characters and their development in story understanding (Bamman et al., 2014, 2013; Chaturvedi et al., 2017; Iyyer et al., 2016) or summarisation (Gorinski and Lapata, 2018).", "A further important element of narrative structure is plot, i.e., the sequence of events in which characters interact.", "Neural models have explicitly modelled events (Martin et al., 2018; Harrison et al., 2017; Rashkin et al., 2018) or the results of actions (Roemmele and Gordon, 2018; Liu et al., 2018a,b).", "On the other 
hand, some neural generation models (Fan et al., 2018) just use a hierarchical model on top of a language model; our architecture follows this approach.", "In order to formalise measures of suspense, we assume that a story consists of a sequence of sentences.", "These sentences are processed one by one, and the sentence at the current timepoint t is represented by an embedding e_t (see Section 4 for how embeddings are computed).", "Each embedding is associated with a probability P(e_t).", "Continuations of the story are represented by a set of possible next sentences, whose embeddings are denoted by e^i_{t+1}.", "The first measure of suspense we consider is surprise (Hale, 2001), which in the psycholinguistic literature has been successfully used to predict word-based processing effort (Demberg and Keller, 2008; Roark et al., 2009; Van Schijndel and Linzen, 2018a,b).", "Surprise is a backward-looking predictor: it measures how unexpected the current word is given the words that preceded it (i.e., the left context).", "Hale formalises surprise as the negative log of the conditional probability of the current word.", "For stories, we compute surprise over sentences.", "As our sentence embeddings e_t include information about the left context e_1, ..., e_{t-1}, we can write Hale surprise as: S^Hale_t = -log P(e_t). (1)", "An alternative measure for predicting word-by-word processing effort used in psycholinguistics is entropy reduction (Hale, 2006).", "This measure is forward-looking: it captures how much the current word changes our expectations about the words we will encounter next (i.e., the right context).", "Again, we compute entropy at the story level, i.e., over sentences instead of over words.", "Given a probability distribution over possible next sentences P(e^i_{t+1}), we calculate the entropy of that distribution.", "Entropy reduction is the change of that entropy from one sentence to the next: H_t = -Σ_i P(e^i_{t+1}) log P(e^i_{t+1}), and U^Hale_t = H_{t-1} - H_t. (2)", "Note that we follow Frank (2013) in computing entropy over surface strings, rather than over parse states as in Hale's original formulation."
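Both Hale measures reduce to simple operations on the model's probability estimates; a small numpy sketch with toy distributions over three candidate continuations:

    import numpy as np

    def hale_surprise(p_current: float) -> float:
        # Eq. (1): negative log-probability of the current sentence.
        return -np.log(p_current)

    def hale_uncertainty_reduction(p_next_prev, p_next_curr) -> float:
        # Eq. (2): drop in the entropy of the distribution over
        # continuations from time t-1 to time t.
        def entropy(p):
            p = np.asarray(p, dtype=float)
            return -np.sum(p * np.log(p))
        return entropy(p_next_prev) - entropy(p_next_curr)

    print(hale_surprise(0.1))                                         # 2.302...
    print(hale_uncertainty_reduction([1/3, 1/3, 1/3], [0.8, 0.1, 0.1]))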
"In the economics literature, Ely et al. (2015) have proposed two measures that are closely related to Hale surprise and entropy reduction.", "At the heart of their theory of suspense is the notion of belief in an end state.", "Games are a good example: the state of a tennis game changes with each point being played, making a win more or less likely.", "Ely et al. define surprise as the amount of change from the previous time step to the current time step.", "Intuitively, large state changes (e.g., one player suddenly comes close to winning) are more surprising than small ones.", "Representing the state at time t as e_t, Ely surprise is defined as: S^Ely_t = (e_t - e_{t-1})^2. (3)", "Ely et al.'s approach can be adapted for modelling suspense in stories if we assume that each sentence in a story changes the state (the characters, places, events in a story, etc.).", "States e_t then become sentence embeddings, rather than beliefs in end states, and Ely surprise is the distance between the current embedding e_t and the previous embedding e_{t-1}.", "In this paper, we will use L1 and L2 distances; other authors (Li et al., 2018) experiment with information gain and KL divergence, but found worse performance when modelling suspense in games.", "Just like Hale surprise, Ely surprise models backward-looking prediction, but over representations, rather than over probabilities.", "Ely et al. also introduce a measure of forward-looking prediction, which they define as the expected difference between the current state e_t and the next state e_{t+1}: U^Ely_t = E[(e_t - e^i_{t+1})^2] = Σ_i P(e^i_{t+1}) (e_t - e^i_{t+1})^2. (4)", "This is closely related to Hale entropy reduction, but again the entropy is computed over states (sentence embeddings in our case), rather than over probability distributions.", "Intuitively, this measure captures how much the uncertainty about the rest of the story is reduced by the current sentence.", "We refer to the forward-looking measures in Equations (2) and (4) as Hale and Ely uncertainty reduction, respectively.", "Ely et al. also suggest versions of their measures in which each state is weighted by a value α_t, thus accounting for the fact that some states may be more inherently suspenseful than others: S^Ely_t = α_t (e_t - e_{t-1})^2 and U^Ely_t = E[α_{t+1} (e_t - e^i_{t+1})^2]. (5)", "We stipulate that sentences with high emotional valence are more suspenseful, as emotional involvement heightens readers' experience of suspense.", "This can be captured in Ely et al.'s framework by assigning the α values the scores of a sentiment classifier."
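The Ely measures operate on the embeddings themselves; a numpy sketch using the squared L2 distance, with toy 4-dimensional embeddings, two continuations, and α weights defaulting to 1:

    import numpy as np

    def ely_surprise(e_t, e_prev, alpha=1.0):
        # Eqs. (3) and (5): (weighted) squared distance between
        # consecutive sentence embeddings.
        return alpha * float(np.sum((np.asarray(e_t) - np.asarray(e_prev)) ** 2))

    def ely_uncertainty_reduction(e_t, next_embs, next_probs, alphas=None):
        # Eqs. (4) and (5): expected (weighted) squared distance between
        # the current embedding and the candidate continuations.
        next_embs = np.asarray(next_embs, dtype=float)
        next_probs = np.asarray(next_probs, dtype=float)
        if alphas is None:
            alphas = np.ones_like(next_probs)
        dists = np.sum((next_embs - np.asarray(e_t)) ** 2, axis=1)
        return float(np.sum(next_probs * np.asarray(alphas) * dists))

    e_prev, e_t = np.zeros(4), np.ones(4)
    continuations = np.array([[1.0, 1, 1, 1], [3.0, 3, 3, 3]])
    print(ely_surprise(e_t, e_prev))                                  # 4.0
    print(ely_uncertainty_reduction(e_t, continuations, [0.7, 0.3]))  # 4.8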
"Specifically, we use Generative Pre-Training (GPT, Radford et al., 2018), a model which has proved successful in generation tasks (Radford et al., 2019; See et al., 2019).", "Hierarchical Model: Previous work found that hierarchical models show strong performance in story generation (Fan et al., 2018) and understanding tasks (Cai et al., 2017).", "The language model and hierarchical encoders we use are unidirectional, which matches the incremental way in which human readers process stories when they experience suspense.", "Figure 1 depicts the architecture of our hierarchical model.", "It builds a chain of representations that anticipates what will come next in a story, allowing us to infer measures of suspense.", "For a given sentence, we use GPT as our word encoder (word_enc in Figure 1), which turns each word in a sentence into a word embedding $w_i$.", "Then, we use an RNN (sent_enc) to turn the word embeddings of each sentence into a sentence embedding.", "Each sentence is represented by the hidden state of its last word, which is then fed into a second RNN (story_enc) that computes a story embedding.", "(Model code and scripts for evaluation are available at https://github.com/dwlmt/Story-Untangling/tree/acl-2020-dec-submission.)", "The overall story representation is the hidden state of its last sentence.", "Crucially, this model also gives us $e_t$, a contextualised representation of the current sentence at point t in the story, with which to compute surprise and uncertainty reduction.", "Model training includes a generative loss $\ell_{gen}$ to improve the quality of the sentences generated by the model.", "We concatenate the word representations $w_j$ for all word embeddings in the latest sentence with the latest story embedding $e_{max(t)}$.", "This is run through affine ELU layers to produce enriched word embedding representations, analogous to the Deep Fusion model (Gulcehre et al., 2015), with story state instead of a translation model.", "The related Cold Fusion approach (Sriram et al., 2018) proved inferior.", "Loss Functions: To obtain the discriminatory loss $\ell_{disc}$ for a particular sentence s in a batch, we compute the dot products of all the story embeddings in the batch, and then take the cross-entropy across the batch with the correct next sentence: $\ell_{disc}(e^{i=s}_{t+1}) = -\log \frac{\exp(e^{i=s}_{t+1} \cdot e_t)}{\sum_i \exp(e^i_{t+1} \cdot e_t)}$ (6).", "Modelled on Quick Thoughts (Logeswaran and Lee, 2018), this forces the model to maximise the dot product of the correct next sentence versus other sentences in the same story and negative examples from other stories, and so encourages representations that anticipate what happens next.", "The generative loss in Equation (7) is a standard LM loss, where $w_j$ are the GPT word embeddings from the sentence and $e_{max(t)}$ is the story context that each word is concatenated with: $\ell_{gen} = -\sum_j \log P(w_j \mid w_{j-1}, w_{j-2}, \ldots; e_{max(t)})$ (7).",
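The discriminatory loss in Equation (6) is essentially a cross-entropy over dot products. A minimal PyTorch sketch, assuming one (story embedding, candidate) pair per batch row and treating the rest of the batch as negatives, Quick Thoughts style:

```python
import torch
import torch.nn.functional as F

def discriminatory_loss(story_emb: torch.Tensor,
                        cand_emb: torch.Tensor,
                        target: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over dot products between story embeddings e_t and
    candidate next-sentence embeddings e^i_{t+1} (Equation (6)).

    story_emb: (batch, dim)  e_t for each position in the batch
    cand_emb:  (batch, dim)  candidate sentence embeddings
    target:    (batch,)      index of the correct next sentence
    """
    logits = story_emb @ cand_emb.t()   # (batch, batch) dot products
    return F.cross_entropy(logits, target)

# toy usage: every candidate in the batch serves as a negative for the others
b, d = 8, 768
loss = discriminatory_loss(torch.randn(b, d), torch.randn(b, d),
                           torch.arange(b))  # correct pairs on the diagonal
print(loss.item())
```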
"The overall loss is $\ell_{disc} + \ell_{gen}$.", "More advanced generation losses (e.g., Zellers et al., 2019) could be used, but are an order of magnitude slower.", "We compute the measures of surprise and uncertainty reduction introduced in Section 3.1 using the output of the story encoder story_enc.", "In addition to the contextualised sentence embeddings $e_t$, this requires their probabilities $P(e_t)$ and a distribution over alternative continuations $P(e^i_{t+1})$.", "We implement a recursive beam search over a tree of future sentences in the story, looking between one and three sentences ahead (rollout).", "The probability is calculated using the same method as the discriminatory loss, but with the cosine similarity rather than the dot product of the embeddings $e_t$ and $e^i_{t+1}$ fed into a softmax function.", "We found that cosine outperformed dot product at inference, as the resulting probability distribution over continuations is less concentrated.", "Dataset: The overall goal of this work is to test whether the psycholinguistic and economic theories introduced in Section 3 are able to capture human intuition of suspense.", "For this, it is important to use actual stories which were written by authors with the aim of being engaging and interesting.", "Some of the story datasets used in NLP do not meet this criterion; for example, ROC Cloze (Mostafazadeh et al., 2016) is not suitable because the stories are very short (five sentences), lack naturalness, and are written by crowdworkers to fulfill narrow objectives, rather than to elicit reader engagement and interest.", "A number of authors have also pointed out technical issues with such artificial corpora (Cai et al., 2017; Sharma et al., 2018).", "Instead, we use WritingPrompts (Fan et al., 2018), a corpus of circa 300k short stories from the /r/WritingPrompts subreddit.", "These stories were created as an exercise in creative writing, resulting in stories that are interesting, natural, and of suitable length.", "The original split of the data into 90% train, 5% development, and 5% test was used.", "Pre-processing steps are described in Appendix A.",
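The cosine-versus-dot-product choice for inference probabilities can be illustrated directly. The sketch below uses our own variable names, and the temperature parameter is an assumption rather than something from the paper; it shows how a cosine-based softmax yields a flatter distribution over continuations:

```python
import torch
import torch.nn.functional as F

def continuation_probs(e_t: torch.Tensor,
                       candidates: torch.Tensor,
                       use_cosine: bool = True,
                       temperature: float = 1.0) -> torch.Tensor:
    """P(e^i_{t+1}): softmax over similarities between the current story
    embedding and candidate continuations."""
    if use_cosine:
        sims = F.cosine_similarity(candidates, e_t.unsqueeze(0), dim=-1)
    else:
        sims = candidates @ e_t  # raw dot products
    return F.softmax(sims / temperature, dim=0)

e_t = torch.randn(768)
cands = torch.randn(100, 768)   # e.g. 100 sampled or generated continuations
p_cos = continuation_probs(e_t, cands, use_cosine=True)
p_dot = continuation_probs(e_t, cands, use_cosine=False)
# cosine similarities are bounded in [-1, 1], so the resulting distribution
# is typically far less peaked than the dot-product one:
print(p_cos.max().item(), p_dot.max().item())
```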
"Annotation: To evaluate the predictions of our model, we selected 100 stories each from the development and test sets of the WritingPrompts corpus, such that each story was between 25 and 75 sentences in length.", "Each sentence of these stories was judged for narrative suspense; five master workers from Amazon Mechanical Turk annotated each story after reading instructions and completing a training phase.", "They read one sentence at a time and provided a suspense judgement using the five-point scale consisting of Big Decrease in suspense (1% of the cases), Decrease (11%), Same (50%), Increase (31%), and Big Increase (7%).", "In contrast to prior work (Delatorre et al., 2018), a relative rather than absolute scale was used.", "Relative judgements are easier to make while reading, though in practice the suspense curves generated are very similar, with a long upward trajectory and a flattening or dip near the end.", "After finishing a story, annotators also summarised it; these summaries were later used for quality control.", "[Table 1 (fragment): GRU vs. LSTM variants, with loss 5.84 vs. 5.90, plus discriminatory and generation accuracies; the full results are discussed below.]", "In the instructions, suspense was framed as dramatic tension, as pilot annotations showed that the term suspense was too closely associated with murder mystery and related genres.", "Annotators were asked to take the character's perspective when reading, to achieve stronger inter-annotator agreement and align closely with literary notions of suspense.", "During training, all workers had to annotate a test story and achieve 85% accuracy before they could continue.", "Full instructions and the training story are in Appendix B.", "The inter-annotator agreement (Krippendorff, 2011) was 0.52 and 0.57 for the development and test sets, respectively.", "Given the inherently subjective nature of the task, this is substantial agreement.", "This was achieved after screening out and replacing annotators who had low agreement for the stories they annotated (mean agreement < 0.35), showed suspiciously low reading times (mean RT < 600 ms per sentence), or whose story summaries indicated low-quality annotation.", "Training and Inference: Training used SGD with Nesterov momentum (Sutskever et al., 2013), with a learning rate of 0.01 and a momentum of 0.9.", "Models were run with early stopping based on the mean of the accuracies of the training tasks.", "For each batch, 50-sentence blocks from two different stories were chosen, to ensure that the negative examples in the discriminatory loss include easy (other stories) and difficult (same story) sentences.", "We used the pretrained GPT weights but fine-tuned the encoder and decoder weights on our task.", "For the RNN components of our hierarchical model, we experimented with both GRU (Chung et al., 2015) and LSTM (Hochreiter and Schmidhuber, 1997) variants.", "The GRU model had two layers in both sent_enc and story_enc; the LSTM model had four layers in each.", "Both had two fusion layers, and the size of the hidden layers for both model variants was 768.", "We give the results of both variants on the tasks of sentence generation and sentence discrimination in Table 1.",
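As a rough sketch of the training setup just described (only the optimizer hyperparameters come from the text; the patience value and the placeholder validation metric are our assumptions):

```python
import torch

# a hypothetical stand-in for the hierarchical model's parameters
model = torch.nn.LSTM(input_size=768, hidden_size=768, num_layers=4)
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.01, momentum=0.9, nesterov=True)

best_metric, patience, bad_epochs = float("-inf"), 3, 0
for epoch in range(30):
    # ... one training epoch over 50-sentence blocks from two stories ...
    val_metric = 0.0  # placeholder: mean of the training-task accuracies
    if val_metric > best_metric:
        best_metric, bad_epochs = val_metric, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # early stopping
```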
"Both perform similarly, with slightly worse loss for the LSTM variant, but faster training and better generation accuracy.", "Overall, model performance is strong: the LSTM variant picks out the correct sentence 54% of the time and generates it 46% of the time.", "This indicates that our architecture successfully captures the structure of stories.", "At inference time, we obtained a set of story continuations either by random sampling or by generation.", "Random sampling means that n sentences were selected from the corpus and used as continuations.", "For generation, sentences were generated using top-k sampling (with k = 50) with the GPT language model and the approach of Radford et al. (2019), which generates better output than beam search (Holtzman et al., 2018) and can outperform a decoder (See et al., 2019).", "For generation, we used up to 300 words as context, enriched with the story sentence embeddings from the corresponding points in the story.", "For rollouts of one sentence, we generated 100 possibilities at each step; for rollouts of two, 50 possibilities; and for rollouts of three, 25 possibilities.", "This keeps what is an expensive inference process manageable.", "Importance: We follow Ely et al. in evaluating weighted versions of their surprise and uncertainty reduction measures, $S^{\alpha Ely}_t$ and $U^{\alpha Ely}_t$ (see Equation (5)).", "We obtain the $\alpha_t$ values by taking the sentiment scores assigned by the VADER sentiment classifier (Hutto and Gilbert, 2014) to each sentence and multiplying them by 1.0 for positive sentiment and 2.0 for negative sentiment.", "The stronger negative weighting reflects the observation that negative consequences can be more important than positive ones (O'Neill, 2013; Kahneman and Tversky, 2013).", "Baselines: We test a number of baselines as alternatives to surprise and uncertainty reduction derived from our hierarchical model.", "These baselines also reflect how much change occurs from one sentence to the next in a story: WordOverlap is the Jaccard similarity between the two sentences, GloveSim is the cosine similarity between the averaged GloVe (Pennington et al., 2014) word embeddings of the two sentences, and GPTSim is the cosine similarity between the GPT embeddings of the two sentences.", "The $\alpha$ baseline is the weighted VADER sentiment score.", "Task: The annotator judgements are relative (amount of decrease/increase in suspense from sentence to sentence), but the model predictions are absolute values.", "We could convert the model predictions into discrete categories, but this would fail to capture the overall arc of the story.", "Instead, we convert the relative judgements into absolute suspense values, where $J_t = j_1 + \cdots + j_t$ is the absolute value for sentence t and $j_1, \ldots, j_t$ are the relative judgements for sentences 1 to t.", "We use -0.2 for Big Decrease, -0.1 for Decrease, 0 for Same, 0.1 for Increase, and 0.2 for Big Increase.", "Both the absolute suspense judgements and the model predictions are normalised by converting them to z-scores.", "To compare model predictions and absolute suspense values, we use Spearman's $\rho$ (Sen, 1968) and Kendall's $\tau$ (Kendall, 1975).", "Rank correlation is preferred because we are interested in whether human annotators and models view the same parts of the story as more or less suspenseful; also, rank correlation methods are good at detecting trends.", "We compute $\rho$ and $\tau$ between the model predictions and the judgements of each of the annotators (i.e., five times for five annotators), and then take the average.",
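The α weighting and the conversion from relative judgements to z-scored absolute suspense values can be sketched as follows. The handling of the sign of negative VADER scores is our assumption, and the vaderSentiment and SciPy packages are assumed to be installed:

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

JUDGEMENT_VALUES = {"big_decrease": -0.2, "decrease": -0.1,
                    "same": 0.0, "increase": 0.1, "big_increase": 0.2}

def alpha_weights(sentences):
    """alpha_t per sentence: VADER compound score, scaled by 1.0 when
    positive and 2.0 when negative (we take the magnitude so that alpha
    acts as a positive importance weight, which is our assumption)."""
    sia = SentimentIntensityAnalyzer()
    scores = np.array([sia.polarity_scores(s)["compound"] for s in sentences])
    return np.where(scores < 0, 2.0 * np.abs(scores), scores)

def absolute_suspense(relative_judgements):
    """J_t = j_1 + ... + j_t, then z-scored."""
    j = np.array([JUDGEMENT_VALUES[r] for r in relative_judgements])
    J = np.cumsum(j)
    return (J - J.mean()) / (J.std() + 1e-8)

human = absolute_suspense(["same", "increase", "increase",
                           "big_increase", "decrease"])
model = np.array([0.1, 0.3, 0.5, 0.9, 0.6])
model = (model - model.mean()) / model.std()
print(spearmanr(human, model).correlation,
      kendalltau(human, model).correlation)
```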
"We then average these values again over the 100 stories in the test or development sets.", "As the human upper bound, we compute the mean pairwise correlation of the five annotators.", "Results: Figure 2 shows surprise and uncertainty reduction measures and human suspense judgements for an example story (text and further examples in Appendix C).", "We performed model selection using the correlations on the development set, which are given in Table 2.", "We experimented with all the measures introduced in Section 3.1, computing sets of alternative sentences either using generated continuations (Gen) or continuations sampled from the corpus (Cor), except for $S^{Ely}$, which can be computed without alternatives.", "(The judgement values above were fitted against predictions, or cross-worker annotation, using 5-fold cross-validation and an L1 loss to optimise the mapping; a constraint is placed so that Same is 0, increases are positive, and decreases are negative, with a minimum 0.05 distance between values.)", "We compared the LSTM and GRU variants (see Section 4) and experimented with rollouts of up to three sentences.", "We tried L1 and L2 distance for the Ely measures, but only report L1, which always performed better.", "Discussion: On the development set (see Table 2), we observe that all baselines perform poorly, indicating that distances between simple sentence representations and raw sentiment values do not model suspense.", "We find that Hale surprise $S^{Hale}$ performs well, reaching a maximum $\rho$ of .675 on the development set.", "Hale uncertainty reduction $U^{Hale}$, however, performs consistently poorly.", "Ely surprise $S^{Ely}$ also performs well, reaching a similar value to Hale surprise.", "Overall, Ely uncertainty reduction $U^{Ely}$ is the strongest performer, with $\rho$ = .698, numerically outperforming the human upper bound.", "Some other trends are clear from the development set: using GRUs reduces performance in all cases but one; a rollout of more than one never leads to an improvement; and sentiment weighting (prefix $\alpha$ in the table) always reduces performance, as it introduces considerable noise (see Figure 2).", "We therefore eliminate the models that correspond to these settings when we evaluate on the test set.", "For the test set results in Table 3, we also report upper and lower confidence bounds computed using the Fisher Z-transformation ($p < 0.05$).",
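A small helper for the Fisher Z confidence bounds (this is the standard formula; n would be the number of observations behind each correlation):

```python
import numpy as np
from scipy import stats

def fisher_ci(r: float, n: int, conf: float = 0.95):
    """Confidence interval for a correlation via the Fisher Z-transformation."""
    z = np.arctanh(r)                  # Fisher transform of r
    se = 1.0 / np.sqrt(n - 3)          # standard error in z-space
    z_crit = stats.norm.ppf(0.5 + conf / 2)
    lo, hi = z - z_crit * se, z + z_crit * se
    return np.tanh(lo), np.tanh(hi)    # back-transform to correlation space

# e.g. rho = .698 estimated over 100 stories
print(fisher_ci(0.698, 100))
```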
"On the test set, $U^{Ely}$ again is the best measure, with a correlation statistically indistinguishable from human performance (based on CIs).", "We find that absolute correlations are higher on the test set.", "[Table 2: Development set results ($\tau$ / $\rho$) on WritingPrompts for generated (Gen) or corpus-sampled (Cor) alternative continuations; $\alpha$ indicates sentiment weighting; Roll is the rollout length. Human: .553 / .614. Baselines (Roll 1): WordOverlap .017 / .026; GloveSim .017 / .029; GPTSim .021 / .031; $\alpha$ .024 / .036. $S^{Hale}$-Gen: GRU .145 / .182, LSTM .434 / .529. $S^{Hale}$-Cor: GRU .177 / .214, LSTM .580 / .675. $U^{Hale}$-Gen: GRU .036 / .055, LSTM .009 / .016. $U^{Hale}$-Cor: GRU .048 / .050, LSTM .066 / .094. $S^{Ely}$: GRU .484 / .607, LSTM .427 / .539. $\alpha S^{Ely}$: GRU .089 / .123, LSTM .115 / .156. $U^{Ely}$-Gen: GRU Roll 1 .241 / .161, Roll 2 .304 / .399; LSTM Roll 1 .610 / .698, Roll 2 .393 / .494. $U^{Ely}$-Cor: GRU Roll 1 .229 / .264, Roll 2 .512 / .625, Roll 3 .515 / .606; LSTM Roll 1 .594 / .678, Roll 2 .564 / .651, Roll 3 .555 / .645. $\alpha U^{Ely}$-Gen: GRU Roll 1 .216 / .124, Roll 2 .219 / .216; LSTM Roll 1 .474 / .604, Roll 2 .316 / .418. $\alpha U^{Ely}$-Cor: GRU Roll 1 .205 / .254, Roll 2 .365 / .470; LSTM Roll 1 .535 / .642, Roll 2 .425 / .534.]", "The overall best predictor of human suspense judgements on the WritingPrompts dataset is $U^{Ely}$, uncertainty reduction computed over story representations.", "This measure combines the probability of continuation ($S^{Hale}$) with the distance between story embeddings ($S^{Ely}$), which are both good predictors in their own right.", "This finding supports the theoretical claim that suspense is an expectation over the change in future states of a game or a story, as advanced by Ely et al. (2015).", "Task and Dataset: An interesting question is whether the peaks in suspense in a story correspond to important narrative events.", "Such events are sometimes called turning points (TPs) and occur at certain positions in a movie according to screenwriting theory (Cutting, 2016).", "A corpus of movie synopses annotated with turning points is available in the form of the TRIPOD dataset (Papalampidi et al., 2019).", "We can therefore test whether surprise or uncertainty reduction predicts TPs in TRIPOD.", "As our model is trained on a corpus of short stories, this also serves as an out-of-domain evaluation.", "Papalampidi et al. (2019) assume five TPs: 1. Opportunity, 2. Change of Plans, 3. Point of No Return, 4. Major Setback, and 5. Climax.",
"They derive a prior distribution of TP positions from their test set, and use this to constrain predicted turning points to windows around these prior positions.", "We follow this approach and select as the predicted TP the sentence with the highest surprise or uncertainty reduction value within a given constrained window.", "We report the same baselines as in the previous experiment, as well as the Theory Baseline, which uses screenwriting theory to predict where in a movie a given TP should occur (e.g., Point of No Return theoretically occurs 50% of the way through the movie).", "This baseline is hard to beat (Papalampidi et al., 2019).", "[Table 4: TP prediction on the TRIPOD development and test sets, reported as Dev D / Test D, with standard deviations in parentheses. Human: not reported / 4.30 (3.43). Theory Baseline: 9.65 (0.94) / 7.47 (3.42). TAM: 7.11 (1.71) / 6.80 (2.63). WordOverlap: 13.9 (1.45) / 12.7 (3.13). GloveSim: 10.2 (0.74) / 10.4 (2.54). GPTSim: 16.8 (1.47) / 18.1 (4.71). $\alpha$: 11.3 (1.24) / 11.2 (2.67). $S^{Hale}$-Gen: 8.27 (0.68) / 8.72 (2.27). $U^{Hale}$-Gen: 10.9 (1.02) / 10.69 (3.66). $S^{Ely}$: 9.54 (0.56) / 9.01 (1.92). $\alpha S^{Ely}$: 9.95 (0.78) / 9.54 (2.76). $U^{Ely}$-Gen: 8.75 (0.76) / 8.38 (1.53). $U^{Ely}$-Cor: 8.74 (0.76) / 8.50 (1.69). $\alpha U^{Ely}$-Gen: 8.80 (0.61) / 7.84 (3.34). $\alpha U^{Ely}$-Cor: 8.61 (0.68) / 7.78 (1.61).]", "Results and Discussion: Figure 3 plots both gold standard and predicted TPs for a sample movie synopsis (text and further examples in Appendix D).", "The results on the TRIPOD development and test sets are reported in Table 4 (we report both due to the small number of synopses in TRIPOD).", "We use our best LSTM model with a rollout of one; the distance measure for Ely surprise and uncertainty reduction is now the L2 distance, as it outperformed L1 on TRIPOD.", "We report results in terms of D, the normalised distance between gold standard and predicted TP positions.", "On the test set, the best performing model, with D = 7.78, is $\alpha U^{Ely}$-Cor, with $\alpha U^{Ely}$-Gen only slightly worse.", "It is outperformed by TAM, the best model of Papalampidi et al.
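A sketch of the constrained-window TP prediction and the distance metric D. The prior positions used below and the definition of D as a percentage of synopsis length are our assumptions for illustration, not values from the paper:

```python
import numpy as np

def predict_turning_points(scores, priors, window=0.1):
    """For each theoretical TP position, pick the sentence with the highest
    surprise / uncertainty-reduction score inside a window around the prior.

    scores: per-sentence suspense scores for one synopsis
    priors: expected relative TP positions, e.g. [.10, .25, .50, .75, .95]
    window: half-width of the constrained window (fraction of the synopsis)
    """
    n = len(scores)
    preds = []
    for p in priors:
        lo = max(0, int((p - window) * n))
        hi = min(n, int((p + window) * n) + 1)
        preds.append(lo + int(np.argmax(scores[lo:hi])))
    return preds

def normalised_distance(pred, gold, n):
    """D: mean |predicted - gold| position, as a percentage of length."""
    return 100.0 * np.mean(np.abs(np.array(pred) - np.array(gold))) / n

scores = np.random.rand(60)
preds = predict_turning_points(scores, [.10, .25, .50, .75, .95])
print(preds, normalised_distance(preds, [5, 14, 31, 44, 57], 60))
```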
(2019), which, however, requires TP annotation at training time.", "$\alpha U^{Ely}$-Cor is close to the Theory Baseline on the test set, an impressive result given that our model has no TP supervision and is trained on a different domain.", "The fact that the models with sentiment weighting perform best on the test set here, unlike in the suspense prediction task, suggests that sentiment is a useful cue for locating turning points.", "[Figure 3: Movie 15 Minutes: suspense over sentences for $S^{Hale}$, $S^{Ely}$, $U^{Ely}$, $\alpha U^{Ely}$, the theory baseline, and TP annotations; triangles are predicted TPs.]", "Our overall findings suggest that by implementing concepts from psycholinguistic and economic theory, we can predict human judgements of suspense in storytelling.", "That uncertainty reduction ($U^{Ely}$) outperforms probability-only ($S^{Hale}$) and state-only ($S^{Ely}$) surprise suggests that, while consequential state change is of primary importance for suspense, the probability distribution over the states is also a necessary factor.", "Uncertainty reduction therefore captures the view of suspense as reducing paths to a desired outcome, with more consequential shifts as the story progresses (O'Neill and Riedl, 2014; Ely et al., 2015; Perreault, 2018).", "This is more in line with the Smuts (2008) Desire-Frustration view of suspense, where uncertainty is secondary.", "Strong psycholinguistic claims about suspense are difficult to make due to several weaknesses in our approach, which highlight directions for future research: the proposed model does not have a higher-level understanding of event structure; most likely it picks up the textual cues that accompany dramatic changes in the text.", "One strand of further work is therefore analysis: text could be artificially manipulated using structural changes, for example by switching the order of sentences, mixing multiple stories, including a summary at the beginning that foreshadows the work, masking key suspenseful words, or paraphrasing.", "An analogue of this would be adversarial examples used in computer vision.", "Additional annotations, such as how certain readers are about the outcome of the story, may also be helpful in better understanding the relationship between suspense and uncertainty.", "Automated interpretability methods, as proposed by Sundararajan et al. (2017), could shed further light on models' predictions.", "The recent success of language models in wide-ranging NLP tasks (e.g., Radford et al., 2019) has shown that language models are capable of learning semantically rich information implicitly.", "However, generating plausible future continuations is an essential part of the model.", "In text generation, Fan et al. (2019) have found that explicitly incorporating coreference and structured event representations into generation produces more coherent generated text.", "A more sophisticated model would incorporate similar ideas.", "Autoregressive models that generate step-by-step alternatives for future continuations are computationally impractical for longer rollouts and are not cognitively plausible.", "They also differ from the Ely et al. (2015) conception of suspense, which is in terms of Bayesian beliefs over a longer-term future state, not step by step.", "There is much recent work (e.g., Ha and Schmidhuber (2018); Gregor et al.
(2019)) on state-space approaches that model beliefs as latent states using variational methods.", "In principle, these would avoid the brute-force calculation of a rollout, and conceptually, anticipating longer-term states aligns with theories of suspense.", "This paper is a baseline that demonstrates how modern neural network models can implicitly represent text meaning and be useful in a narrative context without recourse to supervision.", "It provides a springboard to further interesting applications of, and research on, suspense in storytelling.", "The authors would like to thank the anonymous reviewers, Pinelopi Papalampidi and David Hodges for reviews of the annotation task, the AMT annotators, and Mirella Lapata, Ida Szubert, and Elizabeth Nielson for comments on the paper.", "Wilmot's work is funded by an EPSRC doctoral training award." ]
[ "abstain", "abstain", "method", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "result", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "objective", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "other" ]
[ "The ability to integrate context, including perceptual and temporal cues, plays a pivotal role in grounding the meaning of a linguistic utterance.", "In order to measure to what extent current vision-and-language models master this ability, we propose a new multimodal challenge, Image Retrieval from Contextual Descriptions (IMAGECODE ).", "In particular, models are tasked with retrieving the correct image from a set of 10 minimally contrastive candidates based on a contextual description.", "As such, each description contains only the details that help distinguish between images.", "Because of this, descriptions tend to be complex in terms of syntax and discourse and require drawing pragmatic inferences.", "Images are sourced from both static pictures and video frames.", "We benchmark several state-of-the-art models, including both cross-encoders such as ViLBERT and bi-encoders such as CLIP, on IMAGECODE .", "Our results reveal that these models dramatically lag behind human performance: the best variant achieves an accuracy of 20.9 on video frames and 59.4 on static pictures, compared with 90.8 in humans.", "Furthermore, we experiment with new model variants that are better equipped to incorporate visual and temporal context into their representations, which achieve modest gains.", "Our hope is that IMAGECODE will foster progress in grounded language understanding by encouraging models to focus on fine-grained visual differences.", "We make code and dataset publicly available.", "1 1 Introduction Natural languages are highly contextual (Fodor, 2001): for a listener, recovering the speaker's intended meaning requires integrating information from different streams, such as grounding in perception (Pecher and Zwaan, 2005), shared world knowledge, and temporal reasoning (Wilson and Sperber, 1998).", "These processes, more generally, 1 https://github.com/McGill-NLP/imagecode", "fall under the umbrella term of pragmatics (Grice, 1957).", "Despite recent progress in multimodal systems, it remains unclear to which extent they can handle settings where context plays a major role, such as in real-world communication.", "To this end, we present a new challenge that requires multimodal models to leverage context to retrieve images from text.", "In particular, given a contextual description and a set of minimally contrastive candidate images, i.e. 
differing only in some details, the model has to retrieve the target image.", "In order to discriminate between similar images, human annotators naturally produce highly nuanced and grammatically complex descriptions.", "An example from our new challenging dataset, Image Retrieval from Contextual Descriptions (IMAGECODE), is shown in Figure 1.", "During the data collection process, sets of similar images are selected among static pictures from Open Images (Kuznetsova et al., 2020) and (for a larger portion) among video frames from diverse domains.", "Including both types of images allows for diversifying the dataset while representing different degrees of visual similarity within each set.", "Next, we crowdsource a contextual description of a target image (presented together with the rest of the set) that contains only the differences relevant for retrieval.", "After a filtering phase involving human retrievers, we obtain a large-scale dataset with 94,020 images and 21,202 descriptions associated with image sets of size 10.", "As a result of this annotation protocol, successfully completing the task requires models to integrate several kinds of context: i) the image set, as the descriptions often only make sense in the context of several other images and are not suitable as stand-alone captions.", "In fact, aspects of the image that are very salient, and that would therefore normally be emphasized, are not useful in our proposed task.", "Instead, the focus of our descriptions is the fine-grained details that help discriminate between images (see Figure 1); ii) the speaker's intention.", "Due to the high degree of image similarity, contextual descriptions may be literally true for multiple images; however, once the speaker's intention is taken into account, the correct image can be determined by virtue of pragmatics, i.e.
Grice's maxim of quality (see Figure 2, Figure 7); iii) temporal sequences: for video frames, temporal reasoning is also required to compare different moments of an unfolding event.", "On our new dataset IMAGECODE, we benchmark a series of vision-and-language models that achieve state-of-the-art performance on other multimodal tasks, specifically ViLBERT (Lu et al., 2019) and UNITER (Chen et al., 2020) as two cross-encoder variants and CLIP as a strong bi-encoder (Radford et al., 2021).", "We report several findings.", "First, accuracy on static images is vastly superior to that on video frames.", "Therefore, the degree of similarity among the candidate images has an overwhelming impact on retrieval performance.", "Second, all state-of-the-art models generally struggle with image retrieval from contextual descriptions, whereas humans consistently achieve high accuracy.", "Hence, we propose model variants capable of better taking context into account: i) once an image-description pair is encoded, we refine this representation by attending to the other images in the set; ii) we augment image encodings with temporal embeddings.", "(Note: while we do not model pragmatics explicitly in our baselines, we find that IMAGECODE contains many examples suitable for pragmatic modelling.)", "Based on our results, models take advantage of this additional information fruitfully, but only to a limited degree.", "Because of its challenging nature, due to the minimally contrastive images and complex descriptions, we believe that IMAGECODE will help make visio-linguistic models more context-aware and sensitive to fine-grained details.", "There is a long tradition of grounding language understanding on single images, in the form of visual question answering (Goyal et al., 2017; Hudson and Manning, 2019), visual dialogue (de Vries et al., 2017; Das et al., 2017), or visual entailment (Xie et al., 2019).", "Recently, more and more focus has been directed to settings where the visual context consists of multiple images, either conventional static pictures (Vedantam et al., 2017; Hu et al., 2019; Suhr et al., 2019; Forbes et al., 2019; Hendricks and Nematzadeh, 2021; Yan et al., 2021; Hosseinzadeh and Wang, 2021; Bogin et al., 2021; Liu et al., 2021) or video frames (Jhamtani and Berg-Kirkpatrick, 2018a; Bansal et al., 2020).", "While many of these benchmarks involve just two images, COVR (Bogin et al., 2021) and ISVQA (Bansal et al., 2020) provide more images, similar to our sets of 10 images.", "ISVQA and Spot-the-diff (Jhamtani and Berg-Kirkpatrick, 2018a) are most similar to our dataset, IMAGECODE.", "ISVQA is based on several video frames that are synthetic and cover a restricted domain, with short questions for Visual Question Answering.", "Spot-the-diff provides two frames from surveillance video cameras and descriptions of all their differences.", "IMAGECODE is unique in that a) we cover a wider range of domains; b) we construct image sets that are maximally similar while remaining distinguishable through natural language (Section 3); and c) we limit descriptions to relevant differences.", "This results in (a) diverse, (b) complex, and (c) pragmatically informative descriptions.", "We do not claim to explicitly model pragmatics in this paper, i.e.
with Rational Speech Acts (Goodman and Frank, 2016).", "Instead, we present a dataset that is naturally suitable for pragmatic reasoning (Andreas and Klein, 2016; Cohn-Gordon et al., 2018), as a listener has to consider the context, assume a Gricean speaker, and resolve ambiguities resulting from nuanced differences.", "The reasoning in our task and data collection is therefore also similar to ReferItGame and subsequent work (Kazemzadeh et al., 2014; Mao et al., 2016), where one crowdworker generates a referring expression for an object in a single image and another worker picks an object based on that expression.", "Our data collection involves two steps, with a human describer and a human retriever.", "The describer is given a set of 10 highly similar images $S = [I_1, I_2, \ldots, I_{10}]$, one of them marked as the target image $I_t$, and has to write a description $D$ that clearly distinguishes $I_t$ from the other distractor images.", "In the second step, the retriever is given the same 10 images and the description from the first step and has to identify the target image based on the description.", "$S$ and $D$ are only added to our dataset if the retrieval is successful.", "Below, we outline the main stages of data collection: first, the collection of similar, contrastive images in Section 3.1.", "Then, the crowdsourcing of contextual descriptions in Section 3.2 and the validation of the examples via image retrieval in Section 3.3.", "The final IMAGECODE dataset consists of 94,020 images (partitioned into 9,402 sets) and 21,202 contextual descriptions (16,594 in the train split, 2,302 and 2,306 in the validation and test splits, respectively).", "In the first stage, we collect sets of images that are highly similar but still distinguishable from each other by a human.", "To quantitatively measure the pairwise similarity of two images, we compute the Euclidean distance between their encodings extracted from a pre-trained CLIP model (Radford et al., 2021).", "(We also experimented with ResNet-50 features, but found CLIP results to be more similar to those of humans in preliminary experiments.)", "To study the effect of different degrees of similarity, further variegate our dataset, and enable temporal reasoning, we source our candidate images from collections of static pictures as well as videos, as detailed below.", "Static Pictures: We obtain image sets from one of the largest repositories of static pictures, the Open Images Dataset V6 (Kuznetsova et al., 2020), containing 1.74M images.", "For each image, we retrieve the 9 closest images from the training set based on their CLIP encodings.", "We then randomly sample 4,845 of these image sets.", "Video Frames: As sources for our video frames, we use i) Video-Storytelling (Li et al., 2019), covering social events (wedding, birthday, Christmas, camping); ii) the general-domain MSR-VTT (Xu et al., 2016); and iii) YouCook (Das et al., 2013), covering cooking events.", "We choose these datasets as they contain publicly available and general-purpose videos (not specific to downstream tasks).", "We retain the original splits for train, validation, and test.", "To obtain disjoint sets of 10 similar frames, we first segment the videos into smaller scenes (also known as shots) via the scene detection functionality of ffmpeg (Tomar, 2006).", "Then, for each scene, we add its first frame to the set of selected images.", "We then iterate over every following frame and add it to the set if its pairwise Euclidean distance to each of the previously selected frames is larger than a threshold (manually chosen as 0.35 based on qualitative results).",
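Assuming the CLIP image encodings have already been computed (e.g., with the openai/CLIP package), the 9-nearest-neighbour retrieval for static pictures reduces to a Euclidean argsort. A brute-force sketch follows; a real run over 1.74M images would want an approximate-nearest-neighbour index such as FAISS:

```python
import numpy as np

def nine_nearest(query_idx: int, embeddings: np.ndarray) -> np.ndarray:
    """Indices of the 9 images closest (Euclidean distance over CLIP
    encodings) to the query image, forming a 10-image candidate set."""
    dists = np.linalg.norm(embeddings - embeddings[query_idx], axis=1)
    dists[query_idx] = np.inf            # exclude the query itself
    return np.argsort(dists)[:9]

# toy usage with random stand-ins for CLIP image encodings (dim 512)
emb = np.random.default_rng(0).normal(size=(1000, 512))
image_set = [42, *nine_nearest(42, emb)]
print(image_set)
```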
"Once the set contains 10 images, we reiterate the procedure for a new set.", "If the scene ends and the current set contains fewer than 10 images, the set is discarded.", "During this process, we additionally remove frames that i) are too blurry, i.e. their BRISQUE score (Mittal et al., 2012) is larger than 0.65; or ii) contain too much text, which is detected with the OCR tool Tesseract (Smith, 2007).", "We use all of YouCook's image sets and (due to cost constraints) randomly sample image sets from Video-Storytelling and MSR-VTT for crowdsourcing (cf. Table 1).", "We remark that image sets are further filtered at the final stage of annotation (Section 3.3).", "After creating sets of highly similar images in Section 3.1, we ask annotators from Amazon Mechanical Turk (AMT) to write contextual descriptions for each target image in a set.", "Each round, a set of images is presented in random order for static pictures and in temporal order for video frames.",
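The greedy frame-selection procedure can be sketched as follows, using the 0.35 threshold from the text; the function and variable names are ours:

```python
import numpy as np

def select_frames(scene_embs: np.ndarray, threshold: float = 0.35,
                  set_size: int = 10):
    """Greedy selection of mutually dissimilar frames within one scene:
    keep a frame only if its Euclidean distance to every frame already
    selected exceeds the threshold; emit a set once it reaches 10 frames."""
    sets, current = [], []
    for i, emb in enumerate(scene_embs):
        if not current:
            current = [i]                # seed a new set with this frame
            continue
        dists = np.linalg.norm(scene_embs[current] - emb, axis=1)
        if np.all(dists > threshold):
            current.append(i)
        if len(current) == set_size:
            sets.append(current)
            current = []
    return sets  # a trailing set with fewer than 10 frames is discarded

frames = np.random.default_rng(1).normal(size=(200, 512))
print(select_frames(frames))
```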
7 Again, the set of workers validating train and test sets were partly disjoint to avoid annotator bias.", "To quantify the reliability of the process outlined in Section 3, we report the inter-annotator agreement on our final dataset in Table 3.", "We use Krippendorff's as a metric (the higher the better), which accounts for incomplete data, since the number of annotators per example is not fixed.", "We treat the index of the target image either as a nominal variable for static images or as an ordinal variable for video frames.", "In both cases, we find a high degree of agreement.", "Moreover, in Table 3, we also report human accuracy the percentage of times an annotator retrieved the correct target image from a contextual description (as described in Section 3.3).", "This provides an upper ceiling for the model performances (see Section 6).", "In Table 4, we measure a series of statistics of the descriptions collected for IMAGECODE and compare them with other vision-and-language datasets", "with multiple naturalistic images (cf. Section 2), such as NLVR2 (Suhr et al., 2019) and Spot-the-diff (Jhamtani and Berg-Kirkpatrick, 2018b).", "8 In particular, we count the average description length, the number of distinct word types, the average dependency tree depth of each sentence, 9 and the average number of sentences per description.", "Based on these metrics, we find evidence that IMAGECODE 's descriptions are longer and more syntactically complex than in the other datasets.", "Moreover, they include multiple sentences (11.8% of examples have 3 or more).", "By calculating the average pairwise Euclidean distance between CLIP-based encodings of images in the same set, we find that video frames are more similar than static pictures as expected by a fac-tor of 1.13.", "Moreover, we find that descriptions of video frames mention human body parts (72.1%) more often than static pictures (30.2%).", "On the other hand, names of colors appear in descriptions of static pictures (61.4%) more frequently than 8 For comparability, we measured the statistics for all the datasets with the same tools.", "video frames (33.6%).", "10 Thus, annotators resort to different strategies to discriminate between different types of image sets, focusing on the aspects that vary the most.", "Finally, we identify 9 interesting and challenging phenomena in IMAGECODE and annotate whether they are present in 200 examples from the validation set.", "We provide the definition of each phenomenon, its frequency, and an illustrative example in Table 2.", "An example for each phenomena is given in Appendix G. For 4 of these phenomena unique to IMAGECODE , we further annotated 800 examples for the purpose of error analysis in Section 6.", "Inspecting these examples, we find a high number of cases where the visual context (47.0%) is required to complete the task.", "For instance, consider Figure 2: the description No bridesmaid visible at all. requires a retriever to resolve the co-references of the entities in 5 frames.", "In particular, the body parts of the bridesmaids (red boxes) visible in frames 2 and 4 would not be identifiable as such without frame 1 and 5, respectively (where they appear with matching dresses and flowers in their hands).", "A common example we find in the data are \"gradable\" scenarios, i.e. 
"Another group of phenomena characteristic of IMAGECODE originates from its minimally contrastive setup: annotators might focus on how an
, x L x ( 10 ) V ) .", "Finally, we feed these to a l -layer Transformer Tf R 10 e R 10 e to obtain context-aware multimodal embeddings ( Tf ( m ) 1 , . . . , Tf ( m ) 10 ) .", "Since each descriptionimage pair can now attend on the others in a set, the model can fully exploit the visual context.", "We obtain the score for the i -th pair through a linear classifier head W R 1 e .", "The target image is predicted as arg max i softmax [ W ( Tf ( m ) i + m ( i ) )] (1) Note that we add a highway layer from the input to the output of the Transformer.", "We label this model variant +C ONTEXTMODULE .", "Finally, in addition to visual context, we make models aware of the temporal context too, as shown in the fourth column of Figure 3.", "For video-based examples only, the multimodal embeddings of each description-image pair are summed with a learnable positional embedding t R e that reflects the temporal order of the frames.", "13 Thus, m = ( x L x ( 1 ) V t ( 1 ) , . . . , x L x ( 10 ) V t ( 10 ) ) .", "Multimodal embeddings are then fed to a Transformer as above.", "We label this variant encapsulating both visual and temporal context +T EMPORALEMBEDDINGS .", "For all CLIP experiments, we use a pre-trained model with the vision backbone VI T-B/16.", "14 We train the full models with a batch size of 360 examples (i.e., 36 image sets) for CLIP and 150 examples for ViLBERT/UNITER.", "We perform early stopping based on the validation accuracy with a maximum of 30 epochs.", "In the variants that adopt the base version of a model, we select a learning rate of 4 10 6 for CLIP, 5 10 6 for ViLBERT, 4 10 5 for ViL-BERT+C ONTEXTBATCH , 8 10 6 for UNITER, and 7 10 6 for UNITER++C ONTEXTBATCH .", "We find these values via hyper-parameter search on the range [ 10 4 , 10 7 ] .", "For CLIP variants that modify the model architecture, we adopt the following setup: first, we fine-tune the full model in the +C ONTEXTBATCH regime as detailed above.", "Afterwards, we freeze the encoder parameters and train the components responsible for processing the multimodal embeddings, described in Equation (1).", "More details are provided in Appendix F. For ViLBERT and UNITER we finetune the whole architecture at the same time.", "All descriptions in IMAGECODE exceeding the maximum length of the three models are truncated.", "Due to their negligible amount, this does not affect 14 https://github.com/openai/CLIP performance significantly.", "In Table 5, we report the performance of the models from Section 5 for all the test examples in IMAGECODE as well as for the subsets containing only video frames or static pictures (see Appendix E for validation scores).", "Note that the random chance baseline has an accuracy of 10%.", "In what follows, we compare the results across several dimensions.", "Zero-shot vs. 
fine-tuning.", "In the zero-shot setting, we observe that CLIP representations are surprisingly superior to UNITER/ViLBERT even though CLIP has separate streams to encode an image and its description.", "In the simplest fine-tuning setting (i.e., if negatives are randomly sampled independent of the image set), we find that overall there is only a small increase in performance compared to zero-shot inference.", "This demonstrates that in the regime where images in the same set do not appear in the same batch during training, models cannot extrapolate how to leverage the visual context at inference time.", "Adding context.", "For the fine-tuning regime, we observe instead a different trend once the visual context of the other images in a set is provided during training (+C ONTEXTBATCH ): CLIP and UNITER receive a significant boost in performance (i.e. +14.4% for CLIP), which is 3432 all video static ZERO-SHOTCLIP 22.4 15.6 47.8 FINE-TUNINGCLIP 24.3 17.1 51.3 +C ONTEXTBATCH 28.4 20.0 60.0 +C ONTEXTMODULE 27.7 19.6 58.4 +T EMPORALEMBEDDINGS 29.9 22.0 59.8 ZERO-SHOTUNITER 19.8 13.6 42.9 FINE-TUNINGUNITER 21.9 14.4 50.1 +C ONTEXTBATCH 24.8 17.4 52.8 +C ONTEXTMODULE 24.4 16.7 53.0 +T EMPORALEMBEDDINGS 25.7 19.1 50.5 ZERO-SHOT ViLBERT 19.3 13.5 40.8 FINE-TUNING ViLBERT 20.9 13.1 49.9 +C ONTEXTBATCH 20.9 15.0 42.7 +C ONTEXTMODULE 22.3 16.1 45.6 +T EMPORALEMBEDDINGS 24.5 18.0 49.3 Table 5: Performance (test accuracy) on IMAGECODE across two training regimes (zero-shot and fine-tuning), three models (CLIP, UNITER, ViLBERT) and 4 model variants.", "particularly accentuated for static pictures.", "On the other hand, ViLBERT's performance remains the same.", "Stacking a special module for con-textualizing multimodal representations on top of the encoders (+C ONTEXTMODULE ), instead, yields gains for ViLBERT compared to +C ONTEXTBATCH , whereas CLIP and UNITER are unaffected (slight drop).", "This shows that all models can exploit visual context, but different strategies (contrastive training or dedicated modules) may be necessary.", "Finally, all three models achieve the highest performance when fine-tuned with both visual and temporal context.", "Adding temporal positional embeddings on top of the contextual module (+T EMPORALEMBEDDINGS ) yields an accuracy of 29.9 for CLIP, 25.7 for UNITER and 24.5 for ViLBERT.", "Crucially, even the best-performing models lag significantly behind the (micro-averaged) human accuracy of 90.8 (cf. Table 3).", "Hence, despite some limited ability to integrate context, models are currently incapable of the fine-grained reasoning and pragmatic inferences needed to solve IMAGECODE .", "Pre-trained model.", "Across all model variants and training regimes, CLIP consistently achieves higher accuracy than ViLBERT or UNITER.", "This implies that a larger amount of parameters, pretraining examples or the contrastive objective are more beneficial than ViLBERT's or UNITER's more expressive model architecture.", "Thus, these results violate the expectations that attention between vision and language would be more suitable to jointly encode highly nuanced visual details and descriptions (Miech et al., 2021).", "Additionally UNITER slightly outperforms ViLBERT as its single-stream architecture might enable richer cross-modal interactions.", "Video frames vs. 
static pictures.", "The highest accuracy on the subset of the data with video frames (20.9) is far lower than that for static pictures (59.4).", "This confirms that videos represent the main challenge in IMAGECODE , both because of the higher similarity of images in a set and of the particular factors of variation that help differentiate among them (cf. Section 4.3 and examples in Appendix G).", "Additionally, model performance on video frames seems to increase more consistently as more context (both visual and temporal) is provided, whereas there is no clear trend in the case of static pictures.", "Error Analysis.", "On a broad level, we have seen that video frames are much more challenging for models.", "Next, to identify more fine-grained causes for the overall low performance of the vision-and-language models on IMAGECODE , we compute the Pearson's correlation between accuracy and a series of possible explanatory variables.", "In particular, we find a weak negative correlation with the number of tokens in the description ( r = 0 . 11 ) and a weak positive correlation with the average pair-wise Euclidean distance between CLIP encodings of the images in a set ( r = 0 . 22 ), which represents visual similarity.", "By focusing on the 1000 annotated examples in Table 2 we observe a stark drop from overall performance on the subset of examples containing nuances, visibility/occlusion, and negation (Fig-ure 4).", "This confirms insights from Kassner and Schtze (2020) and Hosseini et al. (2021) on the difficulty of modeling negation in text-only models.", "We created a new challenge, Image Retrieval from Contextual Descriptions (IMAGECODE ), which is designed to evaluate the ability of vision-and-language models to integrate visual, pragmatic, and temporal context into their predictions.", "In particular, given a complex and nuanced contextual description, a model is required to retrieve the corresponding image from a set of highly similar candidates.", "We benchmarked state-of-the-art bi-encoder and cross-encoder models, such as CLIP and ViLBERT.", "Moreover, we proposed new variants of these models that are more suitable to solve this task, by augmenting them with a module to attend on the other images in a set and temporal embeddings.", "We found that IMAGECODE is highly challenging for all variants: even the best model (28.9) lags behind human performance (90.8) dramatically.", "Images sourced from video frames display the largest gap in performance.", "The most challenging phenomena in IMAGECODE include pragmatics, negation, fine-grained distinctions between images, and occlusion among others.", "IMAGECODE wouldn't have been possible without the herculean effort of the Amazon Mechanical Turkers and their feedback on the interface.", "We also thank Emanuelle Bugliarello for his help with VOLTA , an excellent codebase for several vision and language models.", "We thank the members of SR's research group for their feedback on the ideas presented here.", "IMAGECODE is funded by the Mila-Samsung grant program.", "We thank Microsoft for providing us Azure credits.", "SR acknowledges the support of the NSERC Discovery Grant program and the Facebook CIFAR AI Chair program.", "We distribute the descriptions in IMAGECODE under MIT and adopt the licenses of the video and image sources on which our image sets build on top.", "We report details about crowdsourcing such as payment and selection criteria in Section 3.2 and Appendix B. 
For the tested model variants, we only train a single run for each hyperparameter setting due to long run times." ]
[ "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "abstain", "objective", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "other", "other", "method", "abstain", "other", "other", "other", "abstain", "method", "method", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "other", "result", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "objective", "abstain", "abstain", "objective", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "method" ]
[ "Entity mentions embedded in longer entity mentions are referred to as nested entities.", "Most named entity recognition (NER) systems deal only with the flat entities and ignore the inner nested ones, which fails to capture finer-grained semantic information in underlying texts.", "To address this issue, we propose a novel neural model to identify nested entities by dynamically stacking flat NER layers.", "Each flat NER layer is based on the state-of-the-art flat NER model that captures sequential context representation with bidirectional long short-term memory (LSTM) layer and feeds it to the cascaded CRF layer.", "Our model merges the output of the LSTM layer in the current flat NER layer to build new representation for detected entities and subsequently feeds them into the next flat NER layer.", "This allows our model to extract outer entities by taking full advantage of information encoded in their corresponding inner entities, in an inside-to-outside way.", "Our model dynamically stacks the flat NER layers until no outer entities are extracted.", "Extensive evaluation shows that our dynamic model outperforms state-of-the-art feature-based systems on nested NER, achieving 74.7% and 72.2% on GENIA and ACE2005 datasets, respectively, in terms of F-score.", "1 1 Introduction The task of named entity recognition (NER) involves the extraction from text of names of entities pertaining to semantic types such as person (PER) , location (LOC) and geo-political entity (GPE) .", "NER has drawn the attention of many researchers as the first step towards NLP applications such as entity linking (Gupta et al., 2017), relation extraction (Miwa and Bansal, 2016), event 1 Code is available at https://github.com/ meizhiju/layered-bilstm-crf LOC PER GPE GPE The premier of the western Canadian province of British Columbia ...", "LOC PER GPE GPE The premier of the western Canadian province of British Columbia ...", "extraction (Feng et al., 2016) and co-reference resolution (Fragkou, 2017; Stone and Arora, 2017).", "Due to the properties of natural language, many named entities contain nested entities : embedded names which are included in other entities, illustrated in Figure 1. 
This phenomenon is quite common in many domains (Alex et al., 2007; Byrne, 2007; Wang, 2009; Màrquez et al., 2007).", "However, much of the work on NER copes only with non-nested entities, also called flat entities, and neglects nested entities.", "This leads to loss of potentially important information, with negative impacts on subsequent tasks.", "Traditional approaches to NER mainly fall into two types: supervised learning (Ling and Weld, 2012; Marcinczuk, 2015; Leaman and Lu, 2016) and hybrid approaches (Bhasuran et al., 2016; Rocktäschel et al., 2012; Leaman et al., 2015) that combine supervised learning with rules.", "Such approaches require either domain knowledge or heavy feature engineering.", "Recent advances in neural networks enable NER without depending on external knowledge resources, through automatically learning high-level, abstract features from text (Lample et al., 2016; Ma and Hovy, 2016; Pahuja et al., 2017; Strubell et al., 2017).", "In this paper, we propose a novel dynamic neural model for nested entity recognition, without relying on any external knowledge resources or linguistic features.", "(Figure 2: Overview of our layered model architecture.)", "In Figure 2, interleukin-2 and interleukin-2 receptor alpha gene are nested entities.", "Our model enables sequentially stacking flat NER layers from bottom to top and identifying entities in an end-to-end manner.", "The number of stacked layers depends on the level of entity nesting and dynamically adjusts to the input, as the nesting level varies across sequences.", "Given a sequence of words, our model first represents each word using a low-dimensional vector concatenated from its corresponding word and character sequence embeddings.", "Taking the sequence of word representations as input, our flat NER layer captures a context representation with a long short-term memory (LSTM) layer (Hochreiter and Schmidhuber, 1997).", "The context representation is then fed to a CRF layer for label prediction.", "Subsequently, the context representation from the LSTM layer is merged to build a representation for each detected entity, which is used as the input for the next flat NER layer.", "Our model stops detecting entities if no entities are predicted by the current flat NER layer.", "Through stacking flat NER layers in order, we are able to extract entities from inside to outside while sharing parameters among the different LSTM layers and CRF layers.", "We gain 3.9 and 9.1 percentage-point improvements in F-score over the state-of-the-art feature-based model on two nested entity corpora, GENIA (Kim et al., 2003) and ACE2005 (Walker et al., 2006), and analyze the contributions of inner entities to outer entity detection, drawing several key conclusions.", "In addition, experiments are conducted on the flatly annotated corpus JNLPBA (Kim
et al., 2004).", "Our model can be a complete NER model as well for flat entities, on the condition that it is trained on annotations that do not account for nested entities.", "We obtain 75.55% in terms of F-score that is comparable to the state-of-the-art performance.", "Our nested NER model is designed based on a sequential stack of flat NER layers that detects nested entities in an end-to-end manner.", "Figure 2 provides the overview of our model.", "Our flat NER layers are inspired by the state-of-the-art model proposed in Lample et al. (2016).", "The layer utilizes one single bidirectional LSTM layer to represent word sequences and predict flat entities by putting one single CRF layer on top of the LSTM layer.", "Therefore, we refer to our model as Layered-BiLSTM-CRF model.", "If any entities are predicted, a new flat NER layer is introduced and the word sequence representation of each detected entity by the current flat NER layer is merged to compose a representation for the entity, which is then passed on to the new flat NER layer as its input.", "Otherwise, the model terminates stacking and hence finishes entity detection.", "In this section, we provide a brief description of the model architecture: the flat NER layers and their stacking, the embedding layer and their training.", "A flat NER layer consists of an LSTM layer and a CRF layer.", "The LSTM layer captures the bidi-1447 rectional context representation of sequences and subsequently feeds it to the CRF layer to globally decode label sequences.", "LSTM is a variant of recurrent neural networks (RNNs) (Goller and Kuchler, 1996) that incorporates a memory cell to remember the past information for a long period of time.", "This enables capturing long dependencies, thus reducing the gradient vanishing/explosion problem existing in RNNs.", "We employ bidirectional LSTM with no peephole connection.", "We refer the readers to Hochreiter and Schmidhuber (1997) for more details of LSTM used in our work.", "CRFs are used to globally predict label sequences for any given sequences.", "Given an input sequence X = ( x 1 , x 2 , . . . , x n ) which is the output from the LSTM layer, we maximize the log-probability during training.", "In decoding, we set transition costs between illegal transitions, e.g., transition from O to I-PER, as infinite to restrict illegal labels.", "The expected label sequence y = ( y 1 , y 2 , . . . 
..., y_n) is predicted based on maximum scores in decoding.", "We stack a flat NER layer on top of the current flat NER layer, aiming to extract outer entities.", "Concretely, we merge and average the current context representation of the regions composing the detected entities, as described in the following equation: m = \frac{1}{end - start + 1} \sum_{i=start}^{end} z_i (1), where z_i denotes the representation of the i-th word from the flat NER layer, and m is the merged representation for an entity.", "The region starts at a position start and ends at a position end of the sequence.", "This merged representation of detected entities allows us to treat each detected entity as a single token, and hence we are able to make the most of inner entity information to encourage outer entity recognition.", "If the region is detected as a non-entity, we keep the representation without any processing.", "The processed context representation of the flat NER layer is used as the input for the next flat NER layer.", "The input for the first flat NER layer is different from that of the remaining flat NER layers, since the first layer has no previous layers.", "We thus represent each word for the first flat NER layer by concatenating character sequence embeddings and word embeddings.", "Figure 3 describes the architecture of the embedding layer that produces the word representations.", "Following the successes of Ma and Hovy (2016) and Lample et al. (2016) in utilizing character embeddings on the flat NER task, we also represent each word with its character sequence to capture the orthographic and morphological features of the word.", "Each character is mapped to a randomly initialized vector through a character lookup table.", "We feed the character vectors comprising a word to a bidirectional LSTM layer and concatenate the forward and backward representations to obtain the word-level embedding.", "Unlike the character sequence embeddings, the word embeddings are initialized with pretrained embeddings.", "When evaluating or applying the model, words that are outside of the pretrained embeddings and the training dataset are mapped to an unknown (UNK) embedding, which is randomly initialized during training.", "To train the UNK embedding, we replace words whose frequency is 1 in the training dataset with the UNK embedding with probability 0.5.", "We prepare the gold labels based on the conventional BIO (Beginning, Inside, Out of entities) tagging scheme to represent the label attached to each word.", "As our model detects entities from inside to outside, we keep the same order in preparing the gold labels for each word sequence.", "We call this the detection order rule.", "Meanwhile, we require that each entity region in the sequence be tagged only once with the same entity type, referred to as the non-duplicate rule.", "For instance, in Figure 2, interleukin-2 is tagged first while interleukin-2 receptor alpha gene is subsequently tagged, following the above two rules.", "When assigning the label O to non-entity regions, we only follow the detection order rule.", "As a result, two gold label sequences { O, B-Protein, O, O, O, O } and { O, B-DNA, I-DNA, I-DNA, I-DNA, O } are assigned to the given word sequence Mouse interleukin-2 receptor alpha gene expression, as shown in Figure 2.
With these rules, the number of labels for each word equals the nesting level of entities in the given word sequence.", "We employ mini-batch training and update the model parameters using back-propagation through time (BPTT) (Werbos, 1990) with Adam (Kingma and Ba, 2014).", "The model parameters include weights, biases, transition costs, and character embeddings.", "We disable updates to the word embeddings.", "During training, early stopping, L2 regularization and dropout (Hinton et al., 2012) are used to prevent overfitting.", "Dropout is applied to the input of each flat NER layer.", "Hyper-parameters including batch size, number of hidden units in the LSTM, character dimensions, dropout rate, Adam learning rate, gradient clipping and weight decay (L2) are all tuned with Bayesian optimization (Snoek et al., 2012).", "We employed three datasets for evaluation: GENIA (Kim et al., 2003), ACE2005 (Walker et al., 2006) and JNLPBA (Kim et al., 2004).", "We briefly explain the data and task settings and then introduce the model and experimental settings.", "We performed nested entity extraction experiments on GENIA and ACE2005, while we conducted flat entity extraction on the JNLPBA dataset.", "We tried both updating and freezing the word embeddings; the former did not work.", "(Dataset URLs: GENIA: http://www.geniaproject.org/genia-corpus/term-corpus ; ACE2005: https://catalog.ldc.upenn.edu/ldc2006t06 ; JNLPBA: http://www.nactem.ac.uk/tsujii/GENIA/ERtask/report.html .)", "For the details of data statistics and preprocessing, please refer to the supplementary materials.", "GENIA involves 36 fine-grained entity categories over a total of 2,000 MEDLINE abstracts.", "Following the same task settings as in Finkel and Manning (2009) and Lu and Roth (2015), we collapsed all DNA subcategories into DNA.", "The same setting was applied to the RNA, protein, cell line and cell type categories.", "We used the same test portion as Finkel and Manning (2009), Lu and Roth (2015) and Muis and Lu (2017) for direct comparison.", "ACE2005 contains 7 fine-grained entity categories.", "We made the same modifications described in Lu and Roth (2015) and Muis and Lu (2017), keeping files from bn, bw, nw and wl and splitting them at random into training, development and testing datasets following the same 8:1:1 ratio.", "JNLPBA defines both training and testing datasets.", "These two datasets are composed of 2,000 and 404 MEDLINE abstracts, respectively.", "JNLPBA is originally derived from the GENIA corpus.", "However, only flat and topmost entities in JNLPBA are kept, while nested and discontinuous entities are removed.", "Like our preprocessing of the GENIA corpus, subcategories are collapsed and only 5 entity types are finally retained.", "We randomly chose 90% of the sentences of the original training dataset as our training dataset and the remainder as our development dataset.", "Precision (P), recall (R) and F-score (F) were used as the evaluation metrics in our tasks.", "We define that if the numbers of gold entities and predictions are both zero, the evaluation metrics all equal one hundred percent.", "Our model was implemented with Chainer (Tokui et al., 2015; https://chainer.org/ ).", "We initialized word embeddings in GENIA and JNLPBA with the pretrained embeddings trained on MEDLINE abstracts (Chiu et al., 2016).", "For ACE2005, we initialized each word with the pretrained embeddings trained by Miwa and Bansal (2016).", "Except for the word embeddings, parameters were initialized from a normal distribution.", "For LSTM, we initialized
the hidden states, cell state and all bias terms to 0, except for the forget gate bias, which was set to 1. For other hyper-parameters, we chose the best values via Bayesian optimization.", "We refer the readers to the supplemental material for the settings of the hyper-parameters of the models and of the Bayesian optimization.", "For ablation tests, we compared our layered-BiLSTM-CRF model with two models that produce the input for the next flat NER layer in different ways.", "The first model, called layered-BiLSTM-CRF w/o layered out-of-entities, uses the input of the current flat NER layer for out-of-entity words.", "We name the second model layered-BiLSTM-CRF w/o layered LSTM, as it skips all intermediate LSTM layers and only uses the output of the embedding layer to build the input for the next flat NER layer.", "Please refer to the supplemental material for details of these two models.", "To investigate the effectiveness of our model on different nesting levels of entities, we evaluated the model performance on each flat NER layer on the GENIA and ACE2005 test datasets.", "When calculating the precision and recall measurements, we collected the predictions and gold entities from the corresponding flat NER layer.", "Since predicted entities on a specific flat NER layer might be from other flat NER layers, we defined extended precision (EP), extended recall (ER) and extended F-score (EF) to measure the performance.", "We calculated EP by comparing the predicted entities in a specific flat NER layer with all the gold entities, and ER by comparing the gold entities in a specific flat NER layer with all the predicted entities.", "EF was calculated in the same way as F. In addition to the experiments on the nested GENIA and ACE2005 datasets, flat entity recognition was conducted on the JNLPBA dataset.", "We trained a flat variant of our model that keeps only the first flat NER layer and removes the subsequent stacked layers.", "We follow the hyper-parameter settings of Lample et al.
(2016) for this evaluation.", "We examined the contributions of the predicted labels of the current flat NER layer to the next flat NER layer.", "For this, we introduced label embeddings into each test by combining the embedding with the context representation.", "Experiments show that appending label embeddings hurts the performance of our model while yielding slight improvements for the other two models on the development datasets.", "We removed entities which were predicted in previous flat NER layers during evaluation.", "Table 1 presents the comparisons of our model with related work, including the state-of-the-art feature-based model by Muis and Lu (2017).", "Our model outperforms the state-of-the-art models with 74.7% and 72.2% in terms of F-score, achieving a new state of the art on the nested NER tasks.", "For GENIA, our model gained more improvement in terms of recall, extracting more nested entities without reducing precision.", "On ACE2005, we improved recall by 12.2 percentage points and obtained a 5.1% relative error reduction.", "Compared with GENIA, our model gained more improvement on ACE2005 in terms of F-score.", "Two possible reasons account for this.", "One reason is that ACE2005 contains more deeply nested entities (maximum nesting level 5) than GENIA (maximum nesting level 3) on the test dataset.", "This allows our model to capture the potential 'nested' relations among nested entities.", "The other reason is that ACE2005 has more nested entities (37.45%) compared with GENIA (21.56%).", "Table 2 shows the results of the models on the development datasets of GENIA and ACE2005, respectively.", "From this table, we can see that our model, which only utilizes the context representation to prepare the input for the next flat NER layer, performs better than the other two models.", "This demonstrates that reusing the input of the current flat NER layer, whether by skipping the representations of out-of-entity words or by skipping all intermediate LSTM layers, hurts performance.", "Compared with the layered-BiLSTM-CRF model, the performance drop of the layered-BiLSTM-CRF w/o layered out-of-entities model reflects that skipping the representations of out-of-entity words leads to a decline in performance.", "This is because the representations of non-entity words did not incorporate the current context representation, as we used the input rather than the output to represent them.", "By analogy, the layered-BiLSTM-CRF w/o layered LSTM model skips the representations of both entities and non-entity words, resulting in worse performance.", "This is because, when skipping all intermediate LSTM layers, the input of the first flat NER layer,", "i.e., the word embeddings, is passed to the remaining flat NER layers.", "(Table 1: Comparisons of our model with the state-of-the-art models on nested NER, P/R/F in %. Finkel and Manning (2009): GENIA 75.4/65.9/70.3, ACE2005 -; Lu and Roth (2015): GENIA 72.5/65.2/68.7, ACE2005 66.3/59.2/62.5; Muis and Lu (2017): GENIA 75.4/66.8/70.8, ACE2005 69.1/58.1/63.1; our model: GENIA 78.5/71.3/74.7, ACE2005 74.2/70.3/72.2.) Since word embeddings do not contain context representation, we fail to incorporate the context representation when we use", "the word embeddings as the input for the flat NER layers.", "Therefore, we cannot take advantage of the context representation and can only use the word embeddings as the input for the flat NER layers in this case.", "Table 3 and Table 4 describe the performance for each entity type on the GENIA and ACE2005 test datasets, respectively.",
"In GENIA, our model performed best in recognizing entities with type RNA .", "This is because most of the entities pertaining to RNA mainly end up either with mRNA or RNA .", "These two words are informative indicators of RNA entities.", "For entities in rest entity types, their performances are close to the overall performance.", "One possible reason is that there are many instances to model them.", "This also accounts for the high performances of entity types such as PER , GPE in ACE2005.", "The small amounts of instances of entity types like FAC in ACE2005 is one reason for their under overall performances.", "We refer readers to supplemental material for statistics details.", "When evaluating our model on top level which contains only outermost entities, the precision, recall and F-score were 78.19%, 75.17% and 76.65% on GENIA test dataset.", "For ACE2005, the corresponding precision, recall and F-score were 68.37%, 68.57% and 68.47%.", "Compared with the overall performance listed in Table 1, we obtained higher top level performance on GENIA but lower performance in ACE2005.", "We discuss details of this phenomena in the following tables.", "Table 5 shows the performances of each flat NER layer in GENIA test dataset.", "Among all the stacking flat NER layers, our model resulted in the best performance regarding standard evaluation metrics on the first flat NER layer which contains the predictions for the gold innermost entities.", "When the model went to deeper flat NER layers, the performance dropped gradually as the number of gold entities decreased.", "However, the performance for predictions on each flat 1451 Layer P (%) R (%) F (%) EP (%) ER (%) EF (%) #Predictions #Gold Entities Layer 1 72.86 69.82 71.31 78.46 71.06 74.57 4,783 4,991 Layer 2 56.88 27.59 37.15 81.15 73.98 77.39 276 569 Layer 3 0.00 0.00 0.00 0.00 60.00 0.00 1 15 Table 5: Results of layer evaluation on GENIA test dataset.", "NER layer was different in terms of extended evaluation metrics.", "For the first two flat NER layers, performance of extended evaluation is better than the performance of standard evaluation.", "It indicates that gold entities correspond to some of the predictions on the specific flat NER layer are from other flat NER layers.", "This may lead to the zero performances for the last flat NER layer.", "In addition, performance on the second flat NER layer was higher than it was on the first flat NER layer in terms of extended F-score.", "This demonstrates that our model is able to obtain higher performance on top level of entities than innermost entities.", "Table 6 lists the results of each flat NER layer on ACE2005 test dataset.", "Similar to GENIA, the first flat NER layer achieved better performance than the rest flat NER layers.", "Performances decreased in a bottom-to-up manner regarding model architecture.", "This phenomena was the same with the extended evaluation performances, which reflects that some of the predictions in a specific flat NER layer were detected in other flat NER layers.", "Unlike rising tendency (except last flat NER layer) regarding extend F-score in GENIA, performance in ACE2005 was in downtrend.", "This accounts for the fact that F-score on top level was lower than it on the fist flat NER layer.", "Even though the decline trend in extended F-score, the first flat NER layer contained the largest proportion of predictions for the gold entities, the overall performance on all nested entities showed in Table 1 was still high.", "Unlike GENIA, our model in ACE2005 
stopped before reaching the maximum nesting level of entities.", "This indicates that our model failed to model the appropriate nesting levels.", "This is one of the reasons that account for the zero predictions on the last flat NER layer.", "The sparse instances at the higher nesting levels could be another reason for the zero scores on the last flat NER layer.", "Compared with the state-of-the-art work on JNLPBA (Gridach, 2017), which achieved 75.87% in terms of F-score, our model obtained 75.55% in F-score.", "Since both the model by Gridach (2017) and our flat model are based on Lample et al. (2016), it is reasonable that both models obtain comparable performance.", "We examined the error types and their statistics, both over all nested entities and per flat NER layer, on the GENIA and ACE2005 test datasets.", "On the ACE2005 test dataset, 28% of predictions were incorrect in 200 randomly selected sentences.", "Among these errors, 39% arose because the text spans were assigned other entity types.", "We call this type of error a type error.", "The main reason is that most of them are pronouns and co-refer to other entities which are absent from the sentence.", "Taking the sentence 'whether that is true now, we can not say' as an example, 'we' is annotated as ORG while our model labeled it as PER.", "Lack of context information, such as the absence of co-referent entities, leads our model to make wrong decisions.", "In addition, 30% of the errors occurred because predictions covered only parts of gold entities, albeit with correct entity types.", "This error type is referred to as partial prediction error.", "This might be because these gold entities tend to be clauses or independent sentences, thus possibly containing many modifiers.", "For example, in the sentence 'A man who has been to Baghdad many times and can tell us with great knowledge exactly what it's going to be like to fight on those avenues in that sprawling city of Baghdad Judy.', the span 'A man who has been to Baghdad many times and can tell us with great knowledge exactly what it's going to be like to fight on those avenues in that sprawling city of Baghdad' is annotated as PER, while our model could only extract 'A man who has been to Baghdad many times' and predicted it as PER.", "On the first flat NER layer, we got 41% type errors and 11% partial prediction errors, respectively.", "Apart from this, our model produced predictions belonging to other flat NER layers, leading to 5% of errors.", "We define this error type as layer error.", "Unlike on the first flat NER layer, 26% of errors were caused by layer error.", "Additionally, 17% of the errors belong to type error.", "In particular, 22% of errors were due to type error.", "As for the last flat NER layer, 40% of errors were caused by partial prediction error.", "The remaining errors differed from the error types mentioned above.", "One possible reason is that we have fewer gold entities to train this flat NER layer compared with the previous flat NER layers.", "Another reason might be error propagation.", "Similarly, 200 sentences were randomly selected from the GENIA test dataset.", "We got 20% erroneous predictions in this subset.", "Among these errors, 17% and 24% were due to type error and partial prediction error, respectively.", "In addition, 24% of the predictions on the first flat NER layer were incorrect.", "Among them, the top error types were layer error, partial prediction error and type error,
accounting for 21%, 18% and 13%, respectively.", "Errors on the second flat NER layer were mainly caused by type error and partial prediction error.", "The success of neural networks has boosted the performance of flat NER in different domains (Lample et al., 2016; Ma and Hovy, 2016; Gridach, 2017; Strubell et al., 2017).", "Such models achieved the state of the art without any handcrafted features or external knowledge resources.", "In contrast to flat NER, far fewer attempts have addressed nested entity recognition.", "Existing approaches to nested NER (Shen et al., 2003; Alex et al., 2007; Finkel and Manning, 2009; Lu and Roth, 2015; Xu and Jiang, 2016; Muis and Lu, 2017) mainly rely on hand-crafted features.", "They also fail to take advantage of the dependencies among nested entities.", "Our model captures these dependencies and automatically learns high-level, abstract features from text.", "Early work on nested NER mainly involved hybrid systems that combined rules with supervised learning algorithms.", "For example, Shen et al. (2003), Zhou et al. (2004) and Zhang et al. (2004) applied a Hidden Markov Model to GENIA to extract inner entities and then used rule-based methods to obtain the outer entities.", "Furthermore, Gu (2006) extracted nested entities with SVMs trained separately on inner entities and outermost entities, without taking the hidden relations between nested entities into consideration.", "All these methods failed to capture the dependencies between nested entities.", "In one notable attempt, Alex et al. (2007) separately built inside-out and outside-in layered CRFs that were able to use the current guesses as the input for the next layer.", "They also cascaded separate CRFs for each entity type by using the output from previous CRFs as features of the current CRFs, yielding the best performance in their work.", "One of the main drawbacks of the cascading approach was that it failed to handle nested entities sharing the same entity type, which are quite common in natural language.", "Finkel and Manning (2009) proposed a discriminative constituency tree to represent each sentence, where the root node was used for connection.", "All entities were treated as phrases and represented as subtrees following the whole tree structure.", "Unlike our model, which is independent of linguistic features, Finkel and Manning (2009) used a CRF-based approach driven by entity-level features to detect nested entities. Later on, Lu and Roth (2015) built hyper-graphs that allow edges to connect multiple nodes to represent both the nested entities and their references (a.k.a.
mentions).", "One issue in their approach is the spurious structures of hyper-graphs as they enumerate combinations of nodes, types and boundaries to represent entities.", "In addition, they fail to encode the dependencies among embedded entities using hyper-graphs.", "In contrast, our model enables nested entity representation by merging representation of multiple tokens composed in the entity and considers it as the longer 1453 entity representation.", "This allows us to represent outer entities based on inner entity representation, thus managing to capture the relations between inner and outer entities, and hence overcoming the spurious entity structure problem.", "As an improvement in overcoming spurious structure issue in Lu and Roth (2015), Muis and Lu (2017) further incorporated mention separators along with features to yield better performance on nested entities.", "Both Lu and Roth (2015) and Muis and Lu (2017) rely on hand-crafted features to extract nested entities without incorporating hidden dependencies in nested entities.", "In contrast, we make the most of dependencies of nested entities in our model to encourage outer entity recognition by automatic learning of high-level and abstract features from sequences.", "Shared tasks dealing with nested entities like SemEval-2007 Task 9 9 and GermEval-2014 10 were held in order to advance the state-of-the-art on this issue.", "Additionally, as subtasks in KBP 2015 11 and KBP 2016 12 , one of the aims in tri-lingual Entity Discovery and Linking Track (EDL) track was extracting nested entities from textual documents varying from English, Chinese and Spanish.", "Following this task, Xu and Jiang (2016) firstly developed a new tagging scheme which is based on fixed-size ordinally-forgetting encoding (FOFE) method for text fragment representation.", "All the entities along their contexts were represented using this novel tagging scheme.", "Different from the extensively used LSTM-RNNs in sequence labeling task, a feed-forward neural network was used to predict labels on entity level for each fragment in any of given sequences.", "Additionally, Li et al. (2017) used the model proposed in Lample et al. 
(2016) to extract both flat entities and the components composing nested and discontinuous entities.", "Another BiLSTM was applied to combine the components into nested and discontinuous entities.", "However, these methods failed to capture and utilize the inner entity representation to facilitate outer entity detection.", "This paper presented a dynamic layered model which takes full advantage of inner entity information to encourage outer entity recognition in an end-to-end manner.", "Our model is based on a flat NER layer consisting of an LSTM and a CRF, so it is able to capture the context representation of input sequences and globally decode predicted labels at each flat NER layer without relying on feature engineering.", "Our model automatically stacks the flat NER layers, sharing the parameters of the LSTM and CRF across the layers.", "The stacking continues until the current flat NER layer predicts the entire sequence as outside of entities, which lets the model stop stacking flat NER layers dynamically.", "Each flat NER layer receives the merged context representation as input for outer entity recognition, based on the predicted entities from the previous flat NER layer.", "With this dynamic end-to-end design, our model is able to outperform existing models, achieving the state of the art on two nested NER tasks.", "In addition, the model can be flexibly simplified into a flat NER model by removing the components cascaded after the first NER layer.", "Extensive evaluation shows that the utilization of inner entities significantly encourages outer entity detection, with improvements of 3.9 and 9.1 percentage points in F-score on GENIA and ACE2005, respectively.", "Additionally, using only the current context representation contributes more to the performance improvement than using context representations from multiple layers.", "We thank the anonymous reviewers for their valuable comments.", "The first author is financially supported by the University of Manchester's 2016 President's Doctoral Scholar Award.", "Sophia Ananiadou acknowledges the BBSRC BB/P025684/1 Japan Partnering Award and BB/M006891/1 Empathy.", "This research has also been carried out with funding from AIRC/AIST and results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO)." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "method", "method", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "objective", "abstain", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "The graph-to-sequence (Graph2Seq) learning aims to transduce graph-structured representations to word sequences for text generation.", "Recent studies propose various models to encode graph structure.", "However, most previous works ignore the indirect relations between distance nodes, or treat indirect relations and direct relations in the same way.", "In this paper, we propose the Heterogeneous Graph Transformer to independently model the different relations in the individual subgraphs of the original graph, including direct relations, indirect relations and multiple possible relations between nodes.", "Experimental results show that our model strongly outperforms the state of the art on all four standard benchmarks of AMR-to-text generation and syntax-based neural machine translation.", "Graph-to-sequence (Graph2Seq) learning has attracted lots of attention in recent years.", "Many Natural Language Process (NLP) problems involve learning from not only sequential data but also more complex structured data, such as semantic graphs.", "For example, AMR-to-text generation is a task of generating text from Abstract Meaning Representation (AMR) graphs, where nodes denote semantic concepts and edges refer to relations between concepts (see Figure 1", "(a)).", "In addition, it has been shown that even if the sequential input can be augmented by additional structural information, bringing benefits for some tasks, such as semantic parsing (Pust et al., 2015; Guo and Lu, 2018) and machine translation (Bastings et al., 2017).", "Therefore, Xu et al. (2018b) introduced the Graph2Seq problems which aim to generate target sequence from graph-structured data.", "capture the inherent structure in the given graph and learn good representations for generating the target text.", "Early work relies on statistical methods or sequence-to-sequence (Seq2Seq) models where input graphs are linearized (Lu et al., 2009; Song et al., 2017; Konstas et al., 2017).", "Recent studies propose various models based on graph neural network (GNN) to encode graphs (Xu et al., 2018b; Beck et al., 2018; Guo et al., 2019; Damonte and Cohen, 2019; Ribeiro et al., 2019).", "However, these approaches only consider the relations between directly connected nodes, ignore the indirect relations between distance nodes.", "Inspired by the success of Transformer (Vaswani et al., 2017) which can learn the dependencies between all tokens without regard to their distance, the current state-of-the-art Graph2Seq models (Zhu et al., 2019; Cai and Lam, 2020) are based on Transformer and learn the relations between all nodes no matter they are connected or not.", "These approaches use shortest relation path between nodes to encode semantic relationships.", "However, they ignore the information of nodes in the relation path and encode the direct relations and indirect relations without distinction.", "It may disturb the information propagation process when aggregate information from direct neighbors.", "To solve the issues above, we propose the Heterogeneous Graph Transformer (HetGT) to encode the graph, which independently model the different relations in the individual subgraphs of the original graph.", "HetGT is adapted from Transformer and it also employs an encoder-decoder architecture.", "Following Beck et al. 
(2018), we first transform the input into its corresponding Levi graph, which is a heterogeneous graph (it contains different types of edges).", "Then we split the transformed graph into multiple subgraphs according to its heterogeneity, which corresponds to different representation subspaces of the graph.", "For updating the node representations, attention mechanisms are used for independently aggregating information in different subgraphs.", "(Figure 1: (a) An example AMR graph for the sentence 'Here it is a country with the freedom of speech.'", "(b) Its corresponding extended Levi graph with three types of edges.", "(c) The architecture of the HetGT encoder.)", "Finally, the representations of each node obtained in different subgraphs are concatenated together and a parameterized linear transformation is applied.", "In this way, HetGT can adaptively model the various relations in the graph independently, avoiding the information loss caused by mixing all of them.", "Moreover, we introduce the jump connection in our model, which significantly improves the model performance.", "We evaluate our model on four benchmark datasets of two Graph2Seq tasks: AMR-to-text generation and syntax-based Neural Machine Translation (NMT).", "In terms of various evaluation metrics, our model strongly outperforms the state-of-the-art (SOTA) results on both tasks.", "In particular, in AMR-to-text generation, our model improves the BLEU scores of the SOTA by about 2.2 and 2.3 points on the two benchmark datasets (LDC2015E86 and LDC2017T10).", "In syntax-based NMT, our model surpasses the SOTA by about 4.1 and 2.2 BLEU points for English-German and English-Czech on the News Commentary v11 datasets from the WMT16 translation task.", "Our contributions can be summarized as follows: We propose the Heterogeneous Graph Transformer (HetGT), which adaptively models the various relations in different representation subgraphs.", "We analyze the shortcomings of the residual connection and introduce a better connectivity method around encoder layers.", "Experimental results show that our model achieves new state-of-the-art performance on four benchmark datasets of two Graph2Seq tasks.", "In this section, we first give a brief review of the Transformer, which is the basis of our model.", "Then we introduce the graph transformation process.", "Finally, we detail the whole architecture of HetGT.", "The Transformer employs an encoder-decoder architecture, consisting of stacked encoder and decoder layers.", "Encoder layers consist of two sublayers: a self-attention mechanism and a position-wise feed-forward network.", "The self-attention mechanism employs h attention heads.", "Each attention head operates on an input sequence x = (x_1, ..., x_n) of n elements, where x_i \in R^{d_x}, and computes a new sequence z = (z_1, ..., z_n) of the same length, where z_i \in R^{d_z}.", "Finally, the results from all the attention heads are concatenated together and a parameterized linear transformation is applied to get the output of the self-attention sublayer.", "(Figure 2: An example of a graph structure and its extension to subword units.) Each output element z_i is computed as the weighted sum of", "linearly transformed input elements: z_i = \sum_{j=1}^{n} \alpha_{ij} (x_j W^V) (1), where \alpha_{ij} is a weight coefficient computed by a softmax function: \alpha_{ij} = softmax(e_{ij}) = \frac{\exp e_{ij}}{\sum_{k=1}^{n} \exp e_{ik}} (2). And e_{ij} is computed using a compatibility function that compares two input elements:
e_{ij} = \frac{(x_i W^Q)(x_j W^K)^T}{\sqrt{d_z}} (3). The scaled dot product was chosen as the compatibility function.", "W^V, W^Q, W^K \in R^{d_x \times d_z} are layer-specific trainable parameter matrices.", "Meanwhile, these parameter matrices are unique per attention head.", "Following Beck et al. (2018), we transform the original graph into the Levi graph.", "The transformation equivalently turns edges into additional nodes, so we can encode the original edge labels in the same way as nodes.", "We also add a reverse edge between each pair of connected nodes, as well as a self-loop edge for each node.", "These strategies let the model benefit from information propagation in different directions (see Figure 1 (b)).", "In order to alleviate the data sparsity problem in the corpus, we further introduce Byte Pair Encoding (BPE) (Sennrich et al., 2016) into the Levi graph.", "We split each original node into multiple subword nodes.", "Besides adding the default connections, we also add reverse and self-loop edges among the subwords.", "For example, the word country in Figure 2 is segmented into co@@, un@@, try with three types of edges between them.", "Finally, we transform the AMR graph into the extended Levi graph, which can be seen as a heterogeneous graph, as it has different types of edges.", "Our model is also an encoder-decoder architecture, consisting of stacked encoder and decoder layers.", "Given a preprocessed extended Levi graph, we split it into multiple subgraphs according to its heterogeneity.", "In each graph encoder block, a node's representation in each subgraph is updated based on its neighbor nodes in the current subgraph.", "Then all the representations of this node in different subgraphs are combined to get its final representation.", "In this way, the model can attend to information from different representation subgraphs and adaptively model the various relations.", "The learned representations of all nodes at the last block are fed to the sequence decoder for sequence generation.", "The architecture of the HetGT encoder is shown in Figure 1 (c).", "Due to space limitations, the decoder is omitted from the figure.", "We will describe it in Section 2.3.2.", "Unlike previous Transformer-based Graph2Seq models, which use relative position encodings to incorporate structural information, we use a simpler way to encode the graph structure.", "As the Transformer treats the sentence as a fully-connected graph, we directly mask the attention of non-neighbor nodes when updating each node's representation.", "Specifically, we mask the attention \alpha_{ij} for nodes j \notin N_i, where N_i is the set of neighbors of node i in the graph.", "So given the input sequence x = (x_1, ..., x_n), the output representation of node i, denoted z_i, in each attention head is computed as follows: z_i = \sum_{j \in N_i} \alpha_{ij} (x_j W^V) (4), where \alpha_{ij} represents the attention score of node j to i, which is computed using the scaled dot-product function as in Equation 2.", "We also investigate another way to compute the attention scores.", "We use the additive form of attention instead of scaled dot-product attention, which is similar to the graph attention network (Veličković et al., 2018).", "The additive form of attention shows better performance and trainability in some tasks (Chen et al., 2019).", "The attention coefficient \alpha_{ij} is computed as follows: \alpha_{ij} = softmax(e_{ij}) = \frac{\exp e_{ij}}{\sum_{k \in N_i} \exp e_{ik}}, e_{ij} = LeakyReLU(a^T [x_i W^V ; x_j W^V]) (5), where a \in R^{2 d_z}
is a weight vector.", "LeakyReLU (Girshick et al., 2014) is used as the activation function.", "Motivated by the success of the multi-head mechanism, we propose the heterogeneous mechanism.", "Considering a sentence, multi-head attention allows the model to implicitly attend to information from different representation subspaces at different positions.", "Correspondingly, our heterogeneous mechanism makes the model explicitly attend to information in different subgraphs, corresponding to different representation subspaces of the graph, which enhances the model's encoding capability.", "As stated above, the extended Levi graph is a heterogeneous graph which contains different types of edges.", "For example, in Figure 1 (b), the edge type vocabulary for the Levi graph of the AMR graph is T = { default, reverse, self }.", "Specifically, we first group all edge types into a single one to get a homogeneous subgraph, referred to as the connected subgraph.", "The connected subgraph is actually an undirected graph which contains the complete connectivity information of the original graph.", "Then we split the input graph into multiple subgraphs according to the edge types.", "Besides learning the directly connected relations, we introduce a fully-connected subgraph to learn the implicit relationships between indirectly connected nodes.", "Finally, we get the set of subgraphs with M elements: G_sub = { fully-connected, connected, default, reverse }.", "For the AMR graph, M = 4 (for NMT, M = 6; we detail this in Section 3.1).", "Note that we do not have a subgraph containing only self edges.", "Instead, we add the self-loop edges to all subgraphs.", "We think this is more helpful for information propagation than constructing an independent self-connected subgraph.", "Now the output z in each encoder layer is computed as follows: z = FFN(concat(z^{G_sub_1}, ..., z^{G_sub_M}) W^O), z_i^{G_sub_m} = \sum_{j \in N_i^{G_sub_m}} \alpha_{ij} (x_j W^V), m \in [1, M] (6), where W^O \in R^{M d_z \times d_z} is the parameter matrix and N_i^{G_sub_m} is the set of neighbors of node i in the m-th subgraph.", "(Figure 3: Different layer aggregation methods: residual (left), jump (middle), dense (right).)", "\alpha_{ij} is computed as in Equation 2 or Equation 5.", "FFN is a feed-forward network which consists of two linear transformations with a ReLU activation in between.", "We also employ residual connections between sublayers, as well as layer normalization.", "Note that the heterogeneous mechanism is independent of the model architecture, so it can be applied to any other graph model, which may bring benefits.", "For the decoder, we follow the standard implementation of the sequential Transformer decoder to generate the text sequence.", "The decoder layers consist of three sublayers: self-attention followed by encoder-decoder attention, followed by a position-wise feed-forward layer.", "As stated above, our model consists of stacked encoder layers.", "Better information propagation between encoder layers may bring better performance.", "Therefore, we investigate three different layer aggregation methods, which are illustrated in Figure 3.", "
When updating the representation of each node at the l-th layer, recent approaches aggregate the neighbors first and then combine the aggregated result with the node's representation from the (l-1)-th layer.", "This strategy can be viewed as a form of skip connection between different layers (Xu et al., 2018a): z_{N_i}^{(l)} = AGGREGATE({ z_j^{(l-1)}, j \in N_i }), z_i^{(l)} = COMBINE(z_{N_i}^{(l)}, z_i^{(l-1)}) (7).", "The residual connection is another well-known skip connection, which uses the identity mapping as the combine function to help signals propagate (He et al., 2016).", "However, these skip connections cannot adaptively adjust the neighborhood size of the final-layer representation independently.", "If we skip a layer for z_i^{(l)}, all subsequent units such as z_i^{(l+j)} that use this representation will be using this skip implicitly.", "Thus, to selectively aggregate the outputs of previous layers at the end, we introduce the Jumping Knowledge architecture (Xu et al., 2018a) into our model.", "At the last layer L of the encoder, we combine all the outputs of previous encoder layers by concatenation to help the model selectively aggregate all of those intermediate representations.", "where W_jump \in R^{(L d_z + d_x) \times d_z}.", "Furthermore, to better improve information propagation, dense connectivity can be introduced as well.", "With dense connectivity, the nodes in the l-th layer not only take input from the (l-1)-th layer but also draw information from all preceding layers: z_i^{(l)} = Concat(z_i^{(l-1)}, ..., z_i^{(1)}, x_i) W_dense^{(l)} (9), where W_dense^{(l)} \in R^{d^{(l)} \times d_z} and d^{(l)} = d_x + d_z (l-1).", "Dense connectivity was also introduced in previous research (Huang et al., 2017; Guo et al., 2019).", "We build and test our model on two typical Graph2Seq learning tasks.", "One is AMR-to-text generation and the other is syntax-based NMT.", "Table 1 presents the statistics of the four datasets of the two tasks.", "For AMR-to-text generation, we use two standard benchmarks, LDC2015E86 (AMR15) and LDC2017T10 (AMR17).", "These two datasets contain 16K and 36K training instances, respectively, and share the development and test sets.", "Each instance contains a sentence and an AMR graph.", "In the preprocessing steps, we apply entity simplification and anonymization in the same way as Konstas et al. (2017).", "Then we transform each preprocessed AMR graph into its extended Levi graph as described in Section 2.2.", "For syntax-based NMT, we take the syntactic trees of source texts as inputs.", "We evaluate our model on both the English-German (En-De) and English-Czech (En-Cs) News Commentary v11 datasets from the WMT16 translation task ( http://www.statmt.org/wmt16/translation-task.html ).", "Both sides are tokenized and split into subwords using BPE with 8000 merge operations.", "English text is parsed using SyntaxNet (Alberti et al., 2017).", "(Table 1: The statistics of the four datasets, Train/Dev/Test: LDC2015E86 (AMR15): 16,833/1,368/1,371; LDC2017T10 (AMR17): 36,521/1,368/1,371; English-Czech (En-Cs): 181,112/2,656/2,999; English-German (En-De): 226,822/2,169/2,999.) Then", "we transform the labeled dependency tree into the extended Levi graph as described in Section 2.2.", "Unlike in AMR-to-text generation, in the NMT task the input sentence contains significant sequential information.", "This information is lost when treating the sentence as a graph.", "Guo et al.
(2019) retain this information by adding sequential connections between the word nodes.", "In our model, we also add forward and backward edges in the extended Levi graph.", "Thus, the edge type vocabulary for the extended Levi graph of the dependency tree is T = { default, reverse, self, forward, backward }.", "So the set of subgraphs for NMT is G_sub = { fully-connected, connected, default, reverse, forward, backward }.", "Note that we do not change the model architecture for the NMT tasks.", "However, we still get good results, which indicates the effectiveness of our model on Graph2Seq tasks.", "Except for introducing BPE into the Levi graph, the above preprocessing steps follow Bastings et al. (2017).", "We refer to them for further information on the preprocessing steps.", "Both our encoder and decoder have 6 layers, with 512-dimensional word embeddings and hidden states.", "We employ 8 heads and dropout with a rate of 0.3.", "For optimization, we use the Adam optimizer with β_2 = 0.998 and set the batch size to 4096 tokens.", "Meanwhile, we increase the learning rate linearly for the first warmup steps, and decrease it thereafter proportionally to the inverse square root of the step number.", "We set the warmup steps to 8000.", "A similar learning rate schedule is adopted in Vaswani et al. (2017).", "Our implementation uses the OpenNMT library (Klein et al., 2017).", "We train the models for 250K steps on a single GeForce GTX 1080 Ti GPU.", "Our code is available at https://github.com/QAQ-v/HetGT.", "For performance evaluation, we use BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and sentence-level CHRF++ (Popović, 2015) with default hyperparameter settings as evaluation metrics.", "Meanwhile, we use the tools of Neubig et al. (2019) for the statistical significance tests.", "Our baseline is the original Transformer (parameters were chosen following the OpenNMT FAQ: http://opennmt.net/OpenNMT-py/FAQ.html#how-do-i-use-the-transformer-model ).", "For AMR-to-text generation, the Transformer takes linearized graphs as inputs.", "For syntax-based NMT, the Transformer is trained on the preprocessed translation dataset without syntactic information.", "We also compare the performance of HetGT with previous single/ensemble approaches, which can be grouped into three categories: (1) recurrent neural network (RNN) based methods (GGNN2Seq, GraphLSTM); (2) graph neural network (GNN) based methods (GCNSEQ, DGCN, G2S-GGNN); (3) Transformer-based methods (Structural Transformer, GTransformer).", "The ensemble models are denoted by subscripts in Table 2 and Table 3.", "3.4 Results on AMR-to-text Generation: Table 2 presents the results of our single model and previous single/ensemble models on the test sets of AMR15 and AMR17.", "We can see that our Transformer baseline outperforms most previous single models, and our best single model HetGT_additive outperforms the Transformer baseline by a large margin (6.15 and
44 BLEU) on both benchmarks.", "It demonstrates the importance of incorporating structural information.", "Meanwhile, HetGT additive gets an improvement of 2 .", "18 and 2 .", "28 BLEU points over the latest SoTA results (Zhu et al., 2019) on AMR15 and AMR17, respectively.", "Previous models can capture the structural information but most of them ignore heterogeneous information.", "These results indicate that the heterogeneity in the graph carries lots of useful information for the downstream tasks, and our model can make good use of it.", "Furthermore, our best single model still has better results than previous ensemble models on both two datasets.", "Note that additive attention based model HetGT additive is significantly better that dot-product attention based model HetGT dot-product in AMR-to-text generation.", "It may be attributed to that the additive attention has less parameters and is easier to train on the small dataset.", "Table 3 presents the results of our single model and previous single/ensemble models on the test sets for En-De and En-Cs language pairs.", "We can see that our Transformer baseline already outperforms all previous results even though some of them are Transformer based.", "It shows the effectiveness of Transformer for NMT tasks.", "Meanwhile, even without changing the model architecture for the NMT tasks, our single model surpasses Transformer baseline by 2 .", "26 and 1 .", "46 BLEU points on the En-De and En-Cs tasks, respectively, and our model surpasses previous best models by 4 .", "14 and 2 .", "19 BLEU points.", "In syntax-based NMT where the dataset is larger than AMR-to-text generation, the HetGT dot-product gets comparable results compared to the HetGT additive , and even outperforms the HetGT additive in terms of METEOR and CHRF++ on the language pair En-De.", "We think on the larger datasets the HetGT dot-product will get better results than the HetGT additive .", "Firstly, we compare the performance of three layer-aggregation methods discussed in Section 2.3.3.", "The results are shown in Table", "4. We can see the jump connection is the most effective method.", "However, the dense connection performs the worst.", "We think the reason is that dense connection introduce lots of extra parameters which are harder to learn.", "In this section, we also use AMR15 as our benchmark to investigate how each subgraph influences the final results of our best model HetGT additive .", "Table 5 shows the results of removing or only keeping the specific subgraph.", "Only keeping the fully-connected subgraph essentially is what the Transformer baseline does.", "It means the model does not consider the inherent structural information in inputs.", "Obviously, it cannot get a good result.", "In addition, only keeping the connected subgraph does not perform well even it considers the structural information.", "It demonstrates that the heterogeneous information in the graph is helpful for learning the representation of the graph.", "When removing any subgraph, the performance of the model will decrease.", "It demonstrates that each subgraph has contributed to the final results.", "At last, we remove BPE, and we get 29 .", "84 BLEU score which is still better than previous SoTA that also uses BPE.", "Note that when we remove the connected subgraph, the results do not have statistically significant changes ( p = 0 . 
293 ).", "We think the reason is that the left subgraphs already contain the full information of the original graph because the connected subgraph is obtained by grouping all edge types into a single one.", "Except that, all the other results have statistically significant changes ( p 0 . 05 ).", "We perform case studies for better understanding the model performance.", "We compare the outputs of Transformer baseline and our HetGT additive .", "The results are presented in Table 6.", "In the first simple example, our Transformer baseline and HetGT additive can generate the target sequence without mistakes.", "In the second example which is more complicated, the Transformer baseline fails to identify the possessor of opinion and the subject of agreed while our model successfully recognizes them.", "However, we find the there is a common problem: the sentences they generate all have some duplication.", "We will explore this issue further in the future work.", "Early researches for Graph2Seq learning tasks are based on statistical methods and neural seq2seq", "model.", "Lu et al. (2009) propose an NLG approach built on top of tree conditional random fields to use the tree-structured meaning representation.", "Song et al. (2017) use synchronous node replacement grammar to generate text.", "Konstas et al. (2017) linearize the input graph and feed it to the seq2seq model for text-to-AMR parsing and AMR-to-text generation.", "However, linearizing AMR graphs into sequences may incurs in loss of information.", "Recent efforts consider to capture the structural information in the encoder.", "Beck et al. (2018) employ Gated Graph Neural Networks (GGNN) as the encoder and Song et al. (2018) propose the graph-state LSTM to incorporate the graph structure.", "Their works belong to the family of recurrent neural network (RNN).", "In addition, there are some works are build upon the GNN.", "Damonte and Cohen (2019) propose stacking encoders including LSTM and GCN.", "Guo et al. (2019) introduce the densely connected GCN to encode richer local and non-local information for better graph representation.", "Recent studies also extend Transformer to encode structure information.", "Shaw et al. (2018) propose the relation-aware self-attention which learns explicit embeddings for pair-wise relationships between input elements.", "Zhu et al. 
(2019) and Cai and Lam (2020) both extend the relation-aware self-attention to generate text from AMR graph.", "Our model is also based on Transformer.", "However, we do not employ the relative position encoding to incorporate structural information.", "Instead, we directly mask the non-neighbor nodes attention when updating each nodes representation.", "Moreover, we introduce the heterogeneous information and jump connection to help model learn a better graph representation, bringing substantial gains in the model performance.", "In this paper, we propose the Heterogeneous Graph Transformer (HetGT) for Graph2Seq learning.", "Our proposed heterogeneous mechanism can adaptively model the different representation subgraphs.", "Experimental results show that HetGT strongly outperforms the state of the art performances on four benchmark datasets of AMR-to-text generation and syntax-based neural machine translation tasks.", "There are two directions for future works.", "One is to investigate how the other graph models can benefit from our proposed heterogeneous mechanism.", "also like to proposed model", "Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik.", "2014.", "Rich feature hierarchies for accurate object detection and semantic segmentation.", "In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 580587.", "Zhijiang Guo and Wei Lu.", "2018.", "Better transition-based AMR parsing with a refined search space.", "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , pages 17121722, Brussels, Belgium.", "Association for Computational Linguistics.", "Zhijiang Guo, Yan Zhang, Zhiyang Teng, and Wei Lu.", "2019.", "Densely connected graph convolutional networks for graph-to-sequence learning.", "Transactions of the Association for Computational Linguistics , 7:297312.", "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.", "2016.", "Identity mappings in deep residual networks.", "In European conference on computer vision , pages 630645.", "Springer.", "G. Huang, Z. Liu, L. v. d.", "Maaten, and K. Q. Weinberger.", "2017.", "Densely connected convolutional networks.", "In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , pages 2261 2269.", "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel-lart, and Alexander M. Rush.", "2017.", "OpenNMT: Open-source toolkit for neural machine translation.", "In Proc.", "ACL .", "Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, Xinyi Wang, and John Wieting.", "2019.", "compare-mt: A tool for holistic comparison of language generation systems.", "CoRR , abs/1903.07926.", "Maja Popovic.", "2015.", "chrF: character n-gram f-score for automatic MT evaluation.", "This work was supported by National Natural Science Foundation of China (61772036) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).", "We thank the anonymous reviewers for their helpful comments.", "Xiaojun Wan is the corresponding author." ]
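To make the layer-aggregation options in Eqs. (7)-(9) above concrete, the following is a minimal PyTorch-style sketch that combines dense connectivity with a final jumping-knowledge concatenation. It assumes simple mean aggregation over neighbors; HetGT itself uses masked multi-head attention, so the module and its names are illustrative only, not the exact model.

```python
import torch
import torch.nn as nn

class JumpDense(nn.Module):
    """Sketch of Eqs. (7)-(9): dense connectivity per layer plus a
    final jumping-knowledge concatenation. Mean aggregation is an
    assumption; the real model uses masked attention."""
    def __init__(self, d_x, d_z, num_layers):
        super().__init__()
        self.num_layers = num_layers
        # dense projections: the input grows by d_z at every layer (Eq. 9)
        self.dense = nn.ModuleList(
            nn.Linear(d_x + l * d_z, d_z) for l in range(num_layers))
        # jump connection: concatenate all L outputs plus the input x
        self.jump = nn.Linear(num_layers * d_z + d_x, d_z)

    def forward(self, x, adj):
        # x: (n, d_x) node features; adj: (n, n) 0/1 adjacency matrix
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        states = [x]
        for l in range(self.num_layers):
            inp = torch.cat(states, dim=-1)      # draw from all prior layers
            h = torch.relu(self.dense[l](inp))
            h = (adj @ h) / deg                  # mean AGGREGATE over neighbors
            states.append(h)
        return self.jump(torch.cat(states, dim=-1))  # jumping knowledge

layer = JumpDense(d_x=16, d_z=32, num_layers=3)
out = layer(torch.randn(5, 16), torch.eye(5))    # toy 5-node graph
```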
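The learning-rate schedule described in the experimental setup above (linear warmup for 8,000 steps, then decay with the inverse square root of the step number) is the standard Transformer schedule of Vaswani et al. (2017). A sketch follows; the d_model^{-0.5} scale factor is the conventional choice and is assumed here rather than stated in the text.

```python
def noam_lr(step, d_model=512, warmup=8000):
    """Linear warmup then inverse-square-root decay, as described
    above; the d_model scaling is the standard assumption."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```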
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "method", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other" ]
[ "We present a new approach to the design of deep networks for natural language processing (NLP), based on the general technique of Tensor Product Representations (TPRs) for encoding and processing symbol structures in distributed neural networks.", "A network architecture the Tensor Product Generation Network ( TPGN ) is proposed which is capable in principle of carrying out TPR computation, but which uses unconstrained deep learning to design its internal representations.", "Instantiated in a model for image-caption generation, TPGN outperforms LSTM baselines when evaluated on the COCO dataset.", "The TPR-capable structure enables interpretation of internal representations and operations, which prove to contain considerable grammatical content.", "Our caption-generation model can be interpreted as generating sequences of grammatical categories and retrieving words by their categories from a plan encoded as a distributed representation.", "In this paper we introduce a new architecture for natural language processing (NLP).", "On what type of principles can a computational architecture be founded?", "It would seem a sound principle to require that the hypothesis space for learning which an architecture provides include network hypotheses that are independently known to be suitable for performing the target task.", "Our proposed architecture makes available to deep learning network configurations that perform natural language generation by use of Tensor Product Representations (TPRs) (Smolensky and Legendre, 2006).", "Whether learning will create TPRs is unknown in advance, but what we can say with certainty is that the hypothesis space being searched during learnLD is currently at Citadel.", "ing includes TPRs as one appropriate solution to the problem.", "TPRs are a general method for generating vector-space embeddings of complex symbol structures.", "Prior work has proved that TPRs enable powerful symbol processing to be carried out using neural network computation (Smolen-sky, 2012).", "This includes generating parse trees that conform to a grammar (Cho et al., 2017), although incorporating such capabilities into deep learning networks such as those developed here remains for future work.", "The architecture presented here relies on simpler use of TPRs to generate sentences; grammars are not explicitly encoded here.", "We test the proposed architecture by applying it to image-caption generation (on the MS-COCO dataset, (COCO, 2017)).", "The results improve upon a baseline deploying a state-of-the-art LSTM architecture (Vinyals et al., 2015), and the TPR foundations of the architecture provide greater interpretability.", "Section 2 of the paper reviews TPR.", "Section 3 presents the proposed architecture, the Tensor Product Generation Network (TPGN).", "Section 4 describes the particular model we study for image captioning, and Section 5 presents the experimental results.", "Importantly, what the model has learned is interpreted in Section 5.3.", "Section 6 discusses the relation of the new model to previous work and Section 7 concludes.", "The central idea of TPRs (Smolensky, 1990) can be appreciated by contrasting the TPR for a word string with a bag-of-words (BoW) vector-space embedding.", "In a BoW embedding, the vector that encodes Jay saw Kay is the same as the one that encodes Kay saw Jay : J + K + s where 1263 J , K , s are respectively the vector embeddings of the words Jay , Kay , saw .", "A TPR embedding that avoids this confusion starts by analyzing Jay saw Kay as the set { 
Jay / SUBJ , Kay / OBJ , saw / VERB } .", "(Other analyses are possible: see Section 3.)", "Next we choose an embedding in a vector space VF for Jay , Kay , saw as in the BoW case: J , K , s .", "Then comes the step unique to TPRs: we choose an embedding in a vector space VR for the roles SUBJ , OBJ , VERB : r SUBJ , r OBJ , r VERB .", "Crucially, r SUBJ 6 = r OBJ .", "Finally, the TPR for Jay saw Kay is the following vector in VF VR : v Jay saw Kay = J r SUBJ + K r OBJ + s r VERB (1) Each word is tagged with the role it fills in the sentence; Jay and Kay fill different roles.", "This TPR avoids the BoW confusion: v Jay saw Kay 6 = v Kay saw Jay because J r SUBJ + K r OBJ 6 = J r OBJ + K r SUBJ .", "In the terminology of TPRs, in Jay saw Kay , Jay is the filler of the role SUBJ , and J r SUBJ is the vector embedding of the filler/role binding Jay / SUBJ .", "In the vector space embedding, the binding operation is the tensor or generalized outer product ; i.e., J r SUBJ is a tensor with 2 indices defined by: [ J r SUBJ ] [ J ] [ r SUBJ ] .", "The tensor product can be used recursively, which is essential for the TPR embedding of recursive structures such as trees and for the computation of recursive functions over TPRs.", "However, in the present context, recursion will not be required, in which case the tensor product can be regarded as simply the matrix outer product (which cannot be used recursively); we can regard J r SUBJ as the matrix product Jr > SUBJ .", "Then Equation 1 becomes v Jay saw Kay = Jr > SUBJ + Kr > OBJ + sr > VERB (2) Note that the set of matrices (or the set of tensors with any fixed number of indices) is a vector space; thus Jay saw Kay 7 v Jay saw Kay is a vector-space embedding of the symbol structures constituting sentences.", "Whether we regard v Jay saw Kay as a 2-index tensor or as a matrix, we can call it simply a vector' since it is an element of a vector space: in the context of TPRs, vector' is used in a general sense and should not be taken to imply a single-indexed array.", "Crucial to the computational power of TPRs and to the architecture we propose here is the notion of unbinding .", "Just as an outer product the tensor product can be used to bind the vector embedding a filler Jay to the vector embedding a role SUBJ , J r SUBJ or Jr > SUBJ , so an inner product can be used to take the vector embedding a structure and unbind a role contained within that structure, yielding the symbol that fills the role.", "In the simplest case of orthonormal role vectors r i , to unbind role SUBJ in Jay saw Kay we can compute the matrix-vector product: v Jay saw Kay r SUBJ = J (because r > i r j = ij when the role vectors are orthonormal).", "A similar situation obtains when the role vectors are not orthonormal, provided they are not linearly dependent: for each role such as SUBJ there is an unbinding vector u SUBJ such that r > i u j = ij so we get: v Jay saw Kay u SUBJ = J .", "A role vector such as r SUBJ and its unbinding vector u SUBJ are said to be duals of each other.", "(If R is the matrix in which each column is a role vector r j , then R is invertible when the role vectors are linearly indepen-dent; then the unbinding vectors u i are the rows of R 1 . When the r j are orthonormal, u i = r i . 
Replacing the matrix inverse with the pseudo-inverse allows approximate unbinding if the role vectors are linearly dependent.)", "We can now see how TPRs can be used to generate a sentence one word at a time.", "We start with the TPR for the sentence, e.g., v Jay saw Kay .", "From this vector we unbind the role of the first word, which is SUBJ : the embedding of the first word is thus v Jay saw Kay u SUBJ = J , the embedding of Jay .", "Next we take the TPR for the sentence and unbind the role of the second word, which is VERB : the embedding of the second word is then v Jay saw Kay u VERB = s , the embedding of saw .", "And so on.", "To accomplish this, we need two representations to generate the t th word:", "(i) the TPR of the sentence, S (or of the string of not-yet-produced words, S t ) and", "(ii) the unbinding vector for the t th word, u t .", "The architecture we propose will therefore be a recurrent network containing two subnetworks:", "(i) a subnet S hosting the representation S t , and a", "(ii) a subnet U hosting the unbinding vector u t .", "This is shown in Fig. 1. 3 A TPR-capable generation architecture As Fig. 1 shows, the proposed Tensor Product Generation Network architecture (the dashed box labeled N ) is designed to support the technique 1264 Figure 1: Architecture of TPGN, a TPR-capable generation network.", "for generation just described: the architecture is TPR-capable .", "There is a sentence-encoding subnetwork S which could host a TPR of the sentence to be generated, and an unbinding subnetwork U which could output a sequence of unbinding vectors u t ; at time t , the embedding f t of the word produced, x t , could then be extracted from S t via the matrix-vector product (shown in the fig-ure by 2 ): f t = S t u t .", "The lexical-decoding subnetwork L converts the embedding vector f t to the 1-hot vector x t corresponding to the word x t .", "Unlike some other work (Palangi et al., 2017), TPGN is not constrained to literally learn TPRs.", "The representations that will actually be housed in S and U are determined by end-to-end deep learning on a task: the bubbles in Fig. 1 show what would be the meanings of S t , u t and f t if an actual TPR scheme were instantiated in the architecture.", "The learned representations S t will not be proven to literally be TPRs, but by analyzing the unbinding vectors u t the network learns, we will gain insight into the process by which the learned matrices S t give rise to the generated sentence.", "The task studied here is image captioning; Fig. 1 shows that the input to this TPGN model is an image, preprocessed by a CNN which produces the initial representation in S , S 0 .", "This vector S 0 drives the entire caption-generation process: it contains all the image-specific information for producing the caption.", "(We will call a caption a sentence even though it may in fact be just a noun phrase.)", "The two subnets S and U are mutually-connected LSTMs (Hochreiter and Schmidhuber, 1997): see Fig. 2. 
The internal hidden state of U , p t , is sent as input to S ; U also produces output, the unbinding vector u t .", "The internal hidden state of S , S t , is sent as input to U , and also produced as output.", "As stated above, these two outputs are multiplied together to produce the embedding vector f t = S t u t of the output word x t .", "Furthermore, the 1-hot encoding x t of x t is fed back at the next time step to serve as input to both S and U .", "What type of roles might the unbinding vectors be unbinding?", "A TPR for a caption could in principle be built upon positional roles , syn-tactic/semantic roles , or some combination of the two.", "In the caption a man standing in a room with a suitcase , the initial a and man might respectively occupy the positional roles of POS ( ITION ) 1 and POS 2 ; standing might occupy the syntactic role of VERB ; in the role of SPATIAL -P( REPOSITION ); while a room with a suitcase might fill a 5-role schema DET ( ERMINER ) 1 N( OUN ) 1 P DET 2 N 2 .", "In fact we will provide evidence in Sec. 5.3.2 that our network learns just this kind of hybrid role decomposition; further evidence for these particular roles is presented elsewhere.", "What form of information does the sentence-encoding subnetwork S need to encode in S ?", "Continuing with the example of the previous paragraph, S needs to be some approximation to the TPR summing several filler/role binding matrices.", "In one of these bindings, a filler vector f a which the lexical subnetwork L will map to the article a is bound (via the outer product) to a role vector r POS 1 which is the dual of the first unbinding vector produced by the unbinding subnetwork U : u POS 1 .", "In the first iteration of generation the model computes S 1 u POS 1 = f a , which L then maps to a .", "Analogously, another binding approximately contained in S 2 is f man r > POS 2 .", "There are corresponding approximate bindings for the remaining words 1265 of the caption; these employ syntactic/semantic roles.", "One example is f standing r > V .", "At iteration 3, U decides the next word should be a verb, so it generates the unbinding vector u V which when multiplied by the current output of S , the matrix S 3 , yields a filler vector f standing which L maps to the output standing .", "S decided the caption should deploy standing as a verb and included in S an approximation to the binding f standing r > V .", "It similarly decided the caption should deploy in as a spatial preposition, approximately including in S the binding f in r > SPATIAL-P ; and so on for the other words in their respective roles in the caption.", "As stated above, the unbinding subnetwork U and the sentence-encoding subnetwork S of Fig. 1 are each implemented as (1-layer, 1-directional) LSTMs (see Fig. 2); the lexical subnetwork L is implemented as a linear transformation followed by a softmax operation.", "In the equations below, the LSTM variables internal to the S subnet are indexed by 1 (e.g., the forget-, input-, and output-gates are respectively f 1 , i 1 , o 1 ) while those of the unbinding subnet U are indexed by 2. 
Thus the state updating equations for S are, for t = 1 , , T = caption length: f 1 ,t = g ( W 1 ,f p t 1 D 1 ,f W e x t 1 + U 1 ,f S t 1 ) (3) i 1 ,t = g ( W 1 ,i p t 1 D 1 ,i W e x t 1 + U 1 ,i S t 1 ) (4) o 1 ,t = g ( W 1 ,o p t 1 D 1 ,o W e x t 1 + U 1 ,o S t 1 ) (5) g 1 ,t = h ( W 1 ,c p t 1 D 1 ,c W e x t 1 + U 1 ,c S t 1 ) (6) c 1 ,t = f 1 ,t (cid:12) c 1 ,t 1 + i 1 ,t (cid:12) g 1 ,t (7) S t = o 1 ,t (cid:12) h ( c 1 ,t ) (8) Here f 1 ,t , i 1 ,t , o 1 ,t , g 1 ,t , c 1 ,t , S t R d d , p t R d ; g ( ) is the (element-wise) logistic sigmoid function; h ( ) is the hyperbolic tangent function; the operator (cid:12) denotes the Hadamard (element-wise) product; W 1 ,f , W 1 ,i , W 1 ,o , W 1 ,c R ( d d ) d , D 1 ,f , D 1 ,i , D 1 ,o , D 1 ,c R ( d d ) d , U 1 ,f , U 1 ,i , U 1 ,o , U 1 ,c R ( d d ) ( d d ) .", "For clarity, biases included throughout the model are omitted from all equations in this paper.", "The initial state S 0 is initialized by: S 0 = C s ( v v ) (9) where v R 2048 is the vector of visual features extracted from the current image by ResNet (Gan et al., 2017) and v is the mean of all such vectors; C s R ( d d ) 2048 .", "On the output side, x t RV is a 1-hot vector with dimension equal to the size of the caption vocabulary, V , and W e R d V is a word embedding matrix, the i -th column of which is the embedding vector of the i -th word in the vocabulary; it is obtained by the Stanford GLoVe algorithm with zero mean (Pennington et al., 2017).", "x 0 is initialized as the one-hot vector corresponding to a start-of-sentence symbol.", "f 2 ,t = g ( S t 1 w 2 ,f D 2 ,f W e x t 1 + U 2 ,f p t 1 ) (10) i 2 ,t = g ( S t 1 w 2 ,i D 2 ,i W e x t 1 + U 2 ,i p t 1 ) (11) o 2 ,t = g ( S t 1 w 2 ,o D 2 ,o W e x t 1 + U 2 ,o p t 1 ) (12) g 2 ,t = h ( S t 1 w 2 ,c D 2 ,c W e x t 1 + U 2 ,c p t 1 ) (13) c 2 ,t = f 2 ,t (cid:12) c 2 ,t 1 + i 2 ,t (cid:12) g 2 ,t (14) p t = o 2 ,t (cid:12) h ( c 2 ,t ) (15)", "Here w 2 ,f , w 2 ,i , w 2 ,o , w 2 ,c R d , D 2 ,f , D 2 ,i D 2 ,o , D 2 ,c R d d , and U 2 ,f , U 2 ,i , U 2 ,o , U 2 ,c d d", "The initial state p 0 is the zero vector.", "R .", "The dimensionality of the crucial vectors shown in Fig. 1, u t and f t , is increased from d 1 to d 2 1 as follows.", "A block-diagonal d 2 d 2 matrix S t is created by placing d copies of the d d matrix S t as blocks along the principal diagonal.", "This matrix is the output of the sentence-encoding subnetwork S .", "Now the filler vector' f t R d 2 unbound' from the sentence representation S t with the un-binding vector' u t is obtained by Eq.", "(16).", "Here u t R d 2 , the output of the unbinding subnetwork U , is computed as in Eq.", "(17), where W u R d 2 d is U 's output weight matrix.", "where s ( ) is the softmax function and W x RV d 2 is the overall output weight matrix.", "Since W x plays the role of a word de-embedding matrix, we can set W x = ( W e ) > (19) where W e is the word-embedding matrix.", "Since W e is pre-defined, we directly set W x by Eq.", "(19) without training L through Eq.", "(18).", "Note that S and U are learned jointly through end-to-end training as shown in Algorithm 1. 
1266 Figure 2: The sentence-encoding subnet S and the unbinding subnet U are inter-connected LSTMs; v encodes the visual input while the x t encode the words of the output caption.", "Algorithm 1 End-to-end training of S and U Input: Image feature vector v ( i ) and corresponding caption X ( i ) = [ x ( i ) 1 , , x ( i ) T ] ( i = 1 , , N ), where N is the total number of samples.", "Output: W 1 ,f , W 1 ,i , W 1 ,o , W 1 ,c , C s , D 1 ,f , D 1 ,i , D 1 ,o , D 1 ,c , U 1 ,f , U 1 ,i , U 1 ,o , U 1 ,c , w 2 ,f , w 2 ,i , w 2 ,o , w 2 ,c , D 2 ,f , D 2 ,i , D 2 ,o , D 2 ,c , U 2 ,f , U 2 ,i , U 2 ,o , U 2 ,c , W u , W x .", "1: Initialize S 0 by (9); 2: Initialize x 0 as the one-hot vector corresponding to the start-of-sentence symbol; 3: Initialize p 0 as the zero vector; 4: Randomly initialize weights W 1 ,f , W 1 ,i , W 1 ,o , W 1 ,c , C s , D 1 ,f , D 1 ,i , D 1 ,o , D 1 ,c , U 1 ,f , U 1 ,i , U 1 ,o , U 1 ,c , w 2 ,f , w 2 ,i , w 2 ,o , w 2 ,c , D 2 ,f , D 2 ,i , D 2 ,o , D 2 ,c , U 2 ,f , U 2 ,i , U 2 ,o , U 2 ,c , W u , W x ; 5: for n from 1 to N do 6: for t from 1 to T do 7: Calculate (3) (8) to obtain S t ; 8: Calculate (10) (15) to obtain p t ; 9: Calculate (17) to obtain u t ; 10: Calculate (16) to obtain f t ; 11: Calculate (18) to obtain x t ; 12: Update weights W 1 ,f , W 1 ,i , W 1 ,o , W 1 ,c , C s , D 1 ,f , D 1 ,i , D 1 ,o , D 1 ,c , U 1 ,f , U 1 ,i , U 1 ,o , U 1 ,c , w 2 ,f , w 2 ,i , w 2 ,o , w 2 ,c , D 2 ,f , D 2 ,i , D 2 ,o , D 2 ,c , U 2 ,f , U 2 ,i , U 2 ,o , U 2 ,c , W u by the back-propagation algorithm; 13: end for 14: end for 5 Experimental results 5.1 Dataset To evaluate the performance of our proposed model, we use the COCO dataset (COCO, 2017).", "The COCO dataset contains 123,287 images, each of which is annotated with at least 5 captions.", "We use the same pre-defined splits as in (Karpathy and Fei-Fei, 2015; Gan et al., 2017): 113,287 images for training, 5,000 images for validation, and 5,000 images for testing.", "We use the same vocabulary as that employed in (Gan et al., 2017), which consists of 8,791 words.", "For the CNN of Fig. 
1, we used ResNet-152 (He et al., 2016), pretrained on the ImageNet dataset.", "The feature vector v has 2048 dimensions.", "Word embedding vectors in W e are downloaded from the web (Pennington et al., 2017).", "The model is implemented in TensorFlow (Abadi et al., 2015) with the default settings for random initialization and optimization by backpropagation.", "In our experiments, we choose d = 25 (where d is the dimension of vector p t ).", "The dimension of S t is 625 625 (while S t is 25 25 ); the vocabulary size V = 8 , 791 ; the dimension of u t and f t is d 2 = 625 .", "The main evaluation results on the MS COCO dataset are reported in Table 5.2.", "The widely-used BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and CIDEr (Vedan-tam et al., 2015) metrics are reported in our quantitative evaluation of the performance of the proposed model.", "In evaluation, our baseline is the widely used CNN-LSTM captioning method originally proposed in (Vinyals et al., 2015).", "For comparison, we include results in that paper in the first line of Table 5.2.", "We also re-implemented the model using the latest ResNet features and report the results in the second line of Table 5.2.", "Our re-implementation of the CNN-LSTM method matches the performance reported in (Gan et al., 2017), showing that the baseline is a state-of-the-art implementation.", "For TPGN, we use parameter settings in a similar range to those in (Gan et al., 2017).", "TPGN has comparable, although slightly 1267 Methods METEOR BLEU-1 BLEU-2 BLEU-3 BLEU-4 CIDEr NIC (Vinyals et al., 2015) 0.237 0.666 0.461 0.329 0.246 0.855 CNN-LSTM 0.238 0.698 0.525 0.390 0.292 0.889 TPGN 0.243 0.709 0.539 0.406 0.305 0.909 Table 1: Performance of the proposed TPGN model on the COCO dataset.", "more, parameters than the CNN-LSTM.", "The training time of TPGN is roughly 50% more than the CNN-LSTM model.", "The weights in TPGN are updated at every mini-batch; in the experiments, we use a batch size of 64 images.", "As shown in Table 5.2, compared to the CNN-LSTM baseline, the proposed TPGN appreciably outperforms the benchmark schemes in all metrics across the board.", "The improvement in BLEUn is greater for greater n ; TPGN particularly improves generation of longer subsequences.", "The results attest to the effectiveness of the TPGN architecture.", "It is worth mentioning that this paper is aimed at developing a Tensor Product Representation (TPR) inspired network to replace the core layers in an LSTM; therefore, it is directly comparable to an LSTM baseline.", "So in the experiments, we focus on comparison to a strong CNN-LSTM baseline.", "We acknowledge that more recent papers (Xu et al., 2017; Rennie et al., 2017; Yao et al., 2017; Lu et al., 2017; Gan et al., 2017) reported better performance on the task of image captioning.", "Performance improvements in these more recent models are mainly due to using better image features such as those obtained by Region-based Convolutional Neural Networks (R-CNN), or using reinforcement learning (RL) to directly optimize metrics such as CIDEr, or using more complex attention mechanisms (Gan et al., 2017) to provide a better context vector for caption generation, or using an ensemble of multiple LSTMs, among others.", "However, the LSTM is still playing a core role in these works and we believe improvement over the core LSTM, in both performance and interpretability, is still very valuable; that is why we compare the proposed TPGN with a state-of-the-art native LSTM (the second line of 
Table 5.2).", "To get a sense of how the sentence encodings S t learned by TPGN approximate TPRs, we now investigate the meaning of the role-unbinding vector", "vector u t the model uses to unbind from S t via Eq.", "(16) the filler vector f t that produces via Eq.", "(18) the one-hot vector x t of the t th generated caption word.", "The meaning of an unbinding vector is the meaning of the role it unbinds.", "Interpreting the unbinding vectors reveals the meaning of the roles in a TPR that S approximates.", "We run the TPGN model with 5,000 test images as input, and obtain the unbinding vector u t used to generate each word x t in the caption of a test image.", "We plot 1,000 unbinding vectors u t , which correspond to the first 1,000 words in the resulting captions of these 5,000 test images.", "There are 17 parts of speech (POS) in these 1,000 words.", "The POS tags are obtained by the Stanford Parser (Manning, 2017).", "We use the Embedding Projector in Tensor-Board (Google, 2017) to plot 1,000 unbinding vectors u t with a custom linear projection in Tensor-Board to reduce 625 dimensions of u t to 2 dimensions shown in Fig. 3 through Fig. 7.", "Fig. 3 shows the unbinding vectors of 1000 words; different POS tags of words are represented by different colors.", "In fact, we can partition the 625-dim space of u t into 17 regions, each of which 1268 contains 76.3% words of the same type of POS on average; i.e., each region is dominated by words of one POS type.", "This clearly indicates that each unbinding vector contains important grammatical information about the word it generates .", "As examples, Fig. 4 to Fig. 7 show the distribution of the unbinding vectors of nouns, verbs, adjectives, and prepositions, respectively.", "Furthermore, we show that the subject and the object of a sentence can be distinguished based on u t in (Huang et al., 2018).", "Since the previous section indicates that there is a clustering structure for u t , in this section we partition u t into N u clusters and examine the grammar roles played by u t .", "First, we run the trained TPGN model on the 113,287 training images, obtaining the role-Figure 6: Unbinding vectors of 55 adjectives in red and 945 words of other types of POS in grey.", "unbinding vector u t used to generate each word x t in the caption sentence.", "There are approximately 1.2 million u t vectors over all the training images.", "We apply the K-means clustering algorithm to these vectors to obtain N u clusters and the centroid i of each cluster i ( i = 0 , , N u 1 ).", "Then, we run the TPGN model with 5,000 test images as input, and obtain the role vector u t of each word x t in the caption sentence of a test image.", "Using the nearest neighbor rule, we obtain the index i of the cluster that each u t is assigned to.", "The partitioning of the unbinding vectors u t into N u = 2 clusters exposes the most fundamental distinction made by the roles.", "We find that the vectors assigned to Cluster 1 generate words which are nouns, pronouns, indefinite and definite articles, and adjectives, while the vectors assigned to Cluster 0 generate verbs, prepositions, conjunctions, and adverbs.", "Thus Cluster 1 contains the noun-related words, Cluster 0 the verb-like words 1269 Category N w N r P c Nouns 16683 16115 0.969 Pronouns 462 442 0.957 Indefinite articles 7248 7107 0.981 Definite articles 797 762 0.956 Adjectives 2543 2237 0.880 Verbs 3558 3409 0.958 Prepositions & conjunctions 8184 7859 0.960 Adverbs 13 8 0.615 Table 2: Conformity to N/V 
generalization ( N u = 2) .", "Cross-cutting this distinction is another dimension, however: the initial word in a caption (always a determiner) is sometimes generated with a Cluster 1 unbinding vector, sometimes with a Cluster 0 vector.", "Outside the caption-initial position, exceptions to the nominal/verbal Cluster 1/0 generalization are rare, as attested by the high rates of conformity to the generalization shown in Table 5.3.1.", "Table 5.3.1 shows the likelihood of correctness of this N/V' generalization for the words in 5,000 sentences captioned for the 5,000 test images; N w is the number of words in the category, N r is the number of words conforming to the generalization, and P c = N r /N w is the proportion conforming.", "We use the Natural Language Toolkit (NLTK, 2017) to identify the part of speech of each word in the captions.", "A similar analysis with N u = 10 clusters reveals the results shown in Table 5.3.1; these results concern the first 100 captions, which were inspected manually to identify interpretable patterns.", "(More comprehensive results will be discussed elsewhere.)", "The clusters can be interpreted as falling into 3 groups (see Table 5.3.1).", "Clusters 2 and 3 are clearly positional roles: every initial word is generated by a role-unbinding vector from Cluster 2, and such vectors are not used elsewhere in the string.", "The same holds for Cluster 3 and the second caption word.", "For caption words after the second word, position is replaced by syntactic/semantic properties for interpretation purposes.", "The vector clusters aside from 2 and 3 generate words with a dominant grammatical category: for example, unbinding vectors assigned to the cluster 4 generate words that are 91% likely to be prepositions, and 72% likely to be spatial prepositions.", "Cluster 7 generates 88% nouns and 9% adjectives, with the remaining 3% scattered across other categories.", "As Table 5.3.1 shows, clusters 1, 5, 7, 9 are primarily nominal, and 0, 4, 6, and 8 primarily verbal.", "(Only cluster 5 spans the N/V divide.) 6 Related work This work follows a great deal of recent caption-generation literature in exploiting end-to-end deep learning with a CNN image-analysis front end producing a distributed representation that is then used to drive a natural-language generation process, typically using RNNs (Mao et al., 2015; Vinyals et al., 2015; Devlin et al., 2015; Chen and Zitnick, 2015; Donahue et al., 2015; Karpathy and Fei-Fei, 2015; Kiros et al., 2014a,b; Xu et al., 2017; Rennie et al., 2017; Yao et al., 2017; Lu et al., 2017).", "Our grammatical interpretation of the structural roles of words in sentences makes contact with other work that incorporates deep learning into grammatically-structured networks (Tai et al., 2015; Kumar et al., 2016; Kong et al., 2017; Andreas et al., 2015; Yogatama et al., 2016; Maillard et al., 2017; Socher et al., 2010; Pollack, 1990).", "Here, the network is not itself structured to match the grammatical structure of sentences being processed; the structure is fixed, but is designed to support the learning of distributed representations that incorporate structure internal to the representations themselves filler/role structure.", "TPRs are also used in NLP in (Palangi et al., 2017) but there the representation of each individual input word is constrained to be a literal TPR filler/role binding.", "(The idea of using the outer product to construct internal representations was also explored in (Fukui et al.,", "2016).) 
Here, by contrast, the learned representations are not themselves constrained, but the global structure of the network is designed to display the somewhat abstract property of being TPR-capable: the archi-1270 tecture uses the TPR unbinding operation of the matrix-vector product to extract individual words for sequential output.", "Tensor Product Representation (TPR) (Smolen-sky, 1990) is a general technique for constructing vector embeddings of complex symbol structures in such a way that powerful symbolic functions can be computed using hand-designed neural network computation.", "Integrating TPR with deep learning is a largely open problem for which the work presented here proposes a general approach: design deep architectures that are TPR-capable TPR computation is within the scope of the capabilities of the architecture in principle.", "For natural language generation, we proposed such an architecture, the Tensor Product Generation Network (TPGN): it embodies the TPR operation of unbinding which is used to extract particular symbols (e.g., words) from complex structures (e.g., sentences).", "The architecture can be interpreted as containing a part that encodes a sentence and a part that selects one structural role at a time to extract from the sentence.", "We applied the approach to image-caption generation, developing a TPGN model that was evaluated on the COCO dataset, on which it outperformed LSTM baselines on a range of standard metrics.", "Unlike standard LSTMs, however, the TPGN model admits a level of interpretability: we can see which roles are being unbound by the unbinding vectors generated internally within the model.", "We find such roles contain considerable grammatical information, enabling POS tag prediction for the words they generate and displaying clustering by POS." ]
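The binding and unbinding operations of Eqs. (1)-(2) can be checked numerically. The sketch below uses random (linearly independent but non-orthonormal) role vectors, so the unbinding vectors are the rows of R^{-1}; with orthonormal roles the unbinding vectors would simply equal the role vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
J, K, s = rng.normal(size=(3, d))            # fillers for Jay, Kay, saw

R = rng.normal(size=(d, d))                  # column j = role vector r_j
roles = {"SUBJ": 0, "OBJ": 1, "VERB": 2}

# bind each filler to its role with an outer product and sum (Eq. 2)
S = (np.outer(J, R[:, roles["SUBJ"]])
     + np.outer(K, R[:, roles["OBJ"]])
     + np.outer(s, R[:, roles["VERB"]]))

# unbinding vectors are the rows of R^{-1}, so that r_i^T u_j = delta_ij
U = np.linalg.inv(R)
assert np.allclose(S @ U[roles["SUBJ"]], J)  # recovers the "Jay" filler
assert np.allclose(S @ U[roles["VERB"]], s)  # recovers the "saw" filler
```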
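A rough sketch of the decoding step in Eqs. (16)-(19): block-diagonal expansion of S_t, unbinding with u_t, and de-embedding with the transpose of the word-embedding matrix. The d^2-by-V shape used for W_e here is an assumption made so that the tied de-embedding W_x = W_e^T in Eq. (19) type-checks; the paper's stated dimensions are ambiguous on this point.

```python
import numpy as np
from scipy.linalg import block_diag

def decode_word(S_t, u_t, W_e):
    """Sketch of Eqs. (16)-(19). S_t: (d, d); u_t: (d*d,);
    W_e: (d*d, V) word-embedding matrix (shape assumed)."""
    d = S_t.shape[0]
    S_big = block_diag(*([S_t] * d))     # d copies along the diagonal
    f_t = S_big @ u_t                    # unbinding: f_t = S_t u_t (Eq. 16)
    logits = W_e.T @ f_t                 # tied de-embedding W_x = W_e^T
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()           # softmax over the vocabulary
```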
[ "objective", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "objective", "abstain", "result", "result", "abstain", "abstain", "result", "result", "objective", "result", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "result" ]
[ "Cross-lingual word vectors are typically obtained by fitting an orthogonal matrix that maps the entries of a bilingual dictionary from a source to a target vector space.", "Word vectors, however, are most commonly used for sentence or document-level representations that are calculated as the weighted average of word embeddings.", "In this paper, we propose an alternative to word-level mapping that better reflects sentence-level cross-lingual similarity.", "We incorporate context in the transformation matrix by directly mapping the averaged embeddings of aligned sentences in a parallel corpus.", "We also implement cross-lingual mapping of deep contextualized word embeddings using parallel sentences with word alignments.", "In our experiments, both approaches resulted in cross-lingual sentence embeddings that outperformed context-independent word mapping in sentence translation retrieval.", "Furthermore, the sentence-level transformation could be used for word-level mapping without loss in word translation quality.", "Cross-lingual word vector models aim to embed words from multiple languages into a shared vector space to enable cross-lingual transfer and dictionary expansion (Upadhyay et al., 2016).", "One of the most common and effective approaches for obtaining bilingual word embeddings is by fitting a linear transformation matrix on the entries of a bilingual seed dictionary (Mikolov et al., 2013).", "This approach is versatile and scalable: multilingual embeddings can be obtained by mapping the vector spaces of multiple languages into a shared target language, typically English.", "In addition, imposing an orthogonality constraint on the mapping ensures that the original pair-wise distances are preserved after the transformation and results in better word translation retrieval (Artetxe et al., 2016; Smith et al., 2017).", "While word vector spaces tend to be globally consistent across language variations (Aldarmaki et al., 2018), individual words like homographs with unrelated senses (e.g. 
bank', coast') and phrasal verbs (stand up', stand out') are likely to behave less consistently in multilingual vector spaces due to their different usage distributions.", "Consequently, using such words in the alignment dictionary may result in suboptimal overall mapping.", "We propose two approaches to counteract this effect by incorporating sentential context in the mapping process without explicit word sense disambiguation or additional linguistic resources.", "The first approach is based on the recently proposed contextualized embeddings from language models, ELMo (Peters et al., 2018).", "Using a parallel corpus with word-alignments, we extract contextualized embeddings to construct a context-aware dictionary for mapping.", "The second approach is to learn a transformation between sentence embeddings rather than individual word embeddings.", "Since these embeddings include context that spans full sentences, we surmise that a mapping learned at this level would be more robust to individual word misalignments.", "We used a constrained set of parallel sentences ranging from one hundred to a million sentences for alignment.", "We then evaluated the resultant mappings on sentence translation retrieval among English, Spanish, and German as test languages.", "Our results show that context-aware mappings sig-nificantly outperform context-independent cross-lingual word mappings using reasonably-sized parallel corpora, particularly when using contextualized word embeddings.", "In addition, when averaging static word embeddings, the sentence-level mapping can still be used for word-level mapping without loss in word translation quality.", "For cross-lingual alignment, we follow the popular approach of fitting a linear transformation matrix between word vector spaces that are independently trained for each language.", "Aligning monolingual word vector spaces using a seed dictionary was originally proposed in Mikolov et al. (2013).", "In Artetxe et al. (2016) and Smith et al. (2017), it was shown that imposing an orthogonality constraint on the transformation leads to better word translation quality.", "Recently, contextualized word embeddings were proposed, where a sequential neural network is trained as a language model and then used to extract context-sensitive word representations from the hidden states (Pe-ters et al., 2018).", "We use parallel text in order to align independently-trained contextualized embeddings across languages.", "Schuster et al. 
(2019) independently proposed a cross-lingual alignment approach for contextualized embeddings without the use of parallel text.", "Given a dictionary of source to target pairs (cid:104) x, y (cid:105) and matrix representations X and Y whose columns are vector representations of the corresponding dictionary items, we seek to find an orthogonal transformation matrix R that minimizes the distances between the transformed vectors in RX and Y .", "Formally, R = arg min R (cid:107) RX Y (cid:107)", "where (cid:107) .", "(cid:107) denotes the Frobenius norm.", "The orthogonality constraint ensures that pair-wise distances in the original source vector space are preserved after the transformation.", "As shown in (Schonemann, 1966), the solution can be found by singular value decomposition of Y XTY XT = U VT Then, R = UVT (2) The resultant transformation, R , can then be used to transform additional vectors in the source vector space.", "The quality of the transformation depends on the size and accuracy of the initial dictionary, and it is typically evaluated on word translation precision using nearest neighbor search (Smith et al., 2017).", "Word embeddings in a given language tend to have similar structures as their translations in a target language (Aldarmaki et al., 2018), which enables orthogonal mappings of word vector spaces to generalize well across various languages.", "However, items in bilingual dictionaries typically refer to specific word senses.", "In a given dictionary pair, the source word may have multiple senses that are not consistent with its aligned target translation (and vise versa), which could result in suboptimal global mappings.", "Intuitively, better mappings could be obtained using sense-disambiguated word embeddings, which could be approximated from context.", "ELMo (Embeddings from Language Models) is a recently-proposed deep model for obtaining contextualized word embeddings, which are calculated as the hidden states of a bi-LSTM network trained as a language model (Peters et al., 2018).", "The network can be used in lieu of static word embeddings within other models, which yields better performance in a range of tasks, including word sense disambiguation.", "Sentence embeddings can be obtained from ELMo by averaging the contextualized word embeddings (Perone et al., 2018).", "Since ELMo generates dynamic, context-dependent vectors, we cannot use a simple word-level dictionary to map the model across languages.", "Instead, we use a parallel corpus with word alignments, i.e using an IBM Model (Brown et al., 1993), to extract a dynamic dictionary of aligned contextualized word embeddings.", "Depending on the size of the parallel corpus, a large dictionary can be extracted to learn an orthogonal mapping as described in Section 3.1, which is then applied post-hoc on newly generated contextualized embeddings.", "An alternative general approach for obtaining a context-aware mapping is to learn sentence-level transformations.", "Intuitively, a sentence is less ambiguous than stand-alone words since the words are interpreted within a specific context, so a mapping learned at the sentence-level is likely to be less sensitive to individual word inconsistencies.", "Therefore, we learn the mapping as described in Section 3.1 using a dictionary of aligned sentence embeddings.", "Over a large parallel corpus, the aggregate mapping can yield a more optimal global solution compared to word-level mapping.", "This approach can be applied using any model capable of generating 
monolingual sentence embeddings.", "In this work, we use the average of word vectors in each sentence, where the word vectors are either static or contextualized.", "For inference, monolingual sentence embeddings are generated first, then mapped to the target space using the sentence-level transformation matrix.", "1 4 Experiments We used skip-gram with subword information, i.e FastText (Bojanowski et al., 2017), for the static word embeddings, and ELMo for contextualized word embeddings.", "Sentence embeddings were calculated from ELMo as the arithmetic average of the contextualized embeddings 2 .", "For FastText, we applied weighted averaging using smooth inverse frequency (Arora et al., 2017), which works better for sentence similarity compared to other averaging schemes (Aldarmaki and Diab, 2018).", "We trained and aligned all models using the same monolingual and parallel datasets.", "For monolingual training, we used the 1 Billion Word benchmark (Chelba et al., 2014) for English, and equivalent subsets of 400 million tokens from WMT'13 (Bojar et al., 2013) news crawl data.", "We trained monolingual ELMo and FastText with default parameters.", "We used the WMT'13 common-crawl data for cross-lingual mapping, and the WMT'13 test sets for evaluating sentence translation retrieval.", "For all datasets, the only preprocessing we performed was tokenization.", "We evaluated the cross-lingual mapping approaches on sentence translation retrieval, where we calculate the accuracy of retrieving the correct translation from the target side of a test parallel corpus using nearest neighbor search with cosine similarity.", "To assess the minimum bilingual data 1 Since we use vector averaging, it doesn't matter whether we apply the learned transformation to the word embeddings before averaging, or to the sentence embeddings after averaging.", "requirements of each approach and measure how the various models respond to additional data, we split the training parallel corpus into smaller subsets of increasing sizes, starting from 100 to a million sentences (we double the size at each step).", "Data splits and evaluation scripts are available at https://github.com/h-aldarmaki/ sent_translation_retrieval .", "For ELMo, word embeddings need to be calculated from context, so we extracted a dictionary of contextualized words from the parallel corpora by first applying word-level alignments using Fast Align (Dyer et al., 2013).", "We then calculated the contextualized embeddings for source and target sentences, and extracted a dictionary from the aligned words that have a one-to-one alignment (i.e. 
we excluded phrasal alignments).", "Since this can result in a very large dictionary, we capped the number of dictionary words at 1M for efficiency.", "For a fair comparison with FastText word-level mapping, we extracted a dictionary from word alignment probabilities using the same parallel sets.", "For each word in the source language, we extracted its translation as the word with the maximum alignment probability if the maximum was unique 3 .", "As a baseline, we used static dictionaries from (Conneau et al., 2017) to obtain word-level mappings (dict) .", "All alignments were performed from the source languages to English.", "Sentence translation retrieval results in all language directions are shown in Figure 1 (note the x-axis denotes the size of the alignment corpus in log scale).", "The arrows indicate the translation direction from source to target, with en for English, es for Spanish, and de for German.", "For clarity, the legend shows the average accuracies in the final step (1M).", "Overall, ELMo word alignment resulted in the highest sentence translation retrieval accuracies, even with small amounts of training data; it exceeded the static dictionary baseline at around 2K parallel sentences.", "Sentence-level mapping outperformed word-level mapping only when additional parallel data were used (over 50K sen-tences).", "With 1M sentences, sentence-level mapping of FastText yielded an increase of 3% in all directions.", "Sentence-level ELMo underperformed in the en directions until we used 100K sentences, where we observed a sharp increase in accuracy compared to the previous step of 50K sentences.", "For ELMo, we note particular improvements in zero-shot translation retrieval between the source languages: es and de , where ELMo-based models performed much higher than FastText.", "The opposite is true for the en directions, although the difference is not as notable.", "This is an interesting Language pair Mapping level word sentence From source language to en: es-en k=1 56.46 54.43 k=5 70.93 68.97 de-en k=1 50.00 ] 47.85 k=5 63.45 62.69 From en to source language: en-es k=1 56.98 57.52 k=5 72.68 72.15 en-de k=1 42.32 43.27 k=5 63.99 62.84 Translation between source languages: de-es k=1 36.14 37.07 k=5 53.72 54.85 es-de k=1 31.55 34.22 k=5 51.37 52.07 Average k=1 45.58 45.73 k=5 62.69 62.26 Table 1: Word translation precision at k (%) using k nearest neighbor search, with k { 1 , 5 } .", "observation and may indicate that contextualized dictionaries result in a more balanced mapping, while context-independent embeddings overfit the mapping to the specific direction used for alignment.", "Cross-lingual word embeddings are typically evaluated in word-translation retrieval: the precision of correctly retrieving a translation from the vocabulary of another language.", "Since this is a context-free task, we evaluated the performance of static word embeddings, FastText, using word vs. 
sentence mapping (with 1M parallel sentences).", "The transformation matrix learned at the sentence level is used to transform the word embeddings.", "We used the dictionaries from (Conneau et al., 2017).", "We also evaluated on the SemEval'17 cross-lingual word similarity task (Camacho-Collados et al., 2017), which is measured using the average of Pearson and Spearman correlation coefficients against human judgements.", "As shown in Tables 1 and 2, the mapping learned at the sentence-level yields equivalent performance to word-level mapping.", "While word-level mapping was slightly better in translating from source languages (German and Spanish) to English, the sentence-level mapping was better when translating between the source languages.", "In the word similarity task, sentence-level mappings performed slightly better in two out of the three cases.", "Overall, the performance of both models are comparable, which indicates that a single transformation matrix learned at the sentence-level can be used for both word and sentence-level tasks.", "We introduced alternatives to the popular word mapping approach that incorporate context in the mapping process.", "Given parallel corpora, context-aware mappings were learned by mapping aligned contextualized word embeddings or directly mapping the parallel sentence embeddings.", "Experimental results showed significant gains in sentence translation retrieval using contextualized mappings compared to context-independent word mapping.", "While word-level mappings worked better with smaller parallel corpora, the performance of sentence-level mapping continued to increase with additional data until it outperformed word-level mapping.", "In future work, we will explore the viability of the sentence mapping approach on other sentence embedding models." ]
[ "abstain", "abstain", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "method", "result", "abstain", "method", "other", "other", "other", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective" ]
[ "Natural language processing often faces the problem of data diversity such as different domains, themes, styles and so on.", "Therefore, a single language model (LM) is insuffi-cient to learn all knowledge from diverse samples.", "To solve this problem, we firstly propose an autoencoding topic model with mixture prior (mATM) to perform clustering for the data, where the clusters defined in semantic space describe the data diversity.", "Having obtained the clustering assignment for each sample, we develop the ensemble LM (En-sLM) with the technique of weight modulation.", "Specifically, EnsLM contains a backbone which is adjusted by a few modulated weights to fit for different sample clusters.", "As a result, the backbone learns the shared knowledge among all clusters while modulated weights extract the cluster-specific features.", "EnsLM can be trained jointly with mATM with flexi-ble LM backbone.", "We evaluate the effectiveness of both mATM and EnsLM on different language understanding and generative tasks.", "It is common knowledge in modern natural language processing (NLP) that natural language varies greatly across domains, themes, styles, genres and many other linguistic nuances (Van der Wees et al., 2015; van der Wees, 2017; Niu et al., 2017).", "Generally, we call such nature of language as data diversity.", "Many existing works (Liu et al., 2017; Cai and Wan, 2019; Hu et al., 2019) have illustrated that data diversity will affect the performance of LMs if we just train a single LM over the entire dataset, even though fine-tuning a pretrained LM (that has been pre-training on a very large corpus) such as Bert (Devlin et al., 2019) on current task (Aharoni and Goldberg, 2020).", "The domain diversity in dataset is a very common type of data diversity.", "In some cases, if we can obtain a well-defined domain label for each sample, some works (Jiang et al., 2020; Du et al., 2020; Wright and Augenstein, 2020) try to consider the multi-domain property of data in developing the LMs.", "However, these pre-defined domain labels are not always accurate or even available (Aharoni and Goldberg, 2020), especially for the wild datasets, in which data come from different sources, such as internet news, product reviews, and daily conversation.", "To this end, we hope to develop a LM that can explore the diversity from data automatically.", "Data selection is a commonly used strategy to handle diversity in data (Moore and Lewis, 2010; Axelrod et al., 2011; Duh et al., 2013; Silva et al., 2018; Aharoni and Goldberg, 2020).", "This kind of method is developed from an assumption that samples belonging to the same cluster should own similar characteristics.", "According to the clustering assignment, models can select suitable data for training a LM for each cluster separately.", "Although, to some extend, data selection is an efficient strategy to alleviate the problem of data diversity, it may bring two disadvantages as follows.", "Firstly, the process of data selection is independent of the LM learning.", "In other words, the gradient signal generated by LM's training loss can not affect the data selection.", "Secondly, data selection only tells the hard cluster belongings of samples, ignoring a fact that some samples may belong to more than one clusters with soft (weighted) assignment.", "Inspired by their works and to move beyond, in this paper, we find the semantics learned by topic modeling (Blei et al., 2003; Srivastava and Sutton, 2017) can infer sample clusters to a certain extent via 
K-means, but is not good enough, as shown in Fig. 1a .", "To jointly consider the clustering and topic modeling for better clustering (as shown in Fig. 1b) and for joint training with the following LM, we firstly introduce an autoencoding topic model with mixture priors (mATM).", "For each sample in the corpus, mATM can infer a soft clustering assignment.", "In order to jointly consider the learning of mATM with various LMs, we employ the weight modulation methods (Cong et al., 2020; Wen et al., 2020).", "Specifically, as shown in Fig. 3, given a LM as backbone, for each layer (convolutional or fully-connected), we introduce some modulated parameters.", "Guided by clustering assignment inferred from mATM, these parameters modulate the backbone single LM to multiple LMs, corresponding to different clusters.", "Therefore, our proposed model can be seen as a type of ensemble learning, and hence we call it ensemble language model (EnsLM).", "Our proposed mATM and EnsLM enjoy the following distinguished properties: The mATM learns the mixture-prior latent semantic space to define a soft clustering assignment for each sample.", "Guided by clustering assignments that describe the data diversity, EnsLM learns both shared and cluster-specific knowledge by weight modulations.", "Joint training of mATM and EnsLM improves the performance of both on many NLP tasks.", "For NLP, topic modeling (TM) (Blei et al., 2003; Zhou et al., 2012) and LMs are two common regimes with their own advantages.", "TM can discover the interpretable global semantics that are topics, while with pre-training on large corpus, LMs recently achieve the SOTA performance on many NLP tasks with more focuses on local dependencies.", "Therefore, some works consider to combine them to obtain benefits from both.", "Dieng et al. (2016) and Wang et al. (2020) incorporate the TM with RNN-based model to capture the long-range dependencies.", "To move beyond single-layer TM for RNNs, Guo et al. (2020) propose the recurrent hierarchical topic-guided RNN with the help of multi-layer TM (Zhou et al., 2015; Zhang et al., 2018).", "To extract explicit document semantics for summarization, Wang et al. 
(2020) propose three different modules to plug knowledge from TM into Transformer-based LMs (Vaswani et al., 2017; Devlin et al., 2018).", "Our work can be seen as a parallel work to combine their advantages together but focuses on dealing with data diversity in NLP without the ground-truth information such as domain labels.", "Meanwhile, our work can be applied for different LMs including CNNs, RNNs, and Transformer-based models.", "We firstly describe one of the most popular topic models, latent Dirichlet allocation (LDA) (Blei et al., 2003), and its autoencoding inference (Sri-vastava and Sutton, 2017).", "Inspired by them, in order to jointly consider topic learning and sample clustering, we propose the autoencoding topic model with mixture prior (mATM).", "For a document containing D words as w = { w d } Dd =1 , given K topics = [ 1 , , K ] where k is a probability distribution over the vocabulary, LDA defines the generative process of w in Algorithm 1, where RK + is the topic proportion with as the prior parameter.", "After collapsing Algorithm 1 Generative process of LDA for each document w do Draw topic proportion Dirichlet ( ) for each word at position d do Sample a topic i d Multinomial (1 , ) Sample a word w d Multinomial (1 , i d ) i d , given and , we can represent the conditional likelihood of w d as w d | , Multinomial (1 , ) .", "Given , a popular approximation for efficient inference of LDA is mean-field variational inference, which tries to maximize the evidence lower", "where q ( ) is the variational posterior.", "In particular, Srivastava and Sutton (2017) propose the autoencoding variational inference (AEVB) (Kingma and Welling, 2013) for LDA by using Laplace approximation (Hennig et al., 2012) for the Dirichlet prior, and building logistic-normal (LN) encoding posterior.", "As shown in Fig. 1, we find that running clustering method such as K-means on semantic space can not achieve satisfactory results.", "For jointly considering the learning of topics and sample clustering, we propose the mATM.", "Suppose the number of clusters is C , and the clustering prior parameter is = [ 1 , , C ] with (cid:80) Cc =1 c = 1 , shown in Fig. 2a, mATM defines the generative process of w in Algorithm 2.", "Com-Algorithm 2 Generative process of mATM for each document w do Draw cluster index z Categorical ( ) Draw topic proportion Dirichlet ( z ) for each word at position d do Sample a topic i d Multinomial (1 , ) Sample a word w d Multinomial (1 , i d ) pared with LDA, mATM has a mixture Dirichlet prior with parameters { c } Cc =1 .", "In other words, mATM assumes that the of different documents may come from different clusters, which is the basic thought to discover the data diversity from corpus automatically.", "In order to infer the parameters in mATM and further develop the EnsLM by mATM, we introduce AEVB for mATM, whose detailed structure is shown in Fig. 
2b.", "Although Dirichlet prior of is important to learn interpretable topics (Wallach et al., 2009), it is difficult to handle it within AEVB since AEVB needs effective reparameterization (RT) function for distributions.", "Inspired by the success of the Laplace approximation for Dirichlet distribution, we propose the mixture LN (mLN) distribution as the approximation of mixture Dirichlet distribution.", "Specifically, Srivastava and Sutton (2017) have proved that a Dirichlet distribution p ( | ) can be well approximated by LN distribution as p ( | , ) = LN ( , ) , (3) where the elements in mean vector and diagonal covariance matrix are k = log k 1 KK (cid:88) i =1 log i k = 1 k (cid:18) 1 2 K (cid:19) + 1 K 2 K (cid:88) i =1 1 i .", "(4) To go further, for inference of mATM, we construct the mLN distribution as p ( | , ) = C (cid:88) c =1 c LN ( c , c ) ck = log ck 1 KK (cid:88) i =1 log ci ck = 1 ck (cid:18) 1 2 K (cid:19) + 1 K 2 K (cid:88) i =1 1 ci , (5) which is used to approximate the mixture Dirichlet prior p ( |{ c , c } Cc =1 ) in mATM.", "Therefore, for each document, the prior of can be written as (cid:81) Cc =1 LN ( c , c ) z c .", "In practice, we build the c and c as c = f W c ( z ) , c = f W c ( z ) , (6) where z = [ z 1 , , z C ] .", "After collapsing { i d } Dd =1 in mATM as (1) in LDA, given topics , for document w , there are two latent variables that need to be inferred: and z", "LN posterior for .", "We build the variational posterior of as LN distribution q ( ) = LN ( (cid:48) , (cid:48) ) with (cid:48) = f W ( x ) , (cid:48) = diag ( f W ( x )) , where diag converts a vector to a diagonal matrix, f W ( ) and f W ( ) are two encoding networks, and x is a type of representation for document w such as original words or bag of words (Bow) vector.", "Morevoer, LN distribution has easy RT function as Normal distribution.", "Gumbel softmax (GS) posterior for z .", "As categorical variable, z is difficult to build variational posterior under AEVB with accurate RT function.", "Instead, we employ GS distribution (Jang et al., 2016) as the variational posterior of z for efficient gradient propagation.", "Specifically, suppose the posterior of z is Categorical ( (cid:48) ) , after obtaining C i.i.d samples { g 1 , , g C } drawn from Gumbel (0 , 1) , then z can be sampled as z = arg max c exp ((log( (cid:48) c ) + g c ) / ) (cid:80) Oo =1 exp ((log( (cid:48) o ) + g o ) / ) (7) where is the temperature parameter.", "In order to build encoder for (cid:48) , we let (cid:48) = f W ( , w ) .", "For efficient gradient propagation, rather than sampling z from arg max as (7), we obtain the variational posterior of soft assignment vector z = [ z 1 , , z C ] as q ( z ) : [ q ( z )] c = exp ((log( (cid:48) c ) + g c ) / ) (cid:80) Oo =1 exp ((log( (cid:48) o ) + g o ) / ) .", "Besides the benefit of efficient gradient back-propagation, the soft assignment in (8) provides clustering belonging weights.", "In the following EnsLM, this property is useful for some ambiguous samples that may belong to different clusters.", "Similarly with Srivastava and Sutton (2017), instead of sampling from Dirichlet posterior in", "LDA, we parameterize it as = softmax ( W t ) , where W t = [ w 1 , , w K ] and softmax is operated for each topic { w k } Kk =1 to ensure them on a probability simplex.", "Therefore, as shown in Fig. 
2, all the parameters of mATM are 1 = { W , W c , W , W c , W , W t } that can be learned by maximizing the ELBO in (9).", "Recently, various advanced LMs for language understanding and generation have been introduced, most of which do not consider the data diversities in the corpus.", "In this paper, having obtained the clustering assignment vector z from mATM, given a single LM as backbone, we propose the ensemble LM (EnsLM) via z -guided weight modulation.", "In other words, the EnsLM can modulate the backbone single LM to fit for different clusters.", "Although LMs have many different types, basically, all of them build on convolutional (such as in CNN (Johnson and Zhang, 2015)) or fully-connected (such as in Transformer (Vaswani et al., 2017)) operations (ignoring the bias) as", "where, H 1 RI x I y C in and H (cid:48) 1 RC in are the input features, W R k x k y C in C out and W (cid:48) RC in C out are the convolutional kernel or full-connected weights 1 .", "Suppose the number of clusters (domains) in mATM is C , given a LM as backbone, we introduce a few modulation parameters to modulate the original parameters W or W (cid:48) for different clusters.", "Specifically, shown in Fig. 3, for a convolutional or fully-connected layer in (10), suppose that there are two dictionaries of modulation parameters as: A = [ 1 , , C ] RC in CB = [ 1 , , C ] RC out C , (11) where { c } Cc =1 RC in and { c } Cc =1 RC out .", "For a document w whose feature at current layer is H 1 , after archiving its domain assignment z RC 1 1 Fully-connected layer can be also seen as a convolution layer where the convolutional kernel is W (cid:48) R 1 1 C in C out ( I x = I y = 1 ) Base parameters Modulatedparameters Figure 3: Illustration of weight modulation in EnsLM.", "Convolution : H 2 = f (( W (cid:12) ) H 1 ) Fully-connection : H (cid:48) 2 = f (( W (cid:48) T (cid:12) ) H (cid:48) 1 ) , (12)", "where = T , = A z RC in 1 , = B z RC out 1 , and (cid:12) denotes matrix element-wise product (with broadcasting for convolution).", "Explanation of (12) .", "Intuitively, W and W (cid:48) act as the backbone parameters in the original single LM, and is the modulated parameters, which moves the backbone to fit different domains.", "If z is drawn from (7) that means z is a one-hot vector, then it denotes that and are chosen from the dictionaries A and B , correspondingly.", "If z is drawn from (8) that means z is a soft assignment vector, then it denotes that and are weighted summation of all elements in A and B , correspondingly.", "In practice, we use the soft assignment vector since i ) it brings efficient gradient propagation during joint training of mATM and EnsLM, and ii ) it considers the fact that there are some domain ambiguous samples in the dataset.", "It is interesting to note that although EnsLM is developed for the problem that ground-truth priors of data diversity (such as domain label) is unavailable, it can be also used when we know the priors.", "For this scenario, rather than inferring the clustering assignment z from mATM via (8), we directly set z as the real one-hot assignment vector, which is illustrated in experiment in Sec. 
5.2.", "Different from some strategies such as data selection that separate the calculation of assignment and the training of LM, our proposed mATM and EnsLM can be jointly trained in one framework.", "Specifically, given a training set containing N sample { w n } Nn =1 , suppose that there is a label { y n } Nn =1 for each sample.", "It should be noted that labels { y n } Nn =1 can be different for different tasks, such as labels for document classification, golden summarization for abstractive summarization, or document itself for generation.", "As a result, the loss for joint training of mATM and EnsLM can be written as L = N (cid:88) n =1 E q ( n ) q ( z n ) [log p ( w n | n , , z n )] E q ( z n ) [ LLM ( w n , y n , z n )] KL [ q ( n ) || p ( n )] KL [ q ( z n ) | p ( z n )] , (13) where, without loss of generality, LLM denotes the loss for LM.", "All learnable parameters are i ) parameters of mATM: mATM = { W , W , W u , W u , W } and ii ) parameters of LM: LM .", "These parameters can be jointly trained by stochastic gradient descend with low-variance gradient estimation since LN and GS distributions have easy RT function.", "In this section, we evaluate the effectiveness and ef-ficiency of our proposed mATM and EnsLM on different NLP tasks including document clusters, text classification, language generation and abstractive document summarization.", "Our code is available at https://github.com/BoChenGroup/EnsLM 5.1 Document clusters The basic idea of mATM and EnsLM is that mATM can automatically discover the sample clusters which describe the data diversity.", "Therefore, we firstly evaluate the document clustering performance of mATM.", "Datasets Following Yao et al. (2019), we consider two widely used document clustering datasets, 20News and R8 .", "This two datasets 2 can be found in the open source code of Yao et al. (2019).", "2 https://github.com/yao8839836/text gcn 20News has 20 classes and consists of 18,846 documents with a vocabulary size of 61,188, partitioned into a training set of 11,314 documents and a test set of 7,532 ones.", "R8 is a subset of the Reuters 21578 dataset, which has 8 classes and was split into 5,485 training and 2,189 test documents.", "For these two datasets, we remove the stop words and use the 2,000 most frequent terms as the vocabulary.", "For all methods, we set the number of clusters as the number of classes.", "Comparison models and implementation details To verify the effectiveness of mATM for clustering, three types of document clustering models are compared.", "i ) Raw+kmeans performs K-means on raw BoW vectors, and PCA+kmeans uses PCA extract low-dimensional features and then uses K-means for clustering; ii ) Train a topic model and then perform K-means for clustering on topic proportions, where we consider LDA+kmeans (Blei et al., 2003), AVITM+kmeans (Srivastava and Sutton, 2017), and PFA+kmeans (Zhou et al., 2012); iii) Deep neural network based clustering methods, including Deep clustering (Xie et al., 2016), and DCN (Yang et al., 2017), which jointly consider the feature extracting and clustering.", "Besides Raw+kmeans performing clustering on original inputs, others are on a latent feature space (For topic modeling, feature is the topic proportion).", "Following (Xie et al., 2016; Yang et al., 2017), the dimension of feature space equals to the number of clusters.", "Results Following Yang et al. 
(2017), since we know the ground-truth label and set the clustering number as the number of classes, we measure the", "clustering performance by accuracy (AC) and normalized mutual information (NMI), both of which are the higher the better.", "The results are shown in Table 1.", "Compared with the Base+kmeans, PCA+kmeans performs better since it extracts effective principal components.", "Benefiting from the learning of semantics for documents, the second group including three types of topic modeling outperforms PCA.", "Compared with the first two groups, the third group jointly considers the feature learning and clustering, thus achieving higher AC and NMI.", "Combined the advantages of topic modeling in extracting efficient features from documents and joint learning of feature extractor and clustering, mATM gets the SOTA performance for document clustering tasks on these two datasets.", "The clustering results support our motivation of using mATM to discover the data diversity.", "In the following experiments, we evaluate the performance of both mATM and EnsLM on different language understanding and generation tasks.", "Sentiment classification (positive or negative) for different products is a fundamental language understanding task in NLP.", "For this task, the data diversity mainly arises from different domains (prod-ucts) (Blitzer et al., 2007), which brings the problem that data from different domains may have different distributions.", "Datasets To evaluate the performance of mATM and EnsLM in capturing the multi-domain property for sentiment classification, following Cai and Wan (2019), we perform experiments on the dataset released by Liu et al. (2017), which consists of product and movie reviews in 16 different domains.", "The data in each domain is randomly split into training set, development set and test set according to the proportion of 70% , 10% , 20% , whose statistics of the 16 datasets are listed in Appendix A.1.", "Comparison models and implementation details Following (Cai and Wan, 2019), we firstly consider three base models, BiLSTM (Adhikari et al., 2019), TextCNN (Kim, 2014) and BERT (Devlin et al., 2019), which perform classification on every domains separately.", "Secondly, combining data from different domains together, we train the above three models named as BiLSTM-mix , TextCNN-mix and DocBERT-mix .", "Having obtained the ground-truth domain label, the previous works regard the multi-domain problem as the multi-task learning (MTL) including DA-MTL (Zheng et al., 2018), ASP-MTL (Liu et al., 2017),and MDAE (Cai and Wan, 2019).", "All these works are developed from BiLSTM model.", "For our proposed EnsLM, we use TextCNN, BiLSTM and DocBERT as the backbone of EnsLM.", "We perform experiments on two types of EnsLM: i ) with ground-truth (GT) domain label, we directly set z as the one-hot assignment vector (do not infer z from mATM), which is named as BiLSTM-EnsLM-GT , TextCNN-EnsLM-GT , and BERT-EnsLM-GT ; ii ) without GT domain label, we use mATM to infer z , which is named as BiLSTM-EnsLM-mATM , TextCNN-EnsLM-mATM , and BERT-EnsLM-mATM .", "For model using mATM, we set the number of topics as 16 .", "More detailed settings and implementation details can be found in Appendix B.1.", "Results The results of averaged accuracy on all domains are given in Table 2, where the results except ours are obtained from Cai and Wan (2019).", "Comparing results on the first row, we can see that joint training models on all domains outperform separate training on each domain.", 
"Compared with BiLSTM-mix, having obtained the GT domain label, DA-MTL, ASP-MTL and MDAE (all of them are developed based on BiLSTM) consider the real domain knowledge in word embedding, feature extractor and attention layers, achieving higher accuracy.", "Similarly, with GT domain label, three models equipped with our proposed EnsLM performs better than their basic counterparts with a large margin.", "Assuming that GT domain labels are unavailable, we use mATM to infer the clustering assignment to guide the learning of EnsLM, which obtains the SOTA performance on all three basic models, even better than the models using GT domain label.", "We attribute it to the fact that com-Table 3: Comparison of perplexity on four datasets.", "pared with the hard GT domain label, mATM infers the soft clustering assignment, which not only reflect the domain characteristic of samples but also describe the samples having confused domain characteristics.", "For example samples from DVD may be similar with the ones from Electronics.", "Datasets In order to verify the effectiveness of our model on datasets of different lengths, we consider four publicly available corpora: APNEWS, IMDB, BNC, and COCO.", "Following Lau et al. (2017), we tokenize words and sentences using Stanford CoreNLP (Klein and Manning, 2003), lowercase all word tokens, and filter out word tokens that occur less than 10 times.", "For the topic model, we additionally exclude stopwords.", "All these corpora are partitioned into training, validation, and testing sets, whose summary statistics are provided in Appendix A.2.", "Comparison models and implementation details We consider the following baseline models: LSTM , A standard LSTM language model (Hochreiter and Schmidhuber, 1997); Tansnsformer-XL enables learning dependency beyond a fixed length by introducing a recurrence mechanism and a novel position encoding scheme into the Transformer architecture (Dai et al., 2019); TGVAE (Wang et al., 2019), combines a variational auto-encoder based natural sequence model with a neural topic model; rGBN-RNN (Guo et al., 2020), extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation; GPT-2 (Radford et al., 2019) is a generative pre-training of a Transformer-based LM on a diverse set of unlabeled text.", "For our proposed model, GPT-2-EnsLM-mATM first uses mATM to infer semantic clusters for each sample, and then introduce this diversity information to pre-trained GPT2 by efficient weight modulation naturally.", "In the experiments, we use the Adam optimizer (Kingma and Ba, 2014) with learning rate 10 6 .", "The length of an input sample is limited to Cluster # Representive topics Original sentences Generated sentences 1 ['kite', 'flying', 'sky', 'air', 'holding'] ['man', 'child', 'people', 'person', 'young'] ['beach', 'water', 'outside', 'near', 'park'] A child flyinga pink kite on the beach.", "We set the mini-batch size as 8, the number of training epochs as", "5. The clustering number of mATM is set to 64 for the first three datasets, while 80 for COCO dataset.", "More detailed settings and implementation details can be found in Appendix B.2", "Results For fair comparison, we use standard language model perplexity as the evaluation metric.", "The results of all models on four datasets are given in Table 3, where the results of existing models are obtained from Guo et al. 
(2020).", "In the first group, Transformer-XL gets better result, which shows that the transformer-based model have better modeling capabilities.", "In terms of capturing the document global semantic information, the second group can improve performance significantly, which indicates that the topic model is effective in capturing document global information.", "Pre-training on massive data, the GPT-2 can obtains better results compared with above models.", "Although GPT-2 gets a good result, the GPT-2-EnsLM-mATM can improve performance significantly by capturing data diversity.", "It illustrates that even pre-training on large scale of corpus, EnsLM can further improve the performance of pretrained LM via exploring data diversity.", "A similar phenomenon also appeared in the experiments conducted by Gururangan et al. (2020) Sentence generation of EnsLM Given the learned GPT-2-EnsLM-mATM, we can sample the sentences conditioned on semantic clusters.", "Shown in the in Fig. 5, we select the top-3 topics to represent this cluster, and select original sentences according to the clustering results.", "we can see that most of the generated sentences conditioned on a semantic clusters are highly related to the given topics in terms of their semantic meanings but not necessarily in key words, indicating the LM is successfully guided by the cluster assignment.", "These Table 4: ROUGE scores on CNN/DM and Xsum test set, where the results are cited from Liu and Lapata (2019) and Wang et al. (2020) .", "observations suggest that GPT-2-EnsLM-mATM has successfully captured syntax and global semantics simultaneously for natural language generation.", "Similar to Fig. 5, we also provide other semantic clusters generated sentences in Appendix C. 5.4 Abstractive summarization Datasets We evaluate the effectiveness and ef-ficiency of proposed model on two benchmark datasets, including the CNN/DailyMail (CNN/DM) (Hermann et al., 2015) and the XSum (Narayan et al., 2018).", "The summary styles of these datasets varies from highlights, composed of several sentences, to very brief one sentence.", "See more detailed descriptions in Appendix A.3.", "We perform data pre-processing following Liu and Lapata (2019).", "Comparison models and implementation details We consider some baseline models, including LSTM based models PTGEN and PT-GEN+Cov (See et al., 2017); Transformer based models Tansformer , BertSUM (Liu and Lapata, 2019); and BertSUM+TA which combine pretrained model with topic model (Wang et al., 2020).", "We combine EnsLM with BertSUM on the abstractive summarization task.", "The clustering number of mATM is set to 64 for all datasets.", "Given BertSUM checkpoints 3 on CNN/DM and XSum provided by Liu and Lapata (2019), we further fine-tune Bert-SUM+EnsLM.", "Besides, we adopt the settings in the BertSUM.", "Following Liu and Lapata (2019), in the test stage, we use beam search with size 5, select the top-3 checkpoints based on their evaluation loss on the validation set, and report the averaged results on the test set.", "More detailed settings and implementation details can be found in Appendix B.3.", "Results ROUGE scores on CNN/DM, XSum have been exhibited in Tables 4, respectively.", "Focusing on the models without pre-training in the first group, Transformer achieves better performance compared with LSTM-based model, attributing to stronger sequence modeling capabilities.", "Further, the outperformance of BertSUM illustrates the fact that the combination of a pretrained Bert encoder and a Transformer 
decoder is a better choice of sequence-to-sequence structure.", "Despite owning the same structure as the BertSUM, the BertSUM+TA employs a topic model to capture global document segment diversity, and achieving higher scores.", "Different from BertSUM+TA that introduces document semantic diversity by adding topic information, BertSUM+mATM combines BertSUM with EnsLM model, result in a better performance.", "Compared with BertSUM+TA, the performance improvement of our model is not enough promising is because they have been incorporated the topical information into the BertSum model which considering the segment diversity and contextual information.", "Note that the performance of our model improves significantly compared with BertSum, which can prove the effectiveness of our model.", "In this paper, we first propose mATM to infer latent semantic clusters from raw text corpus, and then combine it with LM with efficient weight modulation, resulting in a more powerful EnsLM, which can be naturally extended to other LMs.", "In the future, we will study the effectiveness of EnsLM on other NLP tasks, such as the multi domain translation, and investigate whether EnsLM can be applied to the pre-training stage of Transformer.", "Bo Chen acknowledges the support of NSFC (61771361), Shaanxi Youth Innovation Team Project, the 111 Project (No. B18039) and the Program for Oversea Talent by Chinese Central Government.", "We acknowledge all the anonymous reviewers for their valuable comments and suggestions." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "result", "abstain", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "other", "other" ]
[ "We propose a novel data-augmentation technique for neural machine translation based on ROTk ciphertexts.", "ROTk is a simple letter substitution cipher that replaces a letter in the plaintext with the k th letter after it in the alphabet.", "We first generate multiple ROTk ciphertexts using different values of k for the plaintext which is the source side of the parallel data.", "We then leverage this enciphered training data along with the original parallel data via multi-source training to improve neural machine translation.", "Our method, CipherDAug , uses a co-regularization-inspired training procedure, requires no external data sources other than the original training data, and uses a standard Transformer to outperform strong data augmentation techniques on several datasets by a significant margin.", "This technique combines easily with existing approaches to data augmentation, and yields particularly strong results in low-resource settings.", "1 1 Introduction One naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography.", "[...] frequencies of letters, letter combinations, [...] etc., [...] are to some significant degree independent of the language used (Weaver, 1949) Indeed, to a system which treats inputs as atomic identifiers, the alphabet behind these identifiers is irrelevant.", "Distributional properties are of sole importance, and changes in the underlying encoding should be transparent provided these properties are preserved.", "In light of this, a bijective cipher such as ROTk (Figure 1) is in effect invisible to modern NLP techniques: distributional features are invariant under such a cipher, guaranteeing that the meaning of an enciphered text is the same as the un-enciphered text, given the key.", "This work exploits this fact to develop a novel approach to data 1 Our code is available at https://github.com/ protonish/cipherdaug-nmt PLAIN abcdefghijklmnopqrstuvwxyz ROT1 bcdefghijklmnopqrstuvwxyza ROT2 cdefghijklmnopqrstuvwxyzab ROT3 defghijklmnopqrstuvwxyzabc SRC : es ist diese pyramide.", "Data augmentation is a simple regularization-inspired technique to improve generalization in neural machine translation (NMT) models.", "These models (Bahdanau et al., 2015; Vaswani et al., 2017) learn powerful representational spaces (Ra-ganato and Tiedemann, 2018; Voita et al., 2019; Kudugunta et al., 2019) which scale to large numbers of languages and massive datasets (Aharoni et al., 2019).", "However, in the absence of data augmentation, their complexity makes them susceptible to memorization and poor generalization.", "Data augmentation for NMT requires producing new, high-quality parallel training data.", "This is not trivial as slight modifications to a sequence can have drastic syntactic or semantic effects, and changes to a source sentence generally require corresponding changes to its translation.", "Existing techniques suffer various limitations: back-translation (Sennrich et al., 2016b; Edunov et al., 2018; Xia 201 et al., 2019a; Nguyen et al., 2019) can yield semantically poor results due to its use of trained models that are susceptible to errors (Edunov et al., 2018).", "Word replacement approaches (Gao et al., 2019; Liu et al., 2021; Takase and Kiyono, 2021; Belinkov and Bisk, 2018; Sennrich et al., 2016a; Guo et al., 2020a; Wu et al., 2021a) may ignore context cues or fracture alignments between sequences.", "This paper overcomes these limitations by exploiting the invariance of distributional features under ROTk 
ciphers.", "We contribute a novel data augmentation technique which creates enciphered copies of the source side of a parallel dataset.", "We then leverage this enciphered training data along with the original parallel data via multi-source training to improve neural machine translation.", "We also provide a co-regularization-inspired training procedure which exploits this enciphered data to outperform existing strong NMT data augmentation techniques across a wide range of experiments and analyses.", "Our technique can be flexibly combined with existing augmentation techniques, and does not rely on any external data.", "A ROTk cipher (Figure 1) produces a ciphertext by replacing each letter of its input ( plaintext ) with the k th letter after it in the alphabet.", "Past work (Dou and Knight, 2012; Dou et al., 2014) has explicitly used decipherment techniques (Kambhatla et al., 2018) to improve machine translation.", "We emphasize that decipherment itself is not the purpose of the present work: rather, we use ciphers simply to re-encode data while preserving its meaning.", "This is possible because ROTk is a 1:1 cipher where each ciphertext symbol corresponds to a unique plaintext symbol; this means it will preserve distributional features from the plaintext.", "This makes ROTk cryptographically weak, but suitable for use in data augmentation.", "Concretely, given a set of n training samples D = { ( x i , y i ) } ni =1 and a set of keys K , we use Algorithm 1 to generate | K | n new samples; giving ( | K | + 1) n samples when added to the training set.", "The ciphertexts produced by Algorithm 1 are guaranteed to be lexically diverse, not only from the plaintext but also from one another.", "Given this fact, we can naively regard each D k as a different language and formulate a multi-Algorithm 1 Cipher-Augment Training Data Training data D = { x i , y i } ni =1 Set of cipher keys K = { k 1 , k 2 ,", "lingual training setting (Johnson et al., 2017).", "For a plaintext sample x i , ciphertext samples { ROT k j ( x i ) , ..., ROT k | K | ( x i ) } , and target sequence y i , the multi-source model is trained by minimizing the cross-entropy L iNLL = log p ( y i | x i ) | K | (cid:88) j log p ( y i | ROTk j ( x i )) (1) where | K | is the number of distinct keys used to generate ciphertexts.", "While this yields a multilingual model, this formulation does not allow explicit interaction between a plaintext sample and the corresponding ciphertexts.", "To allow such interactions, we design another model that relies on inherent pivoting between sources and enciphered sources.", "We achieve this by adding ROTk ( source ) source as a translation direction; following Johnson et al. 
(2017) we prepend the appropriate target token to all source sentences and train to minimize the objective $\mathcal{L}^{i}_{NLL} = -\log p(y^{i} \mid x^{i}) - \sum_{j}^{|K|} \big[ \log p(y^{i} \mid \mathrm{ROT}k_{j}(x^{i})) + \log p(x^{i} \mid \mathrm{ROT}k_{j}(x^{i})) \big]$. (2)", "We refer to (2) as the naive model.", "Discussion.", "In this setting the decoder must learn the distributions of both the true target language and the source language.", "This may lead to quicker saturation of the decoder and sub-optimal use of its capacity, which must now be shared between two languages; this is a notorious property of many-to-many multilingual NMT (Aharoni et al., 2019).", "To better leverage the equivalence between plain- and ciphertext data, we take inspiration from multi-view learning (Xu et al., 2013).", "We rethink enciphered samples as different views of the authentic source samples which can be exploited for co-training (Blum and Mitchell, 1998).", "This is motivated by the observation that plain and enciphered samples have identical sentence length, grammar, and (most importantly) sentential semantics.", "Given an enciphered source $\mathrm{cipher}(x^{i})$, we model the loss for a plaintext sample $(x^{i}, y^{i})$ as $\mathcal{L}^{i} = \lambda_{1} \mathcal{L}^{i}_{NLL}\big(p(y^{i} \mid x^{i})\big)$ [anchor-source cross-entropy] $+\; \lambda_{2} \mathcal{L}^{i}_{NLL}\big(p(y^{i} \mid \mathrm{cipher}(x^{i}))\big)$ [cipher-source cross-entropy] $+\; \alpha \mathcal{L}^{i}_{dist}\big(p(y^{i} \mid x^{i}),\, p(y^{i} \mid \mathrm{cipher}(x^{i}))\big)$ [agreement loss, see (4)], (3) where the original source-language sentence $x^{i}$ is called the anchor here, since it is always paired with each enciphered version.", "The first two terms are conventional negative log-likelihoods, to encourage the model to generate the appropriate target for both $x^{i}$ and $\mathrm{cipher}(x^{i})$.", "The third term is the agreement loss, measured as the pairwise symmetric KL divergence between the output distributions for $x^{i}$ and $\mathrm{cipher}(x^{i})$: $\mathcal{L}^{i}_{dist}\big(p(y^{i} \mid x^{i}), p(y^{i} \mid \mathrm{cipher}(x^{i}))\big) = \frac{1}{2}\big[ D^{i}_{KL}\big(p^{flat}(y^{i} \mid x^{i}) \,\|\, p(y^{i} \mid \mathrm{cipher}(x^{i}))\big) + D^{i}_{KL}\big(p^{flat}(y^{i} \mid \mathrm{cipher}(x^{i})) \,\|\, p(y^{i} \mid x^{i})\big)\big]$. (4)", "This term allows explicit interactions between plain- and ciphertexts by way of co-regularization.", "Co-regularization relies on the assumption that the target functions in each view agree on the labels of most examples (Sindhwani et al., 2005) and constrains the model to consider only solutions which capture this agreement.", "In cases where there are many output classes and the model predictions strongly favour certain of these classes, (4) may have an outsized influence on model behaviour.", "As a precautionary measure, we use a softmax temperature to flatten the model predictions, based on a similar technique in knowledge distillation (Hinton et al., 2015) and multi-view regularization (Wang et al., 2021).", "The flattened prediction for an $(x, y)$ pair is given by $p^{flat}(y \mid x) = \exp(z_{y}/\tau) \big/ \sum_{y_{j}} \exp(z_{y_{j}}/\tau)$, (5) where $z_{y}$ is the logit for the output label $y$.", "A higher value of $\tau$ produces a softer, more even distribution over output classes.", "Datasets We use the widely studied IWSLT14 De En and IWSLT17 Fr En language pairs as our small-sized datasets.", "For high-resource experiments, we evaluate on the standard WMT14 En De set of 4.5M sentence pairs.", "We also extend our experiments to the extremely low-resource pair Sk En from the multilingual TED dataset (Qi et al., 2018) with 61k training samples, and dev and test splits of size 2271 and 2245 respectively.", "Ciphertext
Generation and Vocabularies.", "We use a variant of ROTk which preserves whitespace, numerals, special characters, and punctuation.", "As a result, these characters appear the same in both plainand ciphertexts.", "For our naive approach, we encipher the German side of the IWSLT14 dataset with up to 20 keys {1,2,3,4,5, . . . ,20} .", "For our main experiments, we encipher the source side of every translation direction 5 with key {1} for WMT experiments and keys {1,2} for the rest.", "6 We use sentencepiece (Kudo and Richardson, 2018) to tokenize text into byte-pair encodings 3 The De En data has a train/dev/test split of about 170k/7k/7k.", "The Fr En data has a 236k/890/1210 split using dev2010 and tst2015 .", "newstest2013 and test on newstest2014 5 In all generated ciphertexts, the source alphabet is preserved, only the distribution of characters is changed.", "The target side is never altered.", "(BPE; Sennrich et al. 2016c) by jointly learning subwords on the source, enciphered-source, and target sides.", "We tune the number of BPE merges as recommended by Ding et al. (2019); the resulting subword vocabulary sizes for each dataset are tabulated in Table 1.", "In all experiments, we set the loss weight hyperparameters 1 , 2 to 1, and to 5.", "Section 4.1 shows an ablation over to justify this setting.", "We find that softmax temperature = 1 works well for all experiments; = 2 results in more stable training for larger datasets.", "Evaluation We evaluate on BLEU scores 7 (Pa-pineni et al., 2002).", "Following previous work (Vaswani et al., 2017; Nguyen et al., 2019; Xu et al., 2021), we compute tokenized BLEU with multi_bleu.perl 8 for IWSLT14 and TED datasets, additionally apply compound-splitting for WMT14 En-De 9 and SacreBLEU 10 (Post, 2018) for IWSLT17 datasets.", "For all experiments, we perform significance tests based on bootstrap resampling (Clark et al., 2011) using the compare-mt toolkit (Neubig et al., 2019).", "Baselines Our main baselines are strong and widely used data-augmentation techniques that do not use external data.", "We compare CipherDAug to back-translation-based data-diversification (Nguyen et al., 2019), word replacement techniques like SwitchOut (Wang et al., 2018), WordDrop (Sennrich et al., 2016a), and RAML (Norouzi et al., 2016), and the subword-regularization technique BPE-Dropout (Provilkov et al., 2020).", "See supplemental sections A.1 and A.2 for further baseline and implementation details.", "7 Decoder beam size 4 and length penalty 0.6 for WMT, and 5 and 1.0 for all other experiments.", "8 mosesdecoder/scripts/generic/multi-bleu.perl 9 tensorflow/tensor2tensor/utils/get_ende_bleu.sh 10 SacreBLEU signature: nrefs:1|case:mixed| eff:no|tok:13a|smooth:exp|version:2.0.0 Model De En Transformer 34.91 + Word Dropout 34.83 + SwitchOut 34.82 + RAML 35.11 + RAML + Switchout 35.17 + RAML + WordDrop 35.47 Naive Multi-Source Equation (1) Equation (2) 2 keys 35.45 35.85 5 keys 35.65 35.98 10 keys 33.70 35.42 20 keys 32.95 34.75 5 keys + RAML + Switchout -36.17 5 keys + RAML + WordDrop -36.63 CipherDAug 1 key 36.21 CipherDAug 2 keys 37.60 Table 2: Results on the IWSLT14 De-En validation set comparing the naive approach and CipherDAug.", "Simply using 2 enciphered sources gives a BLEU score of 35.45, which nearly matches the performance of the best baseline, RAML+SwitchOut, at 35.47.", "Adding the ROTk (source) source direction further improves the score to 35.85.", "Adding the ROT-k (source) source direction consistently yields better results than the vanilla 
multi-source model, but increasing the number of keys has a less consistent effect.", "We hypothesize that more keys are generally beneficial, but that the model becomes saturated when too many are used.", "Based on these observations, we limit later experiments to 2 keys.", "We observe further gains by combining the naive method with the two best performing baselines.", "This emphasizes that ciphertext-based augmentation is orthogonal to other data-augmentation methods and can be seamlessly combined with these to yield greater improvements.", "We present our main results in Table 3.", "While using a single key improves significantly over the Transformer model, augmenting with 2 keys outperforms all baselines.", "Table 4 shows additional comparisons against approaches that introduce architectural improvements to the transformer (such as MAT; Fan et al. 2020) or that require large pre-trained models, like BiBERT (Xu et al., 2021).", "our method yields stronger improvements over the standard Transformer than any other data augmentation technique (Table 3).", "This includes strong methods such RAML+SwitchOut and data diversification, which report improvements as high as 1.8 and 1.9 BLEU points respectively.", "Data diversification involves training a total of 7 different models for forward and backward translation on the source and target data.", "By contrast, CipherDAug trains a single model, and improves the baseline transformer by 2.9 BLEU points on IWSLT14 De En and about 2.2 BLEU points on the smaller datasets.", "On WMT14 En De, our method using 1 key improves by 0.6 BLEU over the baseline transformer and significantly outperforms word replacement methods like SwitchOut and WordDropout.", "12 Wu et al. 2020 introduce a new model architecture for mixing subword representations that involves a two-stage training process.", "CipherDAug, on the other hand, only uses a vanilla Transformer that is trained end-to-end.", "Low-resource setting The Sk En dataset is uniquely challenging as it has only 61k pairs of training samples.", "This dataset is generally paired with a related high-resource language pair such as Cs-En (Neubig and Hu, 2018), or trained in a massively multilingual setting (Aharoni et al., 2019) with 58 other languages from the multilingual TED dataset (Qi et al., 2018).", "Xia et al. 
(2019b) introduced a generalized data augmentation technique that works in this multilingual setting and leverages over 2M monolingual sentences for each language using back-translation.", "Applying CipherDAug to this dataset (Table 5) yields significant improvements over these methods, achieving 32.62 BLEU on Sk En and 24.61 on En Sk.", "Discussion On the relatively larger WMT14 dataset (4.5M), despite improving significantly over the baseline Transformer, the Base model 205 |src tgt| |vocab| D emb Emb Train BLEU Transformer-256 12k 12k 256 3M 37M 34.40 Transformer-512 12k 12k 512 6.1M 44M 34.64 Transformer-256 20k 20k 256 5.1M 42M 34.19 Transformer-512 20k 20k 512 10.1M 52M 34.39 CipherDAug-1key 11.8k 16k 256 4.1M 40M 36.25 CipherDAug-1key 11.8k 16k 512 8.2M 47M 36.19 CipherDAug-2keys 11.8k 20k 256 5M 42M 36.90 CipherDAug -2keys 11.8k 20k 512 10.1M 52M 37.53 Table 6: Results on IWSLT14 De En with baseline Transformer and CipherDAug using different vocabulary sizes and embedding dimensions.", "(68M params) approaches saturation when 9M enciphered sentences (2 keys) are added.", "Upgrading to Transformer Big (218M) may be viable, but would be an unfair comparison with other models.", "The model capacity becomes a bottleneck with larger datasets when the model is optimised to translate each of the source sentences (4.5M plain and 9M enciphered) individually (single-source) as well as together (multi-source) through the co-regularization loss.", "The results indicate that our proposed approach works best in small and low resource data settings.", "Number of Keys Figure 2 (left) shows the effect of adding different amounts of enciphered data.", "We obtain the best performance using just 2 different keys.", "Using more or fewer degrades performance, though both cases still outperform the baseline.", "As noted in Section 3.2, the model may become saturated when too many keys are used.", "Agreement Loss Figure 2 (right) shows an ablation analysis on the agreement loss.", "We find that CipherDAug is sensitive to the weight given to this term: increasing or decreasing it from our default setting = 5 incurs a performance drop of nearly 2 BLEU.", "Despite the performance gains attendant to this term, it is equally clear that agreement loss cannot fully account for CipherDAug's improvements over the baseline: in the naive setting where = 0 , CipherDAug still outperforms the baseline by approximately 1 BLEU.", "Learning BPE vocabularies jointly vs. 
separately From Table 7, we see that there is no significant impact on BLEU if we learn BPE vocabularies separately for each language or enciphered language from IWSLT14 De En.", "[Figure 2: validation BLEU vs. number of training updates for the baseline and for 1, 2, and 5 cipher keys (left), and the agreement-loss (KL-divergence) weight ablation (right).]", "This is consistent with results from Neubig and Hu (2018) in the context of multilingual NMT.", "Note that it is preferable to learn the BPEs jointly as this allows us to limit the total vocabulary size.", "When learned separately, we cannot control the combined vocabulary size, which may result in a larger or smaller vocabulary (and therefore, a different number of embedding parameters) than intended.", "Disentangling the effects of increased parameters in the embedding layer CipherDAug leverages the combined vocabularies of the original parallel bitext and enciphered copies of the source text.", "This necessarily increases the number of parameters in the embedding layer even though the rest of the network remains identical.", "To understand the effect of these extra parameters, we compare CipherDAug against the baseline Transformer model with different vocabulary and embedding sizes.", "Results from different settings are shown in Table 6.", "As we reduce the embedding dimension of our best model (CipherDAug with 2 keys) from 512 to 256, we observe a small change of -0.6 BLEU in the final scores.", "With 1 cipher key, however, our model exhibits a slight (statistically insignificant) improvement of +0.06 BLEU.", "These results show that the few extra embedding parameters in CipherDAug do not have an outsized impact on model performance, but we emphasize that reducing the dimensionality of the embedding layer diminishes its expressivity and is therefore not a completely fair comparison.", "The attention mechanism of a model might not reflect a model's true inner reasoning (Jain and Wallace, 2019; Moradi et al., 2019, 2021).", "To better analyze NMT models, Lee et al. (2018) introduce the notion of hallucinations.", "A model hallucinates when small perturbations in its input cause drastic changes in the output, implying it is not actually attentive to this input.", "Using Algorithm 2 of Raunak et al.
(2021), Table 8 shows the number of hallucinations on the IWSLT14 De-En test set for the baseline and CipherDAug models.", "We use the 50 most common subwords as perturbations.", "CipherDAug sees a 40% reduction in hallucinations relative to the baseline, suggesting it is more resilient against perturbations and more attentive to the content of its input.", "We argue that CipherDAug is effective in part because it reduces the impact of rare words.", "On average, the rarest subword in a ROTk enciphered sentence is significantly more frequent than the rarest subword in a plaintext sentence.", "(Note that in Table 6, the BPE vocabularies from the original source and target remain approximately the same across the baseline (12k) and CipherDAug (11.8k), even though the final vocabulary sizes of our models vary with the addition of the enciphered source(s).)", "This is apparent in an example like the following: hier ist es nötig, das, was wir unter politically correctness verstehen, immer wieder anzubringen.", "Figure 3 plots the frequency of each subword in this sentence and its ROTk enciphered variants.", "In the plaintext, we observe a series of rare subwords ically, _correct, and ness coming from the English borrowing.", "After encipherment, however, these are replaced by a variety of more common subwords jd, bmm, _d, and so on.", "The result is that the enciphered sentences have fewer rare subwords; this allows them to share more information with other sentences, and allows the more common enciphered tokens to inform the model's encoding of less common plaintext tokens.", "We reiterate that this trend holds across the whole corpus, and highlights the value of an augmentation scheme that allows a model to see many different segmentations of each input.", "This is not the only mechanism by which CipherDAug improves performance: we find improvements for tokens in every frequency bucket, not simply those which are rare (Figure 4).", "In Section 2.2, we argue that the agreement loss in (4) acts as a co-regularization term in a multi-view learning setting.", "Multi-view learning works best when the different views capture distinct information.", "In CipherDAug, this is accomplished by allowing enciphered inputs to receive different segmentations than plaintext inputs.", "[Figure 4: CipherDAug yields improvements for tokens of all frequencies and sentences of every length; f-measure by target word frequency and sentence BLEU by sentence length, Transformer vs. CipherDAug.]", "As evidence that the different views capture distinct information, we note that even after training with co-regularization the model remains sensitive to the choice of input encoding, as seen in cases such as Figure 6 where the model may produce any of three distinct outputs depending on whether it is given plain- or ciphertext as input.", "If all of the input views captured identical information we should expect no such variation, especially after training with an explicit co-regularization term.", "To further analyze CipherDAug, we turn to canonical correlation analysis (CCA; Hardoon et al. 2004; Raghu et al. 2017), which finds a linear transform to maximize correlation between values in two high-dimensional datasets.", "As detailed in Raghu et al.
"As detailed in Raghu et al. 2017, it is useful for measuring correlations between activations from different networks.", "For each IWSLT14 De-En test sentence, we save the activations from each layer of our baseline and CipherDAug models.", "For the CipherDAug model, we save activations on plaintext and enciphered inputs.", "For every pair of layers, we compute the projection-weighted CCA (PWCCA) between activations from those layers.", "See Raghu et al. 2017 for an explanation of CCA variants, including PWCCA.", "If this value is high (relative to a random baseline), this means that there is a linear transformation under which the activations from those layers are linearly correlated, implying that the layers capture similar information.", "Figure 5 plots the PWCCA between encoder states from the baseline and CipherDAug models, and between CipherDAug encoder states with different input encodings.", "It is immediately clear that CipherDAug learns similar, but not identical, representations for plain- and ciphertext inputs: the state of a layer in the de→en setting is generally predictive of the state of that same layer in the ROT-1(de)→en and ROT-2(de)→en settings.", "We emphasize, however, that representations for plain- and ciphertexts are not identical, as can be seen by comparing against the baseline model.", "Here, some layers in one model show a moderate correlation to every layer of the other model; other layers show a strong correlation with a different layer from the other model.", "This implies that, while the two models extract some of the same information, they do so at different depths in the encoder.", "Moreover, CipherDAug states from enciphered inputs present an entirely different pattern of correlations than plaintext inputs.", "This implies that CipherDAug not only learns different information than the baseline, but that these differences are distinct for plaintexts and ciphertexts.", "These results strengthen Section 4.4's claim that plain- and ciphertexts capture distinct information.", "Data augmentation (Sennrich et al., 2016b) can be broadly categorized into back-translation-based methods and those which perturb or change the input (Wang et al., 2018).", "Back-translation (Sennrich et al., 2016b) is arguably the de facto data augmentation method for NMT.", "[Figure 6: De-En BLEU by input encoding; Transformer 34.71; CipherDAug-2keys: de→en 37.53, ROT-1(de)→en 37.41, ROT-2(de)→en 37.35; example source: sein onkel floh mit ihrer heiligkeit in die diaspora, die leute nach nepal brachte.]", "Besides back-translating external monolingual data (Edunov et al., 2018), Li et al. (2019) forward-translate the source (Zhang and Zong, 2016) and/or backward-translate the target side (Sennrich et al., 2016a) of the original (in-domain) parallel data.", "Our technique produces lexically diverse samples using only the original source data, rather than relying on model predictions, which may be of limited quality.", "Belinkov and Bisk (2018) showed that NMT models can be sensitive to orthographic variation, and that training with noise improves their robustness (Khayrallah and Koehn, 2018).",
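The flavor of this layer-correlation analysis can be reproduced with off-the-shelf CCA. The sketch below uses scikit-learn on synthetic stand-ins for per-sentence activations and omits the projection weighting that distinguishes PWCCA from plain CCA, so it illustrates the comparison rather than replicating the paper's exact measurement.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
# Toy stand-ins for encoder activations from two models
# (n_sentences x hidden_dim); real values would come from the NMT encoders.
X = rng.standard_normal((500, 64))
Y = X @ rng.standard_normal((64, 64)) + 0.1 * rng.standard_normal((500, 64))

cca = CCA(n_components=8).fit(X, Y)
Xc, Yc = cca.transform(X, Y)
corrs = [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(8)]
print(np.round(corrs, 3))  # high values => the layers encode similar information
```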
"Common noising techniques include token dropping (Zhang et al., 2020), word replacement (Xie et al., 2017; Wu et al., 2021a), Word-Dropout (randomly zeroing out word embeddings; Sennrich et al. 2016a; Gal and Ghahramani 2016), and adding synthetic noise by swapping random characters or replacing words with common typos (Karpukhin et al., 2019).", "Adding enciphered data is distinct from noising, as the ciphertexts are generated deterministically and follow the same distribution as the underlying natural language, simply using shifted letters of the same alphabet.", "CipherDAug can also apply to non-alphabetic scripts (e.g., Mandarin, Japanese) by incrementing Unicode codepoints modulo the size of the block containing the script in question.", "To extend the support of the empirical data distribution, Norouzi et al. (2016) introduced RAML on the target side; Wang et al. (2018) proposed SwitchOut as a more general method, which they applied to the source side.", "Special cases of SwitchOut include Word-Dropout and sequence-mixing (Guo et al., 2020a), which exchanges words between similar source sentences to encourage compositional behaviour.", "Such methods can generate many different samples for each sentence because of the large vocabulary from which replacements are drawn; despite this, they often give poor coverage.", "In contrast, CipherDAug guarantees lexically diverse examples with semantic equivalence to the source sentences, without having to choose specific replacements.", "Adversarial techniques (Gao et al., 2019) perform soft perturbations of tokens or spans (Takase and Kiyono, 2021; Karpukhin et al., 2019).", "An advantage of soft replacements over hard ones is that they take into account the context of the tokens being replaced (Liu et al., 2021; Mohiuddin et al., 2021).", "These methods require architectural changes to a model, whereas CipherDAug does not.", "Ciphertext-based augmentation is orthogonal to most other data-augmentation methods and can be seamlessly combined with these to jointly improve neural machine translation.", "We introduce CipherDAug, a novel technique for augmenting translation data using ROT-k enciphered copies of the source corpus.", "This technique requires no external data, and significantly outperforms a variety of strong existing data augmentation techniques.", "We have shown that an agreement loss term, which minimizes divergence between representations of plain- and ciphertext inputs, is crucial to the performance of this model, and we have explained the function of this loss term with reference to co-regularization techniques from multi-view learning.", "We have also demonstrated other means by which enciphered data can improve model performance, such as by reducing the impact of rare words.", "Overall, CipherDAug shows promise as a simple, out-of-the-box approach to data augmentation which improves on and combines easily with existing techniques, and which yields particularly strong results in low-resource settings.", "We would like to thank the anonymous reviewers for their helpful comments and Kumar Abhishek for the numerous discussions that helped shape this paper.", "The research was partially supported by the Natural Sciences and Engineering Research Council of Canada grants NSERC RGPIN-2018-06437 and RGPAS-2018-522574 and a Department of National Defence (DND) and NSERC grant DGDND-2018-00025 to the third author." ]
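The note above about non-alphabetic scripts suggests a codepoint-level variant of the cipher. A sketch under that interpretation follows; the choice of the Hiragana block as the default range is illustrative, not taken from the paper.

```python
def rot_k_block(text: str, k: int, block=(0x3041, 0x3096)) -> str:
    # Shift codepoints inside a given Unicode block by k, wrapping around
    # within the block; characters outside the block pass through unchanged.
    # The default (start, end) range covers the Hiragana letters.
    start, end = block
    size = end - start + 1
    return "".join(
        chr(start + (ord(c) - start + k) % size) if start <= ord(c) <= end else c
        for c in text
    )

print(rot_k_block("すし", 1))  # each kana shifted one codepoint within the block
```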
[ "objective", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "result", "objective", "abstain", "other", "other" ]
[ "Natural language often exhibits inherent hierarchical structure ingrained with complex syntax and semantics.", "However, most state-of-the-art deep generative models learn embeddings only in Euclidean vector space, without accounting for this structural property of language.", "We investigate text generation in a hyperbolic latent space to learn continuous hierarchical representations.", "An Adversarial Poincar Variational Autoencoder (APo-VAE) is presented, where both the prior and variational posterior of latent variables are defined over a Poincar ball via wrapped normal distributions.", "By adopting the primal-dual formulation of Kullback-Leibler divergence, an adversarial learning procedure is introduced to empower robust model training.", "Extensive experiments in language modeling, unaligned style transfer, and dialog-response generation demonstrate the effectiveness of the proposed APo-VAE model over VAEs in Euclidean latent space, thanks to its superb capabilities in capturing latent language hierarchies in hyperbolic space.", "The Variational Autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is a generative model widely applied to language-generation tasks, which propagates latent codes drawn from a simple prior to manifest data samples through a decoder.", "The generative model is augmented by an inference network, which feeds observed data samples through an encoder to yield a distribution on the corresponding latent codes.", "Since natural language often manifests a latent hierarchical structure, it is desirable for the latent code in a VAE to reflect such inherent language structure, so that the generated text can be more natural and expressive.", "An example of language structure is illustrated in Figure 1, where sentences are organized into a tree structure.", "The root node corresponds to simple sentences ( e.g. , Yes ), while nodes on outer leaves represent sentences with more complex syntactic structure and richer, more specific semantic meaning ( e.g. , The food in the restaurant is awesome ) 1 .", "In existing VAE-based generative models, such structures are not explicitly considered.", "The latent code often employs a simple Gaussian prior, and the posterior is approximated as a Gaussian with diagonal covariance matrix.", "Such embeddings assume Euclidean structure, which is inadequate in capturing geometric structure illustrated in Figure", "1. While some variants have been proposed to enrich the prior distributions (Xu and Durrett, 2018; Wang et al., 2019a,b; Shi et al., 2019), there is no evidence that structural information in language can be recovered effectively by the model.", "2019; Nickel and Kiela, 2017).", "Informally, hyperbolic space can be considered as a continuous map of trees.", "For example, a Poincar disk (a hyperbolic space with two dimensions) can represent any tree with arbitrary low distortion (De Sa et al., 2018; Sarkar, 2011).", "In Euclidean space, however, it is difficult to learn such structural representation even with infinite dimensions (Linial et al., 1995).", "Motivated by these observations, we propose Adversarial Poincar Variational Autoencoder (APo-VAE), a text embedding and generation model based on hyperbolic representations, where the latent code is encouraged to capture the underlying tree-like structure in language.", "Such latent structure provides more control of the generated sentences, i.e. 
"In practice, we define both the prior and the variational posterior of the latent code over a Poincaré ball, via the use of a wrapped normal distribution (Nagano et al., 2019).", "To obtain more stable model training and learn more flexible representation of the latent code, we exploit the primal-dual formulation of the Kullback-Leibler (KL) divergence (Dai et al., 2018) based on the Fenchel duality (Rockafellar et al., 1966), to adversarially optimize the variational bound.", "Unlike the primal form that relies on Monte Carlo approximation (Mathieu et al., 2019), our dual formulation bypasses the need for tractable posterior likelihoods via the introduction of an auxiliary dual function.", "We apply the proposed approach to language modeling, unaligned style transfer, and dialog-response generation.", "For language modeling, in order to enhance the distribution complexity of the prior, we use an additional variational mixture of posteriors prior (VampPrior) design (Tomczak and Welling, 2018) for the wrapped normal distribution.", "Specifically, VampPrior uses a mixture distribution with components from variational posteriors, coupling the parameters of the prior and variational posterior.", "For unaligned style transfer, we add a sentiment classifier to our model, and disentangle content and sentiment information by using adversarial training (Zhao et al., 2017a).", "For dialog-response generation, a conditional model variant of APo-VAE is designed to take into account the dialog context.", "Posterior collapse is a major obstacle preventing efficient learning of a VAE on text data.", "In posterior collapse, the encoder learns an approximate posterior similar to the prior, and the decoder tends to ignore the latent code for generation.", "Experiments show that our proposed model can effectively avoid posterior collapse.", "We hypothesize that this is due to the use of a more informative prior in hyperbolic space that enhances the complexity of the latent representation, which aligns well with previous work (Tomczak and Welling, 2018; Wang et al., 2019a) that advocates a better prior design.", "Our main contributions are summarized as follows.", "(i) We present the Adversarial Poincaré Variational Autoencoder (APo-VAE), a novel approach to text embedding and generation based on hyperbolic latent representations.", "(ii) In addition to the use of a wrapped normal distribution, an adversarial learning procedure and a VampPrior design are incorporated for robust model training.", "(iii) Experiments on language modeling, unaligned style transfer, and dialog-response generation benchmarks demonstrate the superiority of the proposed approach compared to Euclidean VAEs, as it benefits from capturing informative latent hierarchies in natural language.", "Let $\mathcal{X} = \{\mathbf{x}_i\}_{i=1}^{N}$ be a dataset of sentences, where each $\mathbf{x}_i = [x_{i,1}, \ldots, x_{i,T_i}]$ is a sequence of tokens of length $T_i$.", "Our goal is to learn $p_\theta(\mathbf{x})$ that best models the observed sentences so that the expected log-likelihood is maximized, i.e., $\mathcal{L}(\theta) = \frac{1}{N}\sum_i \log p_\theta(\mathbf{x}_i)$.",
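The VampPrior mentioned above admits a compact sketch: the prior is a uniform mixture of the encoder's posteriors evaluated at K learnable pseudo-inputs. The toy below uses a Euclidean Gaussian stand-in for the encoder; the paper's version would use the hyperbolic wrapped normal instead, and the `encode` interface is an assumption.

```python
import torch
from torch.distributions import Normal

def vamp_log_prior(z, pseudo_inputs, encode):
    # log p(z) = log (1/K) sum_k q(z | s_k): a uniform mixture of variational
    # posteriors at K learnable pseudo-inputs s_k.
    logs = torch.stack([encode(s).log_prob(z).sum(-1) for s in pseudo_inputs])
    return torch.logsumexp(logs, 0) - torch.log(torch.tensor(float(len(pseudo_inputs))))

pseudo = torch.randn(10, 32)  # K = 10 pseudo-inputs, learned in practice
print(vamp_log_prior(torch.zeros(32), pseudo, lambda s: Normal(s, 1.0)))
```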
"The variational autoencoder (VAE) (Kingma and Welling, 2013; Chen et al., 2018b) considers a latent-variable model $p_\theta(\mathbf{x}, \mathbf{z})$ to represent sentences, with an auxiliary encoder that draws samples of the latent code $\mathbf{z}$ from the conditional density $q_\phi(\mathbf{z}|\mathbf{x})$, known as the approximate posterior.", "Given a latent code $\mathbf{z}$, the decoder samples a sentence from the conditional density $p_\theta(\mathbf{x}|\mathbf{z}) = \prod_t p_\theta(x_t | x_{<t}, \mathbf{z})$, where the decoding pass takes an auto-regressive form.", "Together with the prior $p(\mathbf{z})$, the model is given by the joint $p_\theta(\mathbf{x}, \mathbf{z}) = p_\theta(\mathbf{x}|\mathbf{z})\,p(\mathbf{z})$.", "The VAE leverages the approximate posterior to derive an evidence lower bound (ELBO) to the (intractable) marginal log-likelihood $\log p_\theta(\mathbf{x}) = \log \int p_\theta(\mathbf{x}, \mathbf{z})\,d\mathbf{z}$: $\mathcal{L}(\mathbf{x}; \theta, \phi) = \mathbb{E}_{\mathbf{z} \sim q_\phi(\mathbf{z}|\mathbf{x})}\left[\log \frac{p_\theta(\mathbf{x}, \mathbf{z})}{q_\phi(\mathbf{z}|\mathbf{x})}\right]$, (1) where $(\theta, \phi)$ are jointly optimized during training, and the gap is given by the decomposition $\log p_\theta(\mathbf{x}) = \mathcal{L}(\mathbf{x}; \theta, \phi) + D_{\mathrm{KL}}(q_\phi(\mathbf{z}|\mathbf{x}) \,\|\, p_\theta(\mathbf{z}|\mathbf{x}))$, (2) where $D_{\mathrm{KL}}$ denotes the Kullback-Leibler divergence.", "Alternatively, the ELBO can be written as: $\mathcal{L}(\mathbf{x}; \theta, \phi) = \mathbb{E}_{\mathbf{z} \sim q_\phi(\mathbf{z}|\mathbf{x})}[\log p_\theta(\mathbf{x}|\mathbf{z})] - D_{\mathrm{KL}}(q_\phi(\mathbf{z}|\mathbf{x}) \,\|\, p(\mathbf{z}))$, (3) where the first (conditional likelihood) and second (KL) terms respectively characterize reconstruction and generalization capabilities.", "Intuitively, a good model is expected to strike a balance between good reconstruction and generalization.", "In most cases, both the prior and variational posterior are assumed to be Gaussian for computational convenience.", "However, such over-simplified assumptions may not be ideal for capturing the intrinsic characteristics of data that have unique geometrical structure, such as natural language.", "Riemannian manifolds can provide a more powerful and meaningful embedding space for complex data with highly non-Euclidean structure that cannot be effectively captured in a vectorial form (e.g., social networks, biology, and computer graphics).",
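For the standard diagonal-Gaussian case discussed here, the two ELBO terms of (3) have the familiar closed form sketched below; the reconstruction value is a placeholder for an actual decoder log-likelihood.

```python
import torch

def elbo_terms(mu, logvar, log_px_given_z):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ) in closed form;
    # the ELBO of eq. (3) is then reconstruction - KL.
    kl = 0.5 * torch.sum(logvar.exp() + mu.pow(2) - 1.0 - logvar, dim=-1)
    return log_px_given_z, kl

mu, logvar = torch.zeros(4, 32), torch.zeros(4, 32)
rec = torch.full((4,), -50.0)        # stand-in for log p(x|z) per sentence
rec_term, kl = elbo_terms(mu, logvar, rec)
print((rec_term - kl).mean())        # tensor(-50.) since KL = 0 here
```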
"Of particular interest is the hyperbolic space (Ganea et al., 2018), where (i) the relatively simple geometry allows tractable computations, and (ii) the exponential growth of distance in finite dimensions naturally embeds rich hierarchical structure in a compact form.", "Riemannian Geometry.", "An $n$-dimensional Riemannian manifold $\mathcal{M}^n$ is a set of points locally similar to a linear space $\mathbb{R}^n$.", "At each point $\mathbf{x}$ of the manifold $\mathcal{M}^n$, we can define a real vector space $\mathcal{T}_{\mathbf{x}}\mathcal{M}^n$ that is tangent to $\mathbf{x}$, along with an associated metric tensor $g_{\mathbf{x}}(\cdot, \cdot): \mathcal{T}_{\mathbf{x}}\mathcal{M}^n \times \mathcal{T}_{\mathbf{x}}\mathcal{M}^n \to \mathbb{R}$, which is an inner product on $\mathcal{T}_{\mathbf{x}}\mathcal{M}^n$.", "Intuitively, a Riemannian manifold behaves like a vector space only in its infinitesimal neighborhood, allowing the generalization of common notions like angle, straight line, and distance to a smooth manifold.", "For each tangent space $\mathcal{T}_{\mathbf{x}}\mathcal{M}^n$, there exists a specific one-to-one map $\exp_{\mathbf{x}}(\mathbf{v}): \mathcal{T}_{\mathbf{x}}\mathcal{M}^n \to \mathcal{M}^n$ from an $\epsilon$-ball at the origin of $\mathcal{T}_{\mathbf{x}}\mathcal{M}^n$ to a neighborhood of $\mathbf{x}$ on $\mathcal{M}^n$, called the exponential map.", "We refer to the inverse of an exponential map as the logarithm map, denoted $\log_{\mathbf{x}}(\mathbf{y}): \mathcal{M}^n \to \mathcal{T}_{\mathbf{x}}\mathcal{M}^n$.", "In addition, a parallel transport $P_{\mathbf{x} \to \mathbf{x}'}: \mathcal{T}_{\mathbf{x}}\mathcal{M}^n \to \mathcal{T}_{\mathbf{x}'}\mathcal{M}^n$ intuitively transports tangent vectors along a straight line between $\mathbf{x}$ and $\mathbf{x}'$, so that they remain parallel.", "This is the basic machinery that allows us to generalize distributions and computations in the hyperbolic space, as detailed in later sections.", "Poincaré Ball Model.", "Hyperbolic geometry is one type of non-Euclidean geometry with a constant negative curvature.", "As a classical example of hyperbolic space, an $n$-dimensional Poincaré ball with curvature parameter $c \geq 0$ (i.e., radius $1/\sqrt{c}$) can be denoted as $\mathcal{B}^n_c := \{\mathbf{z} \in \mathbb{R}^n \mid c\|\mathbf{z}\|^2 < 1\}$, with its metric tensor given by $g^c_{\mathbf{z}} = \lambda_{\mathbf{z}}^2 g^E$, where $\lambda_{\mathbf{z}} = \frac{2}{1 - c\|\mathbf{z}\|^2}$ and $g^E$ denotes the regular Euclidean metric tensor.", "Intuitively, as $\mathbf{z}$ moves closer to the boundary $1/\sqrt{c}$, the hyperbolic distance between $\mathbf{z}$ and a nearby $\mathbf{z}'$ diverges at a rate of $\frac{1}{1 - c\|\mathbf{z}\|^2}$.", "This implies significant representation capacity, as very dissimilar objects can be encoded on a compact domain.", "Note that as $c \to 0$, the model recovers the Euclidean space $\mathbb{R}^n$, i.e., the lack of hierarchy.",
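The boundary behavior described above is easy to verify numerically. The sketch below computes the closed-form geodesic distance on the unit Poincaré ball (the c = 1 case); note how the distance grows quickly as a point approaches the rim.

```python
import numpy as np

def poincare_distance(x, y):
    # Geodesic distance on the unit Poincare ball (curvature c = 1); the
    # denominator shrinks near the boundary, which is what gives the ball
    # its tree-like embedding capacity.
    x, y = np.asarray(x), np.asarray(y)
    num = 2.0 * np.sum((x - y) ** 2)
    den = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + num / den)

print(poincare_distance([0.0, 0.0], [0.5, 0.0]))   # ~1.10
print(poincare_distance([0.0, 0.0], [0.99, 0.0]))  # ~5.29, near the rim
```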
"In comparison, a larger $c$ implies a stronger hierarchical organization.", "The fact that APo-VAE outperforms the standard VAE evidences the existence of the hierarchical organization in NLP data.", "Mathematical Operations.", "We review the closed-form mathematical operations that enable differentiable training for hyperbolic space models, namely the hyperbolic algebra (vector addition) and tangent space computations (exponential/logarithm map and parallel transport).", "The hyperbolic algebra is formulated under the framework of gyrovector spaces (Ungar, 2008), with the addition of two points $\mathbf{z}, \mathbf{z}' \in \mathcal{B}^n_c$ given by the Möbius addition: $\mathbf{z} \oplus_c \mathbf{z}' := \frac{(1 + 2c\langle\mathbf{z}, \mathbf{z}'\rangle + c\|\mathbf{z}'\|^2)\,\mathbf{z} + (1 - c\|\mathbf{z}\|^2)\,\mathbf{z}'}{1 + 2c\langle\mathbf{z}, \mathbf{z}'\rangle + c^2\|\mathbf{z}\|^2\|\mathbf{z}'\|^2}$. (4)", "For any point $\boldsymbol{\mu} \in \mathcal{B}^n_c$, the exponential map and the logarithmic map are given for $\mathbf{u} \neq \mathbf{0}$ and $\mathbf{y} \neq \boldsymbol{\mu}$ by $\exp^c_{\boldsymbol{\mu}}(\mathbf{u}) := \boldsymbol{\mu} \oplus_c \left(\tanh\left(\frac{\sqrt{c}\,\lambda^c_{\boldsymbol{\mu}}\|\mathbf{u}\|}{2}\right) \frac{\mathbf{u}}{\sqrt{c}\,\|\mathbf{u}\|}\right)$ and $\log^c_{\boldsymbol{\mu}}(\mathbf{y}) := \frac{2}{\sqrt{c}\,\lambda^c_{\boldsymbol{\mu}}} \tanh^{-1}\left(\sqrt{c}\,\|\boldsymbol{\mu}_{-,\mathbf{y}}\|\right) \frac{\boldsymbol{\mu}_{-,\mathbf{y}}}{\|\boldsymbol{\mu}_{-,\mathbf{y}}\|}$, (5) where $\boldsymbol{\mu}_{-,\mathbf{y}} := (-\boldsymbol{\mu}) \oplus_c \mathbf{y}$.", "Note that the Poincaré ball model is geodesically complete in the sense that $\exp^c_{\boldsymbol{\mu}}$ is well-defined on the full tangent space $\mathcal{T}_{\boldsymbol{\mu}}\mathcal{B}^n_c$.", "The parallel transport map from a vector $\mathbf{v} \in \mathcal{T}_{\mathbf{0}}\mathcal{B}^n_c$ to another tangent space $\mathcal{T}_{\boldsymbol{\mu}}\mathcal{B}^n_c$ is given by $P^c_{\mathbf{0} \to \boldsymbol{\mu}}(\mathbf{v}) = \log^c_{\boldsymbol{\mu}}(\boldsymbol{\mu} \oplus_c \exp^c_{\mathbf{0}}(\mathbf{v})) = \frac{\lambda^c_{\mathbf{0}}}{\lambda^c_{\boldsymbol{\mu}}}\,\mathbf{v}$.", "We first introduce our hyperbolic encoder and decoder, and how to apply reparametrization.", "We then provide detailed descriptions of the model implementation, explaining how the primal-dual form of the KL divergence can help stabilize training.", "Finally, we describe how to adopt the VampPrior (Tomczak and Welling, 2018) to enhance performance.", "A summary of our model scheme is provided in Figure 2.",
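The two core operations of (4) and (5) can be written in a few lines of NumPy. This is a sketch of the formulas as reconstructed here, not the authors' implementation; numerical safeguards (e.g., clipping near the boundary) are omitted.

```python
import numpy as np

def mobius_add(z, w, c=1.0):
    # Mobius addition on the Poincare ball, eq. (4).
    zw, z2, w2 = np.dot(z, w), np.dot(z, z), np.dot(w, w)
    num = (1 + 2 * c * zw + c * w2) * z + (1 - c * z2) * w
    return num / (1 + 2 * c * zw + c**2 * z2 * w2)

def exp_map(mu, u, c=1.0):
    # Exponential map exp^c_mu(u), eq. (5): scale the tangent vector by
    # tanh of half its conformal length, then Mobius-translate to mu.
    n = np.linalg.norm(u)
    if n == 0.0:
        return mu
    lam = 2.0 / (1.0 - c * np.dot(mu, mu))   # conformal factor at mu
    w = np.tanh(np.sqrt(c) * lam * n / 2.0) * u / (np.sqrt(c) * n)
    return mobius_add(mu, w, c)

z = exp_map(np.array([0.2, 0.1]), np.array([3.0, -4.0]))
print(z, np.linalg.norm(z) < 1.0)  # even a large tangent vector stays in the ball
```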
"3.1 Flexible Wrapped Distribution Encoder.", "We begin by generalizing the standard normal distribution to a Poincaré ball (Ganea et al., 2018).", "While there are a few competing definitions of the hyperbolic normal, we choose the wrapped normal as our prior and variational posterior, largely due to its flexibility for more expressive generalization.", "A wrapped normal distribution $\mathcal{N}_{\mathcal{B}^n_c}(\boldsymbol{\mu}, \Sigma)$ is defined as follows: (i) sample a vector $\mathbf{v}$ from $\mathcal{N}(\mathbf{0}, \Sigma)$, (ii) parallel-transport $\mathbf{v}$ to $\mathbf{u} := P^c_{\mathbf{0} \to \boldsymbol{\mu}}(\mathbf{v})$, and (iii) use the exponential map to project $\mathbf{u}$ back to $\mathbf{z} := \exp^c_{\boldsymbol{\mu}}(\mathbf{u})$.", "For approximate posteriors, $(\boldsymbol{\mu}, \Sigma)$ depends on $\mathbf{x}$.", "We further generalize the (restrictive) hyperbolic wrapped normal by acknowledging that, under the implicit VAE (Fang et al., 2019) framework, one does not need the approximate posterior $q_\phi(\mathbf{z}|\mathbf{x})$ to be analytically tractable.", "This allows us to replace the tangent space sampling step $\mathbf{v} \sim \mathcal{N}(\mathbf{0}, \Sigma)$ in (7) with a more flexible implicit distribution from which we draw samples as $\mathbf{v} := G(\mathbf{x}, \boldsymbol{\epsilon}; \phi_1)$ for $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$.", "Note that now $\boldsymbol{\mu} := F(\mathbf{x}; \phi_2)$ can be regarded as a deterministic displacement vector that anchors embeddings to the correct semantic neighborhood, allowing the stochastic $\mathbf{v}$ to focus only on modeling the local uncertainty of the semantic embedding.", "The synergy between the deterministic and stochastic parts enables efficient representation learning relative to existing alternatives.", "For simplicity, we denote the encoder neural network as EncNet, which contains $G$ and $F$, with parameters $\phi = \{\phi_1, \phi_2\}$.", "To build a geometry-aware decoder for a hyperbolic latent code, we follow Ganea et al. (2018) and use a generalized linear function analogously defined in the hyperbolic space.", "A Euclidean linear function takes the form $f_{\mathbf{a},\mathbf{b}}(\mathbf{z}) = \langle\mathbf{a}, \mathbf{z} - \mathbf{b}\rangle = \operatorname{sign}(\langle\mathbf{a}, \mathbf{z} - \mathbf{b}\rangle)\,\|\mathbf{a}\|\,d_E(\mathbf{z}, H_{\mathbf{a},\mathbf{b}})$, where $\mathbf{a}$ is the coefficient, $\mathbf{b}$ is the intercept, $H_{\mathbf{a},\mathbf{b}}$ is a hyperplane passing through $\mathbf{b}$ with $\mathbf{a}$ as the normal direction, and $d_E(\mathbf{z}, H)$ is the distance between $\mathbf{z}$ and hyperplane $H$.", "The counterpart in the Poincaré ball analogously writes $f^c_{\mathbf{a},\mathbf{b}}(\mathbf{z}) = \operatorname{sign}(\langle\mathbf{a}, \log^c_{\mathbf{b}}(\mathbf{z})\rangle_{\mathbf{b}})\,\|\mathbf{a}\|_{\mathbf{b}}\,d_{\mathcal{B}_c}(\mathbf{z}, H^c_{\mathbf{a},\mathbf{b}})$, (8) where $H^c_{\mathbf{a},\mathbf{b}} = \{\mathbf{z} \in \mathcal{B}^n_c \mid \langle\mathbf{a}, \log^c_{\mathbf{b}}(\mathbf{z})\rangle_{\mathbf{b}} = 0\}$ and $d_{\mathcal{B}_c}(\mathbf{z}, H^c_{\mathbf{a},\mathbf{b}}) = \frac{1}{\sqrt{c}} \sinh^{-1}\left(\frac{2\sqrt{c}\,|\langle\mathbf{b}_{-,\mathbf{z}}, \mathbf{a}\rangle|}{(1 - c\|\mathbf{b}_{-,\mathbf{z}}\|^2)\,\|\mathbf{a}\|}\right)$ are the gyroplane and the distance between $\mathbf{z}$ and the gyroplane, respectively.", "Specifically, we use the hyperbolic linear function in (8) to extract features from the Poincaré embedding $\mathbf{z}$.", "The feature $f^c_{\mathbf{a},\mathbf{b}}(\mathbf{z})$ will be the input to the RNN decoder.", "We denote the combined network of $f^c_{\mathbf{a},\mathbf{b}}$ and the RNN decoder as DecNet, where the parameters $\theta$ contain $\mathbf{a}$ and $\mathbf{b}$.", "While it is straightforward to compute the ELBO (3) via Monte Carlo estimates using the explicit wrapped normal density (Mathieu et al., 2019), we empirically observe that: (i) the normal assumption restricts the expressiveness of the model, and (ii) the wrapped normal likelihood makes the training unstable.", "Therefore, we appeal to a primal-dual view of VAE training to overcome such difficulties (Rockafellar et al., 1966; Dai et al., 2018; Tao et al., 2019; Fang et al., 2019).", "Specifically, the KL term in (3) can be reformulated as: $D_{\mathrm{KL}}(q_\phi(\mathbf{z}|\mathbf{x}) \,\|\, p(\mathbf{z})) = \max_{\nu_\psi} \left\{ \mathbb{E}_{\mathbf{z} \sim q_\phi(\mathbf{z}|\mathbf{x})}\,\nu_\psi(\mathbf{x}, \mathbf{z}) - \mathbb{E}_{\mathbf{z} \sim p(\mathbf{z})}\,\exp \nu_\psi(\mathbf{x}, \mathbf{z}) \right\}$, (9) where $\nu_\psi(\mathbf{x}, \mathbf{z})$ is the (auxiliary) dual function (i.e., a neural network) with parameters $\psi$.",
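Composing the operations above gives the three-step wrapped-normal sampler used for the prior and posterior. The fragment below inlines the parallel transport and exponential map for self-containment; it is a sketch of the sampling recipe as described, not the authors' code.

```python
import numpy as np

def sample_wrapped_normal(mu, sigma, c=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    # (i) v ~ N(0, sigma^2 I) in the tangent space at the origin.
    v = sigma * rng.standard_normal(mu.shape)
    # (ii) Parallel-transport to mu: u = (lambda_0 / lambda_mu) v, lambda_0 = 2.
    lam_mu = 2.0 / (1.0 - c * mu @ mu)
    u = (2.0 / lam_mu) * v
    # (iii) Project with the exponential map z = exp^c_mu(u).
    n = np.linalg.norm(u)
    if n == 0.0:
        return mu
    w = np.tanh(np.sqrt(c) * lam_mu * n / 2.0) * u / (np.sqrt(c) * n)
    zw, z2, w2 = mu @ w, mu @ mu, w @ w   # inline Mobius addition mu (+)_c w
    return ((1 + 2*c*zw + c*w2) * mu + (1 - c*z2) * w) / (1 + 2*c*zw + c**2 * z2 * w2)

print(sample_wrapped_normal(np.array([0.3, 0.0]), 0.1))
```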
"The primal-dual view of the KL term enhances the approximation ability, while also being computationally tractable.", "Meanwhile, since the density function in the original KL term in (3) is replaced by the dual function $\nu_\psi(\mathbf{x}, \mathbf{z})$, we can avoid direct computation with respect to the probability density function of the wrapped normal distribution.", "To train our proposed APo-VAE with the primal-dual form of the VAE objective, we follow the training schemes of coupled variational Bayes (CVB) (Dai et al., 2018) and implicit VAE (Fang et al., 2019), which optimize the objective adversarially.", "Specifically, we update $\psi$ in the dual function $\nu_\psi(\mathbf{x}, \mathbf{z})$ to maximize: $\mathcal{L}_1 = \mathbb{E}_{\mathbf{x} \sim \mathcal{X}}\left[\mathbb{E}_{\mathbf{z} \sim q_\phi(\mathbf{z}|\mathbf{x})}\,\nu_\psi(\mathbf{x}, \mathbf{z}) - \mathbb{E}_{\mathbf{z} \sim p(\mathbf{z})}\,\exp \nu_\psi(\mathbf{x}, \mathbf{z})\right]$, (10)", "[Algorithm 1: Training procedure of APo-VAE.]", "where $\mathbb{E}_{\mathbf{x} \sim \mathcal{X}}[\cdot]$ denotes the expectation over the empirical distribution on observations.", "Accordingly, parameters $\theta$ and $\phi$ are updated to maximize: $\mathcal{L}_2 = \mathbb{E}_{\mathbf{x} \sim \mathcal{X}}\,\mathbb{E}_{\mathbf{z} \sim q_\phi(\mathbf{z}|\mathbf{x})}\left[\log p_\theta(\mathbf{x}|\mathbf{z}) - \nu_\psi(\mathbf{x}, \mathbf{z})\right]$. (11)", "Note that the term $\mathbb{E}_{\mathbf{x} \sim \mathcal{X}}\,\mathbb{E}_{\mathbf{z} \sim q_\phi(\mathbf{z}|\mathbf{x})}\,\nu_\psi(\mathbf{x}, \mathbf{z})$ is maximized in (10) while it is minimized in (11), i.e., adversarial learning.", "In other words, one can consider the dual function as a discriminative network that distinguishes between the prior $\mathbf{z} \sim p(\mathbf{z})$ and the variational posterior $\mathbf{z} \sim q_\phi(\mathbf{z}|\mathbf{x})$, both of which are paired with the input data $\mathbf{x} \sim \mathcal{X}$.", "While the use of a standard normal prior is a simple choice in Euclidean space, we argue that it induces bias in the hyperbolic setup.", "This is because natural sentences have specific meaning, and it is unrealistic to have the bulk of mass concentrated in the center (this holds for low dimensions; in high dimensions, the mass concentrates near the surface of a sphere, which may partly explain why cosine similarity works favorably compared with Euclidean distance for NLP applications).", "To reduce the induced bias from a pre-fixed prior, we adopt the VampPrior framework (Tomczak and Welling, 2018), which is a mixture of variational posteriors conditioned on learnable pseudo data points.", "Specifically, we consider the prior as a learnable distribution given by $p_\lambda(\mathbf{z}) = \frac{1}{K}\sum_{k=1}^{K} q_\phi(\mathbf{z}|\mathbf{s}_k)$, (12) where $q_\phi$ is the learned approximate posterior, and we call the parameters $\lambda := \{\mathbf{s}_k\}_{k=1}^{K}$ pseudo-inputs.", "Intuitively, $p_\lambda(\mathbf{z})$ seeks to match the aggregated posterior (Makhzani et al., 2015) $q(\mathbf{z}) = \frac{1}{N}\sum_{i=1}^{N} q_\phi(\mathbf{z}|\mathbf{x}_i)$ in a cost-efficient manner via parameterizing the pseudo-inputs.", "By replacing the prior distribution $p(\mathbf{z})$ in (10) with $p_\lambda(\mathbf{z})$, we complete the final objective of the proposed APo-VAE.", "The detailed training procedure is summarized in Algorithm 1.", "
4 Related Work VAE for Text Generation.", "Many VAE models have been proposed for text generation, most of which focus on solving the issue of posterior collapse.", "The most popular strategy is to alter the training dynamics, keeping the encoder away from bad local optima.", "For example, variants of KL annealing (Bowman et al., 2016; Zhao et al., 2018; Fu et al., 2019) dynamically adjust the weight on the KL penalty term as training progresses.", "Lagging VAE (He et al., 2019) aggressively optimizes the encoder before each decoder update, to overcome the imbalanced training issue between the encoder and decoder.", "Alternative strategies have also been proposed based on competing theories or heuristics.", "-VAE (Razavi et al., 2019) tackles this issue by enforcing a minimum KL divergence between the posterior and the prior.", "Yang et al. (2017) blames mode-collapse on the auto-regressive design of the decoder and advocates alternative architectures.", "A semi-amortized inference network is considered by Kim et al. (2018) to bridge the amortization gap between log-likelihood and the ELBO.", "Recent work has also shown that posterior collapse can be ameliorated by using more expressive priors and variational posteriors other than Gaussian.", "Flow-based VAE is considered in Ziegler and Rush (2019) to enhance the flexibility of prior distributions.", "A topic-guided prior is proposed in Wang et al. (2019a) to achieve more controllable text generation.", "Fang et al. (2019) explores implicit sample-based representations, without requiring an explicit density form for the approximate posterior.", "Xu and Durrett (2018) considers replacing the Gaussian with the spherical von Mises-Fisher (vMF) distribution.", "Compared to these prior arts, our model features structured representation in hyperbolic space, which not only captures latent hierarchies but also combats posterior collapse.", "Hyperbolic Space Representation Learning.", "There has been a recent surge of interest in representation learning in hyperbolic space, largely due to its exceptional effectiveness modeling data with underlying graphical structure (Chamberlain et al., 2017), such as relation nets (Nickel and Kiela, 2017).", "In the context of NLP, hyperbolic geometry has been considered for word embeddings (Tifrea et al., 2018).", "A popular vehicle for hyperbolic representation learning is the autoencoder (AE) framework (Grattarola et al., 2019; Ovinnikov, 2019), where the decoders are built to efficiently exploit the hyperbolic geometry (Ganea et al., 2018).", "Closest to our APo-VAE are the works of hyperbolic VAEs (Mathieu et al., 2019; Nagano et al., 2019), where wrapped normal distributions have been used.", "Drawing power from the dual form of the KL, the proposed APo-VAE highlights an implicit posterior and data-driven prior, showing improved training stability.", "We evaluate the proposed model on three tasks: ( i ) language modeling, ( ii ) unaligned style transfer, and ( iii ) dialog-response generation, with quantitative results, human evaluation and qualitative analysis.", "Datasets.", "We use three datasets for language modeling: Penn Treebank (PTB) (Marcus et al., 1993), Yahoo and Yelp corpora (Yang et al., 2017).", "PTB contains one million words of 1989 Wall Street Journal material annotated in Treebank II style, with 42k sentences of varying lengths.", "Yahoo and Yelp are much larger datasets, each containing 100k sentences with greater average length.", "For unaligned style transfer, we use the Yelp restaurant 
reviews dataset (Shen et al., 2017), which is obtained by pre-processing the Yelp dataset, i.e. , sentences are shortened for more feasible sentence level sentiment analysis.", "Overall, the dataset includes 350k positive and 250k negative reviews (based on user rating).", "Following Gu et al. (2019), we use the Switchboard (Godfrey and Holliman, 1997) dataset for dialogue-response generation.", "The former contains 2.4k two-sided telephone conversations, manually transcribed and aligned.", "We split the data into training, validation and test sets following the protocol described in Zhao et al. (2017b).", "Evaluation Metrics.", "To benchmark language modeling performance, we report the ELBO and Perplexity (PPL) of APo-VAE and baselines.", "In order to verify our proposed Apo-VAE is more resistant to posterior collapse, we also report the KL-divergence DKL ( q ( z | x ) (cid:107) p ( z )) and mutual information (MI) between z and x (He et al., 2019).", "The number of active units (AU) of the latent code is also reported, where the activity of a latent dimension z is measured as A z = Cov x E z q ( z | x ) [ z ] , and defined as active if A z > 0 .", "01 .", "To evaluate our model on unaligned style transfer, we consider the transfer accuracy from one sentiment to another, the BLEU scores between original and transferred sentences, the reconstruction perplexity of original sentences, and the reverse perplexity (RPPL) based on a language model from the transferred sentences.", "For dialogue-response generation, we adopt the evaluation metrics used in previous studies (Zhao et al., 2017b; Gu et al., 2019), including BLEU (Pa-pineni et al., 2002), BOW (Liu et al., 2016), and intra/inter-dist values (Gu et al., 2019).", "The first two metrics are used to assess the relevance of the generated response, and the third is for diversity evaluation.", "Model Implementation.", "For language modeling, we adopt the LSTM (Hochreiter and Schmid-huber, 1997) for both the encoder and decoder, with dimension of the latent code set to 32 .", "Following Mathieu et al. (2019), the hyper-parameter c is set to 0 .", "7 .", "For unaligned style transfer, we extend our model in the same fashion as Fang et al. (2019).", "For dialogue-response generation, we modify APo-VAE following the conditional VAE framework (Zhao et al., 2017b).", "Specifically, an extra input of context embedding s is supplied to the model ( i.e. 
, $p_\theta(\mathbf{x}, \mathbf{z}|\mathbf{s})$, $q_\phi(\mathbf{z}|\mathbf{x}, \mathbf{s})$).", "The prior $p(\mathbf{z}|\mathbf{s})$ is a wrapped normal conditioned on the context embedding, learned together with the posterior.", "Language Modeling.", "Table 1 shows results on language modeling.", "We compare APo-VAE with other VAE-based solutions, including β-VAE (Higgins et al., 2017), SA-VAE (Kim et al., 2018), lagging VAE (LAG-VAE) (He et al., 2019), vMF-VAE (Xu and Durrett, 2018), Poincaré VAE (P-VAE) (Mathieu et al., 2019), and iVAE (Fang et al., 2019).", "We report iVAE MI results in all our experiments.", "Table 1: Results on PTB, Yahoo, and Yelp datasets. A better language model achieves lower negative ELBO and PPL. Higher KL and MI indicate a better utilization of the latent space.
Dataset  Model        -ELBO   PPL     KL    MI    AU
PTB      VAE          102.6   108.26  1.1   0.8   2
PTB      β-VAE        104.5   117.92  7.5   3.1   5
PTB      SA-VAE       102.6   107.71  1.2   0.7   2
PTB      vMF-VAE      95.8    93.70   2.9   3.2   21
PTB      P-VAE        91.4    76.13   4.5   2.9   23
PTB      iVAE         87.2    53.44   12.5  12.2  32
PTB      APo-VAE      87.2    53.32   8.4   4.8   32
PTB      APo-VAE+VP   87.0    53.02   8.9   4.5   32
Yahoo    VAE          328.6   61.21   0.0   0.0   0
Yahoo    β-VAE        328.7   61.29   6.3   2.8   8
Yahoo    SA-VAE       327.2   60.15   5.2   2.9   10
Yahoo    LAG-VAE      326.7   59.77   5.7   2.9   15
Yahoo    vMF-VAE      318.5   53.92   6.3   3.7   23
Yahoo    P-VAE        313.4   50.57   7.2   3.3   27
Yahoo    iVAE         309.1   47.93   11.4  10.7  32
Yahoo    APo-VAE      286.2   47.00   6.9   4.1   32
Yahoo    APo-VAE+VP   285.6   46.61   8.1   4.9   32
Yelp     VAE          357.9   40.56   0.0   0.0   0
Yelp     β-VAE        358.2   40.69   4.2   2.0   4
Yelp     SA-VAE       357.8   40.51   2.8   1.7   8
Yelp     LAG-VAE      355.9   39.73   3.8   2.4   11
Yelp     vMF-VAE      356.2   51.03   4.1   3.9   13
Yelp     P-VAE        355.4   50.64   4.3   4.8   19
Yelp     iVAE         348.7   36.88   11.6  11.0  32
Yelp     APo-VAE      319.7   34.10   12.1  7.5   32
Yelp     APo-VAE+VP   316.4   32.91   12.7  6.2   32", "On all three datasets, the proposed model achieves lower negative ELBO and PPL than other models, demonstrating its strong ability to better model sequential text data.", "Meanwhile, the larger KL term and higher mutual information (between $\mathbf{z}$ and $\mathbf{x}$) of the APo-VAE model indicate its robustness in handling posterior collapse.", "In addition, the introduction of a data-driven prior (denoted as APo-VAE+VP) further boosts the performance, especially on negative ELBO and PPL.", "Visualization.", "To verify our hypothesis that the proposed model is capable of learning latent tree structure in text data, we visualize the two-dimensional projection of the learned latent code in Figure 3.", "For visualization, we randomly draw 5k samples from PTB-test, and encode them to the latent space using the APo-VAE encoder.", "We color-code each sentence based on its length ( i.e. 
, blue for long sentences and red for short sentences).", "Note that only a small portion of data have a length longer than 32 ( < 10% ), and human inspection 3 We report iVAE MI results in all our experiments.", "verified that most of them contain multiple sub-sentences.", "We exclude these samples from our analysis.", "As shown in Figure 3, longer sentences (dark blue) tend to occupy the outer rim of the Poincar ball, while the shorter ones (dark red) are concen-vs iVAE vs DialogWAE win loss tie win loss tie Informativeness 52.8 27.9 19.3 63.7 27.1 19.2 Coherence 41.7 35.5 22.8 41.2 34.4 24.4 Diversity 51.2 26.4 22.4 62.1 25.1 12.8 Table 4: Human evaluation results.", "trated in the inner area.", "We also select some long sample sentences (dark blue), and manually shorten them to create several variants of different lengths (ranging from 6 to 27), which are related in a hierarchical manner based on human judgement.", "We visualize their latent codes projected by the trained APo-VAE.", "The resulting plot is consistent with a hierarchical structure for APo-VAE: as the sentence becomes more specific, the embedding moves outward.", "We also decode from the neighbours of these latent codes, the outputs (see the Appendix) of which demonstrate a similar hierarchical structure.", "Unaligned Style Transfer.", "Table 3 shows the results on the Yelp restaurant reviews dataset.", "APo-VAE achieves over 1 point increased BLEU scores than iVAE, capturing a more informative and structured feature space.", "Comparable performance is achieved for the other evaluation metrics.", "Dialogue Response Generation.", "Results on SwitchBoard are summarized in Table", "2. Our proposed model generates comparable or better responses than the baseline models in terms of both relevance (BLEU and BOW) and diversity (intra/inter-dist).", "APo-VAE improves the average recall from 0 .", "427 (by iVAE) to 0 .", "438 , while significantly enhancing generation diversity ( e.g. , from 0 . 692 to 0 . 792 for intra-dist-2).", "Human Evaluation.", "We further perform human evaluation via Amazon Mechanical Turk.", "We asked the turkers to compare generated responses from two models, and assess each model's informativeness, relevance to the dialog context (coherence), and diversity.", "We use 500 randomly sampled contexts from the test set, each assessed by three judges.", "In order to evaluate diversity, 5 responses are generated for each dialog context.", "For quality control, only workers with a lifetime task approval rating greater than 98% were allowed to participate in our study.", "Table 4 summarizes the human evaluation results.", "The responses generated by our model are clearly preferred by the judges compared with other competing methods.", "We present APo-VAE, a novel model for text generation in hyperbolic space.", "Our model can learn latent hierarchies in natural language via the use of wrapped normals for the prior.", "A primal-dual view of KL divergence is adopted for robust model training.", "Extensive experiments on language modeling, text style transfer, and dialog response generation demonstrate the superiority of the model.", "For future work, we plan to combine APo-VAE with the currently prevailing large-scale pre-trained language models." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "abstain", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "other", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method" ]
[ "We propose a novel text editing task, referred to as fact-based text editing , in which the goal is to revise a given document to better describe the facts in a knowledge base (e.g., several triples).", "The task is important in practice because reflecting the truth is a common requirement in text editing.", "First, we propose a method for automatically generating a dataset for research on fact-based text editing, where each instance consists of a draft text, a revised text, and several facts represented in triples.", "We apply the method into two public table-to-text datasets, obtaining two new datasets consisting of 233k and 37k instances, respectively.", "Next, we propose a new neural network architecture for fact-based text editing, called FACTEDITOR , which edits a draft text by referring to given facts using a buffer, a stream, and a memory.", "A straightforward approach to address the problem would be to employ an encoder-decoder model.", "Our experimental results on the two datasets show that FACTEDITOR outperforms the encoder-decoder approach in terms of fidelity and fluency.", "The results also show that FACTEDITOR conducts inference faster than the encoder-decoder approach.", "Automatic editing of text by computer is an important application, which can help human writers to write better documents in terms of accuracy, fluency, etc.", "The task is easier and more practical than the automatic generation of texts from scratch and is attracting attention recently (Yang et al., 2017; Yin et al., 2019).", "In this paper, we consider a new and specific setting of it, referred to as fact-based text editing , in which a draft text and several facts (represented in triples) are given, and the system The work was done when Hayate Iso was a research intern at ByteDance AI Lab.", "aims to revise the text by adding missing facts and deleting unsupported facts.", "Table 1 gives an example of the task.", "As far as we know, no previous work did address the problem.", "In a text-to-text generation, given a text, the system automatically creates another text, where the new text can be a text in another language (machine translation), a summary of the original text (summarization), or a text in better form (text editing).", "In a table-to-text generation, given a table containing facts in triples, the system automatically composes a text, which describes the facts.", "The former is a text-to-text problem, and the latter a table-to-text problem.", "In comparison, fact-based text editing can be viewed as a text&table-to-text' problem.", "First, we devise a method for automatically creating a dataset for fact-based text editing.", "Recently, several table-to-text datasets have been created and released, consisting of pairs of facts and corresponding descriptions.", "We leverage such kind of data in our method.", "We first retrieve facts and their descriptions.", "Next, we take the descriptions as revised texts and automatically generate draft texts based on the facts using several rules.", "We build two datasets for fact-based text editing on the basis of WEBNLG (Gardent et al., 2017) and ROTOWIRE , consisting of 233k and 37k instances respectively (Wiseman et al., 2017) 1 .", "Second, we propose a model for fact-based text editing called FACTEDITOR .", "One could employ an encoder-decoder model, such as an encoder-decoder model, to perform the task.", "The encoder-decoder model implicitly represents the actions for transforming the draft text into a revised text.", "In contrast, FACTEDITOR 
explicitly represents the actions for text editing, including Keep , Drop , and Gen , which means retention, deletion, and generation of word respectively.", "The model utilizes a buffer for storing the draft text, a stream to store the revised text, and a memory for storing the facts.", "It also employs a neural network to control the entire editing process.", "FACTEDITOR has a lower time complexity than the encoder-decoder model, and thus it can edit a text more efficiently.", "Experimental results show that FACTEDITOR outperforms the baseline model of using encoder-decoder for text editing in terms of fidelity and fluency, and also show that FACTEDITOR can perform text editing faster than the encoder-decoder model.", "Text editing has been studied in different settings such as automatic post-editing (Knight and Chan-der, 1994; Simard et al., 2007; Yang et al., 2017), paraphrasing (Dolan and Brockett, 2005), sentence simplification (Inui et al., 2003; Wubben et al., 2012), grammar error correction (Ng et al., 2014), and text style transfer (Shen et al., 2017; Hu et al., 2017).", "and copy mechanisms (Gu et al., 2016; Gulcehre et al., 2016) has dramatically changed the landscape, and now one can perform the task relatively easily with an encoder-decoder model such as Transformer provided that a sufficient amount of data is available.", "For example, Li et al. (2018) introduce a deep reinforcement learning framework for paraphrasing, consisting of a generator and an evaluator.", "Yin et al. (2019) formalize the problem of text edit as learning and utilization of edit representations and propose an encoder-decoder model for the task.", "Zhao et al. (2018) integrate paraphrasing rules with the Transformer model for text simplification.", "Zhao et al. (2019) proposes a method for English grammar correction using a Transformer and copy mechanism.", "Another approach to text editing is to view the problem as sequential tagging instead of encoder-decoder.", "In this way, the efficiency of learning and prediction can be significantly enhanced.", "Vu and Haffari (2018) and Dong et al. (2019) conduct automatic post-editing and text simplification on the basis of edit operations and employ Neural Programmer-Interpreter (Reed and De Freitas, 2016) to predict the sequence of edits given a sequence of words, where the edits include KEEP , DROP , and ADD .", "Malmi et al. (2019) propose a sequential tagging model that assigns a tag ( KEEP or DELETE ) to each word in the input sequence and also decides whether to add a phrase before the word.", "Our proposed approach is also based on sequential tagging of actions.", "It is designed for fact-based text editing, not text-to-text generation, however.", "Table-to-text generation is the task which aims to generate a text from structured data (Reiter and Dale, 2000; Gatt and Krahmer, 2018), for example, a text from an infobox about a term in biology in wikipedia (Lebret et al., 2016) and a description of restaurant from a structured representation (Novikova et al., 2017).", "Encoder-decoder models can also be employed in table-to-text generation with structured data as input and generated text as output, for example, as in (Lebret et al., 2016).", "Puduppully et al. (2019) and Iso et al. 
(2019) propose utilizing an entity tracking module for document-level table-to-text generation.", "searchers have developed methods to deal with the problem using other texts as templates (Hashimoto et al., 2018; Guu et al., 2018; Peng et al., 2019).", "The difference between the approach and fact-based text editing is that the former is about table-to-text generation based on other texts, while the latter is about text-to-text generation based on structured data.", "In this section, we describe our method of data creation for fact-based text editing.", "The method automatically constructs a dataset from an existing table-to-text dataset.", "There are two benchmark datasets of table-to-text, WEBNLG (Gardent et al., 2017) 2 and ROTOWIRE (Wiseman et al., 2017) 3 .", "We create two datasets on the basis of them, referred to as WEBEDIT and ROTOEDIT respectively.", "In the datasets, each instance consists of a table (structured data) and an associated text (unstructured data) describing almost the same content.", "4 .", "For each instance, we take the table as triples of facts and the associated text as a revised text, and we automatically create a draft text.", "The set of triples is represented as T = { t } .", "Each triple t consists of subject, predicate, and object, denoted 2 The data is available at https://github.com/ ThiagoCF05/webnlg .", "We utilize version 1.5.", "3 We utilize the ROTOWIRE-MODIFIED data provided by Iso et al. (2019) available at https://github.com/ aistairc/rotowire-modified .", "The authors also provide an information extractor for processing the data.", "4 In ROTOWIRE , we discard redundant box-scores and unrelated sentences using the information extractor and heuristic rules.", "as t = ( subj, pred, obj ) .", "For simplicity, we refer to the nouns or noun phrases of subject and object simply as entities.", "The revised text is a sequence of words denoted as y .", "The draft text is a sequence of words denoted as x .", "Given the set of triples T and the revised text y , we aim to create a draft text x , such that x is not in accordance with T , in contrast to y , and therefore text editing from x to y is needed.", "Our method first creates templates for all the sets of triples and revised texts and then constructs a draft text for each set of triples and revised text based on their related templates.", "For each instance, our method first delexical-izes the entity words in the set of triples T and the revised text y to obtain a set of triple templates T (cid:48) and a revised template y (cid:48) .", "For example, given T = { (Baymax, voice, Scott Adsit) } and y = Scott Adsit does the voice for Baymax, it produces the set of triple templates T (cid:48) = { (AGENT-1, voice, PATIENT-1) } and the revised template y (cid:48) = AGENT-1 does the voice for PATIENT-1.", "Our method then collects all the sets of triple templates T (cid:48) and revised templates y (cid:48) and retains them in a key-value store with y (cid:48) being a key and T (cid:48) being a value.", "Next, our method constructs a draft text x using a set of triple templates T (cid:48) and a revised template y (cid:48) .", "For simplicity, it only considers the use of either insertion or deletion in the text editing, and one can easily make an extension of it to a more complex setting.", "Note that the process of data creation is reverse to that of text editing.", "Given a pair of T (cid:48) and y (cid:48) , our method retrieves another pair denoted as T (cid:48) and x (cid:48) , such that y (cid:48) and x 
′ have the longest common subsequences.", "We refer to $\hat{x}'$ as a reference template.", "There are two possibilities: $\hat{\mathcal{T}}'$ is a subset or a superset of $\mathcal{T}'$.", "(We ignore the case in which they are identical.)", "Our method then manages to change $y'$ into a draft template denoted as $x'$, on the basis of the relation between $\mathcal{T}'$ and $\hat{\mathcal{T}}'$.", "If $\hat{\mathcal{T}}' \subsetneq \mathcal{T}'$, then the draft template $x'$ created is for insertion, and if $\hat{\mathcal{T}}' \supsetneq \mathcal{T}'$, then the draft template $x'$ created is for deletion.", "For insertion, the revised template $y'$ and the reference template $\hat{x}'$ share subsequences, and the set of triples $\mathcal{T} \setminus \hat{\mathcal{T}}$ appear in $y'$ but not in $\hat{x}'$.", "Our method keeps the shared subsequences in $y'$, removes the subsequences in $y'$ about $\mathcal{T} \setminus \hat{\mathcal{T}}$, and copies the rest of the words in $y'$, to create the draft template $x'$.", "Table 2a gives an example.", "The shared subsequences AGENT-1 performed as PATIENT-3 on BRIDGE-1 mission are kept.", "The set of triple templates $\mathcal{T}' \setminus \hat{\mathcal{T}}'$ is {(BRIDGE-1, operator, PATIENT-2)}.", "The subsequence that was operated by PATIENT-2 is removed.", "Note that the subsequence served is not copied, because it is not shared by $y'$ and $\hat{x}'$.", "For deletion, the revised template $y'$ and the reference template $\hat{x}'$ share subsequences.", "The set of triples $\hat{\mathcal{T}} \setminus \mathcal{T}$ appear in $\hat{x}'$ but not in $y'$.", "Our method retains the shared subsequences in $y'$, copies the subsequences in $\hat{x}'$ about $\hat{\mathcal{T}} \setminus \mathcal{T}$, and copies the rest of the words in $y'$, to create the draft template $x'$.", "Table 2b gives an example.", "The subsequences AGENT-1 was created by BRIDGE-1 and PATIENT-2 are retained.", "The set of triple templates $\hat{\mathcal{T}}' \setminus \mathcal{T}'$ is {(AGENT-1, fullName, PATIENT-1)}.", "The subsequence whose full name is PATIENT-1 is copied.", "Note that the subsequence the character of is not copied, because it is not shared by $y'$ and $\hat{x}'$.", "After getting the draft template $x'$, our method lexicalizes it to obtain a draft text $\mathbf{x}$, where the lexicons (entity words) are collected from the corresponding revised text $\mathbf{y}$.", "We obtain two datasets with our method, referred to as WEBEDIT and ROTOEDIT, respectively.", "Table 3 gives the statistics of the datasets.", "Table 3: Statistics of WEBEDIT and ROTOEDIT, where #D is the number of instances, #Wd and #Wr are the total numbers of words in the draft texts and the revised texts, respectively, and #S is the total number of sentences.
               WEBEDIT                ROTOEDIT
        TRAIN   VALID   TEST    TRAIN   VALID   TEST
#D      181k    23k     29k     27k     5.3k    4.9k
#Wd     4.1M    495k    624k    4.7M    904k    839k
#Wr     4.2M    525k    649k    5.6M    1.1M    1.0M
#S      403k    49k     62k     209k    40k     36k", "In the WEBEDIT data, sometimes entities only appear in the subj's of triples.", "In such cases, we also make them appear in the obj's.", "To do so, we introduce an additional triple (ROOT, IsOf, subj) for each subj, where ROOT is a dummy entity.",
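The reference-template retrieval described above, which matches a revised template against the template store by longest common subsequence, can be sketched as follows. difflib's matching blocks are a practical stand-in for a true LCS, and the toy store entries (with invented predicates such as role) are purely illustrative.

```python
from difflib import SequenceMatcher

def lcs_len(a_tokens, b_tokens):
    # Total size of difflib's matching blocks over the two token lists; an
    # approximation of the longest-common-subsequence criterion.
    sm = SequenceMatcher(None, a_tokens, b_tokens, autojunk=False)
    return sum(block.size for block in sm.get_matching_blocks())

def retrieve_reference(revised_template, store):
    # `store` maps revised templates to their triple-template sets; return
    # the entry whose key shares the most tokens with `revised_template`.
    y = revised_template.split()
    best = max(store, key=lambda k: lcs_len(y, k.split()))
    return best, store[best]

store = {
    "AGENT-1 performed as PATIENT-3 on BRIDGE-1 mission": {("AGENT-1", "role", "PATIENT-3")},
    "AGENT-1 was created by BRIDGE-1": {("AGENT-1", "creator", "BRIDGE-1")},
}
print(retrieve_reference("AGENT-1 performed as PATIENT-3", store)[0])
```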
"In this section, we describe our proposed model for fact-based text editing, referred to as FACTEDITOR.", "FACTEDITOR transforms a draft text into a revised text based on given triples.", "The model consists of three components: a buffer for storing the draft text and its representations, a stream for storing the revised text and its representations, and a memory for storing the triples and their representations, as shown in Figure 1.", "FACTEDITOR scans the text in the buffer, copies the parts of text from the buffer into the stream if they are described in the triples in the memory, deletes the parts of the text if they are not mentioned in the triples, and inserts new parts of text into the stream that are only present in the triples.", "The architecture of FACTEDITOR is inspired by those in sentence parsing (Dyer et al., 2015; Watanabe and Sumita, 2015).", "The actual processing of FACTEDITOR is to generate a sequence of words into the stream from the given sequence of words in the buffer and the set of triples in the memory.", "A neural network is employed to control the entire editing process.", "FACTEDITOR first initializes the representations of content in the buffer, stream, and memory.", "There is a feed-forward network associated with the memory, utilized to create the embeddings of triples.", "Let $M$ denote the number of triples.", "The embedding of triple $t_j$, $j = 1, \ldots, M$, is calculated as $\mathbf{t}_j = \tanh(\mathbf{W}_t[\mathbf{e}^{\mathrm{subj}}_j; \mathbf{e}^{\mathrm{pred}}_j; \mathbf{e}^{\mathrm{obj}}_j] + \mathbf{b}_t)$, where $\mathbf{W}_t$ and $\mathbf{b}_t$ denote parameters, $\mathbf{e}^{\mathrm{subj}}_j$, $\mathbf{e}^{\mathrm{pred}}_j$, and $\mathbf{e}^{\mathrm{obj}}_j$ denote the embeddings of the subject, predicate, and object of triple $t_j$, and $[\cdot\,;\cdot]$ denotes vector concatenation.", "There is a bi-directional LSTM associated with the buffer, utilized to create the embeddings of words of the draft text.", "The embeddings are obtained as $\mathbf{b} = \mathrm{BiLSTM}(\mathbf{x})$, where $\mathbf{x} = (\mathbf{x}_1, \ldots, \mathbf{x}_N)$ is the list of embeddings of words and $\mathbf{b} = (\mathbf{b}_1, \ldots, \mathbf{b}_N)$ is the list of representations of words, with $N$ denoting the number of words.", "There is an LSTM associated with the stream for representing the hidden states of the stream.", "The first hidden state is initialized as $\mathbf{s}_1 = \tanh\left(\mathbf{W}_s\left[\frac{1}{N}\sum_{i=1}^{N}\mathbf{b}_i;\; \frac{1}{M}\sum_{j=1}^{M}\mathbf{t}_j\right] + \mathbf{b}_s\right)$, where $\mathbf{W}_s$ and $\mathbf{b}_s$ denote parameters.", "FACTEDITOR predicts an action at each time $t$ using the LSTM.", "There are three types of action, namely Keep, Drop, and Gen.", "First, it composes a context vector $\mathbf{t}_t$ of triples at time $t$ using attention: $\mathbf{t}_t = \sum_{j=1}^{M} \alpha_{t,j}\,\mathbf{t}_j$, where $\alpha_{t,j}$ is a weight calculated as $\alpha_{t,j} \propto \exp\left(\mathbf{v}^\top \tanh(\mathbf{W}[\mathbf{s}_t; \mathbf{b}_t; \mathbf{t}_j])\right)$, where $\mathbf{v}$ and $\mathbf{W}$ are parameters.", "Then, it creates the hidden state $\mathbf{z}_t$ for action prediction at time $t$: $\mathbf{z}_t = \tanh(\mathbf{W}_z[\mathbf{s}_t; \mathbf{b}_t; \mathbf{t}_t] + \mathbf{b}_z)$, where $\mathbf{W}_z$ and $\mathbf{b}_z$ denote parameters.", "Next, it calculates the probability of action $a_t$: $P(a_t|\mathbf{z}_t) = \mathrm{softmax}(\mathbf{W}_a\mathbf{z}_t)$, where $\mathbf{W}_a$ denotes parameters, and chooses the action having the largest probability.", "[Figure 1: (a) The Keep action, where the top embedding of the buffer $\mathbf{b}_t$ is popped and the concatenated vector $[\mathbf{t}_t; \mathbf{b}_t]$ is pushed into the stream LSTM. (b) The Drop action, where the top embedding of the buffer $\mathbf{b}_t$ is popped and the state in the stream is reused at the next time step $t+1$. (c) The Gen action, where the concatenated vector $[\mathbf{t}_t; \mathbf{W}_p\mathbf{y}_t]$ is pushed into the stream, and the top embedding of the buffer is reused at the next time step $t+1$.]", "FACTEDITOR takes action based on the prediction result at time $t$.", "For Keep at time $t$, FACTEDITOR pops the top embedding $\mathbf{b}_t$ in the buffer, and feeds the combination of the top embedding $\mathbf{b}_t$ and the context vector of triples $\mathbf{t}_t$ into the stream, as shown in Fig. 1a.",
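The attention-plus-classifier controller just described can be sketched as a small PyTorch module. Dimensions, initialization, and batching are illustrative assumptions; only the computation pattern (attend over triple embeddings given the stream state and buffer top, then score Keep/Drop/Gen) follows the text.

```python
import torch
import torch.nn as nn

class ActionPredictor(nn.Module):
    # Attend over triple embeddings t_j given stream state s_t and buffer
    # top b_t, build z_t, then score the three actions {Keep, Drop, Gen}.
    def __init__(self, d=64):
        super().__init__()
        self.att = nn.Linear(3 * d, d)
        self.v = nn.Linear(d, 1, bias=False)
        self.z = nn.Linear(3 * d, d)
        self.act = nn.Linear(d, 3)   # Keep / Drop / Gen

    def forward(self, s_t, b_t, triples):           # triples: (M, d)
        M = triples.size(0)
        ctx = torch.cat([s_t.expand(M, -1), b_t.expand(M, -1), triples], -1)
        alpha = torch.softmax(self.v(torch.tanh(self.att(ctx))).squeeze(-1), 0)
        t_t = alpha @ triples                        # context vector over facts
        z_t = torch.tanh(self.z(torch.cat([s_t, b_t, t_t], -1)))
        return torch.log_softmax(self.act(z_t), -1), t_t

d = 64
pred = ActionPredictor(d)
logp, _ = pred(torch.randn(d), torch.randn(d), torch.randn(5, d))
print(logp.exp())  # probabilities of Keep, Drop, Gen
```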
1a.", "The state of stream is updated with the LSTM as s t +1 = LSTM ([ t t ; b t ] , s t ) .", "FACTEDITOR also copies the top word in the buffer into the stream.", "For Drop at time t , FACTEDITOR pops the top embedding in the buffer and proceeds to the next state, as shown in Fig. 1b.", "The state of stream is updated as s t +1 = s t .", "Note that no word is inputted into the stream.", "For Gen at time t , FACTEDITOR does not pop the top embedding in the buffer.", "It feeds the Draft text x Bakewell pudding is Dessert that can be served Warm or cold .", "combination of the context vector of triples t t and the linearly projected embedding of word w into the stream, as shown in Fig. 1c.", "The state of stream is updated with the LSTM as s t +1 = LSTM ([ t t ; W p y t ] , s t ) , where y t is the embedding of the generated word y t and W p denotes parameters.", "In addition, FACTEDITOR copies the generated word y t into the stream.", "FACTEDITOR generates a word y t at time t , when the action is Gen ,", "P gen ( y t | z t ) = softmax ( W y z t ) where W y is parameters.", "To avoid generation of OOV words, FACTEDITOR exploits the copy mechanism.", "It calculates the probability of copying the object of triple t j P copy ( o j | z t ) exp ( v (cid:62) c tanh( W c [ z t ; t j ])) where v c and W c denote parameters, and o j is the object of triple t j .", "It also calculates the probability of gating p gate = sigmoid ( w (cid:62) g z t + b g ) where w g and b g are parameters.", "Finally, it calculates the probability of generating a word w t through either generation or copying, P ( y t | z t ) = p gate P gen ( y t | z t ) + (1 p gate ) M (cid:88) j =1: o j = y t P copy ( o j | z t ) , where it is assumed that the triples in the memory have the same subject and thus only objects need to be copied.", "The conditional probability of sequence of actions a = ( a 1 , a 2 , , a T ) given the set of triples T and", "the sequence of input words x can be written as", "where P ( a t | z t ) is the conditional probability of action a t given state z t at time t and T denotes the", "number of actions.", "The conditional probability of sequence of generated words y = ( y 1 , y 2 , , y T ) given the sequence of actions a can be written as P ( y | a ) = T (cid:89) t =1 P ( y t | a t ) where P ( y t | a t ) is the conditional probability of generated word y t given action a t at time t , which is calculated as P ( y t | a t ) = (cid:40) P ( y t | z t ) if a t = Gen 1 otherwise Note that not all positions have a generated word.", "The learning of the model is carried out via supervised learning.", "The objective of learning is to minimize the negative log-likelihood of P ( a | T , x ) and P ( y | a ) L ( ) = T (cid:88) t =1 { log P ( a t | z t ) + log P ( y t | a t ) } where denotes the parameters.", "A training instance consists of a pair of draft text and revised text, as well as a set of triples, denoted as x , y , and T respectively.", "For each instance, our method derives a sequence of actions denoted as a , in a similar way as that in (Dong et al., 2019).", "It first finds the longest common subsequence between x and y , and then selects an action of Keep , Drop , or Gen at each position, according to how y is obtained from x and T (cf., Tab. 
4).", "Action Gen is preferred over action Drop when both are valid.", "The time complexity of inference in FACTEDITOR is O ( NM ) , where N is the number of words in the buffer, and M is the number of triples.", "Scanning of data in the buffer is of complexity O ( N ) .", "The generation of action needs the execution of attention, which is of complexity O ( M ) .", "Usually, N is much larger than M .", "We consider a baseline method using the encoder-decoder architecture, which takes the set of triples and the draft text as input and generates a revised text.", "We refer to the method as ENCDECEDITOR .", "The encoder of ENCDECEDITOR is the same as that of FACTEDITOR .", "The decoder is the standard attention and copy model, which creates and utilizes a context vector and predicts the next word at each time.", "The time complexity of inference in ENCDECEDITOR is O ( N 2 + NM )", "(cf.,Britz et al. (2017)).", "Note that in fact-based text editing, usually N is very large.", "That means that ENCDECEDITOR is less efficient than FACTEDITOR .", "We conduct experiments to make comparison between FACTEDITOR and the baselines using the two datasets WEBEDIT and ROTOEDIT .", "The main baseline is the encoder-decoder model ENCDECEDITOR , as explained above.", "We further consider three baselines, No-Editing, Table-to-Text, and Text-to-Text.", "In No-Editing, the draft text is directly used.", "In Table-to-Text, a revised text is generated from the triples using encoder-decoder.", "In Text-to-Text, a revised text is created from the draft text using the encoder-decoder model.", "Figure 2 gives illustrations of the baselines.", "We utilize ExactMatch (EM), BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016) scores 5 as evaluation metrics for fluency.", "We also utilize precision, recall, and F1 score as evaluation metrics for fidelity.", "For WEBEDIT , we extract the entities from the generated text and the reference text and then calculate the precision, recall, and F1 scores.", "For ROTOEDIT , we use the information extraction tool provided by Wiseman et al. 
(2017) for calculation of the scores.", "For the embeddings of the subject and object for both datasets and the embedding of the predicate for ROTOEDIT , we simply use the embedding lookup table.", "For the embedding of the predicate for WEBEDIT , we first tokenize the predicate, look up the embeddings of the lower-cased words from the table, and use the averaged embedding to deal with the OOV problem (Moryossef et al., 2019).", "We tune the hyperparameters based on the BLEU score on a development set.", "For WEBEDIT , we set the sizes of embeddings, buffers, and triples to 300, and set the size of the stream to 600.", "For ROTOEDIT , we set the size of embeddings to 100 and set the sizes of buffers, triples, and stream to 200.", "The initial learning rate is 2e-3, and AMSGrad is used for automatically adjusting the learning rate (Reddi et al., 2018).", "Our implementation makes use of AllenNLP (Gardner et al., 2018).", "We present the performances of our proposed model FACTEDITOR and the baselines on fact-based text editing in Table 5.", "One can draw several conclusions from the results.", "First, our proposed model, FACTEDITOR , achieves significantly better performance than the main baseline, ENCDECEDITOR , in terms of almost all measures.", "In particular, FACTEDITOR⁵ (footnote 5: we use a modified version of SARI where β equals 1.", "0, available at https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/sari_hook.py) [Table 5 fragment: Model; FLUENCY: BLEU, SARI (KEEP, ADD, DELETE), EM; FIDELITY: P%, R%, F1%; Baselines: No-Editing 66.67 31.51 78.62 3.91 12.02.]", "obtains significant gains in DELETE scores on both WEBEDIT and ROTOEDIT .", "Second, the fact-based text editing models (FACTEDITOR and ENCDECEDITOR ) significantly improve upon the other models in terms of fluency scores, and achieve similar performance in terms of fidelity scores.", "Third, compared to No-Editing, Table-to-Text has higher fidelity scores, but lower fluency scores.", "Text-to-Text has almost the same fluency scores, but lower fidelity scores on ROTOEDIT .", "We also manually evaluate 50 randomly sampled revised texts for WEBEDIT .", "We check whether the revised texts given by FACTEDITOR and ENCDECEDITOR include all the facts.", "We categorize the factual errors made by the two models.", "Table 6 shows the results.", "One can see that FACTEDITOR covers more facts than ENCDECEDITOR and makes fewer factual errors than ENCDECEDITOR .", "FACTEDITOR produces a larger number of correct edits (CQT) than ENCDECEDITOR for fact-based text editing.", "In contrast, ENCDECEDITOR often includes a larger number of unnecessary rephrasings (UPARA ) than FACTEDITOR .", "There are four types of factual errors: fact repetition (RPT ), fact missing (MS ), fact unsupported (USUP ), and relation difference (DREL ).", "Both FACTEDITOR and ENCDECEDITOR often fail to insert missing facts (MS ), but rarely insert unsupported facts (USUP ).", "ENCDECEDITOR often generates the same facts multiple times (RPT) or facts with different relations (DREL ).", "In contrast, FACTEDITOR seldom makes such errors.", "Table 7 shows an example of results given by ENCDECEDITOR and FACTEDITOR .", "The revised texts of both ENCDECEDITOR and FACTEDITOR appear to be fluent, but that of FACTEDITOR has higher fidelity than that of ENCDECEDITOR .", "ENCDECEDITOR cannot effectively eliminate the [Table 7 example. Set of triples: { (Ardmore Airport, runwayLength, 1411.0), (Ardmore Airport, 3rd runway SurfaceType, Poaceae), (Ardmore Airport, operatingOrganisation, Civil Aviation Authority of New Zealand), (
Ardmore Airport, elevationAboveTheSeaLevel, 34.0), (Ardmore Airport, runwayName, 03R/21L) }; Draft text: Ardmore Airport, ICAO Location Identifier UTAA.]", "description about an unsupported fact (in orange) appearing in the draft text.", "In contrast, FACTEDITOR can deal with the problem well.", "In addition, ENCDECEDITOR conducts an unnecessary substitution in the draft text (underlined).", "FACTEDITOR tends to avoid such unnecessary editing.", "We conduct a runtime analysis of FACTEDITOR and the baselines in terms of the number of processed words per second, on both WEBEDIT and ROTOEDIT .", "Table 8 gives the results when the batch size is 128 for all methods.", "Table-to-Text is the fastest, followed by FACTEDITOR .", "FACTEDITOR is always faster than ENCDECEDITOR , apparently because it has a lower time complexity, as explained in Section 4.", "The texts in WEBEDIT are relatively short, and thus FACTEDITOR and ENCDECEDITOR have similar runtime speeds.", "In contrast, the texts in ROTOEDIT are relatively long, and thus FACTEDITOR executes approximately twice as fast as ENCDECEDITOR .", "In this paper, we have defined a new task referred to as fact-based text editing and made two contributions to research on the problem.", "First, we have proposed a data construction method for fact-based text editing and created two datasets.", "Second, we have proposed a model for fact-based text editing, named FACTEDITOR , which performs the task by generating a sequence of actions.", "Experimental results show that the proposed model FACTEDITOR performs better and faster than the baselines, including an encoder-decoder model.", "We would like to thank the reviewers for their insightful comments." ]
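The Gen-step equations in the sentences above (a softmax generation distribution, a copy distribution over triple objects, and a sigmoid gate mixing the two) can be made concrete with a small sketch. This is a minimal illustration, not the authors' implementation: all weight matrices are random stand-ins for trained parameters, and the vocabulary and triple objects are hypothetical.

```python
# Sketch of the gated generate-or-copy mixture described for FACTEDITOR's
# Gen action. Parameters are random placeholders, not trained weights.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gen_word_distribution(z_t, triple_embs, objects, vocab, W_y, w_g, b_g, W_c, v_c):
    """P(y|z_t) = p_gate * P_gen(y|z_t) + (1 - p_gate) * sum_{j: o_j = y} P_copy(o_j|z_t)."""
    p_gen = softmax(W_y @ z_t)                          # distribution over the vocabulary
    scores = np.array([v_c @ np.tanh(W_c @ np.concatenate([z_t, t_j]))
                       for t_j in triple_embs])
    p_copy = softmax(scores)                            # distribution over the M triples
    p_gate = 1.0 / (1.0 + np.exp(-(w_g @ z_t + b_g)))   # sigmoid gate
    mixed = p_gate * p_gen
    for j, obj in enumerate(objects):                   # fold copy mass into vocab slots
        mixed[vocab[obj]] += (1.0 - p_gate) * p_copy[j]
    return mixed

rng = np.random.default_rng(0)
d, dt, h, V = 8, 6, 5, 4
vocab = {"<unk>": 0, "Baymax": 1, "Duncan_Rouleau": 2, "Steven_T._Seagle": 3}
dist = gen_word_distribution(
    z_t=rng.normal(size=d),
    triple_embs=[rng.normal(size=dt) for _ in range(2)],
    objects=["Duncan_Rouleau", "Steven_T._Seagle"],
    vocab=vocab,
    W_y=rng.normal(size=(V, d)), w_g=rng.normal(size=d), b_g=0.0,
    W_c=rng.normal(size=(h, d + dt)), v_c=rng.normal(size=h))
print(dist.sum())  # ~1.0: a valid distribution over the vocabulary
```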
[ "objective", "abstain", "objective", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "other" ]
[ "Neural abstractive summarizers generate summary texts using a language model conditioned on the input source text, and have recently achieved high ROUGE scores on benchmark summarization datasets.", "We investigate how they achieve this performance with respect to human-written gold-standard abstracts, and whether the systems are able to understand deeper syntactic and semantic structures.", "We generate a set of contrastive summaries which are perturbed, deficient versions of human-written summaries, and test whether existing neural summarizers score them more highly than the human-written summaries.", "We analyze their performance on different datasets and find that these systems fail to understand the source text, in a majority of the cases.", "Open-domain abstractive summarization is a longstanding goal of the field of automatic summarization.", "Compared to extractive techniques, abstraction offers the potential to generate much more useful summaries by simplifying and rephrasing the source text (Knight and Marcu, 2002), and furthermore by aggregating information and performing operations which are not possible with extractive techniques (Genest and Lapalme, 2012; Carenini and Cheung, 2008).", "Recently, a number of abstractive summarization systems based on neural sequence-to-sequence architectures have been proposed (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017; Paulus et al., 2018; Chen and Bansal, 2018).", "These systems learn a compressed representation of the source text using an encoder, then generate the output summary using a conditional decoder.", "Such neural abstractive systems have achieved very good ROUGE scores on different datasets.", "Source A former Iraqi army chief of staff being investigated in denmark for war crimes is believed to be back in Iraq , one of his sons said Tuesday.", "Our interest in this paper is to investigate how these abstractive systems achieve such results, and whether they represent progress towards language understanding and generation.", "ROUGE arguably provides a limited view of the performance of such systems, as they only relate the system summary to a fixed number of gold-standard summaries.", "We propose a novel method to directly test the abstractive summarizers in terms of how they score potential candidate summaries, viewing them as conditional language models.", "This allows us to test whether the summarizers favour output summaries with specific desired qualities, such as generating a summary that is semantically consistent and entailed by the source text.", "We test how well the neural abstractive summarizers distinguish human-written abstracts from contrastive distractors, which are clearly incorrect summaries that are generated using a rule-based procedure.", "Table 1 shows contrastive examples which are clearly incorrect 1 .", "In majority of source texts, we are able to find a contrastive example that scores more highly than the gold-standard summary.", "1 For other NLP tasks, others have proposed a similar notion called adversarial examples (Jia and Liang, 2017).", "Since the term adversarial' has traditionally implied learning to specifically attack the weakness of a model, which we do not do, we refrain from using the word adversarial'.", "Our work demonstrates the difficulty of controlling expressive neural abstractive systems to produce correct and fluent output.", "It also underscores the need to revisit fundamental issues in summarization evaluation for neural abstractive models, so that a comprehensive 
evaluation scheme that captures all relevant aspects of summary quality can be developed.", "Our code for generating contrastive summaries is available online.²", "Most work in neural abstractive summarization has focused on optimizing ROUGE, whether implicitly by maximum likelihood training or explicitly by reinforcement learning.", "While this could certainly capture aspects of the content selection problem, we believe that the focus should now shift towards semantic correctness and readability.", "Cao et al. (2018) took a step in this direction through their fact-aware neural abstractive summarization system.", "They use fact descriptions of the source as additional features for the summarizer, and showed improved faithfulness according to human judgments.", "Multi-task learning is another approach, used by Pasunuru et al. (2017) to reduce semantic errors in the generated summaries.", "They jointly learn summarization and entailment generation tasks, using different encoders but a shared decoder.", "A number of automatic evaluation metrics have shown high correlation with human judges (Liu and Liu, 2008; Graham, 2015), but these results are either restricted to extractive systems or were obtained with respect to human-generated summaries.", "Correlation values are significantly reduced when computed on abstractive summarization systems and datasets (Toutanova et al., 2016).", "In this section, we describe our method for evaluating summarization systems based on whether they can separate human-written gold summaries from automatically generated contrastive summaries.", "We define a contrastive summary to be similar to a gold summary, except that it contains a perturbation.", "The perturbation results in either a semantic discrepancy, where facts in the source and summary do not corroborate each other, or a readability issue, where issues with grammar or fluency render (² https://github.com/krtin/ContrastiveSummaries) [Table 2 fragment. Rule: Noun; switching criteria: NN, NNP or NNS must match.]", "the summary clearly incorrect.", "Below, we first describe our basic method of introducing these discrepancies, then describe a number of restrictions we apply to ensure that the generated contrastive summaries are of high quality.", "Perturbation by switching words.", "Given pairs of source texts and gold summaries, we generate multiple contrastive summaries for each source text by perturbing its associated gold summary.", "There are many types of possible perturbations, but we focus on two strategies:", "1) switching words within a gold summary, and", "2) replacing a word in the gold summary with a word from the source text.", "We chose these types of perturbations as they are likely to result in difficult contrastive summaries that contain words which are likely to appear in a reasonable summary of the source text, but which are nevertheless incorrect.", "In order to select the words to be swapped, we apply four rules, separated by syntactic category, as shown in Table 2, using the dependency parse of the texts (Manning et al., 2014).", "We switch words either within a gold summary, or from the source text, and use a single rule at a time for generating a contrastive summary.", "For example, if the POS tag NNS is matched between the word 'sides' in the source text and the word 'combatants' in the gold summary, then the Noun rule would apply, and the words would be switched to obtain the contrastive summary.", "Further, for the Noun and Verb categories, the switched words may not match in number or verb conjugation.", "We use
SimpleNLG (Gatt and Reiter, 2009) to convert the word to the appropriate inflectional form of the destination's POS tag.", "Further restrictions.", "We place a number of restrictions on the words switched, to ensure that the generated summary is contrastive compared to the gold summary.", "We do not allow switching of identical words.", "We also do not allow words to be switched if they are separated by any of the following: 'or', 'and' or ',', as these are likely to be commutative operators.", "Furthermore, we only allow switching of words from the source text if the contexts of the words to be switched sufficiently differ from each other.", "We compute unigram overlap around a context window of size 2 on each side, and allow a switch when the overlap proportion is less than 0.65.", "These settings were determined by manual inspection of the generated summaries, and allowed us to reduce the number of examples where the generated summary is not contrastive, without significantly reducing the number of generated summaries.", "We will describe the results of a human verification study in Section 5, in which we ask human raters to check the quality of our contrastive summaries.", "We apply our set of contrastive summaries to evaluate a number of neural abstractive summarizers.", "For each summarizer under evaluation, we assume access to a conditional language model which defines a probability distribution over words conditioned on the source.", "Formally, such a language model is given by Equation 1: P(y_i | θ, S, y_1 ... y_{i−1}), (1) where S is the source, θ are the model parameters, y_i ∈ V_sm represents the i-th word in the summary, V_sm is the vocabulary space of the summary and P is the conditional probability.", "S = (s_1, ..., s_n), where s_i ∈ V_so, V_so is the vocabulary space of the source and n is the maximum source length.", "Further, we use Equation 2 as our scoring function: p(y) = Σ_{i=1}^m log P(y_i | θ, S, y_1 ... y_{i−1}), (2) where m is the maximum summary length and y represents a gold or contrastive summary.", "For a given triple of source, gold (g) and contrastive (c) summary, if p(g) > p(c), then we label the triple 'dodged', since the summarizer successfully dodged the generated contrastive summary.", "If a system is able to dodge all contrastive summaries generated from a source and gold summary tuple, then we label the tuple as 'escaped'.", "Datasets.", "We experimented on two datasets, for two abstractive summarization tasks.", "The first is a short summarization task, where the summary is one sentence long, for which we use the Gigaword corpus (Graff et al., 2011; Napoles et al., 2012).", "We use the scripts provided by Rush et al. (2015) to process the Gigaword corpus, which contains the first sentence of the article and the headline as source and gold summary pairs.", "The test set contains about 250K source-summary pairs, from which we randomly selected 10K pairs and generated 509K contrastive summaries.", "The second is a long summarization task, in which the summary consists of multiple sentences.", "We use the CNN/Dailymail corpus, where the highlights of the articles are used as the gold summary (Hermann et al., 2015).", "We used the scripts from Nallapati et al. (2016) to get the data and use the non-anonymized version like See et al.
(2017).", "We use 11.49K test pairs and were able to generate 563K contrastive summaries.", "Models.", "We analyze and evaluate three state-of-the-art neural abstractive summarization systems: ABS+ (Rush et al., 2015), GTP (See et al., 2017) and FAS (Chen and Bansal, 2018).", "The ABS+ system uses an attention-based neural language model as an encoder and a feed-forward neural network for decoding, and is trained on the Gigaword corpus.", "The GTP system is a seq2seq model with attention on the encoder and a pointer-generator mechanism to choose words from the source in the decoder and, is trained on the CNN/Dailymail corpus.", "FAS uses reinforcement learning algorithm to extract the most important sentences from the source text, and then summarizes each sentence using a similar architecture as GTP on the CNN/Dailymail corpus.", "These systems have performed well on small and large text summarization tasks, and have open-source implementation available from the authors.", "We would have liked to test other relevant systems (Pasunuru et al., 2017; Cao et al., 2018), but were unable to obtain their implementations.", "contrastive summaries is very large.", "To restrict the number of contrastive summaries we randomly select approximately 50 generated summaries, while maintaining the rule-wise distribution.", "The rule-wise distribution was estimated based upon contrastive summaries, generated from a subset of 100 gold standard summaries from CNN/Dailymail corpus.", "In order to correctly evaluate the FAS system, for each extracted sentence we generate all sentences in the gold summary, and pick the set of summaries with the maximum probability.", "Human verification.", "To ensure that we are generating incorrect contrastive summaries, 200 randomly sampled summaries from the Gigaword corpus were evaluated by a human annotator, to verify if a semantic discrepancy or a readability issue was present.", "We ensured that we sample equally across all the 8 rules, and that we restrict our set of contrastive summaries which the ABS+ system was not able to dodge' .", "We found that 49.5% had a readability issue while 43.5% had a discrepancy issue, and 93% of the examples had at least one of these issues.", "This indicates that the vast majority of our contrastive examples are true negatives; i.e., a perfect summarization system should score them lower than the gold standard summary.", "We summarize our results in Table 4, and report rule-wise results in Table 3.", "We also include examples where the ABS+ system is unsuccessful in dodging the generated contrastive summaries, in Table 5.", "Since these metrics directly evaluate the posterior distribution of a summarizer, it al-Model Dataset Dodged Escaped GTP CNN 96.3% 29.8% FAS CNN 92.9% 12.2% ABS+ Gigaword 48.6% 10.8% Table 4: Performance on dodged , escaped metrics.", "lows us to explicitly recognize problematic examples for a model.", "We also look at what percentage of gold summaries lie across different ranks of gold summaries in Figure 1.", "This gives us an insight into distribution of gold summaries, across different ranks.", "The rank of a gold summary is 1 plus the number of contrastive summaries that scored higher than it.", "Dodged and Escaped.", "On the CNN/Dailymail dataset, we find both the models were able to dodge most of the contrastive summaries, but a large number of summaries had at least a few contrastive summaries which scored higher than the gold summary, as reflected by the escaped metric.", "The FAS model performs worse than 
the GTP model; this might be because the abstraction model only observes one sentence, and thus the probability of observing a word outside the source sentence is higher for the contrastive summaries.", "Rule-wise Analysis.", "The GTP and FAS models perform better on rules which switch words within the gold-standard summary.", "Thus, the decoder LSTM has captured the data distribution very well for words within the summary, but is not generalizing for words outside it.", "This suggests that using words outside the source vocabulary might help in generating harder contrastive examples.", "The ABS+ model is better at capturing the data distributions of prepositions and adjectives.", "This points towards biases towards particular distributions and can be helpful in further improving these models.", "Rank of Gold Summary.", "As shown in Figure 1, almost all the gold summaries have rank lower than 8 for the GTP model, while a large percentage of gold summaries have rank greater than 20 for the ABS+ model.", "The maximum rank in both cases is of the order of 500K, which is the number of contrastive summaries (Section 5).", "We suspect that this might be due to the behaviourally extractive nature of the GTP model, which allows it to easily distinguish any perturbations in the contrastive summaries.", "We proposed to analyze existing neural abstractive summarizers by testing how they score contrastive summaries compared to gold-standard ones.", "For the majority of the gold-standard summaries, we were able to find contrastive examples which score more highly according to current state-of-the-art systems.", "These examples can be useful not only in evaluating the performance of these systems, but also for improving these systems in the future." ]
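A minimal sketch of the scoring protocol described in the record above: the summary score of Equation 2, the 'dodged' and 'escaped' labels, and the rank of a gold summary. Here cond_log_prob is a placeholder for whatever per-token conditional log-probability a summarizer under evaluation exposes; it is an assumption, not part of any cited system's API.

```python
# Sketch of the contrastive-summary evaluation protocol.
from typing import Callable, List, Sequence

# (source, prefix, next_word) -> log P(next_word | theta, source, prefix)
LogProbFn = Callable[[List[str], List[str], str], float]

def score(cond_log_prob: LogProbFn, source: List[str], summary: List[str]) -> float:
    """p(y) = sum_i log P(y_i | theta, S, y_1 ... y_{i-1})  (Equation 2)."""
    return sum(cond_log_prob(source, summary[:i], summary[i])
               for i in range(len(summary)))

def dodged(cond_log_prob: LogProbFn, source, gold, contrastive) -> bool:
    """A triple is 'dodged' if the gold summary outscores the contrastive one."""
    return score(cond_log_prob, source, gold) > score(cond_log_prob, source, contrastive)

def escaped(cond_log_prob: LogProbFn, source, gold,
            contrastive_set: Sequence[List[str]]) -> bool:
    """'Escaped' if the gold summary outscores every contrastive summary."""
    return all(dodged(cond_log_prob, source, gold, c) for c in contrastive_set)

def gold_rank(cond_log_prob: LogProbFn, source, gold,
              contrastive_set: Sequence[List[str]]) -> int:
    """1 plus the number of contrastive summaries scoring above the gold one."""
    g = score(cond_log_prob, source, gold)
    return 1 + sum(score(cond_log_prob, source, c) > g for c in contrastive_set)
```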
[ "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "method", "abstain", "result", "abstain", "method", "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain" ]
[ "Information about the meaning of mathematical variables in text is useful in NLP/IR tasks such as symbol disambiguation, topic modeling and mathematical information retrieval (MIR).", "We introduce variable typing , the task of assigning one mathematical type (multi-word technical terms referring to mathematical concepts) to each variable in a sentence of mathematical text.", "As part of this work, we also introduce a new annotated data set composed of 33,524 data points extracted from scientific documents published on arXiv.", "Our intrinsic evaluation demonstrates that our data set is sufficient to successfully train and evaluate current classifiers from three different model architectures.", "The best performing model is evaluated on an extrinsic task: MIR, by producing a typed formula index .", "Our results show that the best performing MIR models make use of our typed index, compared to a formula index only containing raw symbols, thereby demonstrating the usefulness of variable typing.", "Scientific documents, such as those from Physics and Computer Science, rely on mathematics to communicate ideas and results.", "Written mathematics, unlike general text, follows strong domain-specific conventions governing how content is presented.", "According to Ganesalingam (2008), the sense of mathematical text is conveyed through the interaction of two contexts: the textual context (flowing text) and the mathematical (or symbolic) context (mathematical formulae).", "In this work, we introduce a new task that focuses on one particular interaction: the assignment of meaning to variables by surrounding text in the same sentence 1 .", "For example, in the sentence 1 Data for the task is available at https://www.cst.", "the variables P and N in the symbolic context are assigned the meaning parabolic subgroup and unipotent radical by the textual context surrounding them respectively.", "We will refer to the task of assigning one mathematical type to each variable in a sentence as variable typing .", "We use mathematical types (Stathopoulos and Teufel, 2016) as variable denotation labels.", "Types are multi-word phrases drawn from the technical terminology of the mathematical discourse that label mathematical objects (e.g., set), algebraic structures (e.g., monoid) and instantiable notions (e.g., cardinality of a set).", "In the sentence presented earlier, the phrases parabolic subgroup, Levi decomposition and unipotent radical are examples of types.", "Typing variables may be beneficial to other natural language processing (NLP) tasks, such as topic modeling, to group documents that assign meaning to variables consistently (e.g., E is en-ergy consistently in some branches of Physics).", "In mathematical information retrieval (MIR), for instance, enriching formulae with types may improve precision.", "For example, the formulae x + y and a + b can be considered -equivalent matches.", "However, if a and b are matrices while x and y are vectors, the match is likely to be a false positive.", "Typing information may be helpful in reducing such instances and improving retrieval precision.", "Variable typing differs from similar tasks in three fundamental ways.", "First, meaning in the form of mathematical types is explicitly assigned to variables, rather than arbitrary mathematical expressions.", "Second, variable typing is carried out at the sentential level, with valid type assignments for variables drawn from the sentences in which 303 they occur, rather than from larger contexts, such as documents.", 
"Third, denotations are drawn from a pre-determined list of types, rather than from free-form text in the surrounding context of each variable.", "As part of our work, we have constructed a new data set for variable typing that is suitable for machine learning (Section 4) and is distributed under the Open Data Commons license.", "We propose and evaluate three models for typing variables in mathematical documents based on current machine learning architectures (Section 5).", "Our intrinsic evaluation (Section 6) suggests that our models significantly outperform the state-of-the-art SVM model by Kristianto et al. (2012, 2014) (originally developed for description extraction) on our data set.", "More importantly, our intrinsic evaluation demonstrates that our data set is sufficient to successfully train and evaluate classifiers from three different architectures.", "We also demonstrate that our variable typing task and data are useful in MIR in our extrinsic evaluation (Sec-tion 7).", "The task of extracting semantics for variables from the linguistic context was first proposed by Grigore et al. (2009) with the intention of disambiguating symbols in mathematical expressions.", "Grigore et al. took operators listed in OpenMath content dictionaries (CDs) as concepts and used term clusters to model their semantics.", "A bag of nouns is extracted from the operator description in the dictionary and enriched manually using terms taken from online lexical resources.", "The cluster that maximises the similarity (based on Pointwise Mutual Information (PMI) and DICE) between nouns in the cluster and the local context of a target formula is taken to represent its meaning.", "Wolska et al. (2011) used the Cambridge dictionary of mathematics and the mathematics subject classification hierarchy to manually construct taxonomies used to assign meaning to simple expressions .", "Simple expressions are defined by the authors to be mathematical formulae taking the form of an identifier, which may have super/sub-scripted expressions of arbitrary complexity.", "Lexical features surrounding simple expressions are used to match the context of candidate expressions to suitable taxonomies using a combination of PMI and DICE (Wolska et al., 2011).", "Wolska et al. report a precision of 66%.", "Quoc et al. (2010) used a rule-based approach to extract descriptions for formulae (phrases or sentences) from surrounding context.", "In a similar approach, Kristianto et al. (2012) applied pattern matching on sentence parse trees and a nearest noun approach to extract descriptions.", "These rule-based methods have been shown to perform well for recall but poorly for precision (Kris-tianto et al., 2012).", "However, Kristianto et al. (2012) note that domain-agnostic parsers are confused by mathematical expressions making rule-based methods sensitive to parse tree errors.", "Both rule-based extraction methods were outperformed by Support Vector Machines (SVMs) (Kristianto et al., 2012, 2014).", "Schubotz et al. (2016) use hierarchical named topic clusters, referred to as namespaces , to model the semantics of mathematical identifiers.", "Namespaces are derived from a document collection of 22,515 Wikipedia articles.", "A vector-space approach is used to cluster documents into namespaces using mini-batch K-means clustering.", "Clusters beyond a certain purity threshold are selected and converted into namespaces by extracting phrases that assign meaning to identifiers in the selected clusters.", "Schubotz et al. 
(2016) take a ranked approach to determining the phrase that best assigns meaning to a particular identifier.", "The authors report F1 scores of 23.9% and 56.6% for their definition extraction methods.", "In contrast, we assign meaning exclusively to variables, using denotations from a pre-computed dictionary of mathematical types, rather than free-form text.", "Types, as pre-identified, compositionally constructed denotational labels, enable efficient determination of relatedness between mathematical concepts.", "In our extrinsic MIR experiment (Section 7), the mathematical concept that two or more types are derived from is identified by locating their common parent type (the supertype) on a suffix trie.", "Topically related types that do not share a common supertype can be identified using an automatically constructed type embedding space (Stathopoulos and Teufel (2016), Section 5.1), rather than manually curated namespaces or fuzzy term clusters.", "We define the task of variable typing as follows.", "Given a sentence containing a pre-identified set of variables V and types T , variable typing is the task of classifying all edges in V × T as either existent (positive) or non-existent (negative).", "However, not all elements of V × T are valid edges.", "Invalid edges are usually instances of type parameterisation , where some type is parameterised by what appears to be a variable.", "For example, the set of candidate edges for the sentence We now consider the q-exterior algebras of V and V∗, cf.", "would include ( V , exterior algebra ) and ( V∗ , exterior algebra ) but not ( q , exterior algebra ).", "Such edges are identified using pattern matching (Java regular expressions) and are not presented to annotators or recorded in the data set.", "Our definition of variable mirrors that of simple expression proposed by Grigore et al. (2009): instances of formulae in the discourse are considered to be typeable variables if they are composed of only a single, potentially scripted, base identifier.", "Variable typing, as defined in this work, is based on four assumptions: (1) typings occur at the sentential level and variables in a sentence can only be assigned a type phrase occurring in that sentence, (2) variables and types in the sentence are known a priori, (3) edges in each sentence are independent of one another, and (4) edges in one sentence are independent of those in other sentences: given a variable v in sentence s , type assignment for v is agnostic of other typings involving v from other sentences.", "The decision to constrain variable typing to the sentential level is motivated by empirical studies (Grigore et al., 2009; Godert, 2012).", "Grigore et al.
(2009) have shown that the majority of variables are introduced and declared in the same sentence.", "In addition, mathematical text tends to be composed of local contexts, such as theorems, lemmas and proofs (Ganesalingam, 2008).", "The assumptions introduced above simplify the task of variable typing without sacrificing the generalisability of the task.", "For example, cases where the same variable is assigned multiple conflicting types from different sentences within a document can be collected and resolved using a type disambiguation algorithm.", "We have constructed an annotated data set of sentences for building variable typing classifiers.", "The sentences in our corpus are sourced from the Mathematical REtrieval Corpus (MREC) (Líška et al., 2011), a subset of arXiv (over 439,000 papers) with all LaTeX formulae converted to MathML.", "The data set is split into a standard training/development/test machine learning partitioning scheme, as outlined in Table 1.", "The idea behind this scheme is to train and evaluate new models on standardised data partitions so that results can be directly comparable.", "The structure and role of sentences in mathematical papers may vary according to their location in the discourse.", "For example, sentences in the Introduction intended to introduce the subject matter can be expected to differ in structure from those in a proof, which tend to be short, formal statements.", "Our sampling strategy is designed to control for this diversity in sentence structure.", "First, we sentence-tokenised and transformed each document in the MREC into a graph that encodes its section structure.", "Document graphs also take into account blocks of text unique to the mathematical discourse, such as theorems, proofs and definitions.", "Then, we sampled sentences for our data set by distribution according to their location in the source arXiv document.", "Variables in each MREC document are identified via a parser that recognises the variable description given in Section 3.", "Our variable parser is designed to operate on symbol layout trees (SLTs) (Schellenberg et al., 2012), trees representing the 2-dimensional presentation layout of mathematical formulae.", "We identified 28.6 million sentences that contain variables.", "The distribution of sentences according to", "(a) the type of discourse/math block of origin and", "(b) the number of unique types in the sentence is reconstructed by putting sentences into bins based on the value of these features.", "Sentences are selected from the bins at random in proportion to their size.", "The training, development and test samples have been produced via repeated application of this sample-by-distribution strategy over the set of all sentences that contain variables.", "The type dictionary distributed by Stathopoulos and Teufel (2016) contains 10,601 automatically detected types from the MREC.", "However, the MREC contains 2.9 million distinct technical terms, many of which might also be types.", "Therefore, the seed dictionary is too small to be used with variable typing at scale, since types from the seed dictionary will be sparsely present in sampled sentences.", "To overcome this problem, we used the double suffix trie algorithm (DSTA) to automatically expand the type dictionary.", "The algorithm makes use of the fact that most types are compositional (Stathopoulos and Teufel, 2016): longer subtypes can be constructed out of shorter supertypes by attaching pre-modifiers (e.g., a Riemannian manifold can be considered a
subtype of manifold).", "The DSTA takes two lists of technical terms as input: the seed dictionary of types and the MREC master list (2.9 million technical terms).", "First, technical terms on both lists are word-tokenised.", "Then, all technical terms in the seed dictionary (the known types) are placed onto the known types suffix trie (KTST).", "Additional types are generated from single-word types on the KTST by expanding them with one of 40 prefixes observed in the corpus.", "For example, the type algebra might generate the supertype coalgebra.", "These are also added to the KTST as known types.", "Technical terms in the KTST are copied onto the candidate type suffix trie (CTST) and are labeled as types.", "Next, the technical terms on the master list are inserted into the CTST.", "Technical terms in the master list that have known types from the seed dictionary as their suffix on the CTST are also marked as types.", "A new dictionary of types (in the form of a list of technical terms) is produced by traversing the CTST and recording all phrases that have a known type as their suffix.", "This way, we have expanded the type dictionary from 10,601 types to approximately 1.23 million technical terms, from which an updated KTST can be produced.", "Two of the authors jointly developed the annotation scheme and guidelines using sentences sampled by distribution, as discussed in Section 4.1.", "Sentences sampled for this purpose are excluded from subsequent sampling.", "The labeling scheme, presented in Table 2, implements the assumptions of the variable typing task: each variable in a sentence is assigned exactly one label, either one type from the sentence or one of six fixed labels for special situations.", "An annotation experiment was carried out using two authors as annotators to investigate", "(a) how intuitive the task of typing is to humans and", "(b) the reliability of the annotation scheme.", "For this purpose, a further 1,000 sentences were sampled (and removed) from the pool and organised into two subsamples, each with 554 sentences.", "The subsamples have an overlap of 108 sentences with a total of 182 edges, which are used to measure inter-annotator agreement.", "We report annotator agreement for three separate cases.", "The first case reflects whether annotators agree that a variable can be typed or not by its context.", "A variable falls into the first category if it is assigned a type from the sentential context and into the latter category if it is assigned one of the six fixed labels from Table 2.", "In this case, agreement is substantial (Cohen's κ = 0.80, N = 182, k = 2, n = 2).", "The second case is for instances where both annotators believe a variable can be typed by its sentential context: the variable is assigned a type by both annotators.", "In this case, Cohen's Kappa is not applicable because the number of labels varies: there are as many labels as there are types in the sentence.", "Instead, we report accuracy as the proportion of decisions where annotators agree over all decisions: 90.9%.", "In the last case, where both annotators agree that a variable cannot be typed (i.e., it is assigned one of the six fixed labels), agreement has been found to be moderate (Fleiss' κ = 0.
61, N = 123, k = 2, n = 6).", "The bulk of the annotation was carried out by one of the author-annotators and was produced by repeated sampling by distribution (as described in Section 4.1).", "Sentences in the bulk sample are combined with the 554 sentences annotated by the author during the annotation experiment to produce a final data set composed of 7,803 sentences.", "The training, test and development sets have been produced using the established 70% for training,

Table 2 (Label: Description). One label per type instance: one label per instance of any type in the sentence.", "Type Unknown: The type of the variable is not in the scope of the sentence.", "Type Present but Undetected: The type of the variable is in the scope of the sentence but is not in the dictionary.", "Parameterisation: Variable is part of an instance of parameterisation.", "Index: Variable is an instance of indexing (numeric or non-numeric).", "Number: Variable is implied to be a number by the textual context (e.g., the n-th element...).", "Formula is not a variable: Label used to mark data errors.", "For example, in some instances end-of-proof symbols are encoded as identifiers in the corpus and are mistaken for variables.", "20% for test and 10% for development data set partitioning strategy.", "Each partition is sampled by distribution in order to model training and predicting typings over complete discourse units, such as documents.", "We compare three models for variable typing to two baselines: the nearest type baseline and the SVM proposed by Kristianto et al. (2014).", "One of our models is an extension of the latter baseline with both type- and variable-centric features.", "The other two models are based on deep neural networks: a convolutional neural network and a bidirectional LSTM.", "We treat the task of typing as binary classification: every possible typing in a sentence is presented to a classifier which, in turn, is expected to make a type or not-type decision.", "We say that an edge is positive if it connects a variable to a type in the sentence and negative otherwise.", "We use the extended dictionary of types (Section 4.2) to pre-train a type embedding space .", "Computed over the MREC, a type embedding space includes embeddings for both words and types (as atomic lexical tokens).", "These vectors are used by our deep neural networks to model the distributional meaning of words and types.", "The type embedding space is constructed using the process described by Stathopoulos and Teufel (2016): occurrences of extended-dictionary type phrases in the MREC are substituted with unique atomic lexical units before the text is passed on to word2vec .", "Nearest Type baseline (NT): Given a variable v , the nearest type baseline takes the edge that minimises the word distance between v and some type in the sentence to be the positive edge.", "This baseline is intended to approximate the nearest noun baseline (Kristianto et al., 2012, 2014), which we cannot directly compute due to the fact that noun phrases in the text become parts of types.", "Support Vector Machine (Kristianto et al.) (SVM): This is an implementation of the features and linear SVM described by Kristianto et al. (2012).", "Furthermore, we use the same value for the hyperparameter C (the soft margin cost parameter) used by Kristianto et al.
(2012).", "Due to the class imbalance in our data set we have used inversely proportional class weighting (as implemented in scikit-learn).", "L2-normalisation is also applied.", "Extended Support Vector Machine (SVM+) We have extended the SVM proposed by Kristianto et al. (2012) with the features that are type and variable-centric, such as the base symbol of a candidate variable' and first letter in the candidate type'.", "A description of these extended features are listed in Table 4.", "We applied automatic class weighting and L2-normalisation.", "We have found that C = 2 is optimal for this model by fine-tuning over the development set.", "Convolutional Neural Network (Convnet) We use a Convnet to classify each of the V T assignment edges as either positive or negative, where V are the variables in the input text and T are the types.", "Unlike the SVM models, we do not use any hand-crafted features, but only the inputs (Table 3), and the pre-trained embeddings (Section 5.1).", "The input is a tensor that encodes the input described in Table 3.", "We use the embeddings to represent the input tokens.", "In addition, we concatenate two dimensions to the input for each token : one dimension to denote (using 1 or 0 ) whether a given token is a type and another dimension to denote if a token is a variable.", "The model has a set of different sized filters, and each filter size has an associated number of filters to be applied (all are hyperparameters to 307 Name Description Token A word in the sentence.", "the model).", "The filters are applied to the input text (i.e. convolutions), and then max-pooled, flat-tened, concatenated, and a dropout layer ( p = 0 . 5) is then applied before being fed into a multilayer perceptron (MLP), with the number of hidden layers and their hidden units as hyperparameters.", "Finally, a softmax layer is used to output a binary decision.", "The model is implemented using the Keras library using binary cross-entropy as loss function, and the ADAM optimizer (Kingma and Ba, 2014).", "We tune the aforementioned hyperparameters on the development data and we use balanced over-sampling with replacement in order to adjust for the class imbalance in the data.", "Our tuned hyperparameters are as follows: filter window sizes ( 2 to 12 , then 14 , 16 , 18 , 20 ) with an associated number of filters ( 300 for the first five, 200 for the next four, 100 for the next three, then 75 , 70 , 50 ).", "One hidden layer of the MLP with 512 units is used with batch size 50 .", "Bidirectional LSTM (BiLSTM) The architecture takes as input a sequence of words, which are then mapped to word embeddings.", "For each token in the input sentence, we also include the inputs described in Table 3.", "In addition, the model uses one string feature we refer to as supertype.", "If the token is a type, then this feature is the string key of the embedding vector of its supertype or NONE otherwise.", "These features are mapped to a separate embedding space and then concatenated with the word embedding to form a single task-specific word representation.", "This allows us to capture useful information about each word, and also designate which words to focus on when processing the sentence.", "We use a neural sequence labeling architecture, based on the work of Lample et al. 
(2016) and Rei and Yannakoudakis (2016).", "The constructed word representations are given as input to a bidirectional LSTM (Hochreiter and Schmidhuber, 1997), and a context-specific representation of each word is created by concatenating the hidden representations from both directions.", "A hidden layer is added on top to combine the features from both directions.", "Finally, we use a softmax output layer that predicts a probability distribution over positive or negative assignment for a given edge.", "We also make use of an extension of neural sequence labeling that combines character-based word representations with word embeddings using a predictive gating operation (Rei et al., 2016).", "This allows our model to capture character-level patterns and estimate representations for previously unseen words.", "In this framework, an alternative word repre-308 sentation is constructed from individual characters, by mapping characters to an embedding space and processing them with a bidirectional LSTM.", "This representation is then combined with a regular word embedding by dynamically predicting element-wise weights for a weighted sum, allowing the model to choose for each feature whether to take the value from the word-level or character-level representation.", "The LSTM layer size was set to 200 in each direction for both wordand character-level components; the hidden layer d was set to size 50 .", "During training, sentences were grouped into batches of size 64 .", "Performance on the development set was measured at every epoch and training was stopped when performance had not improved for 10 epochs; the best-performing model on the development set was then used for evaluation on the test set.", "Evaluation is performed over edges, rather than sentences, in the test set.", "We measure performance using precision, recall and F 1 -score.", "We use the non-parametric paired randomisation test to detect significant differences in performance across classifiers.", "The convnet and BiLSTM models are trained and evaluated with as many sentences as there are edges: the source sentence is copied for each input edge, with inputs modified to reflect the relation of interest.", "We employed early stopping and dropout to avoid overfitting with these models.", "Table 5 shows the performance results of all classifiers considered.", "All three proposed models have significantly outperformed the NT baseline and Kristianto et", "al.'s (Kristianto et al., 2014) state-of-the-art SVM.", "The best performing model is the bidirectional LSTM ( F 1 = 78 . 98% ) which has significantly outperformed all other models ( = 0 . 
01 ).", "According to the results in Table 5, both deep neural network models have significantly outperformed classifiers based on other paradigms.", "This is consistent with the intuition that the language of mathematics is formulaic: we expect deep neural networks to effectively recognise patterns and identify correlations between tokens.", "The neural models outperform SVM+ despite the fact that the latter is a product of laborious manual feature engineering.", "In contrast, no man-Precision (%) Recall (%) F 1 -score (%) NT 30.30 82.94 44.39 SVM 55.39 76.36 64.21 SVM+ 71.11 72.74 71.91 Convnet 80.11 70.26 74.86 BiLSTM 83.11 74.77 78.98 Table 5: Model performance summary.", "ual feature engineering has been performed on the Convnet model (or indeed on any of the deep neural network models).", "The nearest type (NT) baseline demonstrates high recall but low precision.", "This is not surprising since the NT baseline is not capable of making a negative decision: it always assigns some type to all variables in a given sentence.", "We demonstrate that our data set and variable typing task are useful using a mathematical information retrieval (MIR) experiment.", "The hypothesis for our MIR experiment is two-fold:", "(a) types identified in the textual context for the variable typing task are also useful for text-based mathematical retrieval and", "(b) substituting raw symbols with types in mathematical expressions will have an observable effect to MIR.", "In order to motivate the second hypothesis, consider the following natural language query: Let x be a vector.", "In the context of MIR, mathematical expressions are represented using SLTs (Pattaniyil and Zanibbi, 2014) that are constructed by parsing presentation MathML.", "The expression x + y is represented by the SLT in figure", "1(a).", "The variable typing classifier and the type disambiguation algorithm determine the types of the variables x and y as vector.", "Thus, the variable nodes in figure", "1(a) will be substituted with their type, producing the SLT in figure", "1(b).", "The example query can be satisfied by identifying a vector y such that when added to x will produce the zero vector.", "This operation is abstract in mathematics and extends to objects beyond vectors, including integers.", "In an untyped formula index, there is no distinction between instances of x + y where the variables are integers or vectors.", "As a result, documents where both variables are integers might also be returned.", "In contrast, 309 x + ADJ y ADJ LINEAR FORM WITHIN VAR OP VAR WITHIN WITHIN vector + ADJ ADJ LINEAR FORM WITHIN VAR OP VAR WITHIN WITHIN vector", "a typed formula index will return instances of the typed SLT in figure", "1(b) where the variables are vectors, as opposed to integers.", "Therefore, a typed index can reduce the number of false positives and increase precision.", "Four MIR retrieval models are introduced in Section 7.3 designed to control for text index-ing/retrieval so that the effects of type-aware vs type-agnostic formula indexing and scoring can be isolated.", "These models make use of the Tangent formula indexing and scoring functions (Pattaniyil and Zanibbi, 2014), which we have implemented.", "We use the Cambridge University Math IR Test Collection (CUMTC) (Stathopoulos and Teufel, 2015) which is composed of 120 research-level mathematical information needs and 160 queries.", "The CUMTC is ideal for our evaluation for two reasons.", "First, topics in the CUMTC are expressed in natural language and are rich in 
mathematical types.", "This allows us to directly apply our best performing variable typing model (BiLSTM) in our retrieval experiment in order to extract variable typings for documents and queries.", "Second, the CUMTC uses the MREC as its underlying document collection, which enables downstream evaluation in an optimal setting for variable typing.", "Given a mathematical formula, the Tangent indexing algorithm starts from the root node of an SLT and generates symbol pair tuples in a depth-first manner.", "Symbol pair tuples record parent/child relationships between SLT nodes, the distance (number of edges) and vertical offset between them.", "At each step in the traversal, the index is updated to record one tuple representing the relationship between the current node and every node in the path to the SLT root.", "We have also implemented Tangent's method of indexing matrices, but we refer the reader to Pattaniyil and Zanibbi (2014) for further details.", "Tangent scoring proceeds as follows.", "For each query formula, the symbol pair tuples are generated and matched exactly to those in the document index.", "Let C denote the set of matched index formulae and | s | the number of symbol pairs in any given expression s in C .", "For each s in C , recall (R) is said to be | C | | Q | , where | C | and | Q | are the num-bers of tuples in C and the query formula Q respectively, and precision (P) is | C | | s | .", "Candidate s is assigned the F score of these precision and recall values.", "The mathematical context score for a given document d and query with formulae e 1 , . . . , e n is m ( d, e 1 , . . . , e n ) = n X j =1 | e j | t 1( d, e j ) P ni =1 | e i | where | e j | represents the number of tuples in expression e j and t 1( d, e j ) represents the top F-score for expression e i in document d .", "The final score for document d is a linear combination of the math context score above and its Lucene text score (L(d)): L ( d ) + (1 ) m ( d, e 1 , . . . 
, e n ) 7.2 Typed Tangent Indexing and Scoring We have applied the BiLSTM variable typing model to obtain variable typings for all symbols in the documents in the MREC.", "For each document in the collection our adapted Tangent formula indexer first groups the variable typing edges for that document according to the variable identifier involved.", "Subsequently, our typed indexing process applies a type disambiguation algorithm to determine which of the candidate types associated with the variable will be designated as its type.", "For a variable v in document d , our type disambiguation algorithm first looks at the known types suffix trie (KTST) containing all 1.23 million types in order to find a common parent be-310 tween the candidate types.", "If a common supertype T is discovered, then v is said to be of type T .", "Otherwise, the type disambiguation algorithm uses simple majority vote amongst the candidates to determine the final type for variable v .", "The type disambiguation algorithm is applied to every typing group until all variable typings have been processed.", "Variable groups with no type candidates (e.g., no variable typings have been extracted for a variable) are assigned a missing type symbol (*).", "Subsequently, variables in the SLT of each formula in d are replaced with their type or the missing type symbol.", "An index, referred to as the typed index, is generated by applying the tangent indexing process on the modified SLTs.", "The same process is applied to query formulae during query time in order to facilitate typed matching and scoring.", "We have replicated runs of the Lucene vector-space model (VSM) and BM25 models presented by Stathopoulos and Teufel (2016) on the CUMTC.", "Furthermore, we introduce four models based on Tangent indexing and scoring that represent different strategies in handling types in text and formulae.", "We refer to a model as typed if it uses the type-substituted version of the Tangent index and untyped otherwise.", "Text with types removed (RT): The Lucene score L ( d ) is computed over a text index with type phrases completely removed.", "This model is intended to isolate the performance of retrieval on the formula index alone.", "We consider both typed and untyped instances of this model.", "Text with types(TY): The Lucene score is computed over a text index that treats type phrases as atomic lexical tokens.", "This model is intended to simulate type-aware text that enables the application of variable typing.", "Both typed and untyped instances of this model are considered.", "Optimal values for the linear combination parameter are obtained using 13 queries in the de-velopment set of the CUMTC.", "We report mean average precision (MAP) for our models computed over all 160 queries in the main CUMTC.", "MAPs obtained over the CUMTC are low due to the difficulty of the queries rather than an unstable evaluation (Stathopoulos and Teufel, 2016).", "The paired randomisation test is used to test for significance in retrieval performance gains between the models.", "The results of our MIR experiments are presented in Table 6.", "The best performing model is TY/typed which significantly outperforms all other baselines ( p value < 0 . 05 for comparison with BM25 and p value < 0 . 
01 with all other models).", "The TY/typed model yields almost double the MAP performance of its untyped counterpart (TY/untyped, .083 MAP).", "In contrast, the RT/typed and RT/untyped models perform comparably (no significant difference) but poorly.", "This drop in MAP performance suggests that type phrases are beneficial for text-based retrieval of mathematics.", "Retrieval models employing formula indexing seem to be affected by both the presence of types in the text as well as in the formula index.", "The TY/typed model outperforms the TY/untyped model, which in turn outperforms RT/untyped.", "This suggests that gains in retrieval performance are strongest when types are used in both text and formula retrieval models using either approach alone do not perform as well.", "These results demonstrate that variable typing is a valuable task in MIR.", "This work introduces the new task of variable typing and an associated data set containing 33,524 labeled edges in 7,803 sentences.", "We have constructed three variable typing models and have shown that they outperform the current state-of-the-art methods developed for similar tasks.", "The BiLSTM model is the top performing model achieving 79% F 1 -score.", "This model is then evaluated in an extrinsic downstream taskMIR, where we augmented Tangent formula indexing with variable typing.", "A retrieval model employing the typed Tangent index outperforms all considered retrieval models demonstrating that our variable typing task, data and trained model are useful in downstream applications.", "We make our variable typing data set available through the Open Data Commons license." ]
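The Tangent scoring pipeline described in the passage above (exact symbol-pair matching, per-formula precision/recall, the tuple-weighted math context score, and the λ-combination with the Lucene score) is compact enough to sketch. The following is an illustrative Python rendering, not the authors' implementation: formulae are assumed to arrive as pre-extracted lists of symbol-pair tuples, and all function names are ours.

```python
# A minimal sketch of Tangent-style scoring, assuming each formula is already
# a list of hashable symbol-pair tuples (SLT extraction is abstracted away).
from collections import Counter

def f_score(precision, recall):
    """Harmonic mean of precision and recall; 0 if both are 0."""
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def formula_f_score(query_tuples, candidate_tuples):
    """Match symbol-pair tuples exactly; R = matched/|Q|, P = matched/|s|."""
    matched = sum((Counter(query_tuples) & Counter(candidate_tuples)).values())
    recall = matched / len(query_tuples) if query_tuples else 0.0
    precision = matched / len(candidate_tuples) if candidate_tuples else 0.0
    return f_score(precision, recall)

def math_context_score(doc_formulae, query_formulae):
    """m(d, e_1..e_n): tuple-count-weighted combination of the top F-score
    t1(d, e_j) that each query formula e_j achieves against document d."""
    total = sum(len(e) for e in query_formulae)
    if total == 0:
        return 0.0
    score = 0.0
    for e in query_formulae:
        t1 = max((formula_f_score(e, s) for s in doc_formulae), default=0.0)
        score += len(e) * t1 / total
    return score

def final_score(lucene_score, doc_formulae, query_formulae, lam=0.5):
    """lambda * L(d) + (1 - lambda) * m(d, e_1..e_n)."""
    return lam * lucene_score + (1 - lam) * math_context_score(doc_formulae, query_formulae)
```

A real index would, of course, invert the tuple-to-formula mapping rather than scan every document formula as this sketch does; the linear scan is kept only to make the scoring arithmetic explicit.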
[ "abstain", "abstain", "other", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "method", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "objective", "abstain" ]
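The type disambiguation step from the typed-indexing section above (common supertype first, majority vote as a fallback, the missing-type symbol "*" for empty candidate sets) can likewise be sketched. This is our reading, with the KTST abstracted behind a `supertypes_of` callable; that interface is an assumption, not the paper's.

```python
# Rough sketch of the type disambiguation described above; not the authors'
# code. `supertypes_of(t)` stands in for a known-types suffix trie (KTST)
# lookup and is assumed to return the set of supertypes of type t.
from collections import Counter

MISSING = "*"

def common_supertype(candidates, supertypes_of):
    """Return a type shared by the supertype sets of all candidates, if any."""
    shared = None
    for t in candidates:
        sup = supertypes_of(t)
        shared = set(sup) if shared is None else shared & set(sup)
    return min(shared) if shared else None  # deterministic pick from the set

def disambiguate(candidates, supertypes_of):
    if not candidates:
        return MISSING                      # no typings extracted for v
    shared = common_supertype(candidates, supertypes_of)
    if shared is not None:
        return shared                       # common supertype T found
    # otherwise: simple majority vote amongst the candidate types
    return Counter(candidates).most_common(1)[0][0]
```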
[ "Automatic text summarization has enjoyed great progress over the years and is used in numerous applications, impacting the lives of many.", "Despite this development, there is little research that meaningfully investigates how the current research focus in automatic summarization aligns with users' needs.", "To bridge this gap, we propose a survey methodology that can be used to investigate the needs of users of automatically generated summaries.", "Importantly, these needs are dependent on the target group.", "Hence, we design our survey in such a way that it can be easily adjusted to investigate different user groups.", "In this work we focus on university students, who make extensive use of summaries during their studies.", "We find that the current research directions of the automatic summarization community do not fully align with students' needs.", "Motivated by our findings, we present ways to mitigate this mismatch in future research on automatic summarization: we propose research directions that impact the design, the development and the evaluation of automatically generated summaries.", "The field of automatic text summarization has experienced great progress over the last years, especially since the rise of neural sequence-to-sequence models (e.g., Cheng and Lapata, 2016; See et al., 2017; Vaswani et al., 2017).", "The introduction of self-supervised transformer language models like BERT (Devlin et al., 2019) has given the field an additional boost (e.g., Liu et al., 2018; Liu and Lapata, 2019; Lewis et al., 2020; Xu et al., 2020).", "The (often implicit) goal of automatic text summarization is to generate a condensed textual version of the input document(s), whilst preserving the main message.", "This is reflected in today's most common evaluation metrics for the task; they focus on aspects such as informativeness, fluency, succinctness and factuality (e.g., Lin, 2004; Nenkova and Passonneau, 2004; Paulus et al., 2018; Narayan et al., 2018b; Goodrich et al., 2019; Wang et al., 2020; Xie et al., 2021). [Figure 1: The classical approach to textual summarization, in terms of input factors, purpose factors and output factors.]", "The needs of the users of the summaries are often not explicitly addressed, despite their importance in explicit definitions of the goal of automatic summarization (Spärck Jones, 1998; Mani, 2001a).", "Mani defines this goal as: 'to take an information source, extract content from it, and present the most important content to the user in a condensed form and in a manner sensitive to the user's or application's needs.' Different user groups have different needs.", "Investigating these needs explicitly is critical, given the impact of adequate information transfer (Bennett et al., 2012).", "We propose a survey methodology to investigate these needs.", "In designing the survey, we take stock of past work by Spärck Jones (1998), who argues that in order to generate useful summaries, one should take the context of a summary into account, a statement that has been echoed by others (e.g., Mani, 2001a; Aries et al., 2019).", "To do this in a structured manner, Spärck Jones introduces three context factor classes: input factors, purpose factors and output factors, which respectively describe the input material, the purpose of the summary, and what the summary should look like.", "We structure our survey and its implications around these factors.", "In Figure 1 we give an example of incorporating the context factors in the design of automatic summarization methods.", "Our proposed survey can be flexibly adjusted to different user groups.", "Here we turn our focus to university students as a first stakeholder group.", "University students are a particularly relevant group to focus on first, as they benefit from using pre-made summaries in a range of study activities (Reder and Anderson, 1980), but the desired characteristics of these pre-made summaries have not been extensively investigated.", "We use the word pre-made to differentiate such summaries from the ones that users write themselves.", "Automatically generated summaries fall in the pre-made category, and should thus have the characteristics that users wish for pre-made summaries.", "Motivated by our findings, we propose important future research directions that directly impact the design, development, and evaluation of automatically generated summaries.", "We contribute the following: C1 We design a survey that can be easily adapted and reused to investigate and understand the needs of the wide variety of users of automatically generated summaries; C2 We develop a thorough understanding of how automatic summarization can optimally benefit users in the educational domain, which leads us to unravel important and currently underexposed research directions for automatic summarization; C3 We propose a new, feasible and comprehensive evaluation methodology to explicitly evaluate the usefulness of a generated summary for its intended purpose.", "In Section 1 we introduced the context factors as proposed by Spärck Jones (1998).", "Each context factor class can be divided into more fine-grained subclasses.", "To ensure the flow of the paper, we list an overview in Appendix A. Below, we explain and use the context factors and their fine-grained subclasses to structure the related work.", "As our findings have implications for the evaluation of automatic summarization, we also discuss evaluation methods.", "Lastly, we discuss the use-cases of automatic summaries in the educational domain.", "Input factors.", "We start with the fine-grained input factor unit, which describes how many sources are to be summarized at once, and the factor scale, which describes the length of the input data.", "These factors are related to the difference between single- and multi-document summarization (e.g., Chopra et al., 2016; Cheng and Lapata, 2016; Wang et al., 2016; Yasunaga et al., 2017; Nallapati et al., 2017; Narayan et al., 2018b; Liu and Lapata, 2019).", "Scale plays an important role when material shorter than a single document is summarized, such as in sentence summarization (e.g., Rush et al., 2015).", "Regarding the genre of the input material, most current work focuses on the news domain or Wikipedia (e.g., Sandhaus, 2008; Hermann et al., 2015; Koupaee and Wang, 2018; Liu et al., 2018; Narayan et al., 2018a).", "A smaller body of work addresses different input genres, such as scientific articles (e.g., Cohan et al., 2018), forum data (e.g., Völske et al., 2017), opinions (e.g., Amplayo and Lapata, 2020) or dialogues (e.g., Liu et al., 2021).", "These differences are also closely related to the input factor subject type, which describes the difficulty level of the input material.", "The factor medium refers to the input language.", "Most automatic summarization work is concerned with English as the input language, although there are exceptions, such as Chinese (e.g., Hu et al., 2015) or multilingual input (Ladhak et al., 2020).", "The last input factor is structure.", "Especially in recent neural approaches, explicit structure of the input text is often ignored.", "Exceptions include graph-based approaches, where implicit document structure is used to summarize a document (e.g., Tan et al., 2017; Yasunaga et al., 2017), and summarization of tabular data (e.g., Zhang et al., 2020a) or screenplays (e.g., Papalampidi et al., 2020).", "Purpose factors.", "Although identified as the most important context factor class by Spärck Jones (1998), and followed in this by, for example, Mani (2001a), purpose factors do not receive a substantial amount of attention.", "There are some exceptions, e.g., query-based summarization (e.g., Nema et al., 2017; Litvak and Vanetik, 2017), question-driven summarization (e.g., Deng et al., 2020), personalized summarization (e.g., Móro and Bieliková, 2012) and interactive summarization (e.g., Hirsch et al., 2021).", "They take the situation and the audience into account.", "The use-cases of the generated summaries are also clearer in these approaches.", "Output factors.", "We start with the output factors style and material.", "The latter is concerned with the degree of coverage of the summary.", "Most generated summaries have an informative style and cover most of the input material.", "There are exceptions, e.g., the XSum dataset (Narayan et al., 2018a), which constructs single-sentence summaries and is therefore more indicative in terms of style; inevitably, less of the input material is covered.", "Not many summaries have a critical or aggregative style.", "Aggregative summaries put different source texts in relation to each other, to give a topic overview.", "Most popular summarization techniques focus on a running format.", "Work on template-based (e.g., Cao et al., 2018) and faceted (e.g., Meng et al., 2021) summarization follows a more headed (structured) format.", "Falke and Gurevych (2017) build concept maps and Wu et al. (2020) make knowledge graphs.", "The difference between abstractive and extractive summarization is likely the best-known distinction in output type (e.g., Nallapati et al., 2017; See et al., 2017; Narayan et al., 2018b; Gehrmann et al., 2018; Liu and Lapata, 2019), although it is not entirely clear which output factor best describes the difference.", "In Section 5 we use the context factors to identify future research directions, based on the difference between our findings and the related work.", "Evaluation methods for automatic summarization can be grouped into intrinsic vs. extrinsic methods (Mani, 2001b).", "Intrinsic methods evaluate the model itself, e.g., on informativeness or fluency (Paulus et al., 2018; Liu and Lapata, 2019).", "Extrinsic methods target how a summary performs when used for a task (Dorr et al., 2005; Wang et al., 2020).", "Extrinsic methods are resource-intensive, explaining the popularity of intrinsic methods.", "Evaluation methods can also be grouped into automatic vs. human evaluation methods.", "Different automatic metrics have been proposed, such as ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2020b), which respectively evaluate lexical and semantic similarity.", "Other methods use an automatic question-answering evaluation methodology (Wang et al., 2020; Durmus et al., 2020).", "Most human evaluation approaches evaluate intrinsic factors such as informativeness, readability and conciseness (DUC, 2003; Nallapati et al., 2017; Paulus et al., 2018; Liu and Lapata, 2019), factors that are difficult to evaluate automatically.", "There are also some extrinsic human evaluation methods, where judges are asked to perform a certain task based on the summary (e.g., Narayan et al., 2018b).", "So far, usefulness (see footnote 1) has not been evaluated in a feasible and comprehensive manner, whereas it is an important metric to evaluate whether summaries fulfil users' needs.", "Therefore, we bridge the gap by introducing a feasible and comprehensive evaluation methodology to evaluate usefulness.", "Summaries play a prominent role in education.", "Reder and Anderson (1980) find that students who use a pre-made summary score better on a range of study activities than students who do not use such a summary.", "As the quality of automatically generated summaries increases (e.g., Lewis et al., 2020; Xu et al., 2020), so does the potential to use them in the educational domain, especially given the increasing importance of digital tools and devices for education (Luckin et al., 2012; Hashim, 2018).", "With these developments in mind, it is critical that educators are aware of the pedagogical implications; they need to understand how to best make use of all new possibilities (Hashim, 2018; Amhag et al., 2019).", "The outcomes of our survey result in concrete suggestions for developing methods for automatic summarization in the educational domain, whilst taking students' needs into account.", "Here we detail our survey procedure.", "For concreteness, we present the details with our intended target group in mind.", "The context factors form the backbone of our survey, and the setup can be easily adjusted to investigate the needs of different target groups.", "For example, we ask participants about a pre-made summary for a recent study activity, but it is straightforward to adapt this to a different use-case that is more suitable for other user groups.", "Footnote 1: We follow the definition of the English Oxford Learner's Dictionary (www.oxfordlearnersdictionaries.com/definition/english/) for usefulness: 'the fact of being useful or possible to use', where useful is defined as 'that can help you to do or achieve what you want'.", "We recruited participants among students at universities across the Netherlands by contacting ongoing courses and student associations, and by advertisements on internal student websites.", "As an incentive, we offered a ten-euro shopping voucher to ten randomly selected participants.", "A total of 118 participants started the survey and 82 completed the full survey, resulting in a 69.5% completion rate.", "We only include participants who completed the study in our analysis.", "Participants spent 10 minutes on average on the survey.", "In the final part of our survey we ask participants to indicate their current level of education and main field of study.", "The details are given in Figure 2. 3.2 Survey procedure. Figure 3 shows a brief overview of our survey procedure.", "A detailed account is given in Appendix B. We arrived at the final survey version after a number of pilot runs where we ensured participants understood their task and all questions.", "We ran the survey with SurveyMonkey (surveymonkey.com).", "A verbatim copy is included in Appendix C and released under a CC BY license (see footnote 2).", "Introduction.", "The survey starts with an introduction where we explain what to expect, how we process the data and that participation is voluntary.", "After participants agree with this, an explanation of the term pre-made summary follows.", "As we do not want to bias participants by stating that the summary was automatically generated, we explain that the summary can be made by anyone, e.g., a teacher, a well-performing fellow student, the authors of the original material, or a computer.", "Recall that an automatically generated summary is a pre-made summary.", "Hence, our survey identifies the characteristics an automatically generated summary should have.", "We also give examples of types of pre-made summaries; based on the pilot experiments we noticed that people missed this information. (Footnote 2: https://github.com/maartjeth/survey_useful_summarization)", "We explicitly state that these are just examples and that participants can come up with any example of a helpful pre-made summary.", "Context factors.", "In the main part of our survey we focus on the context factors.", "First, we ask participants whether they have made use of a pre-made summary in one of their recent study activities.", "If so, we ask them to choose the study activity where a summary was most useful.", "We call this group the Remembered group, as they describe an existing summary from memory.", "If people indicate that they have not used a pre-made summary in one of their recent study activities, we ask them whether they can imagine a situation where a pre-made summary would have been helpful.", "If not, we ask them why not and lead them to the final background questions and closing page.", "If yes, we ask them to keep this imaginary situation in mind for the rest of the survey.", "We call this group the Imagined group.", "Now we ask the Remembered and Imagined groups about the input, purpose and output factors of the summary they have in mind.", "We ask questions for each of the context factor subclasses that we discussed in Section 2. At this point, the two groups are in different branches of the survey.", "The difference is mainly linguistically motivated: in the Imagined group we use verbs of probability instead of asking to describe an existing situation.", "Some questions can only be asked in the Remembered group, e.g., how helpful the summary was.", "In the first context factor question we ask what the study material consisted of.", "We give a number of options, as well as an 'other' checkbox.", "To avoid position bias, all answer options for multiple choice and multiple response questions in the survey are randomized, with the 'other' checkbox always as the last option.", "If participants do not choose the 'mainly text' option, we tell them that we focus on textual input in the current study (see footnote 3) and ask whether they can think of a situation where the input did consist of text.", "If not, we lead them to the background questions and closing page.", "If yes, they proceed to the questions that give us a full overview of the input, purpose and output factors of the situation participants have in mind.", "Finally, we ask the Remembered group to suggest how their described summary could be turned into their ideal summary. (Footnote 3: Different modalities are also important to investigate, but we leave this for future work to ensure clarity of our results.)", "Trustworthiness and future features questions.", "So far we have included the possibility that the summary was machine-generated, but also explicitly included other options so as not to bias participants.", "At this point we acknowledge that machine-generated summaries could give rise to additional challenges and opportunities.", "Hence, we include some exploratory questions to get an understanding of the trust users would have in machine-generated summaries and to get ideas for the interpretation of the context factors in exploratory settings.", "For the first questions we tell participants to imagine that the summary was made by a computer, but contained all needs identified in the first part of the survey.", "We then ask them about trust in computer- and human-generated summaries.", "Next, we ask them to imagine that they could interact with the computer program that made the summary in the form of a digital assistant.", "We tell them not to feel restricted by the capabilities of today's digital assistants.", "The verbatim text is given in Appendix C. We ask participants to select the three most and the three least useful features for the digital assistant, similar to ter Hoeve et al. (2020).", "For each question we examine the outcomes of all respondents together and of different subgroups (Table 1).", "For space and clarity reasons, we present the results of all respondents together, unless interesting differences between groups are found.", "We use the question formulations as used for the Remembered group and abbreviate answer options.", "Answers to multiple choice and multiple response questions are presented in an aggregated manner, and we ensure that none of the open answers can be used to identify individual participants.", "Of our participants, 78.0% were led to the Remembered branch and, of the remaining 22.0%, 78.2% were led to the Imagined branch.", "We asked the few remaining participants why they could not think of a case where a pre-made summary could be useful for them.", "People answered that they would not ...", "[Table 1: Levels of investigation. (1) All respondents together; (2) Remembered branch vs. Imagined branch; (3) different study fields; (4) different study levels; (5) different levels of how helpful the summary was according to participants, rated on a 5-point Likert scale (note that only the Remembered group answered this question).]", "Figure 4 shows the input factor results.", "We highlight some here.", "Textual input is significantly more popular than other input types (Figure 4a; see footnote 4), stressing the relevance of automatic text summarization.", "People described a diverse input for scale and unit (Figure 4b), much more diverse than the classical focus of automatic summarization suggests.", "Most input had a considerable amount of structure (Figure 4e).", "Structure is often discarded in automatic summarization, although it can be very informative.", "Figure 5 shows the purpose factor results.", "Participants indicated that the summary was helpful or very helpful (Figure 5f), which allows us to draw valid conclusions from the survey (see footnote 5).", "We now highlight some results from the other questions in this category.", "For the intended audience of the summaries, students selected levels (4) and (5) ('a lot (4) or full (5) domain knowledge is expected from the users of the summary') significantly more often than the other options (Figure 5d).", "Although perhaps unsurprising given our target group, it is an important outcome, as this requires a different level of detail than, for example, a brief overview of a news article.", "People used the summaries for many different use-cases (Figure 5e), whereas current research on automatic summarization mainly focuses on giving an overview of the input.", "We show the results for the Remembered vs. Imagined splits, as the Imagined group chose refresh memory and overview more often than the Remembered group (Fisher's exact test, p < 0.05).", "(Footnote 4: This is based on people's initial responses and not on the follow-up question asked if they selected an option other than 'text'.)", "(Footnote 5: Because we do not find significant differences in the overall results when we exclude the few participants who did not find their summary helpful, and we do not find many correlations w.r.t. how helpful a summary was and a particular context factor, we include all participants in the analysis, regardless of how helpful they found their summary, for completeness.)", "Although not significant after a Bonferroni correction, this can still be insightful for future research directions.", "Lastly, participants in the Imagined group ticked more boxes than participants in the Remembered group: 3.33 vs. 2.57 per participant on average, stressing the importance of considering many different use-cases for automatically generated summaries.", "Figure 6 shows the results for the output factor questions.", "Textual summaries were significantly more popular than other summary types (Figure 6a), which again stresses the importance of automatic text summarization.", "Most participants indicated that the summary covered (or should cover) most of the input material (Figure 6c).", "For the output factor style we find an interesting difference between the Remembered and Imagined groups (Figure 6d).", "Whereas the Remembered group described significantly more often an informative summary, the Imagined group opted significantly more often for a critical or aggregative summary.", "Most research on automatic summarization focuses on informative summaries only.", "For the output factor structure (Figure 6b), people described a substantially richer format of the pre-made summaries than adopted in most research on automatic summarization.", "Instead of simply a running text, the vast majority of people indicated that the summary contained (or should contain) structural elements such as special formatting, diagrams, headings, etc.", "Moreover, the Imagined group ticked more answer boxes on average than the Remembered group: 4.17 vs. 3.56 per participant, indicating a desire for structure in the generated summaries, which is supported by the open answer questions.", "Open answer questions.", "We asked participants in the Remembered group how the summary could be transformed into their ideal summary, and 86.9% of these participants made suggestions.", "Many of those include adding additional structural elements to the summary, like figures, tables or structure in the summary text itself.", "For example, one of the participants wrote: 'An ideal summary is good enough to fully replace the original (often longer) texts contained in articles that need to be read for exams.", "The main purpose behind this is speed of learning from my experience.", "More tables, graphs and visual representations of the study material and key concepts / links would improve the summary, as I would faster comprehend the study material.'", "[Figure (survey questions excerpt): (a) Situation (1): What was the goal of this study activity? (MC); (b) Situation (2): Who made this pre-made summary? (MC, only if Remembered); (c) Situation (3): The summary was made specifically to help me (and potentially my fellow students) with my study activity (LS, only if Remembered).]", "Another participant wrote: 'colors and a key for color-coding different sections, such as definitions on the left maybe and then the rest of the page reflects the structure of the course material with notes on the readings that have many headings and subheadings.'", "Another theme is the desire to have more examples in the summary.", "One participant wrote: 'More examples i think. For me personally i need examples to understand the material. Now i needed to imagine them myself.'", "Some participants wrote that they would like a more personalized summary, for example: 'I'd highlight some things I find difficult. So I'd personalise the summary more.'", "Another participant wrote: 'Make it more personalized may be. These notes were by another student. I might have focussed more on some parts and less on others.'", "4.5 Trustworthiness and future features. Of all participants, 48. ... is machine- or human-generated, as long as the quality is as good as a human-generated one.", "This last point is reflected in which types of summaries participants would trust more.", "People opted significantly more often for a human-generated one.", "For the future feature questions, adding more details to the summary and answering questions based on the content of the summary were very popular.", "We give a full account in Appendix D. 5 Implications and Perspectives. 5.1 Future research directions. Our findings have important implications for the design and development of future automatic summarization methods.", "We present these in Table 2, per context factor.", "Summarizing, the research developments as summarized in Section 2 are encouraging; yet, given that automatic summarization methods increasingly mediate people's lives, we argue that more attention should be devoted to their stakeholders, i.e., to the purpose factors.", "Here we have shown that students, an important stakeholder group, have different expectations of pre-made summaries than what most automatic summarization methods offer.", "[Figure 6 (excerpt): (a) Format (1): What was the type of the summary? (MC); (b) Format (2): How was the summary structured? (MR); (c) Material: How much of the study material was covered by the summary? (LS).]", "These differences include the type of input material that is to be summarized, but also how these summaries are presented.", "Table 2. Input factors: a stronger focus on developing methods that can handle a wide variety and a mixture of different types of input documents at once; understand the relationships between different input documents; and use the structure of the input document(s).", "Purpose factors: explicitly define a standpoint on the purpose factors in each research project; include a comprehensive evaluation methodology to evaluate usefulness.", "We propose this in Section 5.2.", "Output factors: a stronger focus on developing methods that can output different summary styles, e.g., informative, aggregative or critical.", "Especially the last two require a deeper understanding of the input material than current models have; explicitly model and understand relationships between different elements in the summary and potentially relate this back to the input document(s).", "Presumably, this also holds for other stakeholder groups, and thus we hope to see our survey used for different target groups in the future.", "Datasets.", "To support these future directions we need to expand efforts on using and collecting a wide variety of datasets.", "Most recent data collection efforts are facilitating different input factors; the purpose and output factors need more emphasis.", "Our findings also impact the evaluation of summarization methods.", "We discuss this next.", "Following Spärck Jones (1998) and Mani (2001a), we argue that a good choice of context factors is crucial in producing useful summaries for users.", "It is important to explicitly evaluate this.", "The few existing methods to evaluate usefulness are very resource-demanding (e.g., Riccardi et al., 2015) or not comprehensive enough (e.g., DUC, 2003; Dorr et al., 2005).", "Thus, we propose a feasible and comprehensive method to evaluate usefulness.", "For the evaluation methodology, we again use the context factors.", "Before the design and development of the summarization method, the intended purpose factors need to be defined.", "Especially the fine-grained factor use is important here.", "Next, the output factors need to be evaluated on the use factors.", "For this, we take inspiration from research on simulated work tasks (Borlund, 2003).", "Evaluators should be given a specific task to imagine, e.g., writing a news article, or studying for an exam.", "This task should be relatable to the evaluators, so that reliable answers can be obtained (Borlund, 2016).", "With this task in mind, evaluators should be asked to judge two summaries in a pairwise manner on their usefulness, in the following format: 'The [output factor] of which of these two summaries is most useful to you to [use factor]?'", "For example: 'The style of which of these two summaries is most useful to you to substitute a chapter that you need to learn for your exam preparation?'", "It is critical to ensure that judges understand the meaning of each of the evaluation criteria (style and substitute in the example).", "We provide example questions for each of the use and output factors in Appendix E. 6 Conclusion. In this paper we focused on users of automatically generated summaries and argued for a stronger emphasis on their needs in the design, development and evaluation of automatic summarization methods.", "We led by example and proposed a survey methodology to identify these needs.", "Our survey is deeply grounded in past work by Spärck Jones (1998) on context factors for automatic summarization and can be re-used to investigate a wide variety of users.", "In this work we use our survey to investigate the needs of university students, an important target group of automatically generated summaries.", "We found that the needs identified by our participants are not fully supported by current automatic summarization methods, and we proposed future research directions to accommodate these needs.", "Finally, we proposed an evaluation methodology to evaluate the usefulness of automatically generated summaries.", "With this work we hope to take a step in the right direction to make research into automatic summarization more inclusive, by explicitly taking the needs of users of these summaries into account.", "As stressed throughout the paper, these needs differ per user group, and therefore it is critical that a wide variety of user groups be investigated.", "There might also be within-group differences.", "For example, in this work we have focused on students from universities in one country, but students attending universities in other geographical locations and with different cultures might express different needs.", "It is important to take these considerations into account, to limit the risk of overfitting on a particular user group and potentially harming other user groups.", "We thank Jacobijn Sandberg and Ana Lucic for helpful comments and feedback.", "This research was supported by the Nationale Politie.", "All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors." ]
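The group comparisons reported above (per-option selection counts compared across the Remembered and Imagined branches with Fisher's exact test, then a Bonferroni correction across options) are straightforward to reproduce. A small sketch using SciPy follows; the counts are invented for illustration and are not the paper's data.

```python
# Illustrative sketch of the significance testing described above: Fisher's
# exact test on per-option selection counts with a Bonferroni correction.
from scipy.stats import fisher_exact

def option_p_value(selected_a, total_a, selected_b, total_b):
    """2x2 contingency table: ticked vs. not ticked, in group A vs. group B."""
    table = [[selected_a, total_a - selected_a],
             [selected_b, total_b - selected_b]]
    _, p = fisher_exact(table)
    return p

# Hypothetical counts: (ticked in A, size of A, ticked in B, size of B).
options = {"refresh memory": (30, 64, 5, 18), "overview": (25, 64, 4, 18)}
alpha = 0.05 / len(options)  # Bonferroni-corrected threshold
for name, counts in options.items():
    p = option_p_value(*counts)
    print(f"{name}: p = {p:.3f}; significant after correction: {p < alpha}")
```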
[ "abstain", "abstain", "objective", "abstain", "objective", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "objective", "other", "abstain", "other", "method", "abstain", "abstain", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
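The pairwise usefulness questions proposed in the evaluation methodology above are fully templated, so generating them is mechanical. A minimal sketch follows; the factor lists are illustrative placeholders, since the paper's full lists live in its Appendix E.

```python
# Sketch of generating pairwise usefulness questions from the template
# "The [output factor] of which of these two summaries is most useful to you
# to [use factor]?". Factor lists below are our assumptions, not the paper's.
OUTPUT_FACTORS = ["style", "format", "material"]
USE_FACTORS = [
    "substitute a chapter that you need to learn for your exam preparation",
    "refresh your memory before an exam",
]

def pairwise_questions(output_factors, use_factors):
    template = ("The {out} of which of these two summaries is most useful "
                "to you to {use}?")
    return [template.format(out=o, use=u)
            for o in output_factors for u in use_factors]

for q in pairwise_questions(OUTPUT_FACTORS, USE_FACTORS):
    print(q)
```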
[ "Adaptive policies are better than fixed policies for simultaneous translation, since they can flexibly balance the tradeoff between translation quality and latency based on the current context information.", "But previous methods for obtaining adaptive policies either rely on a complicated training process or underperform simple fixed policies.", "We design an algorithm to achieve adaptive policies via a simple heuristic composition of a set of fixed policies.", "Experiments on Chinese-English and German-English translation show that our adaptive policies can outperform fixed ones by up to 4 BLEU points for the same latency, and, more surprisingly, they even surpass the BLEU score of full-sentence translation in the greedy mode (and come very close to the beam mode), but with much lower latency.", "Simultaneous translation (ST) aims to provide good translation quality while keeping the latency of the translation process as low as possible.", "This is very important for scenarios that require simultaneity, such as international summits and negotiations.", "For this, human interpreters usually start translation before the source sentence ends.", "However, this makes the translation process much more challenging than full-sentence translation, because to balance translation quality and latency, interpreters need to make decisions on when to continue translation and when to stop temporarily to wait for more source-side information, which are difficult, especially for syntactically divergent language pairs, such as German and English.", "The above decisions can be considered as two actions: READ (wait for a new source word) and WRITE (emit a translated target word) (Gu et al., 2017).", "Then we only need to decide which action to choose at each step, and the solution can be represented by a policy.", "[Figure 1: An adaptive policy (in bold arrows) composed of three wait-k policies (k = 1, 2, 3), illustrated on a Chinese-to-English example ('He probably should not be responsible for this').]", "Earlier works (Yarmohammadi et al., 2013; Bangalore et al., 2012; Fügen et al., 2007; Sridhar et al., 2013; Jaitly et al., 2016) study policies as a part of a speech-to-speech ST system, where the policies usually try to separate the source sentence into several chunks that can be translated safely.", "Recent works focus on obtaining policies for text-to-text ST, which can be generally divided into two categories: fixed and adaptive.", "Fixed policies (Ma et al., 2019; Dalvi et al., 2018) usually follow some simple rules to choose actions.", "For example, the wait-k policy by Ma et al. (2019) first chooses k READ actions, and then chooses WRITE and READ alternately.", "This kind of policy does not utilize context information and can be either too aggressive or too conservative in different cases.", "By contrast, adaptive policies try to make decisions on the fly using the currently available information.", "It is obvious that this kind of policy is more desirable for ST than fixed ones, and different methods have been explored to achieve an adaptive policy.", "The majority of such methods (Grissom II et al., 2014; Cho and Esipova, 2016; Gu et al., 2017; Alinejad et al., 2018; Zheng et al., 2019a) are based on full-sentence translation models, which may be simple to use but cannot outperform fixed policies applied with genuinely simultaneous models trained for ST (Ma et al., 2019).", "Other methods (Arivazhagan et al., 2019; Zheng et al., 2019b) try to learn a policy together with the underlying translation model, but they rely on a complicated and time-consuming training process.", "In this paper, we propose to achieve an adaptive policy via a much simpler heuristic composition of a set of wait-k policies (e.g., k = 1, ..., 10).", "See Fig. 1 for an example.", "To further improve the translation quality of our method, we apply ensembles of models trained with different wait-k policies.", "Our experiments on Chinese-English and German-English translation show that our method can achieve up to 4 BLEU points of improvement over the wait-k method at the same latency.", "More interestingly, compared with full-sentence translation, our method achieves higher BLEU scores than greedy search but with much lower latency, and is close to the results from beam search.", "Full-sentence translation.", "A neural machine translation (NMT) model usually consists of two components: an encoder, which encodes the source sentence x = (x_1, \ldots, x_m) into a sequence of hidden states, and a decoder, which sequentially predicts target tokens conditioned on those hidden states and previous predictions.", "The probability of the predicted target sequence y = (y_1, \ldots, y_n) is p(\mathbf{y} \mid \mathbf{x}) = \prod_{t=1}^{|\mathbf{y}|} p(y_t \mid \mathbf{x}, \mathbf{y}_{<t}), where y_{<t} = (y_1, \ldots, y_{t-1}) denotes the target sequence predicted before step t.", "Simultaneous translation.", "Ma et al. (2019) propose a prefix-to-prefix framework to train models to make predictions conditioned on partial source sentences.", "In this way, the probability of the predicted sequence y becomes p_g(\mathbf{y} \mid \mathbf{x}) = \prod_{t=1}^{|\mathbf{y}|} p(y_t \mid \mathbf{x}_{\le g(t)}, \mathbf{y}_{<t}), where g(t) is a monotonic non-decreasing function of t, denoting the number of processed source tokens when predicting y_t.", "This function g(t) can be used to represent a policy for ST. Ma et al. (2019) introduce a kind of fixed policy, called the wait-k policy, that can be defined by the function g_k(t) = \min\{|\mathbf{x}|, t + k - 1\}.", "Intuitively, this policy first waits for k source tokens and then outputs predicted tokens concurrently with the rest of the source sentence.", "Assume we have a set of wait-k policies and the corresponding models M_k (k = k_min, ..., k_max).", "We can obtain an adaptive policy whose lag at each step is between k_min and k_max, meaning that at each step, the target sequence falls behind the source sequence by at most k_max tokens and at least k_min tokens.", "At each step, there is a wait-k policy synchronized with the adaptive policy, meaning that they have the same lag at that step.", "Specifically, at any step t, if the lag of the adaptive policy is k', then we apply the NMT model with the wait-k' policy and force it to predict the existing target tokens until step t, when the model will make a new prediction as the output of step t.", "However, the above method only shows how to simulate the adaptive policy to make a prediction at one step if we would like to write at that step, but it does not tell us at which steps we should write.", "We utilize the model confidence to make such a decision.", "Specifically, we set a probability threshold ρ_k for each wait-k policy.", "At each step, if the NMT model follows a wait-k' policy and predicts the most likely token with probability higher than the threshold ρ_k', then we consider the model confident in this prediction, and choose the WRITE action; otherwise, we choose the READ action.", "Figure 2 gives an example of this process.", "We define the process of applying a wait-k model M_k with a wait-k policy on a given sequence pair (x, y) by y_top, p_top ← P_k(M_k, x, y), which forces model M_k to predict y, and returns the top token y_top at the final step with the corresponding probability p_top.", "The process of reading and returning a new source token is denoted by READ(), and the expression x ∘ x represents appending the element x to the end of the sequence x.", "We denote by <s> and </s> the start symbol and end symbol of a sequence.", "Then Algorithm 1 gives the pseudocode of the above method.", "Using the corresponding model M_k with each wait-k policy may not give us the best performance.", "If we have a set of models trained independently with different wait-k policies, then we can apply an ensemble of those models (Dietterich, 2000; Hansen and Salamon, 1990) to improve the translation quality, as is also done to improve the translation quality of full-sentence translation (Stahlberg and Byrne, 2017).", "However, there may be two issues with applying an ensemble of all models: (1) the runtime for each prediction could be longer, resulting in higher latency; and (2) the translation accuracy may be worse, for the best model for one policy may give bad performance when doing inference with another policy.", "To avoid these, we propose to apply an ensemble of the top-3 models for each policy.", "That is, we first generate distributions with the top-3 models independently under the same policy, and then take the arithmetic average of the three distributions as the final token distribution at that step.", "Datasets and models.", "We conduct experiments on Chinese-English (ZH-EN) and German-English (DE-EN) translation.", "For ZH-EN, we use the NIST corpus (2M sentence pairs) as the training set, NIST 2006 as the dev set, and NIST 2008 as the test set.", "For DE-EN, we use the WMT15 parallel corpus for training, newstest-2013 for validation and newstest-2015 for testing.", "All datasets are tokenized and segmented into sub-word units with byte-pair encoding (Sennrich et al., 2016).", "We take Transformer-base (Vaswani et al., 2017) as our model architecture, and follow Ma et al. (2019) to train our models with wait-k policies for integers 1 ≤ k ≤ 10.", "In the following experiments, we only use catchup (Ma et al., 2019) for DE-EN translation, where we read one additional source token after every 6 predictions.", "We use BLEU (Papineni et al., 2002) as the translation quality metric, and Average Lagging (AL) (Ma et al., 2019) as the latency metric, which measures the lag behind the source in terms of the number of source tokens.", "Performance with different policies.", "We first evaluate the performance of each model with different policies, which helps us to choose models for different policies.", "Specifically, we apply each model with ten different wait-k policies on the dev set to compare the performance.", "Fig. 3 shows the results of five models.", "We find that the best model for one policy may not be the one trained with that policy.", "For example, on ZH-EN translation, the best model for the wait-1 policy is the one trained with the wait-3 policy.", "Further, no single model achieves the best performance for all policies.", "Comparing different methods.", "We compare our method with others from the literature: the wait-k method (Ma et al., 2019) (train and test models with the same wait-k policy), the test-time wait-k method (Ma et al., 2019) (apply a full-sentence model with wait-k policies), wait-if-diff (Cho and Esipova, 2016) (start with s_0 source tokens, and choose to read only if the top token at the t-th step differs from that at the (t − δ)-th step), and wait-if-worse (Cho and Esipova, 2016) (start with s_0 source tokens, and choose to read only if the top probability at the t-th step is smaller than that at the (t − δ)-th step).", "[Figure 3: Performance of models with different policies on the dev set (BLEU against AL for ZH-EN and DE-EN, for the wait-1, wait-3, wait-5, wait-7 and wait-9 models).]", "For wait-if-diff we set s_0 ∈ {4, 6} and δ ∈ {2, 4}; and for wait-if-worse we set s_0 ∈ {1, 2, 4, 6} and δ ∈ {1, 2}.", "For our method, we test three different cases: (1) single, where for each policy we apply the corresponding model trained with the same policy; (2) ensemble top-3, where for each policy we apply the ensemble of the 3 models that achieve the highest BLEU scores with that policy on the dev set; (3) ensemble all, where we apply the ensemble of all 10 models for each policy.", "For the thresholds, we first choose ρ_1 and ρ_10, and the other thresholds are computed as ρ_i = ρ_1 − d · (i − 1) for integers 1 ≤ i ≤ 10, where d = (ρ_1 − ρ_10) / 9.", "We test with ρ_1 ∈ {0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9} and ρ_10 = 0, and with ρ_1 = 1 and ρ_10 ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, for a total of 18 different settings in our experiments.", "The reason behind these settings is that we assume our adaptive policy should be neither too aggressive nor too conservative (as mentioned at the beginning of Section 3).", "The policy is the most aggressive for k = 1, so we set ρ_1 to be the largest; while for k = 10 the policy is the most conservative, so we set ρ_10 to be the smallest.", "The comparison is provided in Fig. 4 (the corresponding numeric scores are provided in Appendix A).", "Compared with the wait-k method, our single method achieves an improvement of up to 2 BLEU points, and our ensemble top-3 achieves an improvement of up to 4 BLEU points.", "Compared with full-sentence translation, our ensemble top-3 surprisingly outperforms greedy search with much lower latency (AL < 9), and achieves BLEU scores close to those from beam search (see Table 2).", "We also give one ZH-EN translation example from the dev set in Table 1 to compare different methods, showing that our method achieves an adaptive policy with low latency and good translation quality.", "Efficiency.", "To evaluate efficiency, we present in Table 3 the average time needed to predict one token for different methods.", "These methods are tested on one GeForce GTX TITAN-X GPU on the ZH-EN test set.", "We can see that our ensemble top-3 method needs about 0.2 seconds to make a prediction on average.", "[Table 1 (excerpt): a ZH-EN dev-set example. Input (pinyin): women xiang shouhaizhe de jiashu biaoshi zui chengzhi de tongqing he aidao; gloss: we to victim 's family express most sincere 's sympathy and condolence. Ensemble top-3, ρ_1 = 1, ρ_10 = 0 (AL = 7): 'we express our most sincere sympathy and condolences to the families of the victims .'; ensemble top-3, ρ_1 = 0. ...]", "However, if the source sentence is revealed at the same speed as general speech, which is about 0.6 seconds per token in Chinese (Zheng et al., 2019c), then our method is still faster than that (which means that it could be used in real time).", "Further, we believe the efficiency of our method could be improved with other techniques, such as parallelizing the running of the three models in the ensemble, making this less of an issue.", "We have designed a simple heuristic algorithm to obtain an adaptive policy based on a set of wait-k policies, and applied ensembling in our method to improve the translation quality while maintaining low latency.", "Experiments show that our method not only outperforms the original wait-k method by a relatively large gap, but also surpasses greedy full-sentence translation with much lower latency.", "We thank the anonymous reviewers for helpful suggestions." ]
[ "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "result", "other" ]
[ "We provide an NLP framework to uncover four linguistic dimensions of political polarization in social media: topic choice, framing, affect and illocutionary force.", "We quantify these aspects with existing lexical methods, and propose clustering of tweet embeddings as a means to identify salient topics for analysis across events; human evaluations show that our approach generates more cohesive topics than traditional LDA-based models.", "We apply our methods to study 4.4M tweets on 21 mass shootings.", "We provide evidence that the discussion of these events is highly polarized politically and that this polarization is primarily driven by partisan differences in framing rather than topic choice.", "We identify framing devices, such as grounding and the contrasting use of the terms terrorist and crazy, that contribute to polarization.", "Results pertaining to topic choice, affect and illocutionary force suggest that Republicans focus more on the shooter and event-specific facts (news) while Democrats focus more on the victims and call for policy changes.", "Our work contributes to a deeper understanding of the way group divisions manifest in language and to computational methods for studying them.", "1 1 Introduction Elites, political parties, and the media in the US are increasingly polarized (Layman et al., 2010; Prior, 2013; Gentzkow et al., forthcoming), and the propagation of partisan frames can influence public opinion (Chong and Druckman, 2007) and party identification (Fiorina and Abrams, 2008).", "Americans increasingly get their news from internet-based sources (Mitchell et al., 2016), and political information-sharing is highly ideologically segregated on platforms like Twitter (Conover et al., 2011; Halberstam and Knight, 1 All data and code is available at: https://github. 
"Prior NLP work has shown, e.g., that polarized messages are more likely to be shared (Zafar et al., 2016) and that certain topics are more polarizing (Balasubramanyan et al., 2012); however, we lack a broader understanding of the many ways that polarization can be instantiated linguistically.", "This work builds a more comprehensive framework for studying linguistic aspects of polarization in social media, by looking at topic choice, framing, affect, and illocutionary force.", "We explore these aspects of polarization by studying a sample of more than 4.4M tweets about 21 mass shooting events, analyzing polarization within and across events.", "Framing and polarization in the context of mass shootings are well-studied, though much of the literature studies the role of media (Chyi and McCombs, 2004; Schildkraut and Elsass, 2016) and politicians (Johnson et al., 2017).", "Several works find that frames have changed over time and between such events (Muschert and Carr, 2006; Schildkraut and Muschert, 2014), and that frames influence opinions on gun policies (Haider-Markel and Joslyn, 2001).", "Prior NLP work in this area has considered how to extract factual information on gun violence from news (Pavlick et al., 2016) as well as quantify stance and public opinion on Twitter (Benton et al., 2016) and across the web (Ayers et al., 2016); here we advance NLP approaches to the public discourse surrounding gun violence by introducing methods to analyze other linguistic manifestations of polarization.", "We are particularly interested in the role of the shooter's race in shaping polarized responses to these events.", "Implicit or explicit racial biases can be central in people's understanding of social problems (Drakulich, 2015); in the mass shooting context, race is a factor in an event's newsworthiness (Schildkraut et al., 2018) and is often mentioned prominently in media coverage, particularly when the shooter is non-white (Mingus and Zopf, 2010; Park et al., 2012).",
"Duxbury et al. (2018) find that media representations of white shooters disproportionately divert blame by framing them as mentally ill, while representations of non-white shooters are more frequently criminalized, highlighting histories of violent behavior.", "The important question remains as to how polarized ideologies surrounding race take shape on forums such as Twitter.", "Therefore, in all of the analyses throughout this paper we consider the race of the shooter as a potential factor.", "We note that in the 21 shooting events we study, shootings in schools and places of worship are overwhelmingly carried out by white perpetrators, so we cannot fully disentangle the effect of race from other factors.", "Data collection.", "We compiled a list of mass shootings between 2015 and 2018 from the Gun Violence Archive.", "2 https://www.gunviolencearchive.org", "For each, we identified a list of keywords representative of their location (see Appendix A).", "Given the Twitter search API's limitations on past tweets, we retrieved data from a Stanford lab's archived intake of the Twitter firehose.", "3 With the exception of the two most recent shootings in Pittsburgh and Thousand Oaks, for which we collected tweets in real time via the Twitter API.", "For each event, we built a list of relevant tweets for the two weeks following the event.", "A tweet is relevant if it contained at least one of the event's location-based representative keywords and at least one lemma from the following list: shoot, gun, kill, attack, massacre, victim.", "We filtered out retweets and tweets from users who have since been deactivated.", "We kept those 21 events with more than 10,000 tweets remaining.", "For more details see Appendix A.", "Partisan assignment.", "We estimate the party affiliation of users in the dataset from the political accounts they follow using a method similar to that of Volkova et al. (2014), which takes advantage of homophily in the following behavior of users on Twitter (Halberstam and Knight, 2016).", "We compile a list of Twitter handles of US Congress members in 2017, the 2016 presidential and vice presidential candidates, and other party-affiliated pages.", "4 See Appendix B.1 for the complete list.", "We label a user as a Democrat if they followed more Democratic than Republican politicians in November 2017, and as a Republican if the reverse is true.", "For each event, 51-72% of users can be assigned partisanship in this way; to validate our method we compare state averages of these inferred partisan labels to state two-party vote shares, finding a high correlation (Figure 1).", "5 We performed the sanity check for all partisan users with a valid US state as part of their geo-location (~350k users).", "3 Quantifying Overall Polarization.", "We begin by quantifying polarization (equivalently, partisanship) between the language of users labeled Democrats and Republicans after mass shooting events.", "We establish that there is substantial polarization, and that the polarization increases over time within most events.", "Pre-processing.", "We first build a vocabulary for each event as follows.", "Each vocabulary contains unigrams and bigrams that occur in a given event's tweets at least 50 times, counted after stemming via NLTK's SnowballStemmer and stopword removal.", "6 Stopword list is provided in Appendix A.1", "We refer to these unigrams and bigrams collectively as tokens.",
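The partisan assignment rule above amounts to a majority vote over followed political accounts; the following is a minimal sketch, where the handle sets are hypothetical placeholders for the real lists of Congress members, 2016 candidates, and party-affiliated pages.

```python
# Hypothetical handle sets; the real lists cover 2017 US Congress members,
# the 2016 presidential/VP candidates, and other party-affiliated pages.
DEM_HANDLES = {"dem_handle_1", "dem_handle_2"}
REP_HANDLES = {"rep_handle_1", "rep_handle_2"}

def assign_party(followed):
    """Majority rule over followed political accounts (as of November 2017)."""
    n_dem = len(followed & DEM_HANDLES)
    n_rep = len(followed & REP_HANDLES)
    if n_dem > n_rep:
        return "D"
    if n_rep > n_dem:
        return "R"
    return None  # ties and apolitical users remain unlabeled
```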
(forthcoming).", "Partisanship is defined as the expected posterior probability that an observer with a neutral prior would assign to a tweeter's true party after observing a single token drawn at random from the tweets produced by the tweeter.", "If there is no difference in token usage between the two parties, then this probability is .", "5 , i.e. we cannot guess the user's party any better after observing a token.", "The leave-out estimator consistently estimates partisanship under the assumption that a user's tokens are drawn from a multinomial logit model.", "The estimator is robust to corpus size.", "The leave-out estimate of partisanship LO between Democrats i 2 D and Republicans i 2 R is LO = 1 2 1 | D | X i 2 D q i \u0000 i + 1 | R | X i 2 R q i (1 \u0000 \u0000 i ) where q i = c i /m i is the vector of empirical token frequencies for tweeter i , with c i being the vector of token counts for tweeter i and m i the sum of token counts for tweeter i ; and \u0000 i = ( q D \\ i ( q D \\ i + q R \\ i )) is a vector of empirical posterior probabilities, excluding speaker i and any token that is not used by at least two speakers.", "Here we let denote element-wise division and q G = P i 2 G c i / P i 2 G m i denote the empirical token frequency of tweeters in group G .", "The estimator thus captures two intuitive components of partisanship: between-group difference (poste-rior probability for each feature), and within-group similarity (dot-product between the feature vector of each speaker and that of their group).", "User-level measures.", "As the above leave-out estimator represents the average of user-level polarization values, we take the user-level dot product ( q i \u0000 i ) as an estimate of the polarization of user i 's language.", "We consider the correlation of this value and the number of politicians a user follows in total and from their preferred party.", "Overall polarization.", "As Figure 2 shows, the discussion of each event 7 is highly polarized: values range from .", "517 to .", "547 .", "For comparison, this 7 For all the experiments using the leave-out estimator, we exclude Fort Lauderdale for which we only have tweets for the first day after the event; polarization is most dominant a few days after each event, making it incomparable.", "For reference, the leave-out estimate for Fort Lauderdale is .", "51 .", "measure, for most events, is similar to or higher than the polarization in the US congress ( . 53 in recent years) (Gentzkow et al., forthcoming).", "While we observe a slight increase in polarization over the past three years, this increase is not statistically significant ( p . 26 ).", "Post-event polarization.", "To see how polarization changes at the event level, we computed the leave-out estimate for each of the first 10 days following the events (see Figure 3).", "An event-day level regression of partisanship on days since the event suggests a slight increase in post-event polarization across events (slope = . 002 , p < 0 . 05 ).", "Fitting separate regressions, we find that the five events with the steepest increase in polarization are Burlington (slope = . 03 , p < 0 . 05 ), Orlando (slope = . 006 , p < 0 . 001 ), Las Vegas (slope = . 003 , p < 0 . 001 ), Chattanooga (slope = . 003 , p < 0 . 05 ) and Roseburg (slope = . 003 , p < 0 . 
"Are the changes in the leave-out score due to different users tweeting at different times or due to the same users becoming more or less political?", "We found that while on average only 10% of users tweeted on multiple days (SD = 5%) across the events, these users contribute 28% of the tweets (SD = 15%).", "After removing these users from the leave-out estimation, we found that the temporal patterns remain with the same statistical significance, providing one piece of evidence that changes in polarization are not due to changes within users who tweet on multiple days.", "Figure 3: Leave-out estimate post-event.", "User-level polarization.", "We estimated a linear regression of the leave-out score on the total number of followed politicians and the number from the user's preferred party, with controls for event indicators.", "The estimates imply that, fixing the total number of followed politicians, one more followed politician from one's preferred party is associated with an increase of .009 SD in the leave-out.", "Fixing the number of followed politicians from the user's preferred party, one more followed politician is associated with a decrease of .02 SD in the leave-out.", "Topic choice can be a tool for agenda-setting by establishing what an author or institution deems worthy of discussion (McCombs, 2002), and works in NLP have used topic modeling as an approach to measure this effect (Tsur et al., 2015; Field et al., 2018).", "The strategy of highlighting particular aspects within topics as a means of framing (Entman, 2007) has also been quantified in the NLP literature (Boydstun et al., 2013; Card et al., 2015; Naderi and Hirst, 2017).", "Previous work largely focuses on the relation between topic and framing in the news media; we study social media, proposing methods to identify general, non-event-specific topics and to quantify between- and within-topic polarization.", "Topic assignment.", "Our goal is to induce topics that are salient in our narrow domain and comparable across events.", "This presents a challenge for traditional topic modeling approaches, since the discourse surrounding these events is inherently tied to concrete aspects of the events that tend to covary with topic usage, like location, setting, and demographics of the shooter and victims.", "We build on the ability of vector space models to represent higher-level semantics to develop our own embedding-based topic assignment approach, comparing it with two traditional LDA-based methods: MALLET and the Biterm Topic Model (BTM) (Yan et al., 2013); BTM was developed specifically for tweets.", "8 http://mallet.cs.umass.edu/topics.php", "For all of these methods, we first randomly sample 10k tweets from each event, forming our subset S of all tweets T; then, we create a vocabulary V of word stems that occur at least ten times in at least three events within S (~2000 word stems) and remove all stems from T that are not part of V.", "Sampling is crucial for encouraging event-independent topics given the large disparity among event-level tweet counts (the largest event, Orlando, has 225x more tweets than the smallest event, Burlington).", "For the embedding-based approach, we: 1. Train GloVe embeddings (Pennington et al., 2014) on V based on 11-50k random samples of tweets from each event.", "9 This sample is different from S, as it includes more tweets to increase data size, which is important for training the embeddings, where the slightly disproportional representation of events is less problematic.",
"2. Create sentence embeddings $e_t, \forall t \in T$ using Arora et al. (2017)'s method, by computing the weighted average $v_t$ of the embeddings of stems within $t$ and removing $v_t$'s projection onto the first principal component of the matrix the rows of which are $v_t, \forall t \in S$.", "Stem weights are set to be inversely proportional to their frequencies in S.", "3. Jointly cluster the embeddings $e_t, \forall t \in S$ via k-means using cosine distance and assign all tweet embeddings $e_t, \forall t \in T$ to the centroids to which they are closest.", "We also trained MALLET and BTM on S and used the resulting models to infer topics for all tweets in T, assigning each tweet to its highest probability topic.", "Henceforth, we use d to mean cosine distance for k-means and probabilities for MALLET and BTM.", "A manual inspection found that about 25% of the tweets are either difficult to assign to any topic or they represent multiple topics equally.", "To filter out such tweets, for each tweet we looked at the ratio of d to its closest and second closest topic and removed tweets that have ratios higher than the 75th percentile (calculated at the model level).", "10 This procedure filters out 11-26% of tweets (M = 22%, SD = 4%) across events, for our model, for eight topics.", "Figure 4: Topic model evaluations, collapsed across k = 6-10.", "Table 1: Our eight topics (with their average proportions across events) and nearest-neighbor stem embeddings to the cluster centroids.", "Topic names were manually assigned based on inspecting the tweets.", "To compare the models, we ran two MTurk experiments: a word intrusion task (Chang et al., 2009) and our own, analogically defined tweet intrusion task, with the number of topics k ranging between 6 and 10.", "Turkers were presented with either a set of 6 words (for word intrusion) or a set of 4 tweets (for tweet intrusion), all except one of which were close (in terms of d) to a randomly chosen topic, with the remaining one far from that topic but close to another topic.", "Then, Turkers were asked to pick the odd one out among the set of words / tweets.", "More details in Appendix D.", "We find that our model outperforms the LDA-based methods with respect to both tasks, particularly tweet intrusion (see Figure 4).", "This suggests that our model both provides more cohesive topics at the word level and more cohesive groupings by topic assignment.", "The choice of k does not yield a significant difference among model-level accuracies.", "However, since k = 8 slightly outperforms other values of k in tweet intrusion, we use it for further analysis.", "See Table 1 for nearest neighbor stems to each topic and Appendix C.2 for example tweets.", "The leave-out estimator from Section 3.1 provides a measure of partisanship.", "The information in a tweet, and thus partisanship, can be decomposed into which topic is discussed, and how it's discussed.",
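Steps 2-3 of the embedding-based approach can be sketched as follows, assuming a dict of GloVe stem vectors and stem frequencies; the exact weighting constant is an assumption, as the text only states that weights are inversely proportional to stem frequency.

```python
import numpy as np
from sklearn.cluster import KMeans

def tweet_embedding(stems, vecs, freqs, a=1e-3):
    # Weighted average of stem vectors; SIF-style a/(a + freq) weights are
    # one concrete choice of "inversely proportional to frequency".
    w = np.array([a / (a + freqs[s]) for s in stems])
    M = np.array([vecs[s] for s in stems])
    return (w[:, None] * M).sum(axis=0) / w.sum()

def remove_first_pc(V):
    # Remove each row's projection onto the first principal component,
    # estimated on the sample matrix (Arora et al., 2017).
    pc = np.linalg.svd(V, full_matrices=False)[2][0]
    return V - np.outer(V @ pc, pc)

def cluster_topics(V, k=8, seed=0):
    # L2-normalize rows so Euclidean k-means behaves like cosine-distance
    # clustering, then fit k centroids.
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return KMeans(n_clusters=k, random_state=seed).fit(V)
```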
"To measure within-topic partisanship for a particular event, i.e. how a user discusses a given topic, we re-apply the leave-out estimator.", "For each topic, we calculate the partisanship using only tweets categorized to that topic.", "Then, overall within-topic partisanship for the event is the weighted mean of these values, with weights given by the proportion of tweets categorized to each topic within each event.", "Between-topic partisanship is defined as the expected posterior that an observer with a neutral prior would assign to a user's true party after learning only the topic but not the words of a user's tweet.", "We estimate this value by replacing each tweet with its assigned topic and applying the leave-out estimator to this data.", "Figure 5 shows that for most events within-topic is higher than between-topic partisanship, suggesting that while topic choice does play a role in phrase partisanship (its values are meaningfully higher than .5), within-topic phrase usage is significantly more polarized.", "Linear estimates of the relationship between within- and between-topic partisanship and time show that while within-topic polarization has increased over time, between-topic polarization has remained stable.", "This finding supports the idea that topic choice and topic-level framing are distinct phenomena.", "Partisanship also differs by topic, and within days after a given event.", "Figure 6 shows polarization within topics for 9 days after Las Vegas.", "Figure 6: Las Vegas within-topic polarization in the days after the event.", "The bar charts show the proportion of each topic in the data at a given time.", "We find that solidarity has the lowest and shooter's identity & ideology the highest polarization throughout; polarization in most topics increases over time and news has the steepest increase.", "Similar patterns are present after Orlando (Figure 17 in Appendix I).", "Measuring polarization of topics for other events over time is noisy, given the sparsity of the data, but overall within-topic polarization is consistent: the most polarized topics on average across events are shooter's identity & ideology (.55) and laws & policy (.54), where people are apparently polarized about both why an event happened and what to do about it.", "Fact- and sympathy-based topics display less polarization: news (.51), victims & location (.52), solidarity (.52) and remembrance (.52).", "As shown in Figure 7, investigation, news, and shooter's identity & ideology are more likely to be discussed by Republicans, and laws & policy and solidarity more likely to be discussed by Democrats across events.", "Topics preferred by Republicans seem to relate more to the shooter than to the victims, while topics preferred by Democrats seem to relate more closely to the victims.", "The shooter's race appears to play a role in topic preference: if the shooter is white, Democrats become more likely to focus on shooter's identity & ideology and laws & policy and Republicans on news and investigation than if the shooter is a person of color.", "11 p-values are calculated using a one-sample t-test, comparing with zero: shooter's identity & ideology (p < 0.05), investigation (p < 0.001), laws & policy (p < 0.1), news (p < 0.05), solidarity (p < 0.001).",
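Given the leave_out_estimate sketch above, within- and between-topic partisanship follow directly: a topic-proportion-weighted mean of per-topic estimates, and an estimate over tweets reduced to their topic labels. The count-matrix construction is assumed.

```python
import numpy as np

def within_topic_partisanship(counts_by_topic, weights):
    """Weighted mean of per-topic leave-out estimates.

    counts_by_topic: list of (C_dem, C_rep) pairs, one per topic, built
    from only the tweets assigned to that topic; weights: proportion of
    the event's tweets in each topic.
    """
    vals = [leave_out_estimate(Cd, Cr) for Cd, Cr in counts_by_topic]
    return float(np.average(vals, weights=weights))

def between_topic_partisanship(topics_dem, topics_rep, n_topics):
    """Leave-out estimate after replacing each tweet by its topic label,
    so each user's 'document' becomes a bag of topic ids."""
    def to_counts(users):                     # users: list of lists of topic ids
        C = np.zeros((len(users), n_topics))
        for i, ts in enumerate(users):
            for t in ts:
                C[i, t] += 1
        return C
    return leave_out_estimate(to_counts(topics_dem), to_counts(topics_rep))
```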
001 ).", "and types of grounding that are used as partisan framing devices, contributing to polarization.", "Partisan tokens.", "We estimate the partisanship of tokens via their event-level log odds ratio of Democrats relative to Republicans (based on the vocabularies we create in Section 3.1).", "We compare these estimates across events.", "12 Grounding.", "We study whether there is polarization in which prior tragic events are referenced in the context of a particular mass shooting.", "We compile a list of keywords representing major events of mass violence in the US in the past two decades and kept those that were mentioned at least 100 times by Democrat or Republican users.", "For all tweets for each event in our dataset, we counted the mentions of past context events .", "For example, in the following tweet posted after Las Vegas: Dozens of preventable deaths should not be the cost of living in America. Stand up to the #NRA. #LasVegasShooting #SandyHook #Charleston, Sandy Hook and Charleston are the context events.", "Finally, we calculated the partisan log odds ratio of each context event.", "We focus on the partisanship of the term terrorist and crazy, which exhibit differential pat-12", "pat-12 To compare the partisanship of tokens at the eventand topic-level, we also z -score the log odds ratios (Monroe et al., 2008) within events and topics; the most partisan tokens are reported in Appendix F. The reason why we do not z -score for the between-event comparison is because data verbosity disproportionately affects the range of the values' magnitudes.", "Note that the signs of values which we focus on for the cross-event comparison are not affected by z -scoring.", "terns across events based on the shooter's race.", "13 Terrorist is always more likely to be used by Democrats than Republicans in events where the shooter is white, and the opposite is true when the shooter is a person of color (Figure 8); crazy is more likely used by Republicans if the shooter is white than if they are a person of color and the opposite is true (although the pattern is weaker) when a shooter is white.", "These findings support related work (Perea, 1997; Delgado and Stefancic, 2017) discussing binary conceptualization of race in the US, and its influence in determining whether a shooter's mental health or aspects of their identity are discussed.", "However, the fact that the influence of race flips completely for Democrats and Republicans is a striking result that calls for further exploration.", "The partisanship of contextual grounding also corroborates our finding that the shooter's race in-fluences how people conceptualize a certain event.", "Our results in Figure 9 suggest a few key take-aways: the two most frequently employed context events are both highly partisan (Sandy Hook for Democrats and 9/11 for Republicans); shootings at schools and places of worship are more likely to be brought up by Democrats; Democrats are more likely to reference events with white shooters, while Republicans are more likely to reference those with shooters who are people of color.", "Affect is intimately tied to ideological reasoning (Redlawsk, 2002; Taber et al., 2009), and so emotional expression represents another semantic layer relevant to polarization (Iyengar et al., 2012; Suhay, 2015).", "Others have shown that emotion words can help detect political ideology on Twitter (Preotiuc-Pietro et al., 2017) and that emo-13 Note that these words in fact have the largest difference (negative and positive, respectively) 
"Here, we employ a lexicon-based approach to measure valence (positive and negative) and five basic emotion categories (disgust, fear, trust, anger, and sadness).", "Since word-affect associations are highly domain dependent, we tailored an existing affect lexicon, the NRC Emotion Lexicon (Mohammad and Turney, 2013), to our domain via label propagation (Hamilton et al., 2016).", "Specifically, we stem all the words in the lexicon and select 8-10 representative stems per emotion category that have an association with that emotion in the context of mass shootings.", "For each emotion category, we compute pairwise cosine distances between the GloVe embedding of each in-vocabulary stem and the representative stems for that emotion, and include the 30 stems with the lowest mean cosine distances.", "The resulting lexicons can be found in Appendix E.", "We use these lexicons to measure the partisanship of each affect category.", "For each event and each party we aggregate stem frequencies per emotion category.", "We then calculate the partisan log odds ratio of each category for each event.", "The log odds ratio of each affect category is shown in Figure 10.", "These findings suggest that positive sentiment, sadness and trust are more likely to be expressed by Democrats across events, while fear and disgust are more likely to be expressed by Republicans, particularly when the shooter is a person of color.", "Anger and negative sentiment are similarly likely to be expressed by both parties.", "14 p-values are calculated using a one-sample t-test, comparing to zero: anger (p = 0.43), disgust (p = 0.06), fear (p < 0.001), negative (p = 0.2), positive (p < 0.001), sadness (p < 0.02), trust (p = 0.07).", "Our results about fear and disgust accord with existing literature on emotion and political ideology: conservatives score higher than liberals on subjective measures of fear (e.g. Jost et al., 2017; Federico et al., 2009; Hibbing et al., 2014) and disgust sensitivity is also associated with political conservatism (e.g. Inbar et al., 2009, 2012).", "Modality is a lexical category concerned with necessity and possibility (Kratzer, 2002; Fintel, 2006).", "In the aftermath of a tragic event, people seek solutions, a process that often involves reflecting on what should have happened or should happen now or in the future (e.g. to prevent such events).", "We hypothesize that the use of modals in our data gives insight into the kind of (illocutionary) acts (Austin, 1962) the users are performing via their tweets, such as calling for action, assigning blame, expressing emotions, and stating facts.", "We work with all forms of the four most frequent necessity modals in our data: should, must, have to and need to.", "For each, we quantify its partisanship via its partisan log odds ratio.", "We also annotate a random sample of 200 tweets containing modals to see whether they are indeed used in contexts that imply calls for change / action (e.g. 'We must have gun control!') and / or to express the user's mental state about the event, such as despair or disbelief (e.g. 'Why do people have to die?').",
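The lexicon-tailoring step above reduces to ranking in-vocabulary stems by mean cosine distance to a handful of seed stems; a sketch follows, with the seed stems in the usage comment being hypothetical examples.

```python
import numpy as np

def expand_lexicon(seed_stems, vocab, vecs, top_n=30):
    """Pick the top_n stems closest (mean cosine distance) to the seeds.

    vocab: list of in-vocabulary stems; vecs: dict stem -> GloVe vector;
    seed_stems: the 8-10 hand-picked stems for one emotion category.
    """
    def unit(v):
        return v / np.linalg.norm(v)

    seeds = np.stack([unit(vecs[s]) for s in seed_stems])
    scores = {}
    for stem in vocab:
        sims = seeds @ unit(vecs[stem])   # cosine similarity to each seed
        scores[stem] = 1.0 - sims.mean()  # mean cosine distance
    return sorted(scores, key=scores.get)[:top_n]

# Hypothetical seeds for the "fear" category:
# fear_lexicon = expand_lexicon(["afraid", "terrifi", "scare"], vocab, vecs)
```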
"Table 2 shows a random sample of tweets containing some form of either should, must, have to, or need to.", "Table 2: Example tweets containing necessity modals.", "This roller coaster debate MUST STOP! Sensible gun ownership is one thing but assault weapons massacre innocent lives. The savagery of gore at #Parkland was beyond belief & must be the last.", "In times of tragedy shouldn't we all come together?! Prayers for those harmed in the #PlannedParenthood shooting.", "Communities need to step up and address white on white crime like the Las Vegas massacre. White men are out of control.", "The BLM protest shooting, planned parenthood, now cali... domestic terrorism will crumble this country, SANE PPL HAVE TO FIGHT BACK", "Shooting cops is horrible, cannot be condoned. But must be understood these incidents are outgrowth of decades of police abuses. #BatonRouge", "1. Islamic terrorists are at war with us 2. Gun free zones = kill zones 3. Americans should be allowed to defend themselves #Chattanooga", "Las Vegas shooting Walmart shooting and now 25 people killed in Texas over 90 people killed Mexico should build that wall to keep the US out", "CNN reporting 20 dead, 42 injured in Orlando night club shooting. Just awful. The US must act to control guns or this carnage will continue.", "More collocations, as well as their partisanship, can be found in Appendix H.", "These examples, as well as our annotation, support the hypothesis that these modals are primarily used to call for action.", "Of the 200 modal uses, 78% express calls for change/action, 40% express the user's mental state.", "15 Other uses are primarily epistemic ones (e.g. 'The suspect must be mentally ill').", "We also compute the representation $p_{mx}$ of each modal $m$ in each topic $x \in X$ via $p_{mx} = (f_{mx} / \sum_{x' \in X} f_{mx'}) / (f_x / \sum_{x' \in X} f_{x'})$, where $f_x$ is the number of tweets from topic $x$, and $f_{mx}$ the number of those also containing $m$.", "We find that modals are over-represented in the laws & policy topic (see Figure 11).", "This evidence suggests that calls for policy change (especially gun control, based on annotated samples) are a dominant subset of calls for action.", "The log odds ratios show that these modals lean Democrat: must (mean: -.3, p < 0.001), should (mean: -.18, p < 0.01), need to (mean: -.18, p < 0.01), where Democrat and Republican log odds are negative and positive, respectively.", "16 p-values are from a one-sample t-test, comparing to 0.", "A two-tailed t-test shows that only should exhibits a statistically significant difference based on the shooter's race (p < 0.03), as it is even more likely to be used by Democrats when the shooter is white.", "To understand whether assigning blame in this domain is a partisan propensity, we also study uses of should have.", "The log odds of should have (mean: -.22, p < 0.05) show that it is similarly likely to be used by Democrats as should (p = 0.8 from a two-tailed t-test).", "Interestingly, the log odds ratio of should have, unlike that of should, does not differ significantly based on the shooter's race (p = 0.8 from a two-tailed t-test).",
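The over-representation score $p_{mx}$ defined above compares a modal's distribution over topics with the overall topic distribution; a direct implementation might look as follows, with the count dictionaries assumed as inputs.

```python
def modal_representation(f_mx, f_x):
    """p_mx: share of modal m's tweets in topic x, relative to topic x's
    share of all tweets; values > 1 mean m is over-represented in x.

    f_mx: dict (modal, topic) -> count of topic-x tweets containing m
    f_x:  dict topic -> count of tweets in topic x
    """
    total_tweets = sum(f_x.values())
    modals = {m for m, _ in f_mx}
    p = {}
    for m in modals:
        total_m = sum(c for (mm, _), c in f_mx.items() if mm == m)
        for x in f_x:
            share_m = f_mx.get((m, x), 0) / total_m
            share_x = f_x[x] / total_tweets
            p[(m, x)] = share_m / share_x
    return p
```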
"Moreover, we did not find a significant difference in the partisanship of should have nor any other modal based on the administration (Obama or Trump) a shooting took place under, suggesting that Democrats are more likely to call for change and assign blame even if their preferred party is in power.", "We show that inspecting polarization on social media from various angles can shed light on salient phenomena pertinent to group divisions.", "Applying the leave-out estimator of phrase partisanship to data on mass shootings, we find that reactions to these events are highly polarized politically.", "To disentangle topic choice and topic-level framing, two phenomena that contribute to polarization, we introduce a tweet-clustering approach.", "By sampling, requiring words in the vocabulary to appear in multiple events, and relying on the abstraction of a vector space model, we generate cohesive topic representations that are robust to disparities among event-level vocabularies and tweet counts.", "Human evaluation shows that our method outperforms LDA-based approaches.", "Our induced topics suggest that Republicans preferentially discuss topics about the shooter's identity and ideology, investigation and news, while Democrats preferentially discuss solidarity and policy-related topics.", "We also find that the setting and the shooter's race interact with polarization.", "For example, Democrats are more likely to contextualize any mass shooting among school shootings and call white shooters terrorists than are Republicans, who in turn are more likely to liken any shooting to other violent events perpetrated by people of color, whom they are more likely to call terrorist than are Democrats.", "Moreover, Democrats are more likely to frame the shooter as mentally ill when they are a person of color and Republicans when they are white.", "We also demonstrate that looking at affect and illocutionary force can help us understand users' polarized responses to these tragic events: Republicans are more likely to express fear and disgust than are Democrats, while Democrats are more likely to express sadness and positive sentiment, to make calls for action and assign blame.", "Polarization is a multi-faceted phenomenon: in this paper we present a set of measures to study these different facets through the lens of language.", "We show that these measures provide convergent evidence, creating a clearer picture of the complex ideological division permeating public life.", "Acknowledgements.", "We thank Jure Leskovec and Adrijan Bradaschia for data, and Cleo Condoravdi, Chris Potts, Linda Ouyang, David Ritzwoller and Frank Yang for helpful feedback.", "We are grateful for the support of the Stanford Cyber Initiative, the Melvin and Joan Lane Stanford Graduate Fellowship (to D.D.), NSF GRF DGE-114747 (to N.G.), the Michelle and Kevin Douglas Stanford Interdisciplinary Graduate Fellowship (to R.V.), NSF grant CRII 1657155 (to J.Z.), the Stanford Institute for Economic Policy Research and the Knight Foundation (to M.G.) and the Brown University Population Studies and Training Center (to J.S.)." ]
[ "objective", "objective", "method", "method", "objective", "abstain", "objective", "abstain", "other", "result", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "objective", "abstain", "method", "result", "abstain", "result", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "other" ]
[ "Simultaneous Machine Translation is the task of incrementally translating an input sentence before it is fully available.", "Currently, simultaneous translation is carried out by translating each sentence independently of the previously translated text.", "More generally, Streaming MT can be understood as an extension of Simultaneous MT to the incremental translation of a continuous input text stream.", "In this work, a state-of-the-art simultaneous sentence-level MT system is extended to the streaming setup by leveraging the streaming history.", "Extensive empirical results are reported on IWSLT Translation Tasks, showing that leveraging the streaming history leads to significant quality gains.", "In particular, the proposed system proves to compare favorably to the best performing systems.", "Simultaneous Machine Translation (MT) is the task of incrementally translating an input sentence before it is fully available.", "Indeed, simultaneous MT can be naturally understood in the scenario of translating a text stream as a result of an upstream Automatic Speech Recognition (ASR) process.", "This setup defines a simultaneous Speech Translation (ST) scenario that is gaining momentum due to the vast number of industry applications that could be exploited based on this technology, from person-to-person communication to subtitling of audiovisual content, just to mention two main applications.", "These real-world streaming applications motivate us to move from simultaneous to streaming MT, understanding streaming MT as the task of simultaneously translating a potentially unbounded and unsegmented text stream.", "Streaming MT poses two main additional challenges over simultaneous MT. First, the MT system must be able to leverage the streaming history beyond the sentence level both at training and inference time.", "Second, the system must work under latency constraints over the entire stream.", "With regard to exploiting streaming history, or more generally sentence context, it is worth mentioning the significant amount of previous work in offline MT at sentence level (Tiedemann and Scherrer, 2017; Agrawal et al., 2018), document level (Scherrer et al., 2019; Ma et al., 2020a; Zheng et al., 2020b; Li et al., 2020; Maruf et al., 2021; Zhang et al., 2021), and in related areas such as language modelling (Dai et al., 2019) that has proved to lead to quality gains.", "Also, as reported in (Li et al., 2020), more robust ST systems can be trained by taking advantage of the context across sentence boundaries using a data augmentation strategy similar to the prefix training methods proposed in (Niehues et al., 2018; Ma et al., 2019).", "This data augmentation strategy was suspected to boost re-translation performance when compared to conventional simultaneous MT systems (Arivazhagan et al., 2020).", "Nonetheless, with the notable exception of (Schneider and Waibel, 2020), sentences in simultaneous MT are still translated independently from each other ignoring the streaming history.", "(Schneider and Waibel, 2020) proposed an end-to-end streaming MT model with a Transformer architecture based on an Adaptive Computation Time method with a monotonic encoder-decoder attention.", "This model successfully uses the streaming history and a relative attention mechanism inspired by Transformer-XL (Dai et al., 2019).", "Indeed, this is an MT model that sequentially translates the input stream without the need for a segmentation model.", "However, it is hard to interpret the latency of their streaming MT model 
"This fact is closely related to the second challenge mentioned above, which is that the system must work under latency constraints over the entire stream.", "Indeed, current sentence-level latency measures do not allow us to appropriately gauge the latency of streaming MT systems.", "To this purpose, (Iranzo-Sánchez et al., 2021) recently proposed a stream-level adaptation of the sentence-level latency measures based on the conventional re-segmentation approach applied to the ST output in order to evaluate translation quality (Matusov et al., 2005).", "In this work, the simultaneous MT model based on a unidirectional encoder-decoder and training along multiple wait-k paths proposed by (Elbayad et al., 2020a) is evolved into a streaming-ready simultaneous MT model.", "To achieve this, model training is performed following a sentence-boundary sliding-window strategy over the parallel stream that exploits the idea of prefix training, while inference is carried out in a single forward pass on the source stream that is segmented by a Direct Segmentation (DS) model (Iranzo-Sánchez et al., 2020).", "In addition, a refinement of the unidirectional encoder-decoder that takes advantage of longer context for encoding the initial positions of the streaming MT process is proposed.", "This streaming MT system is thoroughly assessed on IWSLT translation tasks to show how leveraging the streaming history provides systematic and significant BLEU improvements over the baseline, while reported stream-adapted latency measures are fully consistent and interpretable.", "Finally, our system favourably compares in terms of translation quality and latency to the latest state-of-the-art simultaneous MT systems (Ansari et al., 2020).", "This paper is organized as follows.", "The next section provides a formal framework for streaming MT to accommodate streaming history in simultaneous MT.", "Section 3 presents the streaming experimental setup, whose results are reported and discussed in Section 4.", "Finally, conclusions and future work are drawn in Section 5.", "2 Streaming MT.", "In streaming MT, the source stream $X$ to be translated into $Y$ comes as an unsegmented and unbounded sequence of tokens.", "In this setup, the decoding process usually takes the greedy decision of which token appears next at the $i$-th position of the translation being generated: $Y_i = \operatorname{argmax}_{y \in \mathcal{Y}} p\big(y \mid X_1^{G(i)}, Y_1^{i-1}\big)$ (1), where $G(i)$ is a global delay function that tells us the last position in the source stream that was available when the $i$-th target token was output, and $\mathcal{Y}$ is the target vocabulary.", "However, taking into account the entire source and target streams can be prohibitive from a computational viewpoint, so the generation of the next token can be conditioned to the last $H(i)$ tokens of the stream as $Y_i = \operatorname{argmax}_{y \in \mathcal{Y}} p\big(y \mid X_{G(i)-H(i)+1}^{G(i)}, Y_{i-H(i)}^{i-1}\big)$ (2).", "Nevertheless, for practical purposes, the concept of sentence segmentation is usually introduced to explicitly indicate a monotonic alignment between source and target sentences in streaming MT.",
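A schematic rendering of the bounded-history greedy decision in Eq. 2; model.next_token_logprobs and the policy object are hypothetical interfaces standing in for the actual encoder-decoder and read/write policy, which the text leaves abstract at this point.

```python
def greedy_stream_decode(model, source_stream, policy, H):
    """Greedy streaming decoding with bounded history (Eq. 2).

    model.next_token_logprobs(src, tgt) -> dict token -> log-prob is a
    hypothetical interface; policy supplies G(i) and the stop condition.
    """
    target = []
    i = 1
    while not policy.finished(target):
        G_i = policy.source_positions_available(i)       # global delay G(i)
        src_window = source_stream[max(0, G_i - H):G_i]  # last H(i) source tokens
        tgt_window = target[max(0, len(target) - H):]    # last H(i) target tokens
        logprobs = model.next_token_logprobs(src_window, tgt_window)
        target.append(max(logprobs, key=logprobs.get))   # argmax over vocab
        i += 1
    return target
```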
"Let us consider for this purpose the random variables $a$ and $b$ for the source and target segmentation of the stream, respectively.", "Variables $a$ and $b$ can be understood as two vectors of equal length denoting that the $n$-th source sentence starts at position $a_n$, while the $n$-th target sentence does so at position $b_n$.", "In the next sections, we reformulate simultaneous MT in terms of the more general framework of streaming MT.", "This reformulation allows us to consider opportunities for improvement of previous simultaneous MT models.", "In the conventional simultaneous MT setup, the aforementioned variables $a$ and $b$ are uncovered at training and inference time, while in streaming MT $a$ and $b$ are considered hidden variables at inference time that may be uncovered by a segmentation model.", "In fact, in conventional simultaneous MT the history is limited to the current sentence being translated, while in streaming MT we could exploit the fact that the history could potentially span over all the previous tokens before the current sentence.", "To this purpose, the global delay function $G(i)$ introduced above would replace the sentence-level delay function $g(i)$ commonly used in simultaneous MT.", "However, it should be noticed that we could express $g(i)$ as $G(i) - a_n$ with $b_n \le i < b_{n+1}$.", "Delay functions are defined as a result of the policy being applied.", "This policy decides what action to take at each timestep, whether to read a token from the input or to write a target token.", "Policies can be either fixed (Ma et al., 2019; Dalvi et al., 2018), depending only on the current timestep, or adaptive (Arivazhagan et al., 2019; Ma et al., 2020b; Zheng et al., 2020a), being also conditioned on the available input source words.", "Among those fixed policies, the sentence-level wait-k policy proposed by (Ma et al., 2019) is widely used in simultaneous MT with the simple local delay function $g(i) = k + i - 1$ (3).", "This policy initially reads k source tokens without writing a target token, and then outputs a target token every time a source token is read.", "This is true in the case that the ratio between the source and target sentence lengths is one.", "However, in the general case, a catch-up factor $\gamma$ computed as the inverse of the source-target length ratio defines how many target tokens are written for every read token, generalising Eq. 3 as $g(i) = \lfloor k + \frac{i-1}{\gamma} \rfloor$ (4).", "The wait-k policy can be reformulated in streaming MT so that the wait-k behaviour is carried out for each sentence as $G(i) = a_n - 1 + \lfloor k + \frac{i - b_n}{\gamma} \rfloor$ with $b_n \le i < b_{n+1}$ (5).", "In streaming MT, we could take advantage of the streaming history by learning the probability distribution stated in Eq. 2, whenever streaming samples are available.", "However, training such a model with arbitrarily long streaming samples poses a series of challenges that need to be addressed.", "Firstly, it would be necessary to carefully define $G(i)$ and $H(i)$ functions so that, at each timestep, the available source and target streams are perfectly aligned.", "Given that the source-target length ratio may vary over the stream, if one uses a wait-k policy with a fixed $\gamma$, there is a significant chance that source and target are misaligned at some points over the stream.", "Secondly, every target token can potentially have a different $G(i)$ and $H(i)$, so the encoder-decoder representation and contribution to the loss would need to be recomputed for each target token at a significant computational expense.", "Lastly, current MT architectures and training procedures have evolved conditioned by the availability of sentence-level parallel corpora for training, so they need to be adapted to learn from parallel streams.",
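The delay functions of Eqs. 3-5 are small enough to state directly in code; note that the offset convention in the streaming variant follows the reconstruction given above and should be treated as an assumption.

```python
import math

def g_wait_k(i: int, k: int, gamma: float = 1.0) -> int:
    """Sentence-level wait-k delay with catch-up factor gamma (Eqs. 3-4)."""
    return math.floor(k + (i - 1) / gamma)

def G_wait_k_streaming(i: int, k: int, a: list, b: list, n: int,
                       gamma: float = 1.0) -> int:
    """Streaming wait-k delay for target position i of sentence n (Eq. 5);
    a[n] and b[n] are the 1-based start positions of the n-th source and
    target sentences (offset convention assumed)."""
    assert b[n] <= i and (n + 1 >= len(b) or i < b[n + 1])
    return a[n] - 1 + math.floor(k + (i - b[n]) / gamma)
```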
"To tackle the aforementioned challenges in streaming MT, a compromise practical solution is to uncover the source and target sentence segmentations.", "At training time, parallel samples are extracted by a sentence-boundary sliding window spanning over several sentences of the stream that shifts to the right one sentence at a time.", "In other words, each sentence pair is concatenated with its corresponding streaming history, which includes previous sentence pairs, simulating long-span prefix training.", "Doing so, we ensure that source and target streams are properly aligned at all times, and training can be efficiently carried out by considering a limited history.", "The inference process is performed in a purely streaming fashion in a single forward pass as defined in Eq. 2, with $H(i)$ being consistently defined in line with training, so that the streaming history spans over previous sentences already translated.", "In simultaneous MT, the conventional Transformer-based bidirectional encoder representation (of the $l$-th layer) of a source token at any position $j$ is constrained to the current $n$-th sentence, $e_j^{(l)} = \mathrm{Enc}\big(e_{a_n:G(i)}^{(l-1)}\big)$ (6), where $a_n \le j \le G(i)$, while the decoder can only attend to previous target words and the encoding of those source words that are available at each timestep: $s_i^{(l)} = \mathrm{Dec}\big(s_{b_n:i-1}^{(l-1)}, e_{a_n:G(i)}^{(l-1)}\big)$ (7).", "As a result, the encoder and decoder representations for positions $j$ and $i$, respectively, could be computed taking advantage of positions subsequent to position $j$, up to position $G(i)$, at inference time.", "However, at training time, this means that this bidirectional encoding-decoding of the source sentence has to be computed for every timestep, taking up to $|y|$ times longer than the conventional Transformer model.", "To alleviate this problem, (Elbayad et al., 2020a) proposes a wait-k simultaneous MT model based on a modification of the Transformer architecture that uses unidirectional encoders and multiple values of k at training time.", "In this way, the model is consistent with the limited-input restriction of simultaneous MT at inference time.", "The proposed unidirectional encoder can be stated as $e_j^{(l)} = \mathrm{Enc}\big(e_{a_n:j}^{(l-1)}\big)$ (8), which is more restrictive than that in Eq. 6, and it consequently conditions the decoder representation, since $G(i)$ in Eq. 7 depends on the specific k value employed at each training step.", "As mentioned above, the unidirectional encoder just requires a single forward pass of the encoder at training time, and therefore there is no additional computational cost compared with a conventional Transformer.", "However, it does not take into account all possible input tokens for different values of k.", "Indeed, the encoding of the $j$-th input token will not consider those tokens beyond the $j$-th position, even if including them into the encoding process does not prevent us from performing a single forward pass.", "A trade-off between the unidirectional and bidirectional encoders is what we have dubbed the Partial Bidirectional Encoder (PBE), which modifies the unidirectional encoder to allow the first $k - 1$ source positions to have access to succeeding tokens according to $e_j^{(l)} = \mathrm{Enc}\big(e_{a_n:\max(a_n + k - 1, j)}^{(l-1)}\big)$ (9).", "PBE allows for a longer context when encoding the initial positions and is consistent with Eq. 7.",
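The three encoders differ only in their self-attention masks over the current sentence; the sketch below builds boolean masks for a sentence of L tokens (0-indexed), and is our reading of Eqs. 6, 8 and 9 rather than reference code from the paper.

```python
import numpy as np

def encoder_mask(L: int, k: int, kind: str) -> np.ndarray:
    """Boolean self-attention mask for one source sentence of length L.

    mask[j, j2] is True if position j may attend to position j2
    (positions are 0-indexed within the sentence, i.e. a_n = 0, and L
    stands for the tokens read so far, bounded by G(i) at inference).
    """
    mask = np.zeros((L, L), dtype=bool)
    for j in range(L):
        if kind == "bidirectional":        # Eq. 6: all tokens read so far
            hi = L
        elif kind == "unidirectional":     # Eq. 8: positions a_n..j only
            hi = j + 1
        elif kind == "pbe":                # Eq. 9: up to max(a_n + k - 1, j)
            hi = max(k, j + 1)
        else:
            raise ValueError(kind)
        mask[j, :hi] = True
    return mask

# With k = 4, position j = 2 of a PBE encoder attends to positions 0..3,
# while the unidirectional encoder only allows 0..2.
```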
"At training time a single forward pass of the encoder-decoder is still possible as in the unidirectional encoder, and therefore no additional training cost is incurred.", "At inference time, we fall back to the bidirectional encoder.", "Figure 1 shows a graphical comparison of the attention mechanism at $j = 3$ across the bidirectional (left), unidirectional (center) and PBE (right) encoders with $k = 4$ for two consecutive timesteps, $i = 1$ with $G(1) = 4$ (top) and $i = 2$ with $G(2) = 5$ (bottom).", "As observed, PBE can take advantage of additional positions from $j + 1$ up to $k$ with respect to the unidirectional encoder.", "In a streaming setup, the bidirectional encoder-decoder of Eqs. 6 and 7 is not necessarily constrained to the current sentence and could exploit a streaming history of $H(i)$ tokens: $e_j^{(l)} = \mathrm{Enc}\big(e_{G(i)-H(i)+1:G(i)}^{(l-1)}\big)$ (10) and $s_i^{(l)} = \mathrm{Dec}\big(s_{i-H(i):i-1}^{(l-1)}, e_{G(i)-H(i)+1:G(i)}^{(l-1)}\big)$ (11).", "A series of comparative experiments in terms of translation quality and latency have been carried out using data from the IWSLT 2020 Evaluation Campaign (Ansari et al., 2020), for both German→English and English→German.", "For the streaming condition, our system is tuned on the 2010 dev set, and evaluated on the 2010 test set for comparison with (Schneider and Waibel, 2020).", "Under this setting, words were lowercased and punctuation was removed in order to simulate a basic upstream ASR system.", "Also, a second non-streaming setting is used for the English→German direction to compare our system with top-of-the-line sentence-based simultaneous MT systems participating in the IWSLT 2020 Simultaneous Translation Task.", "Table 1 summarizes the basic statistics of the IWSLT corpora used for training the streaming MT systems.", "Corpora for which document information is readily available are processed for training using the sliding window technique mentioned in Section 2.1.", "Specifically, for each training sentence, we prepend previous sentences, which are added one by one until a threshold h of history tokens is reached.", "Sentence boundaries are defined on the presence of special tokens ( <DOC>,<CONT>,<BRK>,<SEP> ) as in (Junczys-Dowmunt, 2019).", "Byte Pair Encoding (Sennrich et al., 2016) with 40K merge operations is applied to the data after preprocessing.", "Our streaming MT system is evaluated in terms of latency and translation quality with BLEU (Papineni et al., 2002).", "Traditionally, latency evaluation in simultaneous MT has been carried out using AP, AL and DAL.", "Figure 1: Comparison of attention positions at $j = 3$ for bidirectional (left), unidirectional (center) and PBE (right) encoders with $k = 4$ in two consecutive timesteps, $i = 1$ with $G(1) = 4$ (top) and $i = 2$ with $G(2) = 5$ (bottom).", "However, these measures have been devised for sentence-level evaluation, where the latency of every sentence is computed independently of the others, and, as mentioned before, they do not perform well on a streaming setup.", "Thus, we revert to the stream-based adaptation of these measures proposed in (Iranzo-Sánchez et al., 2021) unless stated otherwise.", "Latency measures for a sentence pair $(x, y)$ are based on a cost function $C_i(x, y)$ and a normalization term $Z(x, y)$: $L(x, y) = \frac{1}{Z(x, y)} \sum_i C_i(x, y)$ (13),
where $C_i(x, y) = g(i)$ for AP, $g(i) - \frac{i-1}{\gamma}$ for AL, and $g'(i) - \frac{i-1}{\gamma}$ for DAL (14), and $Z(x, y) = |x| \cdot |y|$ for AP, $\operatorname{argmin}_{i : g(i) = |x|} i$ for AL, and $|y|$ for DAL (15).", "Latency measures can be computed in a streaming manner by considering a global delay function $G(i)$ that is mapped into a relative delay so that it can be compared with the sentence-level oracle delay.", "For the $i$-th target position of the $n$-th sentence, the associated relative delay can be obtained from the global delay function as $g_n(i) = G(i + b_n) - a_n$.", "So, the stream-adapted cost function of the latency measures is defined as $C_i(x_n, y_n) = g_n(i)$ for AP, $g_n(i) - \frac{i-1}{\gamma_n}$ for AL, and $g'_n(i) - \frac{i-1}{\gamma_n}$ for DAL (16), with $g'_n(i)$ defined as $g'_n(i) = \max\Big(g_n(i), \begin{cases} g'_{n-1}(|x_{n-1}|) + \frac{1}{\gamma_{n-1}} & i = 1 \\ g'_n(i-1) + \frac{1}{\gamma_n} & i > 1 \end{cases}\Big)$ (17).", "This definition assumes that the source and target sentence segmentations of the stream are uncovered, but this is not always the case (Schneider and Waibel, 2020), or they may not match those of the reference translations.", "However, sentence boundaries can be obtained by re-segmenting the system hypothesis following exactly the same procedure applied to compute translation quality in ST evaluation.", "To this purpose, we use the MWER segmenter (Matusov et al., 2005) to compute sentence boundaries according to the reference translations.", "Our streaming MT models have been trained following the conventional Transformer BASE (German→English streaming MT) and BIG (English→German simultaneous MT) configurations (Vaswani et al., 2017).", "As in (Schneider and Waibel, 2020), after training is finished, the models are finetuned on the training set of MuST-C (Di Gangi et al., 2019).", "The proposed model in Section 2 assumes that at inference time the source stream has been segmented into sentences.", "To this purpose, we opt for the text-based DS model (Iranzo-Sánchez et al., 2020), a sliding-window segmenter that moves over the source stream taking a split decision at each token based on a local-context window that extends to both past and future tokens.",
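A sketch of the stream-adapted AL and DAL computations in Eqs. 13-17; the 1-based indexing conventions and the initialization of the carry-over term for the first sentence are assumptions, as is gamma_n = |y_n| / |x_n|.

```python
def stream_adapted_latency(G, a, b, x_lens, y_lens):
    """Per-sentence stream-adapted AL and DAL (Eqs. 13-17).

    G: dict global target position -> number of source tokens read
       (the global delay function); a, b: 1-based sentence start positions;
    x_lens, y_lens: source/target sentence lengths.
    """
    AL, DAL = [], []
    gp_prev, gamma_prev = 0.0, 1.0   # carry-over g'_{n-1}(|x_{n-1}|); init assumed
    for n in range(len(a)):
        gamma = y_lens[n] / x_lens[n]            # gamma_n (assumption)
        al, dal, gp, tau = [], [], 0.0, None
        for i in range(1, y_lens[n] + 1):
            g = G[i + b[n]] - a[n]               # g_n(i) = G(i + b_n) - a_n
            carry = gp_prev + 1 / gamma_prev if i == 1 else gp + 1 / gamma
            gp = max(g, carry)                   # Eq. 17
            if tau is None and g >= x_lens[n]:
                tau = i                          # AL normalizer (cf. Eq. 15)
            al.append(g - (i - 1) / gamma)       # AL cost (Eq. 16)
            dal.append(gp - (i - 1) / gamma)     # DAL cost (Eq. 16)
        tau = tau or y_lens[n]
        AL.append(sum(al[:tau]) / tau)
        DAL.append(sum(dal) / y_lens[n])
        gp_prev, gamma_prev = gp, gamma
    return AL, DAL
```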
values.", "Indeed, as the streaming history increases, BLEU scores also do reaching what it seems the optimal history length at h = 60 and slightly degrading at h = 80 .", "As expected, when replacing the unidirectional encoder by the PBE, BLEU scores improve as the waitk value increases, since PBE has additional access to those tokens from j + 1 up to k .", "For instance, for k = 32 and h = 60 , PBE is 0 .", "7 BLEU points above the unidirectional encoder.", "On the other hand, it can be observed how using an encoder which is not fully bidirectional during training, creates a performance gap with respect to the offline bidirectional model when carrying out inference in an offline manner ( k 32 ).", "It can be also observed how the PBE model is better prepared for this scenario and shows a smaller gap.", "It is important to keep in mind that although both offline and PBE models behave the same way dur-24 26 28 30 32 34 36 38 40 1 2 4 8 16 32 Offline h=60 Offline h=0 BLEU wait-k h=80h=60h=40h=20h=0Unidir.Bidir.PBE Figure 2: BLEU scores on the German-English IWSLT 2010 dev set as a function of the k value in the waitk policy for a range of streaming history ( h ) lengths and encoder type (See Appendix A for a close-up).", "ing inference for a large enough k , during training time the PBE model, trained using the multik with k randomly sampled for each batch, has been optimized jointly for low, medium and high latencies.", "In general, the bidirectional encoder shows poor performance for simultaneous MT. This can be explained by the fact that there exists a mismatch between the training condition (whole source available) and the inference condition (only a prefix of the source is available for k < 32 ).", "These results are consistent with (Elbayad et al., 2020a).", "Keep in mind that this bidirectional model is different from the offline one because it has been subject to the constraints of Eq.", "7 during training.", "As a result of the BLEU scores reported in Figure 2, the streaming MT system with h = 60 and PBE was used in the rest of the German-English experiments.", "Following (Schneider and Waibel, 2020)'s setup, the test set is lowercased and concatenated into a single stream.", "In order to measure the latency of the pipeline defined by the segmenter followed by MT system, it is necessary to take into account not only the latency of the MT system but also that of the segmenter.", "Thankfully this is straightforward to do in our pipeline, as a segmenter with a 6977 20 22 24 26 28 30 32 34 2 4 6 8 10 12 BLEU AL oracle w=0w=1 w=2 w=3 w=4 20 22 24 26 28 30 32 34 4 6 8 10 12 14 16 18 BLEU DAL oracle w=0w=1w=2w=3 w=4 Figure 3: BLEU scores versus stream-adapted AL and DAL (scale s =0.85) with segmenters of future window length w = { 0 , 1 , 2 , 3 , 4 } on the IWSLT 2010 test set.", "future window of length w modifies the pipeline policy so that, at the start of the stream, w READ actions are carried out to fill up the future window.", "Then, every time the MT system carries out a READ action, it receives one token from the segmenter.", "Thus, the integration of the segmenter into the pipeline is transparent from a latency viewpoint.", "Figure 3 shows BLEU scores versus stream-adapted AL and DAL ( s scale = 0.85) figures reported with segmenters of future window length w = { 0 , 1 , 2 , 3 , 4 } for a streaming evaluation on the IWSLT 2010 test set.", "Points over each curve correspond to k = { 1 , 2 , 4 , 8 , 16 } values of the waitk policy used at inference time.", "Results for a w = 0 
oracle are also shown as an upper-bound.", "As shown, the stream-adapted AL and DAL figures achieved by our streaming MT system are reasonable, lagging 2-10 tokens behind the speaker for nearly maximum BLEU scores, with a best BLEU score of 29.5 points.", "The same happens with the AP figures, which range from 0.6 for w = 0 to 1.3 for w = 4 .", "These figures highlight the advantages of tying together our translation policy with the sentence segmentation provided by the DS model.", "Every time the DS model emits an end-of-sentence event, the MT model is forced to catch up and translate the entire input.", "In this way, the MT model never strays too far from the speaker, even if the source-target length ratio differs from that assumed at inference time.", "See Appendix A for streaming translation results in the reverse direction (English-German).", "Next, we compare our proposed streaming MT (STR-MT) model with the ACT system (Schneider and Waibel, 2020) in its 0.3 configuration in terms of BLEU score and stream-adapted latency measures in Table 2.", "Stream-level AL and DAL indicate that the ACT model lags around 100 tokens behind the speaker.", "Although both MT systems achieve similar translation quality levels, they do so at significantly different latencies, since the ACT model cannot keep the pace of the speaker.", "[Table 2: Latency and quality comparison of ACT (Schneider and Waibel, 2020) and the proposed STR-MT on the IWSLT 2010 De-En test set.]", "The STR-MT model is now compared on the English-German IWSLT 2020 simultaneous text-to-text track (Ansari et al., 2020) with other participants: RWTH (Bahar et al., 2020), KIT (Pham et al., 2020) and ON-TRAC (Elbayad et al., 2020b).", "This comparison is carried out in order to assess whether the proposed streaming MT system is competitive with highly optimized systems for a simultaneous MT task.", "Given that the test set of this track remains blind, we use the results reported on the MuST-C corpus as a reference.", "In order to evaluate all systems under the same conditions, the reference segmentation of the MuST-C corpus is used instead of the DS model.", "Additionally, given that all other participants translate each sentence independently, the conventional sentence-level AL latency measure is reported.", "Figure 4 shows the comparison of BLEU scores versus AL measured in terms of detokenized tokens.", "As defined in the IWSLT text-to-text track, three AL regimes, low (AL ≤ 3), medium (3 < AL ≤ 6) and high (6 < AL ≤ 15), were considered.", "ON-TRAC and our streaming MT system exhibit a similar progression, which is to be expected given that they are both based on the multi-k approach.", "However, our system consistently outperforms the ON-TRAC system by 1-2 BLEU.", "This confirms the importance of utilizing streaming history in order to significantly improve results, and how the proposed PBE model can take better advantage of the history.", "The RWTH and KIT systems are closer in translation quality to our proposal than ON-TRAC, for AL between 5 and 7.", "However, these systems do not show a flexible latency policy and are not comparable to our system at other regimes.", "Indeed, for that to be possible, these systems need to be re-trained, in contrast to our system, in which latency is adjusted at inference time.", "In this work, a formalization of streaming MT as a generalization of simultaneous MT has been proposed in order to define a theoretical framework in which our two contributions have been made.", "On the one hand, we successfully leverage streaming history across
sentence boundaries for a simultaneous MT system based on multiple wait-k paths, which allows our system to greatly improve the results of the sentence-level baseline.", "[Figure 4: Comparative BLEU scores versus AL at three latency regimes (low, medium and high) for the IWSLT 2020 simultaneous text-to-text track participants RWTH, ON-TRAC, KIT and our streaming MT (STR-MT) system on the MuST-C corpus.]", "On the other hand, our PBE is able to take into account longer context information than its unidirectional counterpart, while keeping the same training efficiency.", "Our proposed MT system has been evaluated under a realistic streaming setting, reaching translation quality similar to that of a state-of-the-art segmentation-free streaming MT system at a fraction of its latency.", "Additionally, our system has been shown to be competitive when compared with state-of-the-art simultaneous MT systems optimized for sentence-level translation, obtaining excellent results using a single model across a wide range of latency levels, thanks to its flexible inference policy.", "In terms of future work, additional training and inference procedures that take advantage of the streaming history in streaming MT are still open for research.", "One important avenue of improvement is to devise more robust training methods, so that simultaneous models can perform as well as their offline counterparts when carrying out inference at higher latencies.", "The segmentation model, though proven useful in a streaming setup, adds complexity and can greatly affect translation quality.", "Thus, the development of segmentation-free streaming MT models is another interesting research topic.", "The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements no. 761758 (X5Gon) and 952215 (TAILOR), and the Erasmus+ Education programme under grant agreement no. 20-226-093604-SCH (EXPERT); the Government of Spain's grant RTI2018-094879-B-I00 (Multisub) funded by MCIN/AEI/10.13039/501100011033 & ERDF A way of making Europe, and FPU scholarship FPU18/04135; and the Generalitat Valenciana's research project Classroom Activity Recognition (ref. PROMETEO/2019/111).", "The authors gratefully acknowledge the computer resources at Artemisa, funded by the European Union ERDF and Comunitat Valenciana, as well as the technical support provided by the Instituto de Física Corpuscular, IFIC (CSIC-UV)." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "other", "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "We describe a method to jointly pre-train speech and text in an encoder-decoder modeling framework for speech translation and recognition.", "The proposed method incorporates four self-supervised and supervised subtasks for cross modality learning.", "A self-supervised speech subtask leverages unlabelled speech data, and a (self-)supervised text to text subtask makes use of abundant text training data.", "Two auxiliary supervised speech tasks are included to unify speech and text modeling space.", "Our contribution lies in integrating linguistic information from the text corpus into the speech pre-training.", "Detailed analysis reveals learning interference among subtasks.", "Two pre-training configurations for speech translation and recognition, respectively, are presented to alleviate subtask interference.", "Our experiments show the proposed method can effectively fuse speech and text information into one model.", "It achieves between 1.7 and 2.3 BLEU improvement above the state of the art on the MUST-C speech translation dataset and comparable WERs to wav2vec 2.0 on the LIBRISPEECH speech recognition task.", "1 1 Introduction Pre-training can learn universal feature representations from a large training corpus and is benefi-cial for downstream tasks with limited amounts of training data (Peters et al., 2018; van den Oord et al., 2018; Chung et al., 2018; Zoph et al., 2020).", "With the advancement of computational power and self-supervised pre-training approaches, large volumes of unlabeled data may now be used in pre-training.", "Methods, such as BERT (Devlin et al., 2019), BART (Lewis et al., 2020b) and wav2vec2.0 (Baevski et al., 2020b), have emerged as the backbone of many speech and natural language processing tasks.", "1 https://github.com/pytorch/fairseq/tree/main/ examples/speech text joint to text.", "The aforementioned pre-training methods focus on learning feature representation either from text or speech.", "Many speech applications combine information learnt from both speech and text corpora to achieve state of the art results.", "In speech processing, transcribed speech training data is generally very scarce for many languages.", "It is difficult to build robust linguistic knowledge representation solely based on labeled speech training data.", "Jia et al. (2019); Chen et al. (2021) propose to generate synthetic data from text to augment speech training corpus.", "Li et al. (2021) demonstrate that models initialized with pre-trained wav2vec2.0 and mBART (Liu et al., 2020) modules are competitive for the multilingual speech to text translation task.", "Chuang et al. (2020) propose to concatenate the acoustic model and BERT model for speech Q&A.", "Chung et al. 
(2021b) align speech utterance representations to the corresponding text sentence representations, in which both representations are generated from unsupervised pre-trained models, for speech understanding.", "In this study, we are interested in pre-training for speech to text tasks using the Attention based Encoder-Decoder (AED) framework.", "In particular, we seek to answer the question of whether the integration of data from different modalities is beneficial for representation learning.", "To answer this question, we propose Speech and Text joint Pre-Training (STPT), a multi-task learning framework to combine different modalities, i.e., speech and text, in the pre-training stage.", "A self-supervised speech subtask and a (self-)supervised text to text subtask dominate the pre-training computation to leverage large amounts of unlabelled speech data and an abundant text training corpus.", "Two auxiliary supervised speech subtasks are used to unify different modalities in the same modeling space.", "The proposed method fuses information from the text and speech training corpora into a single model, and it effectively improves the performance of downstream tasks, such as speech to text translation (ST) and automatic speech recognition (ASR).", "Our contributions are summarized as follows: 1. We propose a multi-task learning framework to learn four speech and text subtasks in one model and successfully integrate linguistic information from the text corpus into the speech pre-training.", "2. We conduct detailed analyses on the proposed pre-training method, which reveal the interference among different subtasks.", "3. Two joint pre-training configurations are proposed to alleviate learning interference for ASR and ST respectively.", "4. State-of-the-art results are achieved on the downstream tasks.", "We obtain at least 1.7 BLEU improvement compared with the best reported MUST-C ST system, and WERs comparable to wav2vec 2.0 on the LIBRISPEECH ASR task.", "Pre-training : Self-supervised pre-training is usually optimized with two different criteria: contrastive loss (van den Oord et al., 2018; Chung and Glass, 2020; Baevski et al., 2020b) and masked prediction loss (Devlin et al., 2019).", "Contrastive loss focuses on distinguishing the positive samples from the negative ones given the reference sample, and it has achieved great success for speech recognition (Baevski et al., 2020b).", "Masked prediction loss was first studied for natural language processing tasks (Devlin et al., 2019; Lewis et al., 2020b), with subsequent application to speech processing (Baevski et al., 2020a; Hsu et al., 2021).", "Chung et al.
(2021a) combine contrastive loss and masked prediction loss, which shows good performance for the downstream ASR task.", "The optimization of our self-supervised speech task is more closely related to the masked prediction loss.", "Instead of predicting a hard discretized label for the masked frames, which is error prone, we use KL divergence to minimize the distribution difference between the same feature frames with and without masking.", "Please refer to subsection 3.2 for more details.", "Self-training (or iterative pseudo labelling): self-training is another widely used approach to take advantage of unlabelled speech data to improve ASR performance (Kahn et al., 2020; Xu et al., 2020; Pino et al., 2020; Zhang et al., 2020; Wang et al., 2021a; Xiao et al., 2021; Wang et al., 2021b).", "A seed model, which is usually trained with a small amount of supervised speech training data, is employed to generate pseudo labels for the unlabelled speech data.", "The speech data with pseudo labels is added to the training dataset to build another model, which is expected to outperform the seed model due to exposure to more training data.", "Similar to self-training, we also use small amounts of supervised data to unify the speech and text modeling space.", "However, the self-supervised speech training in this work avoids making hard predictions and uses KL divergence to maximize the mutual information between the masked spans and the observed feature frames.", "Multi-task learning : Due to data scarcity, multi-task learning is widely adopted to leverage parallel text training data for ST (Weiss et al., 2017; Anastasopoulos and Chiang, 2018; Tang et al., 2021b; Ye et al., 2021).", "Those methods primarily use supervised speech data sets during multi-task learning, whereas our method can leverage large amounts of unlabeled speech data during the pre-training stage, which has the potential to improve performance even further.", "A concurrent work from Ao et al. (2021) also proposes to jointly pre-train speech and text for ASR and text to speech applications, and it is fully unsupervised.", "Our method focuses on taking advantage of the supervised speech data, which could be the same data used for fine-tuning, to improve the joint speech text pre-training.", "Our results demonstrate the efficacy of supervised speech data in pre-training.", "Another concurrent work is from Bapna et al. (2021), which focuses on speech encoder pre-training using both speech and text data.", "Our method emphasizes the encoder-decoder framework and trains both encoder and decoder in the pre-training stage.", "ASR and ST are the two main downstream tasks for the proposed pre-training method.", "Figure 1 depicts our joint pre-training framework, which consists of four subtasks: 1. (Self-)supervised Text to Text subtask (T2T) 2. Self-supervised Speech Learning subtask (SSL)", "3. Supervised Speech to Phoneme classification subtask (S2P) 4.
Supervised AED based Speech to Text subtask (S2T), which is the same as the downstream task, i.e., ST or ASR. The choice of the T2T subtask depends on the downstream task.", "For ASR, the T2T subtask is a denoising autoencoder task (BART) (Lewis et al., 2020a), while ST utilizes a text based neural machine translation task.", "The SSL subtask is a self-supervised speech learning task that leverages large amounts of unlabelled speech data and is optimized with the masked prediction loss.", "The last two supervised speech tasks (S2P and S2T) unify the two modalities, i.e., speech and text, into one modeling space.", "In this study, we find that the subtasks for the ASR pre-training are complementary, while subtask interference is observed in the ST pre-training at some encoder layers.", "We propose two different configurations: fully shared encoder (FSE) (Figure", "1(a)) for the ASR pre-training, and partially shared encoder (PSE) (Figure", "1(b)) for the ST pre-training.", "The FSE configuration aims to encourage information sharing between different subtasks, while the PSE configuration tries to minimize the information sharing between the encoder-only subtasks, i.e., SSL and S2P, and the sequence to sequence AED subtasks, i.e., T2T and S2T.", "More subtask interference analysis is presented in subsection 5.2.", "We describe the details of each subtask in the following subsections.", "In the sequence to sequence ASR and ST tasks, the decoder is a text generator conditioned on the encoder outputs.", "Large amounts of training samples are required to cover different linguistic aspects of the target language.", "Abundant text is an ideal supplement to the limited supervised speech data corpus.", "Assume the target text sequence is $Y = (y_1, y_2, \dots, y_N)$; its corresponding corrupted version, $X = \mathrm{NOISE}(Y) = (x_1, x_2, \dots, x_M)$, can be created by masking or replacing token spans in $Y$ (Lewis et al., 2020a) for the ASR pre-training.", "If the downstream task is ST, X is the corresponding source token sequence.", "The task is optimized by minimizing the cross-entropy loss $\mathcal{L}_{\mathrm{T2T}} = -\sum_{i}^{N} \log p(y_i \mid y_{1:i-1}, X)$ (1). In this subtask, we also convert the input text into the corresponding pronunciation form, i.e., a phoneme sequence, as it makes it easier to align the encoder outputs from speech and text (Tang et al., 2021b).", "The purple and black lines in Figure 1 describe the data flow in the T2T subtask.", "The SSL subtask aims to leverage vast amounts of unlabelled speech data and learn general speech representations.", "The model configuration follows wav2vec 2.0 (Baevski et al., 2020b), where the speech model includes a feature extractor and a context encoder.", "The context encoder corresponds to the speech encoder in Figure", "1(b) in the ST pre-training.", "If ASR is the downstream task, the context encoder includes one extra shared encoder, as shown in Figure", "1(a).", "We use different frameworks for the ST and ASR pre-training to reduce interference among subtasks.", "The detailed subtask interference is discussed in subsection 5.2.", "We propose a masked KL divergence loss to optimize the SSL subtask.", "It consists of a two-pass computation.", "Given the speech input $S = (s_1, s_2, \dots, s_T)$, the feature extractor and context encoder outputs are $Z = (z_1, z_2, \dots, z_{T'})$ and $O = (o_1, o_2, \dots, o_{T'})$ respectively, where the speech input is down-sampled by the feature extractor and $T > T'$.", "In the first pass, the output O is compared with the phoneme embeddings $E =
(e_1, e_2, \dots, e_I)$, which are from the T2T subtask described in subsection 3.1.", "I is the phoneme vocabulary size.", "The predicted phoneme distribution $p(o_j \mid e_i)$ is defined as $p(o_j \mid e_i) = \frac{\exp(o_j^{\top} e_i)}{\sum_{i'} \exp(o_j^{\top} e_{i'})}$ (2). In the second pass, speech feature spans $\hat{Z} \subset Z$ are selected and corrupted as in wav2vec 2.0 (Baevski et al., 2020b).", "$\hat{O}$ is the corresponding context encoder output from $\hat{Z}$.", "We train the model to make the corrupted prediction $p(\hat{o}_j \mid e_i)$ similar to $p(o_j \mid e_i)$ by minimizing their KL divergence.", "Compared with the masked prediction loss, instead of predicting a hard discretized label for the masked frames, we use the soft label prediction, i.e., the predicted phoneme distribution from the first pass, to learn speech representations and avoid hard prediction errors.", "The S2P subtask is employed to unify the self-supervised trained speech and text models.", "It shares the same model as the SSL subtask.", "In this subtask, a transcribed ASR data set is used, and the goal of this task is to predict the frame level phoneme labels.", "A HMM-GMM model is trained with the same transcribed dataset using Kaldi (Povey et al., 2011) to generate the frame-level labels with forced alignment.", "The subtask is optimized with the cross-entropy loss $\mathcal{L}_{\mathrm{S2P}} = -\sum_{j} \log p(o_j \mid e_{a(j)})$, where $a(j)$ is the phoneme label associated with the context encoder output $o_j$.", "The data flow in the S2P subtask is depicted with steel-blue lines in Figure 1. Besides the S2P subtask mentioned in the previous subsection, we include the potential downstream AED based task, i.e., ASR or ST, as another auxiliary subtask during the pre-training stage.", "In many speech translation datasets, such as MuST-C (Gangi et al., 2019) or CoVoST (Wang et al., 2020), we have both speech transcription and translation labels.", "The speech transcription is used in the S2P subtask, while the S2T subtask can make use of the corresponding translation labels.", "We hope this auxiliary task makes the transition from pre-training to fine-tuning smooth and results in better performance on downstream tasks.", "The components involved during optimization are connected with blue lines in the encoder and black lines in the decoder, as shown in Figure 1. They are trained with the cross-entropy criterion $\mathcal{L}_{\mathrm{S2T}} = -\sum_{i} \log p(y_i \mid y_{1:i-1}, O)$ (5), where O is the input speech and $Y = (y_1, \dots, y_N)$ are the target labels.", "The overall pre-training loss is defined as the combination of the four losses discussed above: $\mathcal{L} = \mathcal{L}_{\mathrm{T2T}} + \alpha \mathcal{L}_{\mathrm{SSL}} + \beta \mathcal{L}_{\mathrm{S2P}} + \gamma \mathcal{L}_{\mathrm{S2T}}$ (6), where $\alpha$, $\beta$ and $\gamma$ are task weights for the SSL, S2P and S2T subtasks respectively.", "During the pre-training, the shared encoder inputs come from two sources: either speech encoder outputs in the S2T subtask or phoneme embeddings in the T2T subtask.", "The shared encoder inputs might be on different numerical scales.", "In order to stabilize the multi-task training, a LayerNorm (Ba et al., 2016) is applied to the shared encoder inputs and places those inputs on the same numerical scale, as shown in Figure 1. In the pre-training stage, we first train the modules with the T2T subtask until they have converged.", "It helps to stabilize the training and achieve a better result.", "Then the entire model is jointly optimized with all the subtasks mentioned in section 3.
Finally, the pre-trained model is fine-tuned on the downstream tasks.", "In the fine-tuning stage, we keep optimizing the model with the T2T and S2T subtasks.", "The two encoder-only subtasks (SSL and S2P) are dropped, since the model has learnt good speech representations from the unlabeled speech data in pre-training.", "Two downstream tasks, ASR and ST, are examined.", "The ASR system is evaluated on four LIBRISPEECH (Panayotov et al., 2015) evaluation sets: dev-clean, dev-other, test-clean and test-other.", "WER is reported in the experiments.", "ST models are evaluated on two translation directions: English-Spanish (EN-ES) and English-French (EN-FR).", "Case-sensitive detokenized SACREBLEU (Post, 2018) is reported on the tst-COMMON test set from MUST-C (Gangi et al., 2019).", "For both ASR and ST pre-training, 60k hours of unlabelled English speech data from Librilight (Kahn et al., 2020) is used to build the self-supervised speech task if not specifically mentioned otherwise.", "We employ the same labelled data for the supervised learning in pre-training and fine-tuning, i.e., LIBRISPEECH training data for ASR and MUST-C for ST. For ASR pre-training, the LIBRISPEECH language model (LM) training dataset is used to build the monolingual BART model.", "For ST pre-training, we take the parallel training corpus from WMT.", "More details about the training data can be found in Appendix A. The model takes raw speech audio as input.", "The feature encoder contains seven blocks, and the temporal convolutions in each block have 512 channels with strides (5,2,2,2,2,2,2) and kernel widths (10,3,3,3,3,2,2).", "The speech encoder, shared encoder and shared decoder each have 6 transformer layers, with model dimension 768, inner (FFN) dimension 3,072 and 8 attention heads.", "We adopt Pre-LN in the transformer block, following Xiong et al. (2020).", "The total number of parameters is 169 million.", "The task weight for each subtask is set by the number of mini-batches used during training.", "In the pre-training, the ratio of mini-batch numbers for the subtasks is 1.0, 7.0, 0.5 and 0.5 for the T2T, SSL, S2P and S2T subtasks respectively.", "We mask 30% of the tokens in the T2T BART subtask in ASR pre-training, and no masking is applied for the T2T NMT subtask in the ST pre-training.", "7% of the feature frames in the SSL subtask and 3% of the feature frames in the two supervised speech subtasks are randomly selected as mask span starting time-steps.", "The mask span length is 10.", "The masking percentages are selected via grid search ((20, 30) for text masking; (6, 6.5, 7) and (2, 3) for speech masking).", "Additional experimental details such as optimization hyper-parameters are included in Appendix B. We present the LIBRISPEECH recognition results in Table 1.
Recognition results without/with a decoding LM are reported.", "The WERs obtained with the LM are displayed in parentheses.", "The second column shows the dataset used as unlabeled data in pre-training.", "LS-960 stands for the LIBRISPEECH training dataset, and LV-60k is the 60,000-hour Librilight dataset.", "The decoding LM is built with the LIBRISPEECH text training corpus, which is the text corpus used by the T2T subtask in the ASR pre-training and fine-tuning.", "The first part of the table shows results from the wav2vec 2.0 base model, which is a CTC based ASR system.", "The second part of the table presents results from two AED based ASR systems, and we mainly compare the proposed method with those two AED systems.", "LAS is an LSTM based system trained with the LIBRISPEECH data only.", "Transformer (Tang et al., 2021b) is based on multi-task learning and jointly trained with a text task.", "The results from STPT models are presented in the third part of the table.", "The fourth row shows results from a model that uses the 960 hours of LIBRISPEECH training data as the unlabelled pre-training data, while the model in the fifth row is pre-trained with the 60k hours of Librilight data.", "STPT outperforms all previously reported AED-based systems.", "On average, there is a 1.2 absolute WER reduction compared to the jointly trained transformer model (Tang et al., 2021b).", "STPT also reduces WER by 2.2 compared with the CTC based wav2vec model if no external LM is applied, and achieves comparable WERs when it is decoded with an LM.", "One interesting observation is that the decoding LM is not very helpful for the STPT model. [Table 1: WER results on Librispeech (dev-clean, dev-other, test-clean, test-other, average), with WERs after LM decoding in parentheses: wav2vec 2.0 (Baevski et al., 2020b), LS-960: 3.2 (1.8), 8.9 (4.7), 3.4 (2.1), 8.5 (4.8), 6.0 (3.4); LAS (Park et al., 2019): test-clean 2.8 (2.5), test-other 6.8 (5.8); Transformer (Tang et al., 2021b): 2.8, 7.0, 3.1, 7.2, 5.0; STPT, LS-960: 2.1 (1.9), 5.4 (5.2), 2.3 (2.2), 5.6 (5.3), 3.8 (3.6); STPT, LV-60k: 2.0 (2.1), 4.4 (4.2), 2.1 (2.1), 4.6 (4.5), 3.3 (3.2).]", "Only a 0.2 WER reduction is observed when a decoding LM is applied.", "Other systems, on the other hand, show a considerable WER reduction when the LM is applied during decoding.", "It indicates that our multi-task learning in the pre-training and fine-tuning stages can effectively fuse linguistic information from the text corpus into the ASR model.", "An LM might not be required if it is trained on the same text corpus.", "We also report results from the model pre-trained with 60k hours of Librilight data in the fifth row.", "Compared with the LS-960 STPT model, the Librilight data helps to reduce the WER on the two difficult other datasets.", "In the following experiments, we will use Librilight as the unlabelled data in pre-training.", "In Table 2, we present the speech translation results on the MuST-C datasets.", "Rows one to four are the latest results from the literature.", "Row one shows the results from training a speech to text translation task alone.", "Rows two and three present results from two multi-task systems with speech and text jointly trained together.", "Row four is the best system reported, which is initialized with the pre-trained wav2vec 2.0 and machine translation models, then fine-tuned with the joint speech and text training.", "Our method achieves 2.3 and 1.7 more BLEU points for the EN-ES and EN-FR translation directions compared with the best system (Ye et al., 2021).", "Interference among subtasks may impede the progress of multi-task learning and lead to inferior results.", "In
this study, we examine the task interference by comparing the gradient similarity between pairs of subtasks.", "We choose the pre-trained models using the FSE configuration discussed in section 3 and accumulate gradients from each of the four jointly trained subtasks.", "We prepare 20 batches of training samples for each subtask, and retrieve the accumulated gradients by sending these batches to the models.", "Then we calculate the pairwise cosine similarity between gradients from any two subtasks.", "The pairwise subtask gradient similarities from the shared encoder are presented in Figure 2. Figure", "2(a) shows the gradient similarity in ASR pre-training.", "In most layers, the gradient similarities are small.", "No serious gradient interference is observed.", "Figure", "2(b) depicts the gradient similarity from the ST pre-training.", "Compared with the ASR pre-training, the S2T and T2T subtasks are replaced by the speech translation and text based neural machine translation subtasks in pre-training.", "The interference between different subtasks is significant, as large positive and negative gradient similarities are observed in the third and fifth layers in Figure 2. Similarly, we compare task gradients in the speech encoder, and no obvious task interference is observed within the speech encoder for either ASR or ST pre-training.", "Detailed analysis on the speech encoder is included in Appendix C. In order to alleviate the task interference, the PSE configuration is proposed for the ST pre-training.", "Table 3 presents the performance comparison between the two configurations on both ASR and ST pre-training.", "On the left part of the table, we list the ASR results using 100 hours of labelled speech data (train-clean-100) in pre-training and fine-tuning.", "The right part of the table shows the BLEU scores evaluated on the MUST-C dataset.", "As we expected, the FSE configuration encourages information sharing among tasks, and it achieves lower WER for the ASR task.", "It indicates that the subtasks in the ASR pre-training are complementary to each other.", "[Figure 2: Pairwise gradient cosine similarities (range -0.4 to 0.4) across shared encoder layers 0-5 for the subtask pairs unsup_speech-sup_speech, unsup_speech-sup_s2s, unsup_speech-text, sup_speech-sup_s2s, sup_speech-text and sup_s2s-text.]", "On the other hand, the PSE configuration minimizes the information sharing between the AED subtasks and the encoder-only subtasks, and it leads to higher BLEU for the ST task.", "The supervised speech data connects the text and speech modeling and unifies the representations from different modalities.", "An interesting question we want to investigate is how much supervised data is enough to learn a good cross modality representation.", "In this experiment, we choose different amounts of labelled data for ASR pre-training and fine-tuning, varying from 960 hours (the full dataset) to 100 hours (train-clean-100) and 10 hours, as in (Kahn et al., 2020), to answer this question.", "In Table 4, the first column shows the amount of supervised speech data available during the pre-training, and the second column presents the amount of labelled data used in the fine-tuning stage.", "In pre-training, the same supervised speech data is used in the S2P and S2T subtasks.", "The first observation is that more supervised speech data in the pre-training stage is always helpful for getting a smaller WER.", "For example, if the models are fine-tuned with the full LIBRISPEECH training dataset, the average WERs are 3.3 (row one), 3.6 (row two) and 4.0 (row four) for experiments with 960, 100 and 10 hours of
labelled data in the pre-training stage.", "[Table 4: Impact of the amount of supervised data (pre-training hours / fine-tuning hours, followed by dev-clean, dev-other, test-clean and test-other WER): 960/960: 2.0, 4.4, 2.1, 4.6; 100/960: 2.3, 4.9, 2.2, 5.1; 100/100: 3.2, 6.8, 3.5, 7.2; 10/960: 2.7, 5.3, 2.8, 5.3; 10/100: 3.8, 7.8, 4.0, 7.7; 10/10: 19.9, 27.5, 22.0, 28.8.]", "The second observation is that we are still able to obtain good speech representations even with small amounts of labelled data.", "In row four, the model is pre-trained with 10 hours of labelled data, then fine-tuned with 960 hours of supervised speech data.", "It can achieve an average 4.0 WER, which is better than the results of the AED systems in Table 1. However, we also notice that the performance degrades quickly if only small amounts of labelled speech data are available.", "The average WER increases to 24.6 (row six) when only 10 hours of supervised speech data is employed in both pre-training and fine-tuning.", "Another question we are interested in is the generalizability of the pre-trained model.", "There are two data partitions in LIBRISPEECH : clean and other.", "The clean partition is supposed to have higher recording quality and accents closer to US English, while the other partition contains more difficult speakers with high WER (Panayotov et al., 2015).", "We create four data partitions for pre-training and fine-tuning to simulate mismatched training conditions.", "train-clean-100 is used as the pre-training clean data set (PT C), and the first 30,000 utterances from train-clean-360 as the fine-tuning clean dataset (FT C).", "[Table 5: WER comparison under matched and mismatched pre-training and fine-tuning conditions (clean / other test WER): PT C + FT C: 3.0 / 6.7; PT C + FT O: 3.2 / 5.9; PT O + FT C: 3.0 / 5.9; PT O + FT O: 3.2 / 5.8.]", "The first 30,000 utterances and the following 30,000 utterances from train-other are used as the pre-training (PT O) and fine-tuning (FT O) other datasets.", "Each dataset includes approximately 100 hours of speech data.", "In Table 5, models are trained under 4 different combinations of supervised pre-training and fine-tuning data sets.", "We report the average WER on the dev-clean and test-clean test sets as clean, and the average WER on the dev-other and test-other sets as other, to reduce the result variation.", "From Table 5, we have the following observations.", "1) A model achieves the best results under the matched condition.", "The model PT C + FT C achieves the lowest WER on the clean set, while PT O + FT O achieves the best results on the other set.", "2) Training and testing under totally different conditions can increase WER significantly.", "The model PT C + FT C increases WER by 0.9 on the other set compared with the PT O + FT O model.", "3) Mismatched pre-training and fine-tuning might slightly increase the WER, by 0.1 to 0.2 in this experiment.", "In the SSL subtask, we optimize the model to reduce the KL divergence loss between the input without masking and with masking, as described in subsection 3.2.", "It is a variant of the masked prediction loss (Baevski et al., 2020a), and no target labels are required in our implementation.", "Contrastive loss is another widely used method for self-supervised speech learning (Baevski et al., 2020b).", "We compare both criteria in Table 6.", "The number of distractors in the contrastive loss is 100, as in (Baevski et al., 2020b).", "Both ASR and ST results are reported in Table 6.", "The masked KL divergence loss achieves about 0.6 lower WER on the Librispeech dev sets.", "It also achieves between 0.7 and 1.4 more BLEU points.", "These gains are measured on the MuST-C tst-COMMON sets.", "It demonstrates the effectiveness of the proposed
masked KL divergence loss for the SSL subtask.", "In Table 7, we present an ablation study by removing different steps/tasks in the pre-training stage.", "In order to make the pre-training more stable, the model training adopts a three-stage optimization strategy: 1) pre-training the T2T subtask to obtain a good initialization of the phoneme embeddings; 2) joint pre-training with the four subtasks to leverage large amounts of unlabelled speech data and abundant text data; and 3) fine-tuning the model on the downstream task for the best performance.", "In the second row, we skip the T2T pre-training step and initialize the model randomly for the joint pre-training.", "A 0.5 WER increase is observed on average on the two LIBRISPEECH dev sets.", "It also has more impact on the EN-ES translation direction, where 1.2 BLEU points are lost without proper initialization.", "In the third row, we present the results without the S2T subtask.", "For both ASR and ST, significant performance degradation is observed, with an average 1.1 WER increase for the two ASR tests and a 1.8 BLEU decrease for the two ST directions.", "We also try removing the S2P subtask while still keeping the S2T subtask.", "The training doesn't converge.", "The SSL subtask has a very small or zero loss, since all predictions collapse into one or two target phonemes.", "Also, little progress is made on the S2T subtask even though it is co-trained with the SSL and T2T subtasks.", "In the last row, the model is trained without pre-training, i.e., only the T2T and S2T subtasks are optimized.", "Compared with the STPT results, there is an increase of about 1.4 WER for the two LIBRISPEECH test sets and a decrease of 3.4 BLEU for the two ST directions on average.", "In this work, we present a method to jointly pre-train speech and text in one model for speech translation and recognition under the AED framework.", "It includes four self-supervised and supervised subtasks from two different input modalities, hence the proposed method can leverage large amounts of unlabelled speech data and abundant text data in the pre-training stage.", "We conduct detailed analysis on the interference among different subtasks and propose two model configurations for the ASR and ST pre-training respectively to alleviate the subtask interference.", "Our experimental results show STPT can effectively fuse information within text and speech training data into one model.", "We achieve between 1.7 and 2.3 BLEU improvement over the state of the art.", "These gains are obtained on the MUST-C EN-FR and EN-ES speech translation tasks.", "We also achieve WERs comparable to wav2vec 2.0 on the LIBRISPEECH ASR task.", "We highlight the potential of this work to have a positive impact on society: augmenting speech processing tasks with text corpora, and improving speech related applications.", "At the same time, this work may have some negative consequences if the text data is not handled in a proper way.", "Before using the text data to train a speech system, one should evaluate fairness in the collected data, and make sure not to train on offensive or any other type of inappropriate data." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "result", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "other", "method", "abstain", "other", "abstain", "other", "abstain", "objective", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "While Yu and Poesio (2020) have recently demonstrated the superiority of their neural multi-task learning (MTL) model to rule-based approaches for bridging anaphora resolution, there is little understanding of (1) how it is better than the rule-based approaches (e.g., are the two approaches making similar or complementary mistakes?) and (2) what should be improved.", "To shed light on these issues, we (1) propose a hybrid rule-based and MTL approach that would enable a better understanding of their comparative strengths and weaknesses; and (2) perform a manual analysis of the errors made by the MTL model.", "Bridging resolution is an anaphora resolution task that involves identifying and resolving bridg-ing/associative anaphors, which are anaphoric references to non-identical associated antecedents.", "To exemplify, consider the following sentences taken from the BASHI corpus (Rsiger, 2018a): Even if baseball triggers losses at CBS and he doesn't think it will I'd rather see the games on our air than on NBC and ABC, he says .", "In this example, a bridging link exists between the anaphor the games and its antecedent baseball , as the definite description cannot be interpreted correctly unless it is associated with baseball .", "Bridging resolution is arguably more challenging than entity coreference resolution.", "The reason is that unlike in entity coreference, in bridging resolution there are typically no clear syntactic or surface clues for identifying the antecedent of a bridging anaphor.", "In many cases, resolution requires the use of context as well as commonsense inference.", "Despite the difficulty of bridging resolution, the annotated corpora available for training bridging resolvers are much smaller than those for training entity coreference resolvers (e.g., OntoNotes (Hovy et al., 2006)).", "As a result, early work has focused on developing rule-based systems (e.g., Hou et al. 
(2014), Rösiger (2018b)).", "A key weakness of rule-based approaches is that the ruleset may have to be updated when it is applied to a new corpus (e.g., new rules may have to be added, and existing rules may have to be removed or modified), as different bridging corpora are annotated with slightly different guidelines (to cover different kinds of bridging links, for instance).", "In light of this weakness, Yu and Poesio (2020) have recently proposed a neural bridging resolver based on multi-task learning (MTL).", "Despite being trained on the relatively small amount of labeled data that is currently available, their resolver has achieved state-of-the-art results on three evaluation corpora.", "In this paper, we seek to make sense of this state of the art by shedding light on two issues.", "First, how is the MTL model better than its rule-based counterparts?", "More specifically, while MTL is apparently making fewer mistakes than the rules, are the two approaches making similar or complementary mistakes?", "Second, given that the MTL model is the current state of the art, what needs to be improved in MTL?", "To investigate the first issue, we propose a hybrid approach to bridging resolution: we first apply the hand-crafted rules to identify bridging links, and then employ the MTL-based model to resolve any (anaphoric) mentions that are not resolved by the rules.", "The design of this pipelined resolver is motivated in part by sieve-based approaches to entity coreference resolution (Raghunathan et al., 2010; Lee et al., 2013).", "Specifically, given our hypothesis that hand-crafted rules typically have higher precision and lower coverage than machine-learned patterns, we employ the rules as our first sieve and MTL as our second sieve.", "If our hybrid approach outperformed both the rule-based and learning-based approaches, that would provide suggestive evidence that these two approaches have different strengths and weaknesses and therefore should be viewed as complementary approaches to bridging resolution.", "[Table 1: Statistics on different corpora. ISNotes: 50 docs, 40292 tokens, 11272 mentions, 663 anaphors; BASHI: 50 docs, 57709 tokens, 18561 mentions, 459 anaphors; ARRAU RST: 413 docs, 228901 tokens, 72013 mentions, 3777 anaphors.]", "Note that this would be an important ramification, as learning-based approaches and rule-based approaches to bridging resolution have thus far been viewed as competing approaches.", "For instance, when evaluating their MTL model, Yu and Poesio (2020) merely view the rule-based systems as baselines.", "To investigate the second issue, we perform a manual analysis of the major types of error made by MTL.", "Since interpretability remains a key weakness of neural models, we believe that our analysis could provide useful insights into what needs to be improved in MTL.", "Corpora.", "We use three English corpora that are arguably the most widely used corpora for bridging evaluation, namely ISNotes (composed of 50 WSJ articles in OntoNotes) (Markert et al., 2012), BASHI (the Bridging Anaphors Hand-annotated Inventory, composed of another 50 WSJ articles in OntoNotes) (Rösiger, 2018a), and ARRAU (composed of articles from four domains, RST, GNOME, PEAR, and TRAINS) (Poesio and Artstein, 2008; Uryupina et al., 2020).", "Following previous work, we report results only on RST, the most comprehensively annotated segment of ARRAU.", "Table 1 shows the statistics on these corpora.", "For ARRAU RST, we use the standard train-test split.", "For ISNotes and BASHI, we divide the available documents into 10 folds and report 10-fold
cross validation results, following previous work (Hou, 2020; Yu and Poesio, 2020).", "The hybrid approach.", "Recall that our hybrid approach is composed of a rule-based system and Yu and Poesio's (2020) (learning-based) MTL approach.", "Below we provide a brief overview of the MTL approach and the rules.", "Yu and Poesio's (2020) MTL-based system is the first neural model for full bridging resolution.", "In our experiments, we use their implementation, publicly available from https://github.com/juntaoy/dali-bridging.", "All model parameter values are the same as those used in Yu and Poesio (2020).", "They presented two extensions to Kantor and Globerson's (2019) span-based neural mention-ranking model (Denis and Baldridge, 2008), which was originally developed for entity coreference resolution.", "First, they provided gold mentions as input to the model, meaning that the model needs to learn the span representations but not the span boundaries.", "Second, they proposed to train the model to perform coreference and bridging in an MTL framework, where the span representation layer is shared by the two tasks so that information learned from one task can be utilized when learning the other task.", "Unlike feature-based approaches, where feature engineering plays a critical role in performance, this model employs only two features, the length of a mention and mention-pair distance.", "Different rule-based systems have been developed for the three evaluation corpora.", "We used Hou's (2014) rules for ISNotes, and Rösiger's (2018) rulesets for BASHI and ARRAU.", "Table 2 shows an example rule designed by Hou et al. (2014) for full bridging resolution in ISNotes.", "As can be seen, a rule is composed of two conditions: one on the anaphor and the other on the antecedent.", "If two mentions satisfy these conditions, the rule will posit a bridging link between them.", "In the table, we express the rule in terms of its name, the condition on the anaphor, the condition on the antecedent, and the motivation behind its design.", "Setting.", "We report results for full bridging resolution .", "In this setting, a system is given as input not only a document but also the gold mentions in the document.", "The goal is to identify the subset of the gold mentions that are bridging anaphors and resolve them to their antecedents, which are also chosen from the gold mentions.", "Postprocessing.", "Following previous work (Rösiger et al., 2018), we postprocess the output of a resolver by removing the gold coreferent anaphors from the predicted bridging anaphors.", "Evaluation metrics.", "We report results for recognition and resolution in terms of precision, recall, and F-score.", "For recognition, recall is the fraction of gold anaphors that are correctly identified, whereas precision is the fraction of anaphors identified by the system that are correct.", "Rösiger et al. (2018) designed an additional rule for BASHI and another ruleset for ARRAU; the complete set of rules designed by Hou et al. (2014) and Rösiger et al. (2018) can be found in Appendix A.", "In our experiments, we use the implementation of these rule-based systems, publicly available from https://github.com/InaRoesiger/BridgingSystem.", "For resolution, recall and precision are defined in a similar fashion.", "Bridging recognition and resolution results of the three approaches under comparison (i.e., Rules, MTL, and Hybrid) on the three evaluation corpora are shown in Table 3.
The performance trends largely corroborate our hypothesis.", "On all three datasets, we see that the recall of Hybrid is substantially higher than those of Rules and MTL for both recognition and resolution, meaning that Rules and MTL are making different rather than similar mistakes and can therefore be used to complement each other's weaknesses.", "Moreover, Hybrid's F-scores on ISNotes and BASHI are better than those of Rules and MTL: on ISNotes, Hybrid outperforms MTL by 5.7% points and 4.8% points in F-score for recognition and resolution, respectively; and on BASHI, Hybrid outperforms MTL by 5.4% points and 2.0% points in F-score for recognition and resolution, respectively.", "On ARRAU RST, however, Hybrid's recognition and resolution F-scores are only slightly better than those of Rules and MTL.", "The failure of Hybrid to offer substantial gains on ARRAU RST w.r.t. F-score can be attributed to Rules's relatively low precision: unlike in ISNotes and BASHI, where Rules's precision is higher than MTL's, in ARRAU RST, Rules's precision is more or less at the same level as MTL's.", "Next, we compare in Table 4 the performance of our three resolvers on different categories of anaphors defined by the rules used in the rule-based resolver.", "Each rule category is identified using its rule ID (column 1).", "Each fraction in column 2 is the ratio of the number of gold anaphors that satisfy the anaphor condition of a rule to the number of gold mentions that satisfy the same condition. (Owing to space limitations, only the results on ISNotes and BASHI are shown in Table 4; the results on ARRAU RST can be found in Appendix B. The mapping between rule IDs and the rule categories can be found in Appendix A.)", "Finally, the recognition and resolution results shown in the remaining columns are expressed in terms of precision (P), recall (R), and F-score (F).", "We believe that these results can reveal the comparative strengths and weaknesses of the resolvers.", "A few points about the results in Table 4 deserve mention.", "On ISNotes (Table", "4(a)), while Rules outperforms MTL on the majority of the rule categories in resolution F-score, MTL achieves the state of the art by resolving anaphors in the largest category, Rule 18 (Other), which consists of anaphors that cannot be handled by any of the rules.", "On BASHI (Table", "4(b)), however, Rules outperforms MTL on only four rule categories.", "This is somewhat surprising because the rulesets used for ISNotes and BASHI are almost identical to each other.", "A closer look at the numbers in the second column of Table 4 reveals an interesting observation: in a majority of the rules, the number of gold anaphors that satisfy a rule condition is smaller in BASHI than in ISNotes, whereas the number of gold mentions that satisfy an anaphor condition is larger in BASHI than in ISNotes.", "This is again somewhat surprising because both ISNotes and BASHI contain 50 WSJ news articles taken from OntoNotes that are annotated with very similar annotation schemes.", "Consequently, we computed the average length of a document in the two datasets and found that BASHI indeed has more tokens per document on average (1154 tokens/doc in BASHI compared to 805 tokens/doc in ISNotes).", "The fact that BASHI has longer documents could explain why more gold mentions satisfy the anaphor
"However, we still could not explain why the number of gold anaphors that satisfy the anaphor conditions of the rules is smaller in BASHI than in ISNotes.", "To understand the reason, we took a closer look at the documents in BASHI and found that there are cases of bridging that are not annotated.", "Examples of such missing bridging links are shown in Table 6, where the missing anaphors are boldfaced and their antecedents are italicized.", "We therefore speculate that the lower resolution precision achieved by Rules on BASHI has to do with the incomplete gold annotations on BASHI.", "In Table 5, we quantify how different Rules and MTL are w.r.t. each rule category.", "Let GA_i be the set of gold anaphors that are covered by rule category i.", "We show for each i the percentage of GA_i that are (1) correctly recognized/resolved by both resolvers (B), (2) correctly recognized/resolved by Rules but not MTL (R), and (3) correctly recognized/resolved by MTL but not Rules (M).", "For both ISNotes and BASHI, the relatively large numbers under the "R" and "M" columns suggest that Rules and MTL are making different predictions; moreover, the fact that the numbers under "R" are larger than the corresponding numbers under "M" on a majority of categories implies that the number of gold anaphors that are solely recognized/resolved by Rules is larger than that by MTL.", "To better understand what areas of improvement are needed by the MTL model, we perform a manual analysis of its errors and discuss three major types of error in the following three subsections.", "Precision errors in recognition refer to errors in misclassifying a mention as a bridging anaphor.", "Coreference anaphor errors are the most common type of precision errors, contributing to 14-30% of the overall precision errors in recognition.", "Coreference anaphor errors occur when a gold coreference anaphor is predicted as a bridging anaphor.", "Consider the first example in Table 7.", "In this example, the gold coreference anaphor the stake is predicted as a bridging anaphor and resolved to the ground, but it has a coreference link with a big iron stake.", "By definition, a bridging anaphor (especially referential bridging) should not be a coreference anaphor.", "We speculate that MTL makes these mistakes because it is trained on coreference and bridging jointly.", "Table 5: percentage of gold anaphors per rule category correctly recognized/resolved by both resolvers (B), by Rules only (R), and by MTL only (M).
Rule | Recognition B | R | M | Resolution B | R | M
1 | 38 | 62 | 0 | 25 | 50 | 0
2 | 29 | 43 | 14 | 29 | 29 | 14
3 | 47 | 47 | 5 | 16 | 58 | 16
4 | 46 | 14 | 26 | 40 | 6 | 23
5 | 50 | 12 | 0 | 38 | 25 | 0
6 | 9 | 82 | 0 | 9 | 64 | 0
7 | 21 | 20 | 27 | 4 | 21 | 14
8 | 100 | 0 | 0 | 50 | 50 | 0
9 | 12 | 13 | 36 | 3 | 8 | 15
18 | 0 | 0 | 26 | 0 | 0 | 16", "Recall errors in recognition refer to the model's failure to identify bridging anaphors.", "Indefinite expression errors are the most common type of recall errors, contributing to 48-71% of the overall recall errors in recognition on the three datasets.", "Indefinite expression errors occur when a system misclassifies an indefinite bridging anaphor as a mention having the NEW information status.", "Bridging is a subcategory of the MEDIATED category.", "Consider the second example in Table 7.", "In this example, the indefinite bridging anaphor production is not detected by the MTL model.", "The reason is that the syntactic forms of many NEW instances and indefinite bridging anaphors are the same.", "Thus, it is not easy for the model to distinguish between them.", "This observation has also been made by Hou et al. (2018).",
(2018).", "Precision errors in resolution refer to errors in identifying the antecedent for a bridging anaphor.", "Unmodified expression errors are the most common 8 Bridging is a subcategory of the MEDIATED .", "When Michael S. Perry took the podium at a recent cosmetics industry event , more than 500 executives packing the room snapped to attention .", "Folk doctors also prescribe it for kidney , bladder and urethra problems , duodenal ulcers and hemorrhoids .", "Some apply it to gouty joints .", "Table 6: Examples of unannotated bridging links in BASHI.", "After three Sagos were stolen from his home in Garden Grove , I put a big iron stake in the ground and tied the tree to the stake with a chain , he says proudly.", "Currently, Boeing has a backlog of about $80 billion, but production has been slowed by a strike of 55,000 machinists , which entered its 22nd day today .", "In addition, the government is figuring that the releases could create a split between the internal and external wings of the ANC and between the newly freed leaders and those activists who have emerged as leaders inside the country during their imprisonment .", "In order to head off any divisions , Mr. Mandela , in a meeting with his colleagues before they were released, instructed them to report to the ANC headquarters in Lusaka as soon as possible .", "type of precision errors, contributing to 23-63% of the overall precision errors in resolution.", "Unmodified expression errors occur when a predicted anaphor is a short mention without modifiers.", "Such a mention is semantically less rich than those that are modified and is therefore harder to resolve.", "Consider the third example in Table 7.", "In this example, the anaphor any divisions is resolved to a wrong antecedent their imprisonment rather than the correct antecedent the ANC .", "In this paper, we sought to make sense of the state of the art in bridging resolution.", "We combined the hand-crafted rules and the MTL model in a pipelined fashion, showing that (1) the rules and MTL were making complementary mistakes and (2) the resulting hybrid approach achieved state-of-the-art results on three standard evaluation datasets.", "In addition, we performed a manual error analysis to determine what needed to be improved in MTL.", "Finally, our findings suggested that BASHI's annotation quality may need to be reassessed.", "We thank the three anonymous reviewers for their detailed and insightful comments on an earlier draft of the paper.", "This work was supported in part by NSF Grants IIS-1528037 and CCF-1848608." ]
[ "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "abstain", "abstain", "objective", "result", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "other" ]
[ "Representing entities and relations in an embedding space is a well-studied approach for machine learning on relational data.", "Existing approaches, however, primarily focus on improving accuracy and overlook other aspects such as robustness and interpretability.", "In this paper, we propose adversarial modifications for link prediction models: identifying the fact to add into or remove from the knowledge graph that changes the prediction for a target fact after the model is retrained.", "Using these single modifications of the graph, we identify the most influential fact for a predicted link and evaluate the sensitivity of the model to the addition of fake facts.", "We introduce an efficient approach to estimate the effect of such modifications by approximating the change in the embeddings when the knowledge graph changes.", "To avoid the combinatorial search over all possible facts, we train a network to decode embeddings to their corresponding graph components, allowing the use of gradient-based optimization to identify the adversarial modification.", "We use these techniques to evaluate the robustness of link prediction models (by measuring sensitivity to additional facts), study interpretability through the facts most responsible for predictions (by identifying the most influential neighbors), and detect incorrect facts in the knowledge base.", "Knowledge graphs (KG) play a critical role in many real-world applications such as search, structured data management, recommendations, and question answering.", "Since KGs often suffer from incompleteness and noise in their facts (links), a number of recent techniques have proposed models that embed each entity and relation into a vector space, and use these embeddings to predict facts.", "These dense representation models for link prediction include tensor factorization [Nickel et al., 2011, Socher et al., 2013, Yang et al., 2015], algebraic operations [Bordes et al., 2011, 2013b, Dasgupta et al., 2018], multiple embeddings [Wang et al., 2014, Lin et al., 2015, Ji et al., 2015, Zhang et al., 2018], and complex neural models [Dettmers et al., 2018, Nguyen et al., 2018].", "However, there are only a few studies [Kadlec et al., 2017, Sharma et al., 2018] that investigate the quality of the different KG models.", "There is a need to go beyond just the accuracy on link prediction, and instead focus on whether these representations are robust and stable, and what facts they make use of for their predictions.", "In this paper, our goal is to design approaches that minimally change the graph structure such that the prediction of a target fact changes the most after the embeddings are relearned , which we collectively call Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE).", "First, we consider perturbations that remove a neighboring link for the target fact, thus identifying the most influential related fact, providing an explanation for the model's prediction.", "As an example, consider the excerpt from a KG in Figure 1a with two observed facts, and a target predicted fact that Princes Henriette is the parent of Violante Bavaria .", "Our proposed graph perturbation, shown in Figure 1b, identifies the existing fact that Fer-dinal Maria is the father of Violante Bavaria as the one when removed and model retrained, will change the prediction of Princes Henriette 's child.", "We also study attacks that add a new, fake fact into the KG to evaluate the robustness and sensitivity of link prediction models to small 
additions to the graph.", "An example attack for the original graph in Figure 1a, is depicted in Figure 1c.", "Such perturbations to the the training data are from a family of adversarial modifications that have been applied to other machine learning tasks, known as poisoning [Biggio et al., 2012, Corona et al., 2013, Biggio FerdinandMaria PrincessHenriette ViolanteBavaria isMarried h a s C h il d h a s C h il d target prediction (cid:104) s,r,o (cid:105)", "Since the setting is quite different from traditional adversarial attacks, search for link prediction adversaries brings up unique challenges.", "To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact.", "Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings.", "We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion.", "The other challenge in identifying adversarial modifications for link prediction, especially when considering addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate.", "We introduce an inverter of the original embedding model, to decode the embeddings to their corresponding graph components, making the search of facts tractable by performing efficient gradient-based continuous optimization.", "We evaluate our proposed methods through following experiments.", "First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score.", "Second, we show that our additive attacks can effectively reduce the performance of state of the art models [Yang et al., 2015, Dettmers et al., 2018] up to 27 .", "3% and 50 .", "7% in Hits@1 for two large KGs: WN18 and YAGO3-10.", "We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors.", "Finally, we use adversaries to detect errors in the KG, obtaining up to 55% accuracy in detecting errors.", "In this section, we briefly introduce some notations, and existing relational embedding approaches that model knowledge graph completion using dense vectors.", "In KGs, facts are represented using triples of subject, relation, and object, (cid:104) s, r, o (cid:105) , where s, o , the set of entities, and r R , the set of relations.", "To model the KG, a scoring function : R R is learned to evaluate whether any given fact is true.", "In this work, we focus on multiplicative models of link prediction 1 , specifi-cally DistMult [Yang et al., 2015] because of its simplicity and popularity, and ConvE [Dettmers et al., 2018] because of its high accuracy.", "We can represent the scoring function of such methods as ( s, r, o ) = f ( e s , e r ) e o , where e s , e r , e o R d are embeddings of the subject, relation, and object respectively.", "In DistMult, f ( e s , e r ) = e s (cid:12) e r , where (cid:12) is element-wise multiplication operator.", "Similarly, in ConvE, f ( e s , e r ) is computed by a convolution on the concatenation of e s and e r .", "We use the same setup as Dettmers et al.", "[2018] for training, i.e., incorporate binary cross-entropy loss over the triple scores.", "In particular, for subject-relation pairs ( s, r ) in the training data G , we use binary y s,ro to represent negative and 
positive facts.", "Using the model's probability of truth as ( ( s, r, o )) for (cid:104) s, r, o (cid:105) , the loss is defined as: L ( G ) = (cid:88) ( s,r ) (cid:88) o y s,ro log( ( ( s, r, o ))) + (1 y s,ro ) log(1 ( ( s, r, o ))) .", "(1) Gradient descent is used to learn the embeddings e s , e r , e o , and the parameters of f , if any.", "For adversarial modifications on KGs, we first define the space of possible modifications.", "For a target triple (cid:104) s, r, o (cid:105) , we constrain the possible triples that we can remove (or inject) to be in the form of (cid:104) s (cid:48) , r (cid:48) , o (cid:105) i.e s (cid:48) and r (cid:48) may be different from the target, but the object is not.", "We analyze other forms of modifications such as (cid:104) s, r (cid:48) , o (cid:48) (cid:105) and (cid:104) s, r (cid:48) , o (cid:105) in appendices A.1 and A.2, and leave empirical evaluation of these modifications for future work.", "For explaining a target prediction, we are interested in identifying the observed fact that has the most influence (according to the model) on the prediction.", "We define influence of an observed fact on the prediction as the change in the prediction score if the observed fact was not present when the embeddings were learned.", "Previous work have used this concept of influence similarly for several different tasks [Kononenko et al., 2010, Koh and Liang, 2017].", "Formally, for the target triple (cid:104) s, r, o (cid:105) and observed graph G , we want to identify a neighboring triple (cid:104) s (cid:48) , r (cid:48) , o (cid:105) G such that the score ( s, r, o ) when trained on G and the score ( s, r, o ) when trained on G {(cid:104) s (cid:48) , r (cid:48) , o (cid:105)} are maximally different, i.e. argmax ( s (cid:48) ,r (cid:48) ) Nei ( o ) ( s (cid:48) ,r (cid:48) ) ( s, r, o ) (2) where ( s (cid:48) ,r (cid:48) ) ( s, r, o ) = ( s, r, o ) ( s, r, o ) , and Nei ( o ) = { ( s (cid:48) , r (cid:48) ) |(cid:104) s (cid:48) , r (cid:48) , o (cid:105) G } .", "We are also interested in investigating the robustness of models, i.e., how sensitive are the predictions to small additions to the knowledge graph.", "Specifically, for a target prediction (cid:104) s, r, o (cid:105) , we are interested in identifying a single fake fact (cid:104) s (cid:48) , r (cid:48) , o (cid:105) that, when added to the knowledge graph G , changes the prediction score ( s, r, o ) the most.", "Using ( s, r, o ) as the score after training on G {(cid:104) s (cid:48) , r (cid:48) , o (cid:105)} , we define the adversary as: argmax ( s (cid:48) ,r (cid:48) ) ( s (cid:48) ,r (cid:48) ) ( s, r, o ) (3) where ( s (cid:48) ,r (cid:48) ) ( s, r, o ) = ( s, r, o ) ( s, r, o ) .", "The search here is over any possible s (cid:48) , which is often in the millions for most real-world KGs, and r (cid:48) R .", "We also identify adversaries that increase the prediction score for specific false triple, i.e., for a target fake fact (cid:104) s, r, o (cid:105) , the adversary is argmax ( s (cid:48) ,r (cid:48) ) ( s (cid:48) ,r (cid:48) ) ( s, r, o ) , where ( s (cid:48) ,r (cid:48) ) ( s, r, o ) is defined as before.", "There are a number of crucial challenges when conducting such adversarial attack on KGs.", "First, evaluating the effect of changing the KG on the score of the target fact ( ( s, r, o ) ) is expensive since we need to update the embeddings by retraining the model on the new graph; a very time-consuming process that is at least linear in the size of G .", "Second, since 
"There are a number of crucial challenges when conducting such adversarial attacks on KGs.", "First, evaluating the effect of changing the KG on the score of the target fact (ψ̄(s, r, o)) is expensive since we need to update the embeddings by retraining the model on the new graph; a very time-consuming process that is at least linear in the size of G.", "Second, since there are many candidate facts that can be added to the knowledge graph, identifying the most promising adversary through search-based methods is also expensive.", "Specifically, the search size for unobserved facts is |E| × |R|, which, for example in the YAGO3-10 KG, can be as many as 4.5M possible facts for a single target prediction.", "In this section, we propose algorithms to address the mentioned challenges by (1) approximating the effect of changing the graph on a target prediction, and (2) using continuous optimization for the discrete search over potential modifications.", "We first study the addition of a fact to the graph, and then extend it to cover removal as well.", "To capture the effect of an adversarial modification on the score of a target triple, we need to study the effect of the change on the vector representations of the target triple.", "We use e_s, e_r, and e_o to denote the embeddings of s, r, o at the solution of argmin L(G), and when considering the adversarial triple ⟨s′, r′, o⟩, we use ē_s, ē_r, and ē_o for the new embeddings of s, r, o, respectively.", "Thus ē_s, ē_r, ē_o is a solution to argmin L(G ∪ {⟨s′, r′, o⟩}), which can also be written as argmin L(G) + L(⟨s′, r′, o⟩).", "Similarly, f(e_s, e_r) changes to f(ē_s, ē_r) after retraining.", "Since we only consider adversaries in the form of ⟨s′, r′, o⟩, we only consider the effect of the attack on e_o and neglect its effect on e_s and e_r.", "This assumption is reasonable since the adversary is connected with o and directly affects its embedding when added, but it will only have a secondary, negligible effect on e_s and e_r, in comparison to its effect on e_o.", "Further, calculating the effect of the attack on e_s and e_r requires a third-order derivative of the loss, which is not practical (O(n³) in the number of parameters).", "In other words, we assume that ē_s ≈ e_s and ē_r ≈ e_r.", "As a result, to calculate the effect of the attack, ψ̄(s, r, o) − ψ(s, r, o), we need to compute ē_o − e_o, followed by: ψ̄(s, r, o) − ψ(s, r, o) = z_{s,r} (ē_o − e_o),   (4) where z_{s,r} = f(e_s, e_r).", "We now derive an efficient computation for ē_o − e_o.", "First, the derivative of the loss L̄(G) = L(G) + L(⟨s′, r′, o⟩) over e_o is: ∇_{e_o} L̄(G) = ∇_{e_o} L(G) − (1 − φ) z_{s′,r′},   (5) where z_{s′,r′} = f(e_{s′}, e_{r′}), and φ = σ(ψ(s′, r′, o)).", "We perform a first-order Taylor approximation of ∇_{e_o} L̄(G) to get: 0 ≈ −(1 − φ) z_{s′,r′}ᵀ + (H_o + φ(1 − φ) z_{s′,r′}ᵀ z_{s′,r′})(ē_o − e_o),   (6) where H_o is the d × d Hessian matrix for o, i.e., the second-order derivative of the loss w.r.t. e_o, computed sparsely.",
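To make Eqs. (4)-(6) concrete, here is a hedged numpy sketch that solves the linear system of Eq. (6) for ē_o − e_o and plugs the result into Eq. (4); the closed form is derived symbolically next. All variable names are ours: H_o is the d × d Hessian w.r.t. e_o, z_sr = f(e_s, e_r), z_adv = f(e_s′, e_r′), and psi_adv = ψ(s′, r′, o).

```python
import numpy as np

def estimated_score_change(z_sr, z_adv, H_o, psi_adv, removal=False):
    phi = 1.0 / (1.0 + np.exp(-psi_adv))                  # sigma(psi(s',r',o))
    A = H_o + phi * (1.0 - phi) * np.outer(z_adv, z_adv)  # matrix in Eq. (6)
    delta_e_o = (1.0 - phi) * np.linalg.solve(A, z_adv)   # approx. e_o change
    change = z_sr @ delta_e_o                             # Eq. (4)
    return -change if removal else change                 # sign flips for removal
```

Using np.linalg.solve rather than an explicit inverse is a standard numerical choice; it computes the same quantity as the closed-form expression below.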
"Solving for ē_o − e_o gives us: ē_o − e_o = (1 − φ)(H_o + φ(1 − φ) z_{s′,r′}ᵀ z_{s′,r′})⁻¹ z_{s′,r′}ᵀ.", "Then, we compute the score change as: ψ̄(s, r, o) − ψ(s, r, o) = z_{s,r} (ē_o − e_o) = z_{s,r} ((1 − φ)(H_o + φ(1 − φ) z_{s′,r′}ᵀ z_{s′,r′})⁻¹ z_{s′,r′}ᵀ).   (7)", "Similarly, we estimate the score change of ⟨s, r, o⟩ after removing ⟨s′, r′, o⟩ as: −z_{s,r} ((1 − φ)(H_o + φ(1 − φ) z_{s′,r′}ᵀ z_{s′,r′})⁻¹ z_{s′,r′}ᵀ).", "At convergence, after retraining, we expect ∇_{e_o} L̄(G) = 0.", "In practice, H_o is positive definite, making H_o + φ(1 − φ) z_{s′,r′}ᵀ z_{s′,r′} positive definite as well, and invertible.", "Calculating this expression is efficient since H_o is a d × d matrix (d is the embedding dimension), and z_{s,r}, z_{s′,r′} ∈ ℝ^d.", "Using the approximations provided in the previous section (Eq. (7) and its removal counterpart), we can use brute-force enumeration to find the adversary ⟨s′, r′, o⟩.", "This approach is feasible when removing an observed triple since the search space of such modifications is usually small; it is the number of observed facts that share the object with the target.", "On the other hand, finding the most influential unobserved fact to add requires search over the much larger space of all possible unobserved facts (that share the object).", "Figure 2 (Inverter Network): the architecture of our inverter function that translates z_{s,r} to its respective (s, r); the encoder f(e_s, e_r) is kept fixed.", "Instead, we identify the most influential unobserved fact ⟨s′, r′, o⟩ by using a gradient-based algorithm on the vector z_{s′,r′} in the embedding space (reminder: z_{s′,r′} = f(e_{s′}, e_{r′})), solving the following continuous optimization problem in ℝ^d: argmax_{z_{s′,r′}} Δ_{(s′,r′)}(s, r, o).   (8)", "After identifying the optimal z_{s′,r′}, we still need to generate the pair (s′, r′).", "We design a network, shown in Figure 2, that maps the vector z_{s′,r′} to the entity-relation space, i.e., translating it into (s′, r′).", "In particular, we train an auto-encoder where the encoder is fixed to receive s and r as one-hot inputs, and calculates z_{s,r} in the same way as the DistMult and ConvE encoders respectively (using trained embeddings).", "The decoder is trained to take z_{s,r} as input and produce s and r, essentially inverting f and the embedding layers.", "As our decoder, for DistMult, we pass z_{s,r} through a linear layer and then use two other linear layers for the subject and the relation separately, providing one-hot vectors as s and r.", "For ConvE, we pass z_{s,r} through a deconvolutional layer, and then use the same architecture as the DistMult decoder.", "Although we could use maximum inner-product search [Shrivastava and Li, 2014] for DistMult instead of our defined inverter function, we are looking for a general approach that works across multiple models.", "We evaluate the performance of our inverter networks (one for each model/dataset) on correctly recovering the pairs of subject and relation from the test set of our benchmarks, given the z_{s,r}.",
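A minimal PyTorch sketch of the DistMult-style decoder just described; the layer sizes and class name are ours, and the actual architecture may differ in detail.

```python
import torch
import torch.nn as nn

class DistMultInverter(nn.Module):
    """Decoder mapping z_{s,r} back to logits over subjects and relations."""

    def __init__(self, d, n_entities, n_relations):
        super().__init__()
        self.hidden = nn.Linear(d, d)                 # shared linear layer
        self.subject_head = nn.Linear(d, n_entities)  # predicts s
        self.relation_head = nn.Linear(d, n_relations)  # predicts r

    def forward(self, z):
        h = torch.relu(self.hidden(z))
        return self.subject_head(h), self.relation_head(h)
```

Training would presumably minimize cross-entropy of both heads against the true (s, r) pairs while the encoder f and the embeddings stay fixed, as stated above.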
"The accuracy of recovered pairs (and of each argument) is given in Table 1.", "Table 1 (Inverter Functions Accuracy): the accuracy of our inverter networks in correctly recovering the pairs of subject and relation from the test set of our benchmarks.
 | WordNet DistMult | WordNet ConvE | YAGO DistMult | YAGO ConvE
Recover s | 93.4 | 96.1 | 97.2 | 98.1
Recover r | 91.3 | 95.3 | 99.0 | 99.6
Recover {s, r} | 89.5 | 94.2 | 96.4 | 98.0", "As shown, our networks achieve a very high accuracy, demonstrating their ability to invert vectors z_{s,r} to {s, r} pairs.", "Datasets To evaluate our method, we conduct several experiments on four widely used KGs.", "To validate the accuracy of the approximations, we use the smaller sized Kinship and Nations KGs for which we can make comparisons against more expensive but less approximate approaches.", "For the remaining experiments, we use the YAGO3-10 and WN18 KGs, which are closer to real-world KGs in their size and characteristics (see Table 2).", "Models We implement all methods using the same loss and optimization for training, i.e., AdaGrad and the binary cross-entropy loss.", "We use validation data to tune the hyperparameters and use a grid search to find the best hyperparameters, such as the regularization parameter and the learning rate of the gradient-based method.", "To capture the effect of our method on the link prediction task, we study the change in commonly-used metrics for evaluation in this task: mean reciprocal rank (MRR) and Hits@K.", "Further, we use the same hyperparameters as in Dettmers et al. [2018] for training link prediction models for these knowledge graphs.", "Influence Function We also compare our method with the influence function (IF) [Koh and Liang, 2017].", "The influence function approximates the effect of upweighting a training sample on the loss for a specific test point.", "Figure 3 (Influence function vs. CRIAGE): time in seconds to compute a single adversary as the number of entities grows, comparing IF and CRIAGE at d = 5 and d = 10.", "We use IF to approximate the change in the loss after removing a triple as: I_{up,loss}(⟨s′, r′, o⟩, ⟨s, r, o⟩) = −∇_θ L(⟨s, r, o⟩, θ̂)ᵀ H_{θ̂}⁻¹ ∇_θ L(⟨s′, r′, o⟩, θ̂),   (9) where ⟨s′, r′, o⟩ and ⟨s, r, o⟩ are training and test samples respectively, θ̂ represents the optimum parameters, and L(⟨s, r, o⟩, θ̂) represents the loss function for the test sample ⟨s, r, o⟩.", "The influence function does not scale well, so we only compare our method with IF on the smaller sized KGs.", "We evaluate CRIAGE by (6.1) comparing the CRIAGE estimate with the actual effect of the attacks, (6.2) studying the effect of adversarial attacks on evaluation metrics, (6.3) exploring its application to the interpretability of KG representations, and (6.4) detecting incorrect triples.", "To evaluate the quality of our approximations and compare with the influence function (IF), we conduct leave-one-out experiments.", "In this setup, we take all the neighbors of a random target triple as candidate modifications, remove them one at a time, retrain the model each time, and compute the exact change in the score of the target triple.", "We can use the magnitude of this change in score to rank the candidate triples, and compare this exact ranking with the ranking as predicted by: CRIAGE-Remove, influence function with and without the Hessian matrix, and the original model score (with the intuition that facts that the model is most confident of will have the largest impact when removed).",
"Table 3 (Ranking modifications by their impact on the target): average rank correlations with the exact ranking; each cell gives Spearman's / Kendall's coefficients.
Method | Nations Adding | Nations Removing | Kinship Adding | Kinship Removing
Ranking Based on Score | 0.03 / 0.02 | -0.01 / -0.01 | -0.09 / -0.06 | 0.01 / 0.01
Influence Function without Hessian | 0.15 / 0.12 | 0.12 / 0.10 | 0.77 / 0.71 | 0.77 / 0.71
CRIAGE (Brute Force) | 0.95 / 0.84 | 0.94 / 0.85 | 0.99 / 0.97 | 0.99 / 0.95
Influence Function | 0.99 / 0.95 | 0.99 / 0.96 | 0.99 / 0.98 | 0.99 / 0.98", "Similarly, we evaluate CRIAGE-Add by considering 200 random triples that share the object entity with the target sample as candidates, and rank them as above.", "The average results of Spearman's and Kendall's rank correlation coefficients over 10 random target samples are provided in Table 3.", "CRIAGE performs comparably to the influence function, confirming that our approximation is accurate.", "The influence function is slightly more accurate because it uses the complete Hessian matrix over all the parameters, while we only approximate the change by calculating the Hessian over e_o.", "The effect of this difference on scalability is dramatic, constraining IF to very small graphs and small embedding dimensionality (d ≤ 10) before we run out of memory.", "In Figure 3, we show the time to compute a single adversary by IF compared to CRIAGE, as we steadily grow the number of entities (randomly chosen subgraphs), averaged over 10 random triples.", "As it shows, CRIAGE is mostly unaffected by the number of entities while IF increases quadratically.", "Considering that real-world KGs have tens of thousands of times more entities, IF is infeasible for them.", "Now we evaluate the effectiveness of CRIAGE in successfully attacking link prediction by adding false facts.", "The goal here is to identify the attacks for triples in the test data, and to measure their effect on MRR and Hits@K metrics (ranking evaluations) after conducting the attack and retraining the model.", "Since this is the first work on adversarial attacks for link prediction, we introduce several baselines to compare against our method.", "For finding the adversarial fact to add for the target triple ⟨s, r, o⟩, we consider two baselines: 1) choosing a random fake fact ⟨s′, r′, o⟩ (Random Attack); 2) finding (s′, r′) by first calculating f(e_s, e_r) and then feeding −f(e_s, e_r) to the decoder of the inverter function (Opposite Attack).", "In addition to CRIAGE-Add, we introduce two other alternatives of our method: (1) CRIAGE-FT, which uses CRIAGE to increase the score of a fake fact over a test triple, i.e., we find the fake fact the model ranks second after the test triple, and identify the adversary for it, and (2) CRIAGE-Best, which selects between the CRIAGE-Add and CRIAGE-FT attacks based on which has a higher estimated change in score.", "All-Test The result of the attack on all test facts as targets is provided in Table 4.",
"CRIAGE-Add outperforms the baselines, demonstrating its ability to effectively attack the KG representations.", "It seems DistMult is more robust against random attacks, while ConvE is more robust against designed attacks.", "CRIAGE-FT is more effective than CRIAGE-Add since changing the score of a fake fact is easier than that of actual facts; there is no existing evidence to support fake facts.", "We also see that YAGO3-10 models are more robust than those for WN18.", "Looking at sample attacks (provided in Appendix A.4), CRIAGE mostly tries to change the type of the target object by associating it with a subject and a relation for a different entity type.", "Uncertain-Test To better understand the effect of attacks, we consider a subset of test triples such that 1) the model predicts them correctly, and 2) the difference between their scores and the highest-scoring negative sample is minimal.", "This Uncertain-Test subset contains 100 triples from each of the original test sets, and we provide results of attacks on this data in Table 4.", "Table 4 (Robustness of Representation Models): the effect of the adversarial attack on the link prediction task; each cell shows MRR and Hits@1, with the change in Hits@1 in parentheses.
Model | YAGO3-10 All-Test | YAGO3-10 Uncertain-Test | WN18 All-Test | WN18 Uncertain-Test
DistMult | 0.458, 37 (0) | 1.0, 100 (0) | 0.938, 93.1 (0) | 1.0, 100 (0)
+ Adding Random Attack | 0.442, 34.9 (-2.1) | 0.91, 87.6 (-12.4) | 0.926, 91.1 (-2) | 0.929, 90.4 (-9.6)
+ Adding Opposite Attack | 0.427, 33.2 (-3.8) | 0.884, 84.1 (-15.9) | 0.906, 87.3 (-5.8) | 0.921, 91 (-9)
+ CRIAGE-Add | 0.379, 29.1 (-7.9) | 0.71, 58 (-42) | 0.89, 86.4 (-6.7) | 0.844, 81.2 (-18.8)
+ CRIAGE-FT | 0.387, 27.7 (-9.3) | 0.673, 50.5 (-49.5) | 0.86, 79.2 (-13.9) | 0.83, 74.5 (-25.5)
+ CRIAGE-Best | 0.372, 26.9 (-10.1) | 0.658, 49.3 (-50.7) | 0.838, 77.9 (-15.2) | 0.814, 72.7 (-27.3)
ConvE | 0.497, 41.2 (0) | 1.0, 100 (0) | 0.94, 93.3 (0) | 1.0, 100 (0)
+ Adding Random Attack | 0.474, 38.4 (-2.8) | 0.889, 83 (-17) | 0.921, 90.1 (-3.2) | 0.923, 89.7 (-10.3)
+ Adding Opposite Attack | 0.469, 38 (-3.2) | 0.874, 81.9 (-18.1) | 0.915, 88.9 (-4.4) | 0.908, 88.1 (-11.9)
+ CRIAGE-Add | 0.454, 36.9 (-4.3) | 0.738, 61.5 (-38.5) | 0.897, 87.8 (-5.5) | 0.895, 87.6 (-12.4)
+ CRIAGE-FT | 0.441, 33.2 (-8) | 0.703, 57.4 (-42.6) | 0.865, 80 (-13.3) | 0.874, 79.5 (-20.5)
+ CRIAGE-Best | 0.423, 31.9 (-9.3) | 0.677, 54.8 (-45.2) | 0.849, 79.1 (-14.2) | 0.858, 78.4 (-21.6)",
"The attacks are much more effective in this scenario, causing a considerable drop in the metrics.", "Further, in addition to CRIAGE significantly outperforming the other baselines, these results indicate that ConvE's confidence is much more robust.", "Relation Breakdown We perform additional analysis on the YAGO3-10 dataset to gain a deeper understanding of the performance of our model.", "As shown in Figure 4, both DistMult and ConvE provide a more robust representation for the isAffiliatedTo and isConnectedTo relations, demonstrating the confidence of the models in identifying them.", "Moreover, CRIAGE affects DistMult more on the playsFor and isMarriedTo relations while affecting ConvE more on the isConnectedTo relation.", "To be able to understand and interpret why a link is predicted using the opaque, dense embeddings, we need to find out which part of the graph was most influential on the prediction.", "To provide such explanations for each prediction, we identify the most influential fact using CRIAGE-Remove.", "Instead of focusing on individual predictions, we aggregate the explanations over the whole dataset for each relation using a simple rule extraction technique: we find simple patterns on subgraphs that surround the target triple and the removed fact from CRIAGE-Remove, and appear more than 90% of the time.", "We only focus on extracting length-2 Horn rules, i.e., R₁(a, c) ∧ R₂(c, b) ⇒ R(a, b), where R(a, b) is the target and R₂(c, b) is the removed fact.", "Table 5 shows extracted YAGO3-10 rules that are common to both models, and ones that are not.", "The rules show several interesting inferences, such that hasChild is often inferred via married parents, and isLocatedIn via transitivity.", "There are several differences in how the models reason as well; DistMult often uses hasCapital as an intermediate step for isLocatedIn, while ConvE incorrectly uses isNeighbor.", "We also compare against rules extracted by Yang et al. [2015] for YAGO3-10 that utilize the structure of DistMult: they require domain knowledge on types and cannot be applied to ConvE.", "Interestingly, the extracted rules contain all the rules provided by CRIAGE, demonstrating that CRIAGE can be used to accurately interpret models, including ones that are not interpretable, such as ConvE.", "These are preliminary steps toward interpretability of link prediction models, and we leave more analysis of interpretability to future work.", "Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph.", "Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put the least trust on this triple.", "In other words, the error triple should have the least influence on the model's prediction of the training data.", "Formally, to find the incorrect triple ⟨s′, r′, o⟩ in the neighborhood of the train triple ⟨s, r, o⟩, we need to find the triple ⟨s′, r′, o⟩ that results in the least change Δ_{(s′,r′)}(s, r, o) when removed from the graph.",
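This error-detection criterion is the mirror image of the influence search: instead of the neighbor with the largest estimated score change, we flag the one with the smallest. A hedged sketch, reusing the hypothetical score_change routine assumed earlier:

```python
def least_influential_neighbor(target, neighbors, score_change):
    # Flag the neighbor whose removal is estimated to change the target
    # prediction least, i.e., the triple the model appears to trust least.
    return min(neighbors, key=lambda sr: abs(score_change(sr, target)))
```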
"To evaluate this application, we inject incorrect triples into the graph, and measure the ability of CRIAGE to detect the errors using our optimization.", "We consider two types of incorrect triples: 1) incorrect triples in the form of ⟨s′, r, o⟩ where s′ is chosen randomly from all of the entities, and 2) incorrect triples in the form of ⟨s′, r′, o⟩ where s′ and r′ are chosen randomly.", "We choose 100 random triples from the observed graph, and for each of them, add an incorrect triple (in each of the two scenarios) to its neighborhood.", "Then, after retraining DistMult on this noisy training data, we identify error triples through a search over the neighbors of the 100 facts.", "Table 6 (Error Detection Accuracy in the neighborhood of 100 chosen samples):
Method | ⟨s′, r′, o⟩ Noise Hits@1 | ⟨s′, r′, o⟩ Noise Hits@2 | ⟨s′, r, o⟩ Noise Hits@1 | ⟨s′, r, o⟩ Noise Hits@2
Random | 19.7 | 39.4 | 19.7 | 39.4
Lowest | 16 | 37 | 26 | 47
CRIAGE | 42 | 62 | 55 | 76", "The result of choosing the neighbor with the least influence on the target is provided in Table 6.", "When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that CRIAGE outperforms both of these with a considerable gap, obtaining an accuracy of 42% and 55% in detecting errors.", "Learning relational knowledge representations has been a focus of active research in the past few years, but to the best of our knowledge, this is the first work on conducting adversarial modifications on the link prediction task.", "Knowledge graph embedding There is a rich literature on representing knowledge graphs in vector spaces that differ in their scoring functions [Wang et al., 2017, Goyal and Ferrara, 2018, Fooshee et al., 2018].", "Although CRIAGE is primarily applicable to multiplicative scoring functions [Nickel et al., 2011, Socher et al., 2013, Yang et al., 2015, Trouillon et al., 2016], these ideas apply to additive scoring functions [Bordes et al., 2013a, Wang et al., 2014, Lin et al., 2015, Nguyen et al., 2016] as well, as we show in Appendix A.3.", "Furthermore, there is a growing body of literature that incorporates extra types of evidence for more informed embeddings, such as numerical values [García-Durán and Niepert, 2017], images [Oñoro-Rubio et al., 2017], text [Toutanova et al., 2015, 2016, Tu et al., 2017], and their combinations [Pezeshkpour et al., 2018].", "Using CRIAGE, we can gain a deeper understanding of these methods, especially those that build their embeddings with multiplicative scoring functions.", "Interpretability and Adversarial Modification There has been significant recent interest in conducting adversarial attacks on different machine learning models [Biggio et al., 2014, Papernot et al., 2016, Dong et al., 2017, Zhao et al., 2018a,b, Brunet et al., 2018] to attain interpretability and, further, to evaluate the robustness of those models.", "Koh and Liang [2017] use the influence function to provide an approach to understanding black-box models by studying the changes in the loss occurring as a result of changes in the training data.", "In addition to incorporating their established method on KGs, we derive a novel approach that differs from their procedure in two ways: (1) instead of changes in the loss, we consider the changes in the scoring function, which is more appropriate for KG representations, and (2) in addition to searching for an attack, we introduce a gradient-based method that is much faster, especially for adding an attack triple (the size of the search space makes the influence function method infeasible).", "Previous work has also considered adversaries for KGs, but as part of training to improve their representation of the graph [Minervini et al., 2017, Cai and Wang, 2018].",
"Adversarial Attack on KG Although this is the first work on adversarial attacks for link prediction, there are two approaches [Dai et al., 2018, Zügner et al., 2018] that consider the task of adversarial attacks on graphs.", "There are a few fundamental differences from our work: (1) they build their method on top of path-based representations while we focus on embeddings, (2) they consider node classification as the target of their attacks while we attack link prediction, and (3) they conduct the attack on small graphs due to restricted scalability, while the complexity of our method does not depend on the size of the graph, but only the neighborhood, allowing us to attack real-world graphs.", "Motivated by the need to analyze the robustness and interpretability of link prediction models, we present a novel approach for conducting adversarial modifications to knowledge graphs.", "We introduce CRIAGE, Completion Robustness and Interpretability via Adversarial Graph Edits: identifying the fact to add into or remove from the KG that changes the prediction for a target fact.", "CRIAGE uses (1) an estimate of the score change for any target triple after adding or removing another fact, and (2) a gradient-based algorithm for identifying the most influential modification.", "We show that CRIAGE can effectively reduce ranking metrics on link prediction models upon applying the attack triples.", "Further, we incorporate CRIAGE to study the interpretability of KG representations by summarizing the most influential facts for each relation.", "Finally, using CRIAGE, we introduce a novel automated error detection method for knowledge graphs.", "We have released the open-source implementation of our models at: https://pouyapez.github.io/criage.", "We would like to thank Matt Gardner, Marco Tulio Ribeiro, Zhengli Zhao, Robert L. Logan IV, Dheeru Dua and the anonymous reviewers for their detailed feedback and suggestions.", "This work is supported in part by Allen Institute for Artificial Intelligence (AI2) and in part by NSF awards #IIS-1817183 and #IIS-1756023.", "The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies." ]
[ "abstain", "abstain", "objective", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "objective", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "method", "abstain", "other", "other", "method", "other", "method", "other", "other", "objective", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "other", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "result", "objective", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "objective", "method", "abstain", "result", "objective", "objective", "other", "other", "other", "other" ]
[ "Over its three decade history, speech translation has experienced several shifts in its primary research themes; moving from loosely coupled cascades of speech recognition and machine translation, to exploring questions of tight coupling, and finally to end-to-end models that have recently attracted much attention.", "This paper provides a brief survey of these developments, along with a discussion of the main challenges of traditional approaches which stem from committing to intermediate representations from the speech recognizer, and from training cascaded models separately towards different objectives.", "Recent end-to-end modeling techniques promise a principled way of overcoming these issues by allowing joint training of all model components and removing the need for explicit intermediate representations.", "However, a closer look reveals that many end-to-end models fall short of solving these issues, due to compromises made to address data scarcity.", "This paper provides a unifying categorization and nomenclature that covers both traditional and recent approaches and that may help researchers by highlighting both trade-offs and open research questions.", "Speech translation (ST), the task of translating acoustic speech signals into text in a foreign language, is a complex and multi-faceted task that builds upon work in automatic speech recognition (ASR) and machine translation (MT).", "ST applications are diverse and include travel assistants (Takezawa et al., 1998), simultaneous lecture translation (Fugen, 2008), movie dubbing/subtitling (Sa-boo and Baumann, 2019; Matusov et al., 2019), language documentation and crisis response (Bansal et al., 2017), and developmental efforts (Black et al., 2002).", "Until recently, the only feasible approach has been the cascaded approach that applies an ASR to the speech inputs, and then passes the results on to an MT system.", "Progress in ST has come from two fronts: general improvements in ASR and MT models, and moving from the loosely-coupled cascade in its most basic form toward a tighter coupling.", "However, despite considerable efforts toward tight coupling, a large share of the progress has arguably been owed simply to general ASR and MT improvements.", "1 Recently, new modeling techniques and in particular end-to-end trainable encoder-decoder models have fueled hope for addressing challenges of ST in a more principled manner.", "Despite these hopes, the empirical evidence indicates that the success of such efforts has so far been mixed (Weiss et al., 2017; Niehues et al., 2019).", "In this paper, we will attempt to uncover potential reasons for this.", "We start by surveying models proposed throughout the three-decade history of ST. By contrasting the extreme points of loosely coupled cascades vs. purely end-to-end trained direct models, we identify foundational challenges: erroneous early decisions, mismatch between spoken-style ASR outputs and written-style MT inputs, and loss of speech information (e.g. prosody) on the one hand, and data scarcity on the other hand.", "We then show that to improve data efficiency, most end-to-end models employ techniques that re-introduce issues generally attributed to cascaded ST. Furthermore, this paper proposes a categorization of ST research into well-defined terms for the particular challenges, requirements, and techniques that are being addressed or used.", "This multidimensional categorization suggests a modeling 1 For instance, Pham et al. 
"This multidimensional categorization suggests a modeling space with many intermediate points, rather than a dichotomy of cascaded vs. end-to-end models, and reveals a number of trade-offs between different modeling choices.", "For instance, Pham et al. (2019)'s winning system in the IWSLT 2019 shared ST task (Niehues et al., 2019) makes heavy use of recent ASR and MT modeling techniques, but is otherwise a relatively simple cascaded approach.", "This implies that additional work to more explicitly analyze the interactions between these trade-offs, along with further model explorations, can help to determine more favorable points in the modeling space, and ultimately the most favorable model for a specific ST application.", "This chapter surveys the historical development of ST and introduces key concepts that will be expanded upon later.", "For a good comparison of empirical results, which are not the focus of this paper, we refer to concurrent work (Sulubacak et al., 2019).", "Moreover, for conciseness we do not cover the sub-topic of simultaneous translation (Fügen, 2008).", "2.1 Loosely Coupled Cascades Early efforts to realize ST (Stentiford and Steer, 1988; Waibel et al., 1991) introduced what we will refer to as the loosely coupled cascade, in which separately built ASR and MT systems are employed and the best hypothesis of the former is used as input to the latter.", "The possibility of speech-to-speech translation, which extends the cascade by appending a text-to-speech component, was also considered early on (Waibel et al., 1991).", "These early systems were especially susceptible to errors propagated from the ASR, given the widespread use of interlingua-based MT which relied on parsers unable to handle malformed inputs (Woszczyna et al., 1993; Lavie et al., 1996; Liu et al., 2003).", "Subsequent systems (Wang and Waibel, 1998; Takezawa et al., 1998; Black et al., 2002; Sumita et al., 2007), relying on data-driven statistical MT, somewhat alleviated the issue, and also in part opened the path towards tighter integration.", "Researchers soon turned to the question of how to avoid early decisions and the problem of error propagation.", "While the desirable solution of full integration over transcripts is intractable (Ney, 1999), approximations are possible.", "Vidal (1997); Bangalore and Riccardi (2001); Casacuberta et al. (2004); Pérez et al. (2007) compute a composition of FST-based ASR and MT models, which approximates the full integration up to search heuristics, but suffers from limited reordering capabilities.", "A much simpler, though computationally expensive, solution is the n-best translation approach, which replaces the sum over all possible transcripts by a sum over only the n-best ASR outputs (Woszczyna et al., 1993; Lavie et al., 1996).", "Follow-up work suggested lattices and confusion nets (Saleem et al., 2004; Zhang et al., 2005; Bertoldi and Federico, 2005) as more effective and efficient alternatives to n-best lists.", "Lattices proved flexible enough for integration into various translation models, from word-based translation models to phrase-based ST (Matusov et al., 2005, 2008) to neural lattice-to-sequence models (Sperber et al., 2017a, 2019b; Zhang et al., 2019; Beck et al., 2019).",
"Another promising idea was to limit the detrimental effects of early decisions, rather than attempting to avoid early decisions.", "One way of achieving this is to train robust translation models by introducing synthetic ASR errors into the source side of MT corpora (Peitz et al., 2012; Tsvetkov et al., 2014; Ruiz et al., 2015; Sperber et al., 2017b; Cheng et al., 2018, 2019).", "A different route is taken by Dixon et al. (2011); He et al. (2011), who directly optimize ASR outputs towards translation quality.", "Beyond early decisions, research moved towards tighter coupling by addressing issues arising from ASR and MT models being trained separately and on different types of corpora.", "Domain adaptation techniques were used by Liu et al. (2003); Fügen (2008) to adapt models to the spoken language domain.", "Matusov et al. (2006); Fügen (2008) propose re-segmenting the ASR output and inserting punctuation, so as to provide the translation model with well-formed text inputs.", "In addition, disfluency removal (Fitzgerald et al., 2009) was proposed to avoid translation errors caused by disfluencies that are often found in spoken language.", "Agüero et al. (2006); Anumanchipalli et al. (2012); Do et al. (2017); Kano et al. (2018) propose prosody transfer for speech-to-speech translation by determining source-side prosody and applying transformed prosody characteristics to the aligned target words.", "It is important to realize that all efforts to this point had used separate ASR and MT corpora for training.", "This often led to a mismatch between ASR trained on data from the spoken domain, and MT trained on data from the written domain.", "End-to-end ST data (translated speech utterances) was only available in small quantities for test purposes.", "Paulik (2010) proposes the use of audio recordings of interpreter-mediated communication scenarios, which is not only potentially easier to obtain, but also does not exhibit such domain mismatches.", "Post et al. (2013) manually translate an ASR corpus to obtain an end-to-end ST corpus, and show that training both ASR and MT on the same corpus considerably improves results compared to using out-of-domain MT data.", "Unfortunately, high annotation costs prevent scaling of the latter approach, so follow-up work concentrates on compiling ST corpora from available web sources (Godard et al., 2018; Kocabiyikoglu et al., 2018; Sanabria et al., 2018; di Gangi et al., 2019a; Boito et al., 2020; Beilharz et al., 2020; Iranzo-Sánchez et al., 2020; Wang et al., 2020a).", "Note that despite these efforts, publicly available ST corpora are currently strongly limited in terms of both size and language coverage.", "For practical purposes, the use of separate ASR and MT corpora is therefore currently unavoidable.", "The availability of end-to-end ST corpora, along with the success of end-to-end models for MT and ASR, led researchers to explore ST models trained in an end-to-end fashion.", "This was fueled by a hope to solve the issues addressed by prior research in a principled and more effective way.", "Duong et al. (2016); Bérard et al. (2016); Bansal et al. (2018) explore direct ST models that translate speech without using explicitly generated intermediate ASR output.", "In contrast, Kano et al. (2017); Anastasopoulos and Chiang (2018); Wang et al. (2020b) explore end-to-end trainable cascades and triangle models, i.e., models that do rely on transcripts, but are optimized in part through end-to-end training.",
"Multi-task training and pre-training were proposed as a way to incorporate additional ASR and MT data and reduce dependency on scarce end-to-end data (Weiss et al., 2017; Bérard et al., 2018; Bansal et al., 2019; Stoian et al., 2020; Wang et al., 2020b).", "As these techniques were not able to exploit ASR and MT data as effectively as the loosely coupled cascade, other approaches like subtask training for end-to-end-trainable cascades (Sperber et al., 2019a), data augmentation (Jia et al., 2019a; Pino et al., 2019), knowledge distillation (Liu et al., 2019), and meta-learning (Indurthi et al., 2020) were proposed.", "Salesky et al. (2019a) propose pre-segmenting speech frames, and Jia et al. (2019b); Tjandra et al. (2019) explore speech-to-speech translation.", "Sung et al. (2019); di Gangi et al. (2019b); Di Gangi et al. (2020); Bahar et al. (2019); Inaguma et al. (2019); di Gangi et al. (2019c) transfer ideas from MT and ASR fields to ST.", "3 Central Challenges Given the abundance of prior work, a clear picture of where we currently stand is needed.", "For purposes of identifying the key challenges in ST research, this section will contrast the extreme cases of the loosely coupled cascade (CC in Fig. 1) against the vanilla direct model (Di in Fig. 1).", "We emphasize that these models are only extreme points in a modeling space with many intermediate points, as we will see in Section 4.", "We assume appropriate speech features X as inputs.", "T, T̂ ∈ 𝒯 denote candidate/best translations, respectively, from the MT hypothesis space.", "S ∈ H denotes a graphemic transcript from the ASR hypothesis space.", "The loosely coupled cascade justifies its decomposition into MT model P_MT(T|S) and ASR model P_ASR(S|X) as follows: T̂ = argmax_{T∈𝒯} P(T|X)   (1) = argmax_{T∈𝒯} ∑_{S∈H} P(T|S, X) · P(S|X)   (2) ≈ argmax_{T∈𝒯} ∑_{S∈H} P_MT(T|S) · P_ASR(S|X)   (3) ≈ argmax_{T∈𝒯} ∑_{S∈H̃} P_MT(T|S) · P_ASR(S|X).   (4)", "Note that here the set H̃ contains only a single entry, the 1-best ASR output.",
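To make Eqs. (3) and (4) concrete, here is a hedged Python sketch of the n-best approximation (with n = 1 it reduces to the loosely coupled cascade of Eq. (4)); asr_nbest and translate are assumed interfaces, not real library calls, each yielding hypotheses with log-probabilities.

```python
import math

def logaddexp(a, b):
    # numerically stable log(exp(a) + exp(b))
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m)) if m > float("-inf") else m

def cascade_decode(asr_nbest, translate):
    # Eq. (3) with H restricted to an n-best list: accumulate, per candidate
    # translation T, the sum over transcripts S of P_MT(T|S) * P_ASR(S|X).
    scores = {}
    for transcript, asr_logprob in asr_nbest:
        for translation, mt_logprob in translate(transcript):
            joint = asr_logprob + mt_logprob
            scores[translation] = logaddexp(scores.get(translation, float("-inf")), joint)
    return max(scores, key=scores.get)  # argmax over T
```

Passing a single-entry list for asr_nbest yields exactly the 1-best pipeline whose early decisions are discussed next.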
ASR modeling only unpunctuated transcripts; and mismatched training data, leading to stylistic and topical divergence.", "Typical countermeasures are domain adaptation techniques, disfluency removal, text normalization, and segmentation/punctuation insertion.", "Information loss: Assumed conditional independence between inputs and outputs, given the transcript: $(T \perp X) \mid S$.", "This can be seen in Eq.", "3 and results in any information not represented in $S$ being lost for the translation step.", "In particular, the MT model is unaware of prosody, which structures and disambiguates the utterances, thus playing a role similar to punctuation in written texts; and provides ways to emphasize words or parts of the messages that the speaker thinks are important.", "Prosody also conveys information on the speaker's attitude and emotional state (Jouvet, 2019).", "Note that our definition does not entail covariate shift and other forms of domain mismatch (Kouw and Loog, 2018) which, though relevant, are not unique to cascaded ST and are widely covered by the general ASR and MT literature (Cuong and Sima'an, 2018).", "Consider instead the other extreme case: an encoder-decoder model trained to directly produce translations from speech (Eq. 1).", "Because this model avoids the decomposition in Eq.", "2-4, it is not subject to the three issues outlined in 3.1.", "Unfortunately, this second extreme case is often impractical due to its dependency on scarce end-to-end ST training corpora (2.3), rendering this model unable to compete with cascaded models that are trained on abundant ASR and MT training data.", "Most recent works therefore depart from this purely end-to-end trained direct model, and incorporate ASR and MT back into training, e.g. through weakly supervised training, or by exploring end-to-end trainable cascades or triangle models ( CTr / MTr in Fig.
1).", "This departure raises two questions: (1) To what extent does the re-introduction of ASR and MT data cause challenges similar to those found in loosely coupled cascades?", "(2) Are techniques such as weakly supervised training effective enough to allow competing with the loosely coupled cascade?", "To address the second question, we propose the notion of data efficiency as a fourth key challenge.", "Data efficiency: The increase in accuracy achievable through the addition of a certain amount of training data.", "To assess data efficiency, data ablations that contrast models over at least two data conditions are required.", "We argue that empirical evidence along these lines will help considerably in making generalizable claims about the relative performance between two ST models.", "Generalizable findings across data conditions are critical given that ST models are trained on at least three types of corpora (ASR, MT, and end-to-end corpora), whose availability vastly differs across languages.", "Consider how the incorporation of MT and ASR data into ST models of any kind may inherently cause the problems as outlined in 3.1: Training on MT data may weaken the model's sensitivity to prosody; the effectiveness of training on ASR+MT data may be impacted by mismatched source-language issues; even some types of end-to-end-trainable models make (non-discrete) early decisions that are potentially erroneous.", "To find models that trade off advantages and disadvantages in the most favorable way, it is therefore necessary to thoroughly analyze models across the dimensions of early decisions, mismatched source-language, information loss, and data efficiency.", "Analyzing early decisions: Problems due to erroneous early decisions are inference-time phenomena in which upstream ASR errors are responsible for errors in the final translation outputs.", "It follows that the problem disappears for hypothetical utterances for which the ASR can generate error-free intermediate representations.", "Thus, models that do not suffer from erroneous early decisions can be expected to exhibit an advantage over other models especially for acoustically challenging inputs, and less so for inputs with clean acoustics.", "This angle can provide us with strategies for isolating errors related to this particular phenomenon.", "Prior work in this spirit has demonstrated that lattice-to-sequence translation is in fact beneficial especially for acoustically challenging inputs (Sperber et al., 2017a), and that cascaded models with non-discrete intermediate representations are less sensitive to artificially perturbed intermediate representations than if using discrete transcripts as an intermediate representation (Sperber et al., 2019a).", "Analyzing mismatched source-language: End-to-end ST corpora allow for controlled experiments in which one can switch between matched vs. mismatched (out-of-domain) MT corpora.", "Post et al.
(2013) demonstrated that using a matched corpus can strongly improve translation quality for loosely coupled cascades.", "We are not aware of such analyses in more recent work.", "Analyzing information loss: Prior work (Aguero et al., 2006; Anumanchipalli et al., 2012; Do et al., 2017; Kano et al., 2018) has addressed prosody transfer in speech-to-speech translation, but to our knowledge the question of how such information should inform textual translation decisions is still unexplored.", "Table 1 shows examples that may motivate future work in this direction.", "Analyzing data efficiency: While several prior works aim to address this problem, often only a single data condition is tested, limiting the generalizability of findings.", "We are aware of three recent works that do analyze data efficiency across several data conditions (Jia et al., 2019a; Sperber et al., 2019a; Wang et al., 2020b).", "Findings indicate that both pretraining and data synthesis outperform multi-task training in terms of data efficiency, and that end-to-end trainable cascades are on par with loosely coupled cascades, while strongly outperforming multi-task training.", "Let us now break apart modeling techniques from the prior literature into four overarching categories, with the aim of exposing the ST modeling space between the extreme points of vanilla direct models and loosely coupled cascades.", "Almost all models use intermediate representations (IRs) in some form: non-direct models to support both training and inference, and direct models to overcome data limitations.", "IRs are often speech transcripts, but not necessarily so.", "A number of factors must be considered for choosing an appropriate IR, such as availability of supervised data, inference accuracy, expected impact of erroneous early decisions, and the feasibility of backpropagation through the IR for end-to-end training.", "We list several possibilities below: Transcripts: Generally used in the loosely coupled cascade.", "Being a discrete representation, this option prevents end-to-end training via backpropagation, although future work may experiment with work-arounds such as the straight-through gradient estimator (Bengio et al., 2013).", "Besides graphemic transcripts, phonetic transcripts are another option (Jiang et al., 2011).", "Hidden representations: Kano et al. (2017); Anastasopoulos and Chiang (2018); Sperber et al.
(2019a) propose the use of hidden representations that are the by-product of a neural decoder generating an auxiliary IR such as a transcript.", "Advantages of this representation are differentiability, prevention of information loss, and weakened impact of erroneous early decisions.", "A downside is that end-to-end ST data is required for training.", "Lattices: Lattices compactly represent the space over multiple sequences, and therefore weaken the impact of erroneous early decisions.", "Future work may explore lattices over continuous, hidden representations, and end-to-end training for ST models with lattices as intermediate representation.", "Other: Prior work further suggests pre-segmented speech frames (Salesky et al., 2019a) or unsupervised speech-unit clusters (Tjandra et al., 2019) as intermediate representation.", "Further possibilities may be explored in future work.", "The conditioning graph (Fig.", "1) reveals independence assumptions and the use of IRs at inference time.", "Some strategies avoid the problem of early decisions ( MC , Di , MTr , Jt ), while others remove the conditional independence assumption between inputs and outputs ( Di , CTr , MTr , Jt ).", "Committed cascade ( CC ): Compute one IR, rely on it to generate outputs (Eq. 4).", "Includes both the loosely coupled cascade, and recent end-to-end trainable cascaded models such as by Kano et al. (2017); Sperber et al. (2019a).", "Marginalizing cascade ( MC ): Compute outputs by relying on IRs, but marginalize over them instead of committing to one (Eq. 3).", "As marginalization is intractable, approximations such as n-best translation or lattice translation are generally used.", "Direct ( Di ): Compute outputs without relying on IRs (Eq. 1).", "To address data limitations, techniques such as multi-task training or data augmentation can be used, but may reintroduce certain biases.", "Committed triangle ( CTr ): Commit to an IR, then produce outputs by conditioning on both inputs and the intermediate representation.", "Anastasopoulos and Chiang (2018), who introduce the triangle model, use it in its marginalizing form (see below).", "Unexplored variations include the use of discrete transcripts as IR, which interestingly could be seen as a strict generalization of the loosely coupled cascade and should therefore never perform worse than it if trained properly.", "Marginalizing triangle ( MTr ): Produce output by conditioning on both input and IR, while marginalizing over the latter (Eq.
2).", "Anastasopoulos and Chiang (2018) marginalize by taking an n-best list, with n set to only 4 for computational reasons.", "This raises the question of whether the more computationally efficient lattices could be employed instead.", "Similar considerations apply to the end-to-end trainable marginalizing cascade.", "Joint ( Jt ): Changes the problem formulation to $\hat{S}, \hat{T} = \operatorname{argmax}_{S \in \mathcal{H}, T \in \mathcal{T}} P(S, T \mid X)$.", "This is a useful optimization for many applications which display both transcripts and translations to the user, yet to our knowledge has never been explicitly addressed by researchers.", "This group of techniques describes the types of supervision signals applied during training.", "Subtask training: Training of sub-components by pairing IRs with either the speech inputs or the output translations.", "Loosely coupled cascades rely on this training technique, while recently proposed cascaded and triangle models often combine subtask training and end-to-end training.", "Auxiliary task training: Training by pairing either model inputs or outputs with data from an arbitrary auxiliary task through multi-task training.", "This technique has been used in two ways in the literature: (1) To incorporate ASR and MT data into direct models by using auxiliary models that share parts of the parameters with the main model (Weiss et al., 2017).", "Auxiliary models are introduced for training purposes only, and discarded during inference.", "This approach has been found (note that this definition of auxiliary task training subsumes pretraining, which simply uses a specific multi-task training schedule)", "inferior at exploiting ASR and MT data when compared to subtask training (Sperber et al., 2019a).", "(2) To incorporate various types of less closely related training data, such as the use of multi-task training to exploit ASR data from an unrelated third language (Bansal et al., 2019; Stoian et al., 2020).", "End-to-end: Supervision signal that directly pairs speech inputs and output translations.", "This technique is appealing because it jointly optimizes all involved parameters and may lead to better optima.", "The main limitation is the lack of appropriate data, which can be addressed by combined training with one of the alternative supervision types, or by training on augmented data, as discussed next.", "Manual: Speech utterances for training are translated (and possibly transcribed) by humans.", "This is the most desirable case, but such data is currently scarce.", "While we have seen growth in data sources in the past two years (2.3), collecting more data is an extremely important direction for future work.", "Augmented: Data obtained by either augmenting an ASR corpus with automatic translations, or augmenting an MT corpus with synthesized speech.", "This has been shown to be more data-efficient than multi-task training in the context of adding large MT and ASR corpora (Jia et al., 2019a).", "Pino et al.
(2019) find that augmented ASR corpora are more effective than augmented MT corpora.", "This approach allows training direct models and end-to-end models even when no end-to-end data is available.", "Knowledge distillation can be seen as an extension (Liu et al., 2019).", "An important problem that needs analysis is to what extent mismatched source-language and information loss degrade the augmented data.", "Zero-Shot: Using no end-to-end data during training.", "While augmented data can be used in most situations in which no manual data is available, it suffers from certain biases that may harm the ST model.", "Similar to how zero-shot translation enables translating between unseen combinations of source and target languages, it may be worth exploring whether some recent models, such as direct models or cascades with non-discrete IRs, can be trained without resorting to any end-to-end data for the particular language pair of interest.", "While we previously described the task of ST simply as the task of generating accurate text translations from speech inputs, the reality is in fact much more complicated.", "Future work may exploit new modeling techniques to explicitly address the aspects drawn out below.", "Batch mode: A (potentially large) piece of recorded speech is translated as a whole.", "Segmentation into utterances may or may not be given.", "This mode allows access to future context, and imposes no strict computational restrictions.", "Typical applications include movie subtitling (Matusov et al., 2019) and dubbing (Saboo and Baumann, 2019; Federico et al., 2020).", "Consecutive: Real-time situation where inputs are provided as complete utterances or other translatable units, and outputs must be produced with low latency.", "A typical example is a two-way translation system on a mobile device (Hsiao et al., 2006).", "This is the only mode of delivery that allows interaction between speaker and translator (Ayan et al., 2013).", "Simultaneous: Real-time situation where latency is crucial and outputs are produced incrementally based on the incoming audio stream.", "Simultaneous translation is faced with an inherent delay vs. accuracy trade-off, such as in a typical lecture translation application (Fugen, 2008).", "In addition to computational latency, which is also relevant for consecutive translation, simultaneous translation suffers from inherent modeling latency caused by factors including reordering.", "Text: This is a standard setting, but is nevertheless worth discussing in more detail for at least two reasons: (1) as is well known in the subtitling industry, reading speeds can be slower than speaking and listening speeds (Romero-Fresco, 2009), implying that a recipient may not be able to follow verbatim text translations in the case of fast speakers, and that summarization may be warranted.", "(2) Text display makes repair strategies possible that are quite distinct from spoken outputs: one can alter, highlight, or remove past outputs.", "One possible way of exploiting this is Niehues et al.
(2018)'s strategy of simultaneous translation through re-translation.", "Speech: Speech outputs have been used since the early days (Lavie et al., 1996), but whether to apply text-to-speech on top of translated text has often been seen as a question to leave to user interface designers.", "Here, we argue that ST researchers should examine in what ways speech outputs should differ from text outputs.", "For example, is disfluency removal (Fitzgerald et al., 2009) beneficial for speech outputs, given that human listeners are naturally able to repair disfluencies (Lickley, 1994)?", "Further examples that need more exploration are prosody transfer (Aguero et al., 2006) and models that directly translate speech-to-speech (Jia et al., 2019b).", "Mandatory transcripts: The user interface displays both transcripts and translations to the user.", "This scenario has been implemented in many applications (Hsiao et al., 2006; Cho et al., 2013), but has received little attention in the context of end-to-end ST research.", "It ties in with the joint inference model (4.3).", "Note that with loosely coupled cascades, there is little need to consider this scenario explicitly because the application can simply display the by-product transcripts to the user.", "But this is not easily possible with direct models or with models using IRs other than transcripts.", "Auxiliary transcripts: Transcriptions are not needed as user-facing model outputs, but may be exploited as IRs during training and possibly inference.", "This is the most typical formal framing of the ST task, assuming that transcribed training data is useful mainly for purposes of improving the final translation.", "Transcript-free: No transcribed training data exists, so the model cannot rely on supervised transcripts as IR.", "The main scenario is endangered language preservation for languages without a written script, where it is often easier to collect translated speech than transcribed speech (Duong et al., 2016).", "The method of translation is an especially relevant factor in ST, which commonly includes a transfer from the spoken into the written domain.", "Here, we provide two reference points for the method of translation, while referring to Newmark (1988) for a more nuanced categorization.", "Faithful: Keeps the contextual meaning of the original as precisely as possible within the grammatical constraints of the target language.", "With text as output medium, faithful translation may result in poor readability, e.g.
due to the translation of disfluencies (Table 2).", "Arguably the most appropriate output medium for faithful ST would be speech, although user studies are needed to confirm this.", "Another application is high-stakes political meetings, in which translations must stay as close to the original sentence as possible.", "As we move toward more distant language pairs, the practicability of faithful translation of spoken language with disfluencies becomes increasingly questionable.", "Communicative: Renders the contextual meaning of the original such that both content and style are acceptable and comprehensible to the target audience.", "An important example for improving communicativeness is disfluency removal (Fitzgerald et al., 2009).", "Given that human translators and interpreters adapt their translation method depending on factors that include input and output medium (He et al., 2016), more research is needed beyond disfluency removal.", "Communicative translations are especially relevant in casual contexts where convenience and low cognitive effort are essential.", "Arguably the closest neighbor of spoken-language style in the text realm is social media; it would be interesting to attempt speech-to-text translation with social-media-style outputs.", "Recent works on end-to-end modeling techniques are motivated by the prospect of overcoming the loosely coupled cascade's inherent issues, yet of the issues outlined in 3.1, often only the goal of avoiding early decisions is mentioned motivationally.", "While early decisions and data efficiency have been recognized as central issues, empirical insights are still limited and further analysis is needed.", "Mismatched source-language and information loss are often not explicitly analyzed.", "We conjecture that the apparent trade-off between data efficiency and modeling power may explain the mixed success in outperforming the loosely coupled cascade.", "In order to make progress in this regard, the involved issues (early decisions, mismatched source-language, information loss, data efficiency) need to be precisely analyzed (3), and more model variants (4) should be explored.", "As a possible starting point, one may aim to extend, rather than alter, traditional models, e.g. applying end-to-end training as a fine-tuning step, employing a direct model for rescoring, or adding a triangle connection to a loosely coupled cascade.", "We further suggest that more principled solutions to the different application-specific requirements (5) should be attempted.", "Perhaps it is possible to get rid of segmentation as a separate step in batch delivery mode, or perhaps text as output medium can be used to visualize repairs more effectively.", "Several of the application-specific requirements demand user studies and will not be sufficiently solved by relying on automatic metrics only.", "We started this paper with a chronological survey of three decades of ST research, focusing on carving out the key concepts.", "We then provided definitions of the central challenges, techniques, and requirements, motivated by the observation that recent work does not sufficiently analyze these challenges.", "We exposed a significant space of both modeling ideas and application-specific requirements left to be addressed in future research.", "Our hope is to encourage meaningful and generalizable comparisons on our quest toward overcoming the long-standing issues found in ST models." ]
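The derivation reconstructed above (Eqs. 1-4) is the crux of the cascade discussion in this record: committing to the 1-best transcript (Eq. 4) versus marginalizing over the ASR hypothesis space (Eq. 3). The following is a minimal runnable Python sketch of that contrast; all transcripts, translations, and probabilities are made-up stand-ins, not outputs of any system surveyed in the text.

# Toy contrast of Eq. (3) vs. Eq. (4): marginalizing over ASR hypotheses
# versus committing to the 1-best transcript. All values are hypothetical.

# P_ASR(S | X): posterior over candidate transcripts for one fixed input X.
p_asr = {"she eats": 0.40, "she its": 0.35, "sheets": 0.25}

# P_MT(T | S): translation posteriors per transcript (toy French targets).
p_mt = {
    "she eats": {"elle mange": 0.9, "draps": 0.1},
    "she its":  {"elle mange": 0.2, "draps": 0.8},
    "sheets":   {"elle mange": 0.1, "draps": 0.9},
}

def committed_cascade():
    """Eq. (4) with |H| = 1: pick the 1-best transcript, then translate it."""
    s_best = max(p_asr, key=p_asr.get)
    return max(p_mt[s_best], key=p_mt[s_best].get)

def marginalizing_cascade():
    """Eq. (3): score T by summing P_MT(T|S) * P_ASR(S|X) over transcripts."""
    scores = {}
    for s, p_s in p_asr.items():
        for t, p_t in p_mt[s].items():
            scores[t] = scores.get(t, 0.0) + p_t * p_s
    return max(scores, key=scores.get)

print(committed_cascade())      # 'elle mange' -- driven by one transcript
print(marginalizing_cascade())  # 'draps' -- pooled ASR uncertainty flips it

The two decision rules disagree here precisely because the committed cascade discards the probability mass on competing transcripts, which is the error-propagation mechanism the text attributes to Eq. 4.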
[ "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "result" ]
[ "Video Question Answering (VidQA) evaluation metrics have been limited to a single-word answer or selecting a phrase from a fixed set of phrases.", "These metrics limit the VidQA models' application scenario.", "In this work, we leverage semantic roles derived from video descriptions to mask out certain phrases, to introduce VidQAP, which poses VidQA as a fill-in-the-phrase task.", "To enable evaluation of answer phrases, we compute the relative improvement of the predicted answer compared to an empty string.", "To reduce the influence of language bias in VidQA datasets, we retrieve a video having a different answer for the same question.", "To facilitate research, we construct ActivityNet-SRL-QA and Charades-SRL-QA and benchmark them by extending three vision-language models.", "We perform extensive analysis and ablative studies to guide future work.", "Code and data are public.", "Given a video, Video Question Answering (VidQA) requires a model to provide an answer to a video-related question.", "However, existing works treat VidQA as an N-way ($N \sim 1$k) classification task across a fixed set of phrases.", "Models trained under such formulations are strictly restricted in their recall rate, generalize poorly, and have severe limitations for end-user applications.", "In this work, we introduce Video Question Answering with Phrases (VidQAP), which treats VidQA as a fill-in-the-phrase task.", "Instead of a question, the input to VidQAP consists of a query expression with a query-token.", "Then, given a video, VidQAP requires replacing the query-token with a sequence of generated words.", "To generate a query, we leverage video descriptions and assign semantic roles to each phrase in these descriptions.", "Replacing a particular semantic-role with a query token produces a query-answer pair.", "We illustrate this in Figure 1 (details in Section 3.1).", "First, existing language generation metrics like BLEU (Papineni et al., 2002) or BERTScore (Zhang* et al., 2020) operate on sentences rather than phrases.", "When applied to short phrases, in the absence of context, even close matches like A person and The man would be falsely rejected due to no n-gram overlap or poor contextual embeddings.", "Second, natural language questions often have strong language priors, making it difficult to ascertain if the model retrieved information from the video.", "Since we know where exactly the generated answer fits in the original query, we can create a complete sentence.", "With this key insight, we propose relative scoring: using the description as the reference sentence, we compute the metrics twice, replacing the query-token once with the predicted answer phrase and once with an empty string.", "The model's performance is measured by the relative improvement from the predicted answer compared to the empty string.", "In particular, substituting the answer phrase in the query expression allows computing the contextual embeddings required by BERTScore.", "To mitigate the language-bias issue, we emulate the procedure proposed by (Goyal et al., 2017) where, for a given question, another image (or video in our case) is retrieved which has a different answer for the same question.", "To retrieve such a video, we use a contrastive sampling method (Sadhu et al., 2020) over the dataset by comparing only the lemmatized nouns and verbs within the semantic roles (SRLs).", "We then propose contrastive scoring to combine the scores of the two answer phrases obtained from the contrastive samples (details on evaluation in
Section 3.2).", "To investigate VidQAP, we extend three vision-language models, namely Bottom-Up-Top-Down (Anderson et al., 2018), VOGNet (Sadhu et al., 2020), and a Multi-Modal Transformer, by replacing their classification heads with a Transformer (Vaswani et al., 2017) based language decoder.", "To facilitate research on VidQAP we construct two datasets, ActivityNet-SRL-QA (ASRL-QA) and Charades-SRL-QA, and provide a thorough analysis of the extended models to serve as a benchmark for future research (details on the model framework in Section 3.3 and dataset creation in Section 4.1).", "We demonstrate the benefit of moving away from N-way classification, and further show that even among sequence generation models there exists a large disparity in performance across semantic-roles (i.e. queries for some roles can be answered very easily compared to other roles).", "Moreover, certain roles hardly benefit from vision-language models, suggesting room for improvement.", "Finally, we investigate the effects of relative scoring and contrastive scoring for VidQAP with respect to BertScore.", "Our contributions in this work are two-fold: (i) we introduce VidQAP and propose a systematic evaluation protocol to leverage state-of-the-art language generation metrics and reduce language bias; (ii) we provide extensive analysis and contribute a benchmark on two datasets evaluated using three vision-language models.", "Our code and dataset are publicly available.", "2 Related Works: Question Answering in Images has received extensive attention in part due to its end-user applicability.", "Key to its success has been the availability of large-scale curated datasets like VQA v2.0 (Goyal et al., 2017) for visual question answering and GQA (Hudson and Manning, 2019) for relational reasoning.", "To address the strong language priors, the datasets are balanced by retrieving images which, given the same question, lead to a different answer.", "However, these procedures cannot be extended to VidQA since crowd-sourcing to retrieve videos is expensive and there exist no scene-graph annotations for videos.", "In this work, we perform the retrieval using lemmatized nouns and verbs of the semantic-role labels obtained from video descriptions to balance the dataset.", "Question Answering in Videos has garnered less attention compared to ImageQA.", "A major bottleneck is that there is no principled approach to curating a VidQA dataset which reflects the diversity observed in ImageQA datasets.", "For instance, naively crowd-sourcing video datasets leads to questions about color and number, which are the same as in ImageQA datasets and do not reflect any spatio-temporal structure.", "To address this issue, TGIF-QA (Jang et al., 2017) and ActivityNet-QA (Yu et al., 2019) use a question template to enforce questions requiring spatio-temporal reasoning, but forgo question diversity.", "An orthogonal approach is to combine VidQA with movie scripts (Tapaswi et al., 2016) or subtitles (Lei et al., 2018).", "However, this severely restricts the domain of videos.", "Moreover, recent works have noted that language-only baselines often outperform vision-language baselines (Jasani et al., 2019; Yang et al., 2020; Zellers et al., 2019).", "A separate line of related research has focused on scene-aware dialogue (Alamri et al., 2019).", "Instead of a single annotator providing both questions and answers, the annotation procedure follows a two-player game setup with one player asking a question and the other player answering, with the roles switching after each turn.", "However, the
evaluation method utilizes recall metrics which require the set of phrases to be known a priori.", "As a result, it doesn't strictly measure the performance of free-form generation but rather how well the ground-truth answer is ranked given a competing set of phrases, which is analogous to multiple-choice questions.", "Automatic Question Generation: Due to the above limitations, the dominant approach to creating large-scale VidQA datasets has been automatic question generation from existing video descriptions, which can be easily crowd-sourced.", "Our proposed formulation of using SRLs to generate query-expressions falls in this category.", "Prior works include VideoQA (Zeng et al., 2017), MSR-VTT-QA and MSVD-QA (Xu et al., 2017), which use a rule-based question generator (Heilman and Smith, 2009) to convert descriptions to questions, and Movie-Fill-in-the-Blanks (Maharaj et al., 2017), which masks out at most one word, which could be a noun, adjective, or verb in a sentence.", "In comparison, our method poses VidQAP as fill-in-the-blanks but with phrases, explicitly asks questions about actions, and the answer phrases are not constrained to a fixed set.", "As a result of this increased space of phrases, methods on existing datasets cannot be directly applied to VidQAP.", "To enable further research, we contribute two datasets, ASRL-QA and Charades-SRL-QA.", "In Table 1 we compare these with existing VidQA datasets.", "SRL in Vision has been explored in the context of human-object interaction (Gupta and Malik, 2015), situation recognition (Yatskar et al., 2016), and multi-media extraction (Li et al., 2020).", "Most related to ours is the usage of SRLs for grounding (Silberer and Pinkal, 2018) in images and videos (Sadhu et al., 2020).", "Our work builds on (Sadhu et al., 2020) in using SRLs on video descriptions; however, our focus is not on grounding.", "Instead, we use SRLs primarily as a query generation tool and use the argument as a question directive.", "The VidQAP task is conceptually simple: given a video and a query expression with a query-token, a model should output an answer phrase that best replaces the query-token.", "This leads to three main design considerations: (i) How to generate a query-expression from existing resources (Section 3.1) (ii) How to evaluate the answer phrases returned by a model (Section 3.2) (iii) What modeling framework choices enable VidQAP (Section 3.3).", "We first briefly describe semantic-role labels (SRLs).", "Then we detail how SRLs are used to create VidQAP queries.", "Query Generation Using SRLs: Semantic Role Labels (SRLs) provide a high-level label to entities extracted from a sentence in the form of who ( ARG0 ), did what ( V ) to whom ( ARG1 ) (Strubell et al., 2018).", "Other roles such as to whom / using what ( ARG2 ) and where ( LOC ) are also common.", "As a pre-processing step, we assign SRLs to video descriptions using a state-of-the-art SRL labeler (Shi and Lin, 2019).", "A particular description could consist of multiple verbs, in which case we consider each verb and its associated SRLs independently.", "For a particular semantic-role, we substitute the corresponding phrase with a query token to generate the query expression.", "The replaced phrase is the corresponding answer.", "Using this method (a detailed discussion is provided in the supplementary material), we", "are able to generate multiple queries from a single description.", "An added merit of using SRLs is that query phrases are centered around verb-phrases, which are highly relevant to the
video content.", "Generating queries using every SRL is not beneficial, as some SRLs are more concerned with the phrasing of the language rather than the video.", "For instance, in the phrase Players are running around on the field, if we mask out the word around ( DIR ), it can be answered without looking at the video.", "To address the above issue, we confine our description phrases to a fixed set of semantic-roles, namely: ARG0, ARG1, V, ARG2, ARGM-LOC .", "Only those phrases which belong to the above set of SRLs may appear in the query-expression or as an answer phrase.", "We further remove phrases which have only two arguments, as these are too ambiguous to fill.", "Figure 2 illustrates these steps.", "While using a separate query token for each SRL could potentially limit the vocabulary used in each slot (for instance, the vocabulary set for < Q-ARG1 > could be limited to a small number of objects), empirically we don't find this to be the case (see Appendix A.3 for detailed statistics).", "As a result, VidQAP is no simpler than the VidQA task.", "We also remark that generating queries need not be strictly limited to masking out a single SRL, and one could easily mask multiple SRLs in the same description.", "However, we find two problems: first, for many cases, the output of masking multiple SRLs becomes exceedingly similar to the video description task; second, using contrastive scoring (described in Section 3.2) for multiple SRLs", "becomes considerably more involved.", "As a result, in this work, we focus on using a single SRL and keep the generalization to queries with multiple SRLs for future work.", "A key challenge in VidQAP is the lack of any standard protocol to evaluate free-form generated phrases.", "A simple way is to adopt metrics like BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005), and CIDEr (Vedantam et al., 2015), which are already used for captioning in images and videos.", "However, these metrics suffer from limited generalization: BLEU, ROUGE, and CIDEr require exact n-gram matches.", "While this is fine for captioning, where longer phrases average out errors, answer phrases are typically much smaller than a complete sentence.", "This leads to many near-correct answers receiving very low scores.", "This issue is resolved to a certain extent for captioning by learned metrics like BERTScore (Zhang* et al., 2020), which utilize contextual embeddings obtained from large pretrained models like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).", "However, answer phrases are usually short and don't provide meaningful contextual embeddings.", "In the extreme case when the answer is a single word, for instance when the query is about a Verb , these embeddings turn out to be very noisy, leading to a large number of false positives.", "Relative Scoring: To enable usage of contextual embeddings, we propose evaluating the relative improvement of the generated answer phrase compared to the ground-truth phrase.", "We denote the input query expression as $Q$, the ground-truth answer as $A_{gt}$, and the predicted answer as $A_{pred}$.", "Let $Q(X)$ denote $Q$ with the query tokens replaced by $X$.", "Then for a given metric $B$, we compute the relative metric $B_r$ as (see Figure 3 for an illustration): $\text{Ref} = Q(A_{gt})$, $\text{Hyp} = Q(A_{pred})$, $\text{Base} = Q(\varepsilon)$ with $\varepsilon$ the empty string, and $B_r(A_{gt}, A_{pred}) = \frac{B(\text{Ref}, \text{Hyp}) - B(\text{Ref}, \text{Base})}{B(\text{Ref}, \text{Ref}) - B(\text{Ref}, \text{Base})}$ (1) Note that $B(\text{Ref}, \text{Ref}) = 1$ for BLEU, METEOR, ROUGE, and BERTScore, but not for CIDEr.", "The
empty-string baseline in Eqn 1 could be replaced with predictions from any model trained for this task.", "In this work, we restrict ourselves to the empty-string baseline due to two desirable properties: its computational simplicity and its being agnostic to models and datasets.", "We further observe that Eqn 1 is very similar to the re-scaling proposed in BERTScore.", "However, in BERTScore, re-scaling aims at making the score more readable and doesn't change the relative ranking of the hypotheses.", "In our case, Eqn 1 plays two roles: first, it allows computing the contextual embeddings because the answers are now embedded inside a complete phrase; second, while the ranking is not affected for a particular query, the score would be different across queries and hence affect the overall relative metric.", "Contrastive Scoring: Visual Question Answering suffers from heavy language priors, and as a result, it is often difficult to attribute whether the image or video played a role in the success.", "For images, (Goyal et al., 2017) resolved this by balancing the dataset, where they crowd-sourced the task of collecting an image that has a different answer for the same question.", "However, such a crowd-sourcing method is difficult to extend to videos since searching for videos requires a much longer time.", "This is further complicated by accepting answer phrases compared to single words.", "We simulate the balancing process using the contrastive sampling method used in (Sadhu et al., 2020).", "Specifically, for a given video-query-answer tuple $(V_1, Q_1, A_1)$ we retrieve another video-query-answer tuple $(V_2, Q_2, A_2)$ which shares the same semantic-role structure as well as the lemmatized nouns and verbs for the question, but has a different lemmatized noun for the answer.", "At test time, the model answers each question separately, but the evaluation function requires both answers to be correct.", "Since our answers comprise phrases, the notion of correctness is not absolute (unlike, say, an accuracy metric).", "Thus, we put a threshold $t$ below which the answer is deemed incorrect.", "Mathematically, let $S_i = B_r(A_{gt}^{i}, A_{pred}^{i})$ be the relative score for sample $i$, and suppose sample $j$ is a contrastive example for sample $i$.", "Then the contrastive score $CS_i$ for sample $i$ at a threshold $T_{CS}$ would be $CS_i = \max(S_i \cdot \mathbb{1}[S_j > T_{CS} \cdot B(\text{Ref}_j, \text{Ref}_j)],\ 0)$ (2) Here $\mathbb{1}[\cdot]$ is the indicator function, which is 1 if the expression within brackets is true, otherwise 0.", "The max operator ensures the scores don't become negative.", "For our experiments, we use $T_{CS} = 0$, which requires that the answer for the contrastive sample should be better than an empty string.", "$Cons_i = \mathbb{1}[(S_i - T_{cons}) \cdot (S_j - T_{cons}) > 0]$ (3)", "As such, Consistency requires the model to be either correct or incorrect for both the original and the contrastive sample.", "To compute the combined metric for a given metric $B$, sample $i$, and contrastive sample $j$: 1. Compute the relative metric (Eqn 1) for $i, j$; 2. Compute the contrastive score (Eqn 2); 3.
Optionally compute Consistency (Eqn 3).", "We use the prefix R-, as in R-$B$, to denote that both relative scoring and contrastive scoring are computed.", "We report Consistency for BertScore with $T_{cons} = 0.1$.", "We note that, by construction, the relative scoring (Eqn 1) is positively correlated with human judgment, as the closer the hypothesis is to the reference, the higher the score.", "The contrastive scoring is a metric used to prevent the model from guessing the correct answer by exploiting language biases, and to instead use the video to give a suitable prediction.", "Since humans don't have the ability to exploit such biases, it is difficult to relate it to human evaluation.", "Models for VidQAP require a language encoder to encode the question, a visual encoder to extract video features, a multi-modal module to jointly learn over the vision-language space, and a decoder to generate a sequence of words.", "Inputs include the query expression $\{w_i\}_{i=1}^{L}$ ($L$ is the number of words), video segment features for $F_1$ frames, and optionally $k$ RCNN features for each of $F_2$ frames.", "In either case, frames are sampled uniformly from the video segment time-span.", "While the models differ in their encoding scheme, our language decoder model (Transformer-based) used to generate the output answer phrase is kept the same across all models with the QAP suffix.", "Lang-QAP is a language-only (video-blind) model using only the query input.", "It uses a Transformer-based encoder to encode the query into $q \in \mathbb{R}^{L \times d}$.", "The decoder subsequently uses the last-layer output of the encoder", "(Figure 5(a)).", "BUTD-QAP : Bottom-Up-Top-Down (Anderson et al., 2018) is a popular approach for image question answering as well as captioning.", "It first computes attention between the question and the RCNN visual features to generate an attended visual feature, which is then used with the question to produce an output answer.", "Here, we replace the RCNN features with the segment features ($v \in \mathbb{R}^{F_1 \times d}$).", "We can also include RCNN features by projecting them to the same dimension as the segment features and then concatenating them along the frame axis ($v \in \mathbb{R}^{(F_1 + F_2 \cdot k) \times d}$).", "For language features, we use the [CLS] token representation from the last layer of the language encoder used in Lang-QAP.", "The output computed from the language and visual features ($m \in \mathbb{R}^{d}$) is passed to the decoder (Figure", "5(b)).", "VOG-QAP : VOGNet (Sadhu et al., 2020) has been proposed for grounding objects in videos given a natural language query.", "Following that architecture, we first derive phrase encodings, each corresponding to a single SRL, i.e. $q \in \mathbb{R}^{S \times d}$ ($S$ is the number of semantic roles).", "These phrase features are concatenated with the visual features (the same as those used in BUTD-QAP, i.e.
$v$) to get multi-modal features $m[l, i] = [v_i \| q_l]$, which are then reshaped to get $m \in \mathbb{R}^{S \cdot F \times d}$.", "These multi-modal features are subsequently passed to the decoder to generate the output sequence (Figure 5 (c)).", "MTX-QAP : Recently, transformer models pretrained on large-scale paired image-text data have become popular.", "Even in the absence of pretraining, such architectures can achieve competitive performance (Lu et al., 2019).", "In the context of videos, ActBert (Zhu and Yang, 2020) has been proposed.", "We create a similar architecture to ActBert, but we replace their proposed Tangled-Transformer with a vanilla Transformer.", "Specifically, we jointly encode the language and visual features in a single transformer and feed the output to the decoder (Figure 5 (d)).", "LangCL and MTxCL: Apart from QAP models, we also consider their phrase-classification counterparts, where the decoder is replaced with an N-way classifier (a two-layered MLP in our case) across a fixed set of phrases.", "For our experiments, we used $N = 1$k phrases for LangCL and $N \in \{1\text{k}, 10\text{k}\}$ for MTxCL.", "We briefly discuss the dataset creation process (Section 4.1), followed by the experimental setup (Section 4.2).", "We then summarize our results (Section 4.3) and discuss key findings.", "We provide implementation details, qualitative visualizations of our dataset, metrics, and trained models in the appendix.", "We create two datasets, ASRL-QA and Charades-SRL-QA, derived from ActivityNet-Captions (Krishna et al., 2017) and Charades (Sigurdsson et al., 2016) respectively.", "There are three key steps to create QA datasets from descriptions: (i) assign semantic-roles to the descriptions (ii) perform co-reference resolution so that the questions are self-contained (iii) obtain lemmatized nouns and verbs to perform contrastive sampling.", "For semantic-role labeling, we use (Shi and Lin, 2019).", "For co-reference resolution, we use the co-reference resolution model provided by the allennlp library (Gardner et al., 2017) (https://demo.allennlp.org/coreference-resolution), which uses the model by (Lee et al., 2017) but replaces the GloVe (Pennington et al., 2014) embeddings with SpanBERT embeddings (Joshi et al., 2019).", "Since Charades primarily involves videos with a single person, we discard questions involving ARG0 .", "We limit ourselves to using a single description per video to avoid repetitive questions.", "We re-use the same train split for both datasets.", "For ASRL-QA, the test set of ActivityNet is not public, and Charades only has a test set but no official validation set.", "Thus, we split the existing validation set by video names and create the validation and test sets.", "For both validation and test splits, we remove those questions for which no contrastive sample was found, as it indicates data biases.", "Dataset Statistics: ASRL-QA has 35.7k videos and 162k queries, split into train, validation, and test sets with 30.3k, 2.7k, 2.7k videos and 147k, 7.5k, 7.5k queries.", "We observe that the sizes of the validation and test sets are proportionately smaller compared to the respective train sets.", "This is because only queries with a corresponding contrastive sample are included, while no such filtering is done for the train set (about 95k queries in the train set have a contrastive pair).", "Charades-SRL-QA contains 9.4k videos and 71.7k queries, split across train, validation, and test sets with 7.7k, 0.8k, 0.8k videos and 59.3k, 6.1k, 6.2k queries.", "Despite its smaller size, the validation and test sets of Charades-SRL-QA
are comparable to those of ASRL-QA, as Charades is curated with the goal of diversifying subject-verb-object tuples.", "Supplementary material provides further details on the dataset statistics and visualizations.", "Evaluation Metrics: As discussed in Section 3.2, we report the combined metric (i.e. metrics prefixed with R-) for the commonly used generation metrics: BLEU, METEOR, ROUGE, CIDEr, and BertScore (implementations from (Chen et al., 2015; Zhang* et al., 2020)).", "For BLEU, we report the sentence-level BLEU-2.", "All reported results are test-set results using the model which performs best on the validation set.", "Table 2 compares the performance of the proposed VidQAP models with N-way classification baselines (denoted with the suffix CL) on ASRL-QA and Charades-SRL-QA.", "Comparing Metrics: It is evident that compared to other metrics, R-BertScore shows a higher relative improvement.", "This is because BertScore allows soft matches by utilizing contextual embeddings obtained from a pre-trained BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019) model.", "Comparison Across Datasets: We find that performance on both datasets follows very similar trends across all metrics.", "Charades-SRL-QA has slightly higher scores compared to ASRL-QA, likely because it has fewer data variations (Charades mostly consists of indoor videos), suggesting findings on either dataset would transfer.", "Comparison within N-way Classification: We notice that when a fixed set of 1k phrases is used, classification models show very limited performance.", "Allowing 10k phrases gives a significant improvement in performance on Charades-SRL-QA (about 12 points on R-BS); however, this doesn't translate to ASRL-QA.", "This is because ASRL-QA contains many more probable phrases (about 29k compared to 8k) in the respective training sets.", "We also notice that increasing the phrase vocabulary coincides with decreasing consistency.", "Comparing Free-form Answer Generation (QAP) with N-way Classification (CL): We investigate the advantages of using a decoder network to generate phrases compared to an N-way classification over a fixed set of phrases (denoted with the suffix CL). Table 2: Comparison of our extended models for VidQAP and classification-based (CL) models across two datasets on our proposed metric; columns R-BS / Cons / R-B@2 / R-R / R-M / R-C, first for ASRL-QA and then for Charades-SRL-QA: LangCL (1k): 0.253 / 0.889 / 0.120 / 0.098 / 0.071 / 0.044 and 0.293 / 0.697 / 0.224 / 0.209 / 0.114 / 0.077; MTxCL (1k): 0.255 / 0.869 / 0.130 / 0.114 / 0.080 / 0.050 and 0.288 / 0.707 / 0.215 / 0.208 / 0.116 / 0.075; MTxCL (10k): 0.286 / 0.788 / 0.157 / 0.133 / 0.100 / 0.061 and 0.408 / 0.695 / 0.286 / 0.261 / 0.142 / 0.108; Lang-QAP: 0.402 / 0.728 / 0.228 / 0.182 / 0.125 / 0.095 and 0.406 / 0.719 / 0.277 / 0.253 / 0.147 / 0.121; BUTD-QAP: 0.413 / 0.716 / 0.237 / 0.203 / 0.147 / 0.105 and 0.399 / 0.714 / 0.271 / 0.231 / 0.115 / 0.105; VOG-QAP: 0.414 / 0.717 / 0.239 / 0.204 / 0.142 / 0.108 and 0.442 / 0.739 / 0.297 / 0.274 / 0.165 / 0.136; MTX-QAP: 0.414 / 0.715 / 0.247 / 0.206 / 0.149 / 0.113 and 0.439 / 0.757 / 0.294 / 0.267 / 0.157 / 0.139.", "Table 2 shows that both Lang-QAP and MTX-QAP outperform their classification counterparts, namely Lang-CL and MTX-CL, on both datasets.", "This implies that free-form generation is not limited to simply generating the most frequently appearing phrases in the training set, thereby showing its effectiveness.", "Comparison Across Models: We find that multi-modal models outperform the language-only baseline.", "However, the improvement over the language baseline is small.", "The reason for the small gap is elucidated in Table 3, where we report R-BertScore per SRL. [Table 5: Effect of Adding Region Proposals — columns ARG0 / V / ARG1 / ARG2 / LOC / Overall:
BUTD-QAP 0.706 / 0.506 / 0.388 / 0.36 / 0.196 / 0.431; VOG-QAP 0.704 / 0.516 / 0.366 / 0.352 / 0.202 / 0.429; MTX-QAP 0.685 / 0.465 / 0.378 / 0.355 / 0.19 / 0.416.]", "We find a large disparity in performance depending on the SRL.", "Most strikingly, multi-modal models perform worse than the language-only model on ARG0 and V .", "For ARG0 , the strong performance of Lang-QAP arises because most of the time the agent who causes an action is a human.", "Therefore, answer phrases that are simply A man, A woman, or A person lead to reasonable performance.", "This additionally suggests that grounding who is performing the action remains non-trivial.", "The more surprising result is the strong performance of Lang-QAP on V , which is consistent across both datasets despite using contrastive sampling.", "There are two likely causes.", "First, the distinction between verbs is not as strict as for object nouns, i.e. even similar verbs are classified as separate verbs, diminishing the returns of contrastive sampling.", "For instance, jumping and hopping have different lemmas and are thus considered distinct verbs, but R-BS would treat them as similar even if the specific action would be classified as jumping rather than hopping.", "Second, SRLs such as ARG1 confine the set of possible verbs.", "For instance, if the object is glass, only a limited set of verbs such as drink or hold is probable.", "On the remaining arguments, namely ARG1 , ARG2 , and LOC , multi-modal models show a steady improvement over the language-only baseline, ranging from 1% to 10%.", "However, the performance in absolute terms remains very low.", "As such, our proposed task VidQAP remains extremely challenging for current multi-modal models.", "Evaluation Metric Scores: In Table 4 we record the BertScore computation in three parts: directly computing over the answer phrases, performing relative scoring, and finally performing contrastive scoring with different thresholds.", "We observe that for V , naive computation leads to absurdly high scores.", "This is because verbs consist of a single word, which means the embeddings are not contextual.", "This is remedied by relative scoring and is further controlled by combining it with contrastive sampling.", "Further note that relative scoring operates differently based on the SRLs.", "For instance, it increases the score for ARG0 and ARG1 , where the answers more often paraphrase the ground truth, while for ARG2 and LOC , it decreases the score due to incorrect matches.", "While contrastive scoring is aimed at reducing language-only bias and as such should always reduce the relative score, we observe an increased score for ARG2 for both Lang-QAP and MTX-QAP.", "This is caused by the max function, which restricts the lower limit to 0.", "Effect of Region Boxes: As noted earlier, the visual features can also include region features extracted from an object detector like FasterRCNN (Ren et al., 2015).", "In Table 5 we record the effect of including regional features.", "In particular, we use the GT5 setting used in (Sadhu et al., 2020), where 5 region proposals are used from 10 frames uniformly sampled from the video segment.", "Interestingly, MTX-QAP underperforms both BUTD-QAP and VOG-QAP on ARG0 .", "A possible reason is that the transformer is unable to effectively reason over both language and vision over such a large range of inputs.", "In this work, we introduce Video Question Answering with Phrases (VidQAP), where we pose VidQA as a fill-in-the-phrase task.", "Given a video and a query expression, a model needs to
compose a sequence of words to answer.", "We then propose a method to leverage semantic roles from video descriptions to generate query expressions and outline a robust evaluation protocol.", "This involves computing the relative improvement of the predicted answer compared to an empty string, followed by a contrastive sampling stage which reduces language-only biases.", "We then contribute two datasets, ASRL-QA and Charades-SRL-QA, to facilitate further research on VidQAP, and benchmark them with three vision-language models extended for our proposed task.", "We thank the anonymous reviewers for their suggestions and feedback.", "This research was supported, in part, by the Office of Naval Research under grant #N00014-18-1-2050.", "In this work, we propose an extension to the existing video question answering framework to include free-form answers and suggest how to evaluate such a task.", "Direct Application (Positive): A direct application of our task would be to enrich existing descriptions obtained from video captioning models, which could lead to better video retrieval results.", "For instance, one could ask what tool to use in order to cut a piece of cardboard by querying A person cutting a piece of cardboard < Q-ARG2 >. Direct Application (Negative): Caution must be taken in directly applying models trained on descriptions without properly balancing the data distributions, as it is possible that hidden data biases are amplified. As an example, ASRL-QA has many videos involving men throwing shot puts. As a result, a model could learn this biased correlation, and whenever queried who ( < Q-ARG0 > throws a shot put), it would always produce the answer man even if the video clearly shows a woman.", "Broader Societal Impacts (Positive): Question answering is an excellent tool for diagnosing a model's understanding due to its high interactivity.", "Our proposed formulation takes this a step forward with answer phrases and can in turn facilitate human-computer interactions.", "Our proposed model can be extended to downstream tasks such as retrieving a video or retrieving a part of the video given a question or query.", "Broader Societal Impacts (Negative): Since our method is agnostic to the end-use case, it could be repurposed to extract sensitive information and pose a threat to privacy." ]
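The evaluation protocol quoted above (Eqns 1-3) is compact enough to state directly in code. Below is a minimal Python sketch, assuming a sentence-level metric B with B(ref, ref) = 1 (true for BLEU, METEOR, ROUGE, and BERTScore per the text; for CIDEr the threshold term T_CS * B(Ref_j, Ref_j) in Eqn 2 would not simplify as it does here). The exact-match metric and query template are toy stand-ins, not the paper's implementation.

def relative_score(B, query, a_gt, a_pred):
    """Eqn (1): relative improvement of the prediction over an empty string.
    query(x) returns the query expression with its query-token replaced by x."""
    ref, hyp, base = query(a_gt), query(a_pred), query("")
    return (B(ref, hyp) - B(ref, base)) / (B(ref, ref) - B(ref, base))

def contrastive_score(s_i, s_j, t_cs=0.0):
    """Eqn (2), simplified under B(ref, ref) = 1: zero out sample i's score
    unless the contrastive sample j also beats the empty-string baseline."""
    return max(s_i if s_j > t_cs else 0.0, 0.0)

def consistency(s_i, s_j, t_cons=0.1):
    """Eqn (3): 1 iff both samples fall on the same side of the threshold."""
    return int((s_i - t_cons) * (s_j - t_cons) > 0)

# Toy usage with an exact-match metric (so B(ref, ref) = 1 by construction).
B = lambda ref, hyp: float(ref == hyp)
query = lambda x: f"A person {x} exercise equipment"
s_i = relative_score(B, query, a_gt="lifts", a_pred="lifts")    # 1.0
s_j = relative_score(B, query, a_gt="cleans", a_pred="cleans")  # 1.0
print(contrastive_score(s_i, s_j), consistency(s_i, s_j))       # 1.0 1

Note the division in relative_score assumes the baseline never matches the reference exactly; a production implementation would need to guard the denominator.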
[ "abstain", "abstain", "abstain", "method", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method" ]
[ "This study introduces and analyzes WikiTalkEdit , a dataset of conversations and edit histories from Wikipedia, for research in online cooperation and conversation modeling.", "The dataset comprises dialog triplets from the Wikipedia Talk pages, and editing actions on the corresponding articles being discussed.", "The exchanges occur between two turn-taking individuals and span all of Wikipedia.", "We show how the data supports the classic understanding of style matching, where positive emotion and the use of first-person pronouns predict a positive emotional change in a Wikipedia contributor.", "However, they do not predict editorial behavior.", "On the other hand, feedback invoking evidentiality and criticism, and references to Wikipedia's community norms, is more likely to persuade the contributor to perform edits but is less likely to lead to a positive emotion.", "We developed baseline classifiers trained on pretrained RoBERTa features that can predict editorial change with an F 1 score of .54, as compared to an F 1 score of .66 for predicting emotional change.", "A diagnostic analysis of persisting errors is also provided.", "We conclude with possible applications and recommendations for future work.", "The dataset is publicly available for the research community at https://github.com/kj2013/WikiTalkEdit/.", "Dialogue is a language game of influence, action, and reaction that progresses in a turn-taking manner.", "Persuasion occurs through dialogue when a listener favorably evaluates the authority, claims, and evidentiality through the cues and arguments made by the speaker (Krippendorff, 1993; Schulte, 1980; Durik et al., 2008).", "Discussions on Wikipedia Talk pages can be useful for determining strategies that lead to an improvement of the article discussed, and for examining if they also lead to an amicable dialogic exchange.", "Previous work (Yang et al., 2016a,b, 2017) has explored the role of editors and the types of edits made on Wikipedia, but have not related them to the ongoing conversation on the Wikipedia Talk pages.", "We introduce the WikiTalkEdit dataset , a novel dataset for research in online collaboration.", "The dataset is a subset of the Wikipedia Talk Corpus available as of May 2018 1 .", "It contains 12,882 dialogue triples with labels about editors' subsequent editorial (editing) behavior, and 19,632 triplets with labels corresponding to editors' emotion as manifested in their replies.", "Table 1 has examples from the dataset.", "2 This new dataset enables various language and behavior modeling tasks.", "In general, the dataset is important for understanding linguistic coordination, online cooperation, style matching, and teamwork in online contexts.", "More specifically, it offers linguistic insights about the norms on Wikipedia, such as", "(i) the feedback which is associated with a positive emotion vs a positive editing action,", "(ii) identifying and characterizing successful editorial coordination (Lerner and Lomi, 2019),", "(iii) generating constructive suggestions based on a given Wikipedia edit, and", "(iv) identifying and resolving disagreements on Wikipedia before they go awry (Zhang et al., 2018).", "In this study, we examine the first research problem.", "That is, we demonstrate how the dataset is helpful to compare and contrast the linguistic strategies that evoke favorable dialogic responses from those evoking behavioral compliance.", "Conversational quality is largely the focus of a body of work modeling the formal (Pavlick and Tetreault, 
"The labels in such tasks are often subjective, as they depend mostly on annotator or crowdsourced judgments.", "On the other hand, gauging the impact of a conversation in terms of a reader's subsequent behavior is a rather different problem.", "A few studies have modeled the language of arguments to predict their upvotes (Wei et al., 2016a,b; Habernal and Gurevych, 2016; Tan et al., 2016).", "The best result reported by Habernal and Gurevych (2016) was an F1 score of .35 for the task of predicting which of two arguments was better, using SVMs and bi-directional LSTMs.", "The study by Tan et al. (2016) reported an accuracy of 60% for predicting which argument was most likely to change the original poster's (OP's) point of view.", "Althoff et al. (2014) report an AUC of .67 for predicting the success of 5,700 requests.", "Studies predicting users' stance (Lin and Utz, 2015; Sridhar et al., 2015) have done better, but do not usually factor in the feedback from a turn-taking partner during a dialogic exchange.", "Furthermore, to the best of our knowledge, there is no equivalent study measuring the actual subsequent behavior of a conversation partner after a dialogic exchange on social media platforms, forums, or Wikipedia.", "In recent years, computational linguistics has developed computational models of dialogic text that predict the emotional responses associated with any utterance.", "The findings suggest that interacting speakers generally reinforce each other's point of view (Kramer et al., 2014; Rim, 2007), use emotions to signal agreement, and mirror each other's textual cues (Niculae et al., 2015).", "On the other hand, predicting behavioral responses is potentially a more challenging task for text modeling and prediction, and it is also less explored in the literature.", "The existing research on online turn-taking behavior has focused on modeling emotional reactions, with little interest in predicting actual behavioral change.", "This research is discussed in more detail in the Supplementary Materials. 3", "For now, we contextualize the contributions of this dataset by demonstrating how it can address the following gaps in the scholarship: How well do language models trained on editorial feedback predict subsequent emotional and editorial change?", "What are the linguistic features of editorial feedback which predict emotional change in the person that initiates the discussion (henceforth OP, original poster)?", "What are the linguistic features of editorial feedback which predict subsequent editorial behavior by the OP?", "First, we report the performance of predicting emotional and editorial behavior change from the linguistic features of the comments, using regression baselines and state-of-the-art deep learning models.", "Performance is evaluated as an F1 score of predicted labels against the ground-truth labels, as implemented in scikit-learn.", "Then, we compare the linguistic features associated with emotional change with those associated with subsequent edits.", "Finally, we offer a diagnostic analysis of the prediction errors observed.",
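The evaluation protocol described above might be sketched as follows. This is a minimal illustration, not the authors' code; `X` (a feature matrix) and `y` (binary change labels) are assumed to have been prepared elsewhere.

```python
# Minimal sketch of the evaluation protocol: F1 of predicted labels against
# ground truth, under 5-fold cross-validation, as implemented in scikit-learn.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")  # X, y are assumed
print(f"Mean F1: {scores.mean():.2f}")
```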
"In this section, we describe how we collected our data from the Wikipedia Talk dataset and formulated a task around the emotional and behavioral actions of an article's editors, who are taking turns in a conversation.", "After contributing to a Wikipedia article, the OP usually updates the Talk page with a summary of the edit.", "At this point, the OP may get zero or more responses, and they may respond to all, some, or none of them.", "Figure 1: Calculation of OP's emotional change as the signed two-dimensional Euclidean distance between OP and OP', i.e., d(OP, OP')² = (OP'_pos - OP_pos)² + (OP'_neg - OP_neg)².", "To study the effect of editorial feedback, we defined a complete interaction between an OP and another Editor as a dialog triplet of the form OP-Editor-OP′.", "Our dependent variables are the OP's reaction to an Editor's comment in terms of the 'emotional change' in their language and their 'editorial change' in terms of subsequent edits to the Wikipedia article.", "First, we downloaded the entire Wikipedia Talk Corpus available as of May 2018 and extracted 128,231 dialogue triplets.", "Next, we used the Wikimedia API to download the edits corresponding to each of the OP's comments in our dataset of triplets.", "In the following paragraphs, we further describe how we operationalized the labels for the dataset.", "Emotional change: The emotional change label is the signed Euclidean distance between the positive and negative emotions of OP' and OP (see Figure 1).", "The positive and negative emotion measurements are calculated using the emotion dictionaries from the Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2007).", "The assigned labels were manually examined by the authors for face validity.", "A change over one standard deviation above the mean is coded as '1' and is a positive emotional change.", "A change under one standard deviation below the mean is coded as '0' and is a negative emotional change.", "All other values are marked null as there is no evident change in emotion.", "Editorial change: The edits, if any, performed by the OP to the article in the week following the Editor's feedback are operationalized as a binary value ('1' = edit, '0' = no edit).", "In the following sections, we analyze what types of linguistic feedback from the Editor are effective at creating a positive emotional change or an editorial action by the OP.",
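A rough sketch of the label computation described above follows. The text does not spell out how the Euclidean distance is signed, so the sign convention here (a gain in positive emotion, or a drop in negative emotion, counts as positive) is our assumption.

```python
import math

def emotional_change(op_pos, op_neg, op2_pos, op2_neg):
    # Two-dimensional Euclidean distance between OP and OP' emotion scores.
    dist = math.hypot(op2_pos - op_pos, op2_neg - op_neg)
    # Assumed sign convention: more positive / less negative means positive.
    sign = 1.0 if (op2_pos - op_pos) - (op2_neg - op_neg) >= 0 else -1.0
    return sign * dist

def code_label(change, mean, std):
    # '1' above mean + std, '0' below mean - std, None when no evident change.
    if change > mean + std:
        return 1
    if change < mean - std:
        return 0
    return None
```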
=5.", "Stylistic features (73 features): These include cognitive features (discrepancy, past tense, present tense, work) and emotional features (positive emotion, negative emotion, reward) from LIWC (Pen-nebaker et al., 2007).", "A politeness index was generated using Stanford's Politeness API (Danescu-Niculescu-Mizil et al., 2013).", "Syntactic features (4 features) : The Stanford parser was used to generate dependency parses.", "Dependency parses were used to identify and categorize all the adjectival modifiers that occurred at least ten times in the data.", "We distinguished the first-, second-, and third-person pronouns.", "Finally, we created part-of-speech n-grams based on the dependency trees.", "Social features (2 features): We measured content interplay , the Jaccard coefficient of similarity between the unigrams of the Editor's feedback and the OP's first comment.", "The Editor's status may pressure the OP to conform by performing edits; therefore, we quantified the Editor's experience in terms of their number of contributions to Wikipedia.", "We also experimented with a variety of deep learning baselines:", "CNN: The CNN framework (Kim, 2014) involves applying convolutional filters followed by max-over-time pooling to the word vectors for a post.", "RCNN: The RCNN framework (Lai et al., 2015), recurrent convolutional layers followed by max-pooling.", "A fully connected layer then follows it with a softmax for output.", "biLSTM: The word embeddings for all words in a post are fed to bidirectional LSTM, followed by a softmax layer for output (Yang et al., 2016c).", "biLSTM-Attention: For each sentence, convolutional and max-over-time pooling layers are applied on the embeddings of its words.", "The resultant sentence representations are put through bi-LSTM with the attention mechanism (Yang et al., 2016c).", "NeuralMT: Embeddings are fed into a bidirectional-GRU followed by a decoder with the attention mechanism (Bahdanau et al., 2015).", "FastText: Word representations are averaged into a sentence representation, which is, in turn, fed to a linear classifier (Joulin et al., 2017).", "A softmax function is used to compute the probability distribution over the predefined classes, and a cross-entropy loss is used for tuning.", "Hierarchical softmax is used to speed up the training process.", "Transformer: The architecture implemented was based on recent previous work (Vaswani et al., 2017).", "OpenAI GPT: The Generative Pretrained Transformer implementation (Radford, 2018) with the original hyperparameter settings.", "BERT and RoBERTa: The pre-trained BERT model (Devlin et al., 2018) and the Robustly optimized BERT model (RoBERTa) (Liu et al., 2019), where BERT is retrained with more data and an improved methodology.", "Models were fine-tuned using the simple transformers library.", "XLNET: Finally, we evaluate the performance of XLNet (Yang et al., 2019), which combines bidirectional learning with the state-of-the-art autoregressive model such as Transformer-XL.", "In the case of CNN and BiLSTM based models, we used the referral hyper-parameters from the original implementation for all models 4 .", "For Neural MT, FastText, and Transformer based models, implementations by original authors are used.", "All the models were evaluated using 5-fold cross-validation with a split ratio of 80:20 for train and test set, respectively.", "In fine-tuning the RoBERTa model on editorial change, the model parameters included a learning rate of 9e -6 , 3 epochs, and train batch size of 8.", "For 
"We now examine the test-set performance of these models trained on a subset of the WikiTalkEdit dataset.", "The dataset for emotion analysis comprises the 15% of the overall dataset where editorial feedback yielded a substantial positive or negative change in the emotion vector (i.e., the emotional change was above or below one standard deviation from the mean).", "Similarly, the dataset for editorial actions (edits performed) comprises the 10% of the conversations that started within 24 hours of an OP editing the page.", "4 https://tinyurl.com/brightmart", "A pairwise correlation found no relationship between emotional and editorial change (r = .01, p > .1).", "The dataset statistics are provided in Table 2.", "Baseline logistic regression models: Table 3 shows that emotional change is more straightforward to predict than editorial change, and style provides marginally better predictive performance than content.", "The best performance was obtained using POS n-grams, with an F1 score of .57 for predicting emotional change and of .51 for predicting behavioral change.", "Unexpectedly, social features were not good predictors of emotional change.", "Deep learning models: In comparison to the logistic regression baselines, the deep learning models in Table 4 offer a remarkable predictive advantage, especially for emotional change.", "The best performing deep learning classifier is trained on pre-trained RoBERTa features and reports an F1 score of .66 for emotional change and .54 for editorial change.", "We observed instances of misclassification from the best logistic regression classifier and the XLNet model (Yang et al., 2019).", "We have diagnosed the likely sources of errors in this section.", "We randomly selected an assortment of false positives predicted by a logistic regression classifier and by XLNet and have provided them in Table 5. 5", "First, we find that since the logistic regression methods rely heavily on stylistic features, the errors we identified seemed to occur when the style does not match the intended meaning: Feedback about notability and relevance: In the first example in Table 5, we see that despite the polite feedback, the conversation was not resolved positively and resulted in negative responses.", "Reverted edits: Similarly, in conversations where the OP contests their reverted edits, the dialogue appears to regularly derail into further negative replies despite the civility of the Editor's feedback.", "The XLNet model did not repeat these particular errors.", "Its errors, on the other hand, appear to be driven by fact-checks and questions: Fact-checks: In contradicting the OP with facts and personal opinions, a disagreement is sometimes implied but not obvious.", "The model predicts a positive emotional change, but the OP responds to the implication with a negative reaction.", "Counter-questions: When Editors asked questions of the OP, it appears likely that the OP would turn defensive, even if the response included facts.", "Table 6 shows the false positives in predicting editorial change.",
"Starting with the errors from models trained on stylistic features, we observed that, in general, the errors centered on the following: Controversial topics: The errors arising from logistic classifiers reflect ideological disagreements, often involving hot-button topics such as race and ethnicity.", "The OP is not likely to change their mind despite what might be a well-reasoned argument from the Editor.", "Reverted edits: Dialogue around why edits were reverted or content was removed usually consists of requests for greater clarity for documentation purposes, and is rarely followed up with edits to the page.", "5 More examples of errors are provided in the Supplementary Materials.", "False positives in predicting editorial change by XLNet also appear to arise when feedback is nuanced.", "Aside from feedback that implicitly discourages further editing, similar to what was observed in Table 5, we also observed other types of feedback that lead to errors by the XLNet model: Opinions: Editorial feedback that uses opinions rather than facts to persuade the OP rarely appears to lead to an edit, and this was a common error observed among the predicted labels.", "Mixed feedback: The models also appear to get confused when the feedback includes content from the page as a quote, and includes suggestions but makes no direct requests.", "Based on the results in Table 3, in this section we examine the stylistic, lexical, and topical features which best predict emotional and behavioral change.", "These findings offer us a way to examine whether emotional and editorial change are indeed different, and to compare the results against previous studies which have examined these problems in some capacity.", "Comparing the most predictive stylistic and content features suggests that emotional and editorial change have different predictors.", "Table 7 summarizes the most significant predictors of emotional change based on an ordinary least squares regression analysis.", "Positive feedback through words related to rewards and positive emotions typically predicts a positive emotional change, as do stance words (the first-person pronoun I) and references to past experiences (past tense).", "This finding is in line with the literature (Zhang et al., 2018; Althoff et al., 2014).", "Conversely, excessive use of adjectival modifiers (e.g., comparative words or words used to emphasize quantity or impact) is associated with a negative emotional change.", "The insights look very different for editorial change (Table 8).", "Second-person pronouns and the present tense, both of which occur in directed speech, are associated with editorial changes, in sharp contrast with the features that emerged in the analysis of emotional change.", "Aligned with this, the use of words related to criticism (discrepancy) and work is also among the significant predictors of editorial change.", "Among the parts of speech, comments about the content (NN, NNP) appear to reduce the likelihood of an editorial change.", "Except for superlative modifiers, style seems not to be relevant in this case.", "These results support previous studies in showing that emotion and politeness do not always signal editorial change (Hullett, 2005; Althoff et al., 2014), as is true for stylistic markers (Durik et al., 2008), while direct requests (Burke et al., 2007), assertiveness, evidentiality (Chambliss and Garner, 1996) and other content-based features usually perform better.", "No feature appeared to correlate with both emotional and editorial behavior.",
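The OLS analysis behind Tables 7 and 8 might look like the following sketch; `features` (a DataFrame of normalized linguistic features of the Editor's feedback) and `change` (the corresponding change scores) are assumed inputs, and this is an illustration rather than the authors' analysis script.

```python
# Sketch of the ordinary least squares analysis behind Tables 7 and 8.
import statsmodels.api as sm

X = sm.add_constant(features)        # assumed feature DataFrame + intercept
ols = sm.OLS(change, X).fit()        # assumed vector of change scores
print(ols.summary())                 # per-feature coefficients and p-values
```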
"Further lexical insights are provided in the Supplementary Materials. 6", "8 Insights from topics", "We conducted a Benjamini-Hochberg (BH)-corrected Pearson correlation analysis of the topic features of comments by the Editor.", "We visualize it as a language confusion matrix, introduced in recent work (Jaidka et al., 2020), to compare the topics predictive of emotional vs. editorial change of the OP.", "The word clouds in Figure 2 show the correlation of LDA topics with emotional change on the X-axis, and the correlation with editorial change on the Y-axis.", "The grey bands depict zones where the topics do not have a statistically significant correlation with either emotional or editorial change.", "We have distinguished the themes related to content (e.g., medicine, religion, and ethnicity) by coloring them in red.", "The topics in black are related to Wikipedia's content guidelines (i.e., mentions of NPOV, sources, cite, information). 7", "These themes involve the neutrality (neutral point of view, NPOV), general importance (notability), and verifiability (sources, evidence) of information.", "Finally, the blue topics are meta-commentary centered around the elements in a Wikipedia article (mentions of edit, page, title, section).", "Our analysis of the WikiTalkEdit dataset suggests that mentions of Wikipedia's guidelines are associated with a positive editorial change, but a negative emotional change.", "6 We further tested the effect of only positive or only negative features; we found that positive emotion is a better predictor of emotional change (F1 = .42 vs. F1 = .45) but not of editorial change (F1 = .48 for both positive and negative features).", "7 See https://en.wikipedia.org/wiki/Wikipedia:Five_pillars", "Table 5: Examples of sentiment-change false positives for the LIWC-based classifier and XLNet, each shown as a Page and Talk triplet (e.g., Kiev; OP: I am so confused!).", "Suggestions based on evidence are associated with both a positive editorial and a positive emotional change.", "First, we look at the spread of the content-themed topics around the figure.", "Some of the topics related to religion (god, church, christian) and ethnicity (israel, people, jewish, indian) are associated with a negative emotional change (-.06 < r < -.02, p < .05).", "Content topics related to medical research and health inspire a negative emotional change but a positive editorial change (r = .05, p < .05).", "Next, we consider the meta-commentary topics about page structure (page, title, move and review, section, add).", "We observe that these are associated with positive emotional changes (.06 < r < .10, p < .05), possibly because they offer concrete and minor suggestions.", "Those meta-commentary topics which directly request an edit or a review inspire editorial change (.03 < r < .06, p < .05).", "Finally, topics related to the source, i.e., about Wikipedia's guidelines, generate a more nuanced reaction.", "Topics related to evidentiality (source, news, evidence) and notability (notable, articles, deletion) are the strongest predictors of negative emotion (-.18 < r < -.10, p < .05) but they generally lead to editorial changes (.03 < r < .08, p < .05).",
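The BH-corrected correlations reported above could be computed as in this sketch; `topic_props` (a cluster-by-topic matrix of LDA proportions) and `editorial_change` (a label vector) are assumed to be prepared elsewhere.

```python
# Sketch of the Benjamini-Hochberg-corrected Pearson correlations behind
# Figure 2: correlate each topic's proportions with the change labels, then
# control the false discovery rate across topics.
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

rs, ps = zip(*(pearsonr(topic_props[:, k], editorial_change)
               for k in range(topic_props.shape[1])))
reject, _, _, _ = multipletests(ps, alpha=0.05, method="fdr_bh")
significant = [(k, r) for k, (r, keep) in enumerate(zip(rs, reject)) if keep]
```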
"Table 7: Analysis of the features significantly associated with a sentiment change, with *p < 10⁻³, **p < 10⁻⁶; feature (coefficient; examples): First-person pronouns (.08**; I, me), Past tense (.04**; ago, did, talked), Positive emotion (.06**; great, nice, sweet), Reward (.06**; take, prize, benefit), Comparative modifiers (-.08*; significant amount, in particular), Quantitative modifiers (-.06*; many articles, most cases, vast majority), JJ NN (adjective+noun) (-.04*; relevant description, rare possibility), IN VBZ (preposition+verb) (-.04*; that is, in position), VBZ JJ (verb+adjective) (-.03*; seems correct, is only).", "An exploration of the WikiTalkEdit dataset suggests that strategies that elicit a positive emotional change may not affect editorial behavior.", "Negative responses should not be the only yardstick to measure the successful outcome of a conversation.", "Editorial changes occur when Editors use interpersonal language in talking about evidentiality and notability.", "However, these strategies are also associated with a negative emotional change.", "Despite the apparent negative feedback, referencing norms and sources is a successful strategy to prompt behavioral compliance.", "In related work, social influence through mentioning community norms was more effective than the Editor's status at achieving compliance on Wikipedia; however, the latter was an important predictor in a similar modeling task on Reddit (Althoff et al., 2014).", "Although the findings are correlational, there are ways to establish cause and effect through a rigorous research design (Zhang et al., 2018).", "In some cases, the measurements may be thrown off if the replies to feedback are appreciative but include some negative emotion words.", "Secondly, inordinately long or short feedback confounds the classifiers, but we expect that improvements in accuracy can be achieved by using differential attention models that focus on the emotions expressed in the first few words of the dialogic exchanges.", "Finally, we could encode the latent space with information about the type of editorial feedback (Yang et al., 2017), which would be helpful in predicting how the OP responds.", "The WikiTalkEdit dataset offers insights that have important implications for understanding online disagreements and better supporting the Wikipedia community (Klein et al., 2019).", "We recommend the use of the WikiTalkEdit dataset to model the dynamics of consensus among multiple contributors.", "Scholars can also use the WikiTalkEdit dataset to address issues of quality, retention, and loyalty in online communities.", "For instance, the insights could shed light on how new OPs can be retained as sustaining Wikipedia contributors (Yang et al., 2017).", "Our exploratory analyses suggest that disagreements on Wikipedia arise over errors: doubts that a given entry leaves no room for improvement.", "But errors serve a good-faith purpose on Wikipedia by perpetuating participation and shared collective action (Nunes, 2011).", "The dataset would also be useful to understand how references are debated and interpreted as objective pieces of evidence (Luyt, 2015).", "Acknowledgements: Supported by a Nanyang Presidential Postdoctoral fellowship and the Templeton Religion Trust, grant TRT-0048.", "Thanks to Dr. Nicholas Palomares for their early feedback." ]
[ "method", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "objective", "other", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Collecting together microblogs representing opinions about the same topics within the same timeframe is useful to a number of different tasks and practitioners.", "A major question is how to evaluate the quality of such thematic clusters.", "Here we create a corpus of microblog clusters from three different domains and time windows and define the task of evaluating thematic coherence.", "We provide annotation guidelines and human annotations of thematic coherence by journalist experts.", "We subsequently investigate the efficacy of different automated evaluation metrics for the task.", "We consider a range of metrics including surface level metrics, ones for topic model coherence and text generation metrics (TGMs).", "While surface level metrics perform well, outperforming topic coherence metrics, they are not as consistent as TGMs.", "TGMs are more reliable than all other metrics considered for capturing thematic coherence in microblog clusters due to being less sensitive to the effect of time windows.", "As social media gains popularity for news tracking, unfolding stories are accompanied by a vast spectrum of reactions from users of social media platforms.", "Topic modelling and clustering methods have emerged as potential solutions to challenges of filtering and making sense of large volumes of microblog posts (Rosa et al., 2011; Aiello et al., 2013; Resnik et al., 2015; Surian et al., 2016).", "Providing a way to access easily a wide range of reactions around a topic or event has the potential to help those, such as journalists (Tolmie et al., 2017), police (Procter et al., 2013), health (Furini and Menegoni, 2018) and public safety professionals (Procter et al., 2020), who increasingly rely on social media to detect and monitor progress of events, public opinion and spread of misinformation.", "Recent work on grouping together views about tweets expressing opinions about the same entities has obtained clusters of tweets by leveraging two topic models in a hierarchical approach (Wang et al., 2017b).", "The theme of such clusters can either be represented by their topN highest-probability words or measured by the semantic similarity among the tweets.", "One of the questions regarding thematic clusters is how well the posts grouped together relate to each other ( thematic coherence ) and how useful such clusters can be.", "For example, the clusters can be used to discover topics that have low coverage in traditional news media (Zhao et al., 2011).", "Wang et al. (2017a) employ the centroids of Twitter clusters as the basis for topic specific temporal summaries.", "The aim of our work is to identify reliable metrics for measuring thematic coherence in clusters of microblog posts .", "We define thematic coherence in microblogs as follows: Given clusters of posts that represent a subject or event within a broad topic, with enough diversity in the posts to showcase different stances and user opinions related to the subject matter, thematic coherence is the extent to which posts belong together, allowing domain experts to easily extract and summarise stories underpinning the posts.", "To measure thematic coherence of clusters we require robust domain-independent evaluation metrics that correlate highly with human judgement for coherence.", "A similar requirement is posed by the need to evaluate coherence in topic models.", "Roder et al. (2015) provide a framework for an extensive set of coherence measures all restricted to word-level analysis.", "Bianchi et al. 
"(2020) show that adding contextual information to neural topic models improves topic coherence.", "However, the most commonly used word-level evaluation of topic coherence still ignores the local context of each word.", "Ultimately, the metrics need to achieve an optimal balance between coherence and diversity, such that the resulting topics describe a logical exposition of views and beliefs with a low level of duplication.", "Here we evaluate thematic coherence in microblogs on the basis of topic coherence metrics, while also using research in text generation evaluation to assess semantic similarity and thematic relatedness.", "We consider a range of state-of-the-art text generation metrics (TGMs), such as BERTScore (Zhang et al., 2019), MoverScore (Zhao et al., 2019) and BLEURT (Sellam et al., 2020), which we repurpose for evaluating thematic coherence in microblogs and correlate with assessments of coherence by journalist experts.", "The main contributions of this paper are: We define the task of assessing thematic coherence in microblogs and use it as the basis for creating microblog clusters (Sec. 3).", "We provide guidelines for the annotation of thematic coherence in microblog clusters and construct a dataset of clusters annotated for thematic coherence spanning two different domains (political tweets and COVID-19 related tweets).", "The dataset is annotated by journalist experts and is available 1 to the research community (Sec. 3.5).", "We compare and contrast state-of-the-art TGMs against standard topic coherence evaluation metrics for thematic coherence evaluation and show that the former are more reliable in distinguishing between thematically coherent and incoherent clusters (Secs. 4, 5).", "A common approach to evaluating topic model coherence is to identify the latent connection between the topic words representing the topic.", "Once a function between two words is established, topic coherence can be defined as the (average) sum of the function values over all word pairs in the set of most probable words.", "Newman et al. (2010) use Pointwise Mutual Information (PMI) as the function of choice, employing co-occurrence statistics derived from external corpora.", "Mimno et al. (2011) subsequently showed that a modified version of PMI correlates better with expert annotators.", "AlSumait et al. (2009) identified junk topics by measuring the distance between the topic distribution and the corpus-wide distribution of words.", "1 https://doi.org/10.6084/m9.figshare.",
"Fang et al. (2016a) model topic coherence by setting the distance between two topic words to be the cosine similarity of their respective embedded vectors.", "Due to its generalisability potential, we follow this latter approach to topic coherence to measure thematic coherence in tweet clusters.", "We consider GloVe (Pennington et al., 2014) and BERTweet (Nguyen et al., 2020) embeddings, derived from language models pre-trained on large external Twitter corpora.", "To improve performance and reduce sensitivity to noise, we followed the work of Lau and Baldwin (2016), who consider the mean topic coherence over several topic cardinalities |W| ∈ {5, 10, 15, 20}.", "Another approach to topic coherence involves detecting intruder words given a set of topic words, an intruder and a document.", "If the intruder is identified correctly, then the topic is considered coherent.", "Researchers have explored varying the number of 'intruders' (Morstatter and Liu, 2018) and automating the task of intruder detection (Lau et al., 2014).", "There is also work on topic diversity (Nan et al., 2019).", "However, there is a trade-off between diversity and coherence (Wu et al., 2020), meaning that high diversity for topic modelling is likely to be in conflict with thematic coherence, the main focus of this paper.", "Moreover, we ensure semantic diversity of microblog clusters through our sampling strategy (see Sec. 3.4).", "Text Generation Metrics: TGMs have been of great use in applications such as machine translation (Zhao et al., 2019; Zhang et al., 2019; Guo and Hu, 2019; Sellam et al., 2020), text summarisation (Zhao et al., 2019) and image captioning (Vedantam et al., 2015; Zhang et al., 2019; Zhao et al., 2019), where a machine-generated response is evaluated against ground-truth data constructed by human experts.", "Recent advances in contextual language modeling outperform the traditionally used BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) scores, which rely on surface-level n-gram overlap between the candidate and the reference.", "In our work, we hypothesise that metrics based on contextual embeddings can be used as a proxy for microblog cluster thematic coherence.", "Specifically, we consider the following TGMs:", "(a) BERTScore is an automatic evaluation metric based on BERT embeddings (Zhang et al., 2019).", "The metric is tested for robustness on adversarial paraphrase classification.", "However, it is based on a greedy approach, where every reference token is linked to the most similar candidate token, leading to a time-performance trade-off.", "The harmonic mean FBERT is chosen for our task due to its most consistent performance (Zhang et al., 2019).", "(b) MoverScore (Zhao et al., 2019) expands on BERTScore and generalises Word Mover Distance (Kusner et al., 2015) by allowing soft (many-to-one) alignments.", "The task of measuring semantic similarity is tackled as an optimisation problem with the constraints given by n-gram weights computed in the corpus.", "In this paper, we adopt this metric for unigrams and bigrams as the preferred embedding granularity.", "(c) BLEURT (Sellam et al., 2020) is a state-of-the-art evaluation metric also stemming from the success of BERT embeddings, carefully curated to compensate for problematic training data.", "Its authors devised a novel pre-training scheme leveraging vast amounts of synthetic data generated through BERT mask-filling, back-translation and word dropping.", "This allows BLEURT to perform robustly in cases of scarce and imbalanced data.",
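The embedding-based topic coherence adopted here might be sketched as follows: mean pairwise cosine similarity of the top-|W| topic words, averaged over the cardinalities listed above, following Lau and Baldwin (2016). `embed` is an assumed word-to-vector lookup (GloVe or BERTweet); the rest is an illustration, not the authors' code.

```python
# Sketch of embedding-based topic coherence averaged over cardinalities.
from itertools import combinations
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def topic_coherence(topic_words, embed, cardinalities=(5, 10, 15, 20)):
    means = []
    for n in cardinalities:
        pairs = combinations(topic_words[:n], 2)
        means.append(np.mean([cosine(embed(a), embed(b)) for a, b in pairs]))
    return float(np.mean(means))
```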
"Notation We use C = { C 1 , ..., C n } to denote a set of clusters C i .", "Each cluster C i is represented by the pair C i = ( T i , W i ) , where T i and W i represent the set of tweets and top-20 topic words of the dominant latent topic in C i , respectively.", "The task of identifying thematic coherence in microblog clusters is formalised as follows: Given a set of clusters C , we seek to identify a metric function f : C R s.t. high values of f ( C i ) correlate with human judgements for thematic coherence.", "Here we present", "(a) the creation of a corpus of topic clusters of tweets C and", "(b) the annotation process for thematic coherence.", "(a) involves a clustering (Sec. 3.2), a filtering (Sec. 3.3) and a sampling step (Sec. 3.4);", "(b) is described in (Sec. 3.5).", "Experiments to identify a suitable function f are in Sec. 4. 3.1 Data Sources We used three datasets pertaining to distinct domains and collected over different time periods as the source of our tweet clusters.", "The COVID-19 dataset (Chen et al., 2020) was collected by tracking COVID-19 related keywords (e.g., coronavirus , pandemic , stayathome ) and accounts (e.g., @CDCemergency , @HHSGov , @DrT-edros ) through the Twitter API from January to May 2020.", "This dataset covers specific recent events that have generated significant interest and its entries reflect on-going issues and strong public sentiment regarding the current pandemic.", "The Election dataset was collected via the Twitter Firehose and originally consisted of all geo-located UK tweets posted between May 2014 and May 2016 2 .", "It was then filtered using a list of 438 election-related keywords relevant to 9 popular election issues 3 and a list of 71 political party aliases curated by a team of journalists (Wang et al., 2017c).", "The PHEME dataset (Zubiaga et al., 2016) of rumours and non-rumours contains tweet conversation threads consisting of a source tweet and associated replies, covering breaking news pertaining to 9 events (i.e., Charlie Hebdo shooting, German-wings airplane crash, Ferguson unrest, etc.).", "These datasets were selected because they cover a wide range of topics garnering diverse sentiments and opinions in the Twitter sphere, capturing newsworthy stories and emerging phenomena of interest to journalists and social scientists.", "Of particular interest was the availability of stories, comprising groups of tweets, in the PHEME dataset, which is why we consider PHEME tweet clusters separately.", "The task of thematic coherence evaluation introduced in this paper is related to topic modelling evaluation, where it is common practice ( Mimno et al. (2011), Newman et al. (2010)) to gauge the coherence level of automatically created groups of topical words.", "In a similar vein, we evaluate thematic coherence in tweet clusters obtained automatically for the Election and COVID-19 datasets.", "The clusters were created in the following way: Tweets mentioning the same keyword posted within the same time window (3 hours for Election , 1 hour for Covid-19 ) were clustered according to the two-stage clustering approach by Wang et al. 
"We chose this approach as it has shown competitive performance over several tweet clustering tasks, without requiring a pre-defined number of clusters.", "2 Unlike the Twitter API, the Firehose provides 100% of the tweets that match user-defined criteria, which in our case is a set of geo-location and time zone Twitter PowerTrack operators.", "3 EU and immigration, economy, NHS, education, crime, housing, defense, public spending, environment and energy.", "The PHEME dataset is structured into conversation threads, where each source tweet is assigned a story label.", "We assume that each story and the corresponding source tweets form a coherent thematic cluster, since they have been manually annotated by journalists.", "Thus the PHEME stories can be used as a gold standard for thematically coherent clusters.", "We also created artificial thematically incoherent clusters from PHEME.", "For this purpose we mixed several stories in different proportions.", "We designed artificial clusters to cover all types of thematic incoherence, namely: Random, Intruded, Chained (see Sec. 3.5 for definitions).", "For Intruded, we diluted stories by eliminating a small proportion of their original tweets and introducing a minority of foreign content from other events.", "For Chained, we randomly chose the number of subjects (varying from 2 to 5) to feature in a cluster, chose the number of tweets per subject, and then constructed the 'chain of subjects' by sampling tweets from a set of randomly chosen stories.", "Finally, Random clusters were generated by sampling tweets from all stories, ensuring no single story represented more than 20% of a cluster.", "These artificial clusters from PHEME serve as ground-truth data for thematic incoherence.", "For the automatically collected clusters (COVID-19 and Election) we followed a series of filtering steps: duplicate tweets, non-English 4 tweets and ads were removed, and only clusters containing 20-50 tweets were kept.", "As we sought to mine stories and associated user stances, opinionated clusters were prioritised.", "The sentiment analysis tool VADER (Gilbert and Hutto, 2014) was leveraged to gauge subjectivity in each cluster: a cluster is considered to be opinionated if the majority of its tweets express strong sentiment polarity. 5", "VADER was chosen for its reliability on social media text and for its capacity to assign granulated sentiment valences; this allowed us to readily label millions of tweets and impose our own restrictions to classify neutral/non-neutral instances by varying the thresholds for the VADER compound score.", "4 https://pypi.org/project/langdetect/", "5 The absolute value of the VADER compound score is required to be > 0.5, a much stricter condition than that used originally (Gilbert and Hutto, 2014).",
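The opinionated-cluster filter described above might be implemented as in this sketch, assuming the vaderSentiment package; the majority rule and the 0.5 cut-off follow the text, while the function name is ours.

```python
# Sketch of the opinionated-cluster filter: a cluster passes if most of its
# tweets have a VADER compound score whose absolute value exceeds 0.5.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def is_opinionated(tweets, threshold=0.5):
    strong = sum(abs(analyzer.polarity_scores(t)["compound"]) > threshold
                 for t in tweets)
    return strong > len(tweets) / 2
```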
"Work on assessing topic coherence operates on either the entire dataset (Fang et al., 2016b) or a random sample of it (Newman et al., 2010; Mimno et al., 2011).", "Fully annotating our entire dataset of thematic clusters would be too time-consuming, as the labelling of each data point involves reading dozens of posts rather than a small set of topical words.", "On the other hand, purely random sampling from the dataset cannot guarantee cluster diversity in terms of different levels of coherence.", "Thus, we opt for a more complex sampling strategy inspired by stratified sampling (Singh and Mangat, 2013), allowing more control over how the data is partitioned in terms of keywords and scores.", "After filtering, Election and COVID-19 contained 46,715 and 5,310 clusters, respectively.", "We chose to sample 100 clusters from each dataset such that they: derive from a semantically diverse set of keywords (required for Election only); represent varying levels of coherence (both); and represent a range of time periods (both).", "We randomly subsampled 10 clusters from each keyword with more than 100 clusters and kept all clusters with under-represented keywords (associated with fewer than 100 clusters).", "This resulted in 2k semantically diverse clusters for Election.", "TGM scores were leveraged to allow the selection of clusters with diverse levels of thematic coherence in the pre-annotation dataset.", "Potential score ranges for each coherence type were modelled on the PHEME dataset (see Secs. 3.2, 3.5), which is used as a gold standard for cluster coherence/incoherence.", "For each metric M and each coherence type CT, we defined the associated interval to be I(M)_CT = [μ - 2σ, μ + 2σ], where μ and σ are the mean and standard deviation of the set of metric scores M characterising clusters of coherence type CT.", "We thus account for 95% of the data. 6", "We did not consider metrics M for which the overlap between I(M)_Good, I(M)_Intruded-Chained 7 and I(M)_Random was significant, as this implied the metric was unreliable.", "6 Both the Shapiro-Wilk and Anderson-Darling statistical tests showed that the PHEME data is normally distributed.", "7 Intruded and Chained clusters mostly define the intermediate level of coherence, so their score ranges are similar, hence the two groups are unified.", "As we did not wish to introduce metric bias when sampling the final dataset, we subsampled clusters across the intersection of all suitable metrics for each coherence type CT.", "In essence, our final clusters were sampled from each of the sets C_CT = {C_i | M(C_i) ∈ I(M)_CT for every metric M}.", "For each of COVID-19 and Election we sampled 50 clusters from C_Good, 25 clusters from C_Intruded-Chained and 25 clusters from C_Random.",
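The interval-based pre-selection just described could be sketched as follows; the data structures (`metrics` as a name-to-scoring-function map, `pheme_scores[ct][name]` as the PHEME gold-standard scores) are assumptions made for illustration.

```python
# Sketch of the score-interval sampling: a cluster is a candidate for
# coherence type ct only if every metric's score lies in that metric's
# PHEME-derived interval [mu - 2*sigma, mu + 2*sigma].
import numpy as np

def interval(scores):
    mu, sigma = np.mean(scores), np.std(scores)
    return mu - 2 * sigma, mu + 2 * sigma

def candidates(clusters, metrics, pheme_scores, ct):
    bounds = {name: interval(pheme_scores[ct][name]) for name in metrics}
    return [c for c in clusters
            if all(lo <= metrics[name](c) <= hi
                   for name, (lo, hi) in bounds.items())]
```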
"Coherence annotation was carried out in four stages by three annotators.", "We chose experienced journalists as they are trained to quickly and reliably identify salient content.", "An initial pilot study including the journalists and the research team was conducted; this involved two rounds of annotation and subsequent discussion to align the team's understanding of the guidelines (for the guidelines see Appendix B).", "The first stage tackled tweet-level annotation within clusters and drew inspiration from the classic task of word intrusion (Chang et al., 2009): annotators were asked to group together tweets discussing a common subject; tweets considered to be 'intruders' were assigned to groups of their own.", "Several such groups can be identified in a cluster depending on the level of coherence.", "This grouping served as a building block for subsequent stages.", "This sub-clustering step offers a good trade-off between high annotation costs and manual evaluation, since manually creating clusters from thousands of tweets is impractical.", "We note that agreement between journalists is not evaluated at this first stage, as obtaining exact sub-clusters is not our objective.", "However, vast differences in sub-clustering are captured in the next stages in quality judgment and issue identification (see below).", "The second stage concerned cluster quality assessment, which is our primary task.", "Similar to Newman et al. (2010) for topic words, annotators evaluated tweet cluster coherence on a 3-point scale (Good, Intermediate, Bad).", "Good coherence is assigned to a cluster where the majority of tweets belong to the same theme (sub-cluster), while clusters containing many unrelated themes (sub-clusters) are assigned bad coherence.", "The third stage pertains to issue identification for clusters of low coherence, similar to Mimno et al. (2011).", "When either Intermediate or Bad is chosen in stage 2, annotators can select from a list of issues to justify their choice: Chained: several themes are identified in the cluster (with some additional potential random tweets), without clear connection between any two themes.", "Intruded: only one common theme can be identified among some tweets in the cluster and the rest have no clear connection to the theme or to each other.", "Random: no themes can be identified and there is no clear connection among tweets in the cluster.", "Inter-annotator agreement (IAA) was computed separately for the second and third stages as they serve different purposes.", "For the second stage (cluster quality), we obtain an average Spearman correlation of r_s = 0.73, which is comparable to previous coherence evaluation scores in the topic modelling literature (Newman et al. (2010) with r_s = 0.73/0.78 and Aletras and Stevenson (2013) with r_s = 0.70/0.64/0.54), and an average Cohen's Kappa of κ = 0.48 (moderate IAA).", "For the third stage (issue identification), we compute an average κ = 0.36 (fair IAA).", "Analysis of pairwise disagreement in stage 2 shows only 2% is due to division in opinion over Good-Bad clusters.", "Good-Intermediate and Intermediate-Bad cases account for 37% and 61% of disagreements, respectively.", "This is encouraging, as annotators almost never have polarising views on cluster quality and primarily agree on the coherence of a good cluster, the main goal of this task.", "For issue identification, the majority of disagreements (49%) consists in distinguishing Intermediate-Chained cases.", "This can be explained by the expected differences in identifying sub-clusters in the first stage.", "For the adjudication process, we found that a majority always exists and thus the final score was assigned to be the majority label (2/3 annotators).", "Table 1 presents a summary of the corpus size, coherence quality and issues identified for COVID-19 and Election (see Appendix C for a discussion).", "Table 1: Statistics of the annotated clusters, where the final label is assigned to be the majority label. COVID-19: 100 clusters, 2,955 tweets, 100K tokens; quality: 18 Good, 31 Intermediate, 51 Bad; issues: 32 Intruded, 25 Chained, 25 Random. Election: 100 clusters, 2,650 tweets, 52K tokens; quality: 25 Good, 50 Intermediate, 25 Bad; issues: 28 Intruded, 33 Chained, 14 Random.", "Our premise is that a pair of sentences scoring high in terms of TGMs means that the sentences are semantically similar.", "When this happens across many sentences in a cluster, this denotes good cluster coherence.", "Following Douven and Meijs (2007), we consider three approaches to implementing and adapting TGMs to the task of measuring thematic coherence.",
"The differences between these methods consist of:", "(a) the choice of the set of tweet pairs S ⊆ T × T on which we apply the metrics, and", "(b) the score-aggregating function f(C) assigning coherence scores to clusters.", "The TGMs employed in our study are BERTScore (Zhang et al., 2019), MoverScore (Zhao et al., 2019) for both unigrams and bigrams, and BLEURT (Sellam et al., 2020).", "We also employed a surface-level metric based on cosine similarity distances between TF-IDF representations 8 of tweets to judge the influence of word co-occurrences in coherence analysis.", "8 Tweets are embedded into a vector space of TF-IDF representations within their corresponding cluster.", "Each approach has its own advantages and disadvantages, which are outlined below.", "4.1 Exhaustive approach: In this case S = T × T, i.e., all possible tweet pairs within the cluster are considered.", "The cluster is assigned the mean over all pairwise scores.", "This approach is not biased towards any tweet pairs, so it is able to penalise any tweet that is off-topic.", "However, it is computationally expensive, as it requires O(|T|²) operations.", "Formally, given a TGM M, we define this approach as f(C) = (1 / C(|T|, 2)) · Σ_{tweet_i, tweet_j ∈ T, i < j} M(tweet_i, tweet_j), where C(|T|, 2) = |T|(|T| - 1)/2 is the number of unordered tweet pairs.", "4.2 Representative tweet approach: We assume there exists a tweet able to summarise the content in the cluster, denoted as the representative tweet (i.e., tweet_rep).", "This is formally defined as tweet_rep(C) = argmin_{tweet_i ∈ C} D_KL(θ_C, tweet_i), where θ_C denotes the word distribution of the topic representing the cluster C, and we compute the Kullback-Leibler divergence (D_KL) between θ_C and the word distribution of each tweet in C (Wan and Wang, 2016); we describe the computation of D_KL in Appendix A.", "We also considered other text summarisation methods (Basave et al., 2014; Wan and Wang, 2016) such as MEAD (Radev et al., 2000) and LexRank (Erkan and Radev, 2004) to extract the best representative tweet, but our initial empirical study indicated that D_KL consistently finds the most appropriate representative tweet.", "In this case, cluster coherence is defined as f(C) = (1/|T|) Σ_{tweet_i ∈ T} M(tweet_i, tweet_rep) and has linear time complexity O(|T|).", "As S = {(tweet, tweet_rep) | tweet ∈ T} ⊆ T × T, the coherence of a cluster is heavily influenced by the correct identification of the representative tweet.", "4.3 Graph approach: Similar to the work of Erkan and Radev (2004), each cluster of tweets C can be viewed as a complete weighted graph with nodes represented by the tweets in the cluster and each edge between tweet_i and tweet_j assigned the weight w_{i,j} = M(tweet_i, tweet_j)⁻¹.", "In the process of constructing a complete graph, all possible pairs of tweets within the cluster are considered.", "Hence S = T × T, with time complexity O(|T|²) as in Section 4.1.", "In this case, the coherence of the cluster is computed as the average closeness centrality of the associated cluster graph.", "This is a measure derived from graph theory, indicating how 'close' a node is on average to all other nodes; as this definition intuitively corresponds to coherence within graphs, we included it in our study.", "The closeness centrality for the node representing tweet_i is given by CC(tweet_i) = (|T| - 1) / Σ_{tweet_j ∈ T} d(tweet_j, tweet_i), where d(tweet_j, tweet_i) is the shortest distance between nodes tweet_i and tweet_j computed via Dijkstra's algorithm.", "Note that as Dijkstra's algorithm only allows for non-negative graph weights and BLEURT's values are mostly negative, we did not include this TGM in the graph approach implementation.", "Here cluster coherence is defined as the average over all closeness centrality scores of the nodes in the graph: f(C) = (1/|T|) Σ_{tweet_i ∈ T} CC(tweet_i).",
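The three aggregation approaches might be sketched as below. `metric` stands for any pairwise similarity M(t1, t2) (a TGM or TF-IDF cosine) and is assumed to return positive values so that inverse similarities are valid distances; `rep` is the representative tweet chosen via the KL-divergence step above. This is an illustration under those assumptions, not the authors' implementation.

```python
# Sketch of the exhaustive, representative-tweet, and graph approaches.
from itertools import combinations
import networkx as nx

def exhaustive(cluster, metric):
    pairs = list(combinations(cluster, 2))        # all unordered tweet pairs
    return sum(metric(a, b) for a, b in pairs) / len(pairs)

def representative(cluster, metric, rep):
    return sum(metric(t, rep) for t in cluster) / len(cluster)

def graph(cluster, metric):
    # Complete graph with inverse similarities as edge distances, scored by
    # the mean closeness centrality of its nodes.
    g = nx.Graph()
    for i, j in combinations(range(len(cluster)), 2):
        g.add_edge(i, j, weight=1.0 / metric(cluster[i], cluster[j]))
    cc = nx.closeness_centrality(g, distance="weight")
    return sum(cc.values()) / len(cc)
```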
"Table 2 presents the four best and four worst performing metrics (for the full list of metric results refer to Appendix A).", "MoverScore variants are not included in the results discussion as they only achieve average performance.", "Election and COVID-19: Exhaustive TF-IDF and Graph TF-IDF consistently outperformed TGMs, implying that clusters with a large overlap of words are likely to have received higher coherence scores.", "While TF-IDF metrics favour surface level co-occurrence and disregard deeper semantic connections, we conclude that, by design, all posts in the thematic clusters (posted within a 1h or 3h window) are likely to use similar vocabulary.", "Nevertheless, TGMs correlate well with human judgement, implying that semantic similarity is a good indicator for thematic coherence: Exhaustive BERTScore performs the best of all TGMs in Election, while Exhaustive BLEURT is the strongest competitor to TF-IDF based metrics for COVID-19.", "On the low end of the performance scale, we have found topic coherence to be overwhelmingly worse compared to all the TGMs employed in our study.", "BERTweet improves over GloVe embeddings, but only slightly, as when applied at the word level (for topic coherence) it is not able to benefit from the context of individual words.", "We followed Lau and Baldwin (2016), and computed average topic coherence across the top 5, 10, 15, 20 topical words in order to obtain a more robust performance (see Avg Topic Coherence GloVe, Avg Topic Coherence BERTweet).", "Results indicate that this smoothing technique correlates better with human judgement for Election, but lowers performance further in COVID-19 clusters.", "In terms of the three approaches, we have found that the Exhaustive and Graph approaches perform similarly to each other and both outperform the Representative Tweet approach.", "Sacrificing time as a trade-off for quality, the results indicate that metrics considering all possible pairs of tweets achieve higher correlation with annotator rankings.", "PHEME: The best performance on this dataset is seen with the TGM BLEURT, followed closely by BERTScore.", "While TF-IDF based metrics are still in the top four, surface level evaluation proves to be less reliable: PHEME stories are no longer constrained by strict time windows (stories were generated across several days, rather than hours, by tracking on-going breaking news events on Twitter), which allows the tweets within each story to be more lexically diverse, while still maintaining coherence.", "In such instances, strategies depending exclusively on word frequencies perform inconsistently, which is why metrics employing semantic features (BLEURT, BERTScore) outperform TF-IDF ones.", "Note that PHEME data lack the topic coherence evaluation, as these clusters were not generated through topic modelling (see Subsection 3.2).", "We analysed several thematic clusters to get a better insight into the results.", "Tables 3 and 4 show representative fragments from 2 clusters labelled as 'good' in the COVID-19 dataset.", "The first cluster contains posts discussing the false rumour that bleach is an effective cure for COVID-19, with the majority of users expressing skepticism.", "As most tweets in this cluster directly quote the rumour and thus share a significant overlap of words, TF-IDF based scores are, not surprisingly, high (Exhaustive TF-IDF = 0.109); a fragment of this cluster is shown below.",
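Before the cluster fragment, here is a sketch of the surface-level baseline discussed above: exhaustive coherence over TF-IDF vectors fitted within a single cluster. The example tweets are toy paraphrases, not the annotated data.

```python
# Exhaustive TF-IDF coherence: mean pairwise cosine similarity of TF-IDF
# vectors fitted on the cluster itself.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def exhaustive_tfidf_coherence(tweets):
    X = TfidfVectorizer().fit_transform(tweets)  # per-cluster TF-IDF space
    sims = cosine_similarity(X)
    pairs = list(combinations(range(len(tweets)), 2))
    return sum(sims[i, j] for i, j in pairs) / len(pairs)

bleach_cluster = [
    "conspiracy nuts tout drinking bleach as a miracle cure for coronavirus",
    "conspiracy theorists tout drinking bleach as a miracle cure for coronavirus",
    "is a quart each sufficient?",
]
# High word overlap between tweets inflates this surface-level score.
print(exhaustive_tfidf_coherence(bleach_cluster))
```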
"Cluster fragment (COVID-19 dataset; Cluster Annotation: Good; Common Keyword: 'coronavirus'). Tweets: Trump-loving conspiracy nuts tout drinking bleach as a 'miracle' cure for coronavirus They may have found a cure for Trump lovers and MAGA but not anything else # MAGAIDIOTS # TestOnDonJr # OneVoice | ProTrump conspiracy theorists tout drinking bleach as a 'miracle' cure for coronavirus | Trump-loving conspiracy nuts tout drinking bleach as a 'miracle' cure for coronavirus DRINK UP, MAGAts! Isn't this just a problem solving itself? # Darwinism | Trump-loving conspiracy nuts tout drinking bleach as a 'miracle' cure for coronavirus | Trump-loving conspiracy nuts tout drinking bleach as a 'miracle' cure for coronavirus ... Is a quart each sufficient? Will go multiple gallons-gratis.", "In the second cluster, however, users challenge the choices of the American President regarding the government's pandemic reaction: though the general feeling is unanimous in all posts of the second cluster, these tweets employ a more varied vocabulary.", "Consequently, surface level metrics fail to detect the semantic similarity (Exhaustive TF-IDF = 0.040).", "When co-occurrence statistics are unreliable, TGMs are more successful at detecting the common 'story' diversely expressed in the tweets: in fact, Exhaustive BLEURT assigns similar scores to both clusters (-0.808 for Cluster 1 and -0.811 for Cluster 2) in spite of the vast difference in their content intersection, which shows a more robust evaluation capability.", "We analyse the correlation between topic coherence and annotator judgement in Tables 5 and 6.", "Both are illustrative fragments of clusters extracted from the Election dataset.", "Though all tweets in Table 5 share the keyword 'oil', they form a bad random cluster type, equivalent to the lowest level of coherence.", "On the other hand, Table 6 clearly presents a good cluster regarding an immigration tragedy at sea.", "Although this example pair contains clusters on opposite sides of the coherence spectrum, topic coherence metrics fail to distinguish the clear difference in quality between the two.", "Moreover, Table 6 receives lower scores (TC GloVe = 0.307) than its incoherent counterpart (TC GloVe = 0.330) for GloVe Topic Coherence.", "However, the TGM metric BERTScore and the surface-level metric TF-IDF correctly evaluate the two clusters by penalising incoherence (Exhaustive BERTScore = 0.814 and Exhaustive TF-IDF = 0.024) and rewarding good clusters (Exhaustive BERTScore = 0.854 and Exhaustive TF-IDF = 0.100).", "We have defined the task of creating topic-sensitive clusters of microblogs and evaluating their thematic coherence.", "To this effect we have investigated the efficacy of different metrics, both from the topic modelling literature and text generation metrics (TGMs).", "We have found that TGMs correlate much better with human judgement of thematic coherence compared to metrics employed in topic model evaluation.", "TGMs maintain a robust performance across different time windows and are generalisable across several datasets.", "In future work we plan to use TGMs in this way to identify thematically coherent clusters.", "Table 5: Cluster fragment from the Election dataset; Cluster Annotation: Bad Random; Common Keyword: 'oil'; TC GloVe = 0.330, Exhaustive BERTScore = 0.814 and Exhaustive TF-IDF = 0.024. Tweets: M'gonna have a nap, I feel like I've drank a gallon of like grease or oil or whatever bc I had fish & chips like 20 minutes ago | Check out our beautiful, nostalgic oil canvasses. These stunning images will take you back to a time when life... | Five years later, bottlenose dolphins are STILL suffering from BP oil disaster in the Gulf. Take action! | Once the gas and oil run out countries like Suadia Arabia and Russia won't be able to get away with half the sh*t they can now | Ohhh this tea tree oil is burning my face off.",
"This work was supported by a UKRI/EPSRC Turing AI Fellowship to Maria Liakata (grant no. EP/V030302/1) and The Alan Turing Institute (grant no. EP/N510129/1).", "We would like to thank our 3 annotators for their invaluable expertise in constructing the datasets.", "We also thank the reviewers for their insightful feedback.", "Finally, we would like to thank Yanchi Zhang for his help in the redundancy correction step of the pre-processing.", "Ethics approval to collect and to publish extracts from social media datasets was sought and received from Warwick University Humanities & Social Sciences Research Ethics Committee.", "During the annotation process, tweet handles, with the exception of public figures, organisations and institutions, were anonymised to preserve author privacy rights.", "In the same manner, when the datasets are released to the research community, only tweet IDs will be made available along with associated cluster membership and labels.", "Compensation rates were agreed with the annotators before the annotation process was launched.", "Remuneration was paid fairly at an hourly rate at the end of the task." ]
[ "abstain", "abstain", "method", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "method", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "method", "other", "other", "other", "method", "other", "other", "method", "other", "other", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Abstract Cross-lingual natural language inference (XNLI) is a fundamental task in cross-lingual natural language understanding.", "Recently this task is commonly addressed by pre-trained cross-lingual language models.", "Existing methods usually enhance pre-trained language models with additional data, such as annotated parallel corpora.", "These additional data, however, are rare in practice, especially for low-resource languages.", "Inspired by recent promising results achieved by prompt-learning, this paper proposes a novel prompt-learning based framework for enhancing XNLI.", "It reformulates the XNLI problem to a masked language modeling problem by constructing cloze-style questions through cross-lingual templates.", "To enforce correspondence between different languages, the framework augments a new question for every question using a sampled template in another language and then introduces a consistency loss to make the answer probability distribution obtained from the new question as similar as possible with the corresponding distribution obtained from the original question.", "Experimental results on two benchmark datasets demonstrate that XNLI models enhanced by our proposed framework significantly outperform original ones under both the full-shot and few-shot cross-lingual transfer settings.", "Cross-lingual language understanding (XLU) plays a vital role in multilingual systems.", "It aims at training a model in a source language which is then applied to other languages.", "Cross-lingual natural language inference (XNLI) is a challenge task for evaluating XLU (Conneau et al., 2018).", "Natural language inference (NLI) aims to determine the inferential relationship between the text of a premise and the text of a hypothesis while XNLI upgrades NLI to the cross-lingual scenarios.", "Nowadays pre-trained cross-lingual language models (Conneau and Lample, 2019; Conneau et al., 2020) have become a dominant paradigm for XLU, significantly improving the performance in various XLU tasks included XNLI.", "Existing methods (Huang et al., 2019; Chi et al., 2021a,b) usually utilize various auxiliary tasks to improve the cross-lingual transferability of a pre-trained cross-lingual language model, mainly relying on annotated parallel corpora.", "In practice, these methods can hardly work for low-resource language scenarios where parallel corpora are rare.", "Recently, prompt-learning based methods (Schick and Schtze, 2021; Shin et al., 2020) have shown to achieve promising results for few-shot natural language processing (NLP).", "These methods reformulate the text classification problem into a masked language modeling problem.", "In particular, the work (Zhao and Schtze, 2021) demonstrates that prompt-learning outperforms fine-tuning in few-shot XNLI.", "We argue that the effectiveness of prompt-learning in XNLI still needs to be explored by a larger margin.", "The reasons are two-fold.", "On one hand, the effectiveness of prompt-learning in XNLI under the full-shot setting is still unknown.", "On the other hand, the way to make the best of question templates is unexplored yet.", "The work (Zhao and Schtze, 2021) uses a uniform template in English for all examples in different languages.", "This way can hardly capture language-specific characteristics in XNLI, especially for those languages that are right-to-left written such as Arabic and Urdu.", "We naturally expect that language-specific question templates lead to higher performance in XNLI.", "Figure 1 illustrates how 
language-specific question templates are used.", "The second sub-figure shows the uniform question template used in (Zhao and Schütze, 2021) to handle an Arabic example, where the corresponding example in English is shown in the first sub-figure.", "The last sub-figure shows the Arabic-specific question template used for the same Arabic example, which is written right-to-left and conforms to Arabic grammar.", "In order to introduce language-specific characteristics in question templates while capturing correspondence between different languages, we propose a novel prompt-learning based framework named PCT (short for Prompt-learning from Cross-lingual Templates) for XNLI.", "As illustrated in Figure 2, PCT first constructs a cloze-style question by filling the template in the source language (namely English), then randomly samples a template in another language (such as Chinese) to construct an augmented question, where the augmented question is written in two languages and thus its template is called a cross-lingual template.", "Both the original question and the augmented question are fed into a pre-trained cross-lingual language model to calculate the answer probability distributions for inferential relationships that are represented by predefined tokens mapped from the mask token.", "To enforce answer consistency for the two questions, i.e., to make the two probability distributions of inferential relationships as similar as possible, the two probability distributions are regularized by the Kullback-Leibler divergence (KLD) loss.", "The entire model is trained by minimizing the sum of the cross-entropy loss for classification accuracy and the KLD loss for answer consistency.", "We employ PCT to enhance the pre-trained cross-lingual language models XLM-R (Conneau et al., 2020) and INFOXLM (Chi et al., 2021a).", "Experimental results on the XNLI (Conneau et al., 2018) benchmark and the PAWS-X (Yang et al., 2019) benchmark show that PCT improves the original models by a significant margin under both the full-shot and few-shot cross-lingual transfer settings.", "1. We propose a novel prompt-learning based framework for XNLI.", "In this framework, a data augmentation strategy is introduced which relies merely on predefined cross-lingual templates; moreover, a consistency loss is introduced to enforce similar output probability distributions for any two languages so as to capture correspondence between different languages.",
"2. We conduct extensive experiments on two large-scale benchmarks to demonstrate significant improvements achieved by the proposed framework, under both the full-shot and few-shot cross-lingual transfer settings.", "To date, XLU tasks including XNLI are widely addressed by pre-trained cross-lingual language models (Devlin et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020).", "Multilingual BERT (mBERT) (Devlin et al., 2019) extends the basic pre-trained language model BERT by training with multilingual corpora.", "XLM (Conneau and Lample, 2019) enhances mBERT by introducing the translation language modeling (TLM) objective.", "XLM-RoBERTa (XLM-R) (Conneau et al., 2020) trains XLM with larger corpora and more epochs.", "Cross-lingual language models can further be enhanced by post-training tasks that rely on large-scale parallel corpora.", "UNICODER (Huang et al., 2019) introduces several post-training tasks to utilize parallel corpora.", "INFOXLM (Chi et al., 2021a) enhances XLM-R by introducing a cross-lingual contrastive learning task using 42 GB of parallel corpora.", "XLM-ALIGN (Chi et al., 2021b) introduces a denoising word alignment pre-training task using several parallel corpora.", "These enhancements can hardly be applied to low-resource languages for which parallel corpora are rare.", "To alleviate the dependence on parallel corpora, some data augmentation strategies have been proposed for XNLI.", "TMAN (Qi and Du, 2020) enhances XNLI by exploiting adversarial training on translated data.", "The work (Dong et al., 2021) proposes a data augmentation strategy for XNLI by generating augmented data from a pre-trained sequence-to-sequence model.", "UXLA (Bari et al., 2021) improves the performance of XNLI by data augmentation and unsupervised sample selection.", "All these strategies require a large amount of external resources for data augmentation.", "In contrast, our proposal augments data using only predefined cross-lingual templates.", "Recently, prompt-learning based methods have been shown to achieve promising results in various few-shot NLP tasks.", "The key of these methods is reformulating the text classification problem into a masked language modeling problem by constructing cloze-style questions.", "The work (Schick and Schütze, 2021) applies prompt-learning to text classification (including NLI) with manually defined templates.", "The work (Shin et al., 2020) proposes to search for optimal discrete templates by a gradient based approach.", "Several approaches (Li and Liang, 2021; Liu et al., 2021; Han et al., 2021) have been proposed to search continuous prompts.", "The work (Zhao and Schütze, 2021) compares prompt-learning with fine-tuning in few-shot XNLI.", "Different from (Zhao and Schütze, 2021), this work significantly advances prompt-learning in XNLI by introducing a new data augmentation strategy and a new consistency loss for regularization.", "The effectiveness of prompt-learning is also demonstrated further under both the full-shot and few-shot cross-lingual transfer settings.", "The proposed PCT framework is illustrated in Figure 2.
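A minimal sketch of PCT's question construction follows: the original cloze question uses the source-language (English) template, and the augmented question re-fills the same (premise, hypothesis) pair with a template sampled from another language. The English, French and German templates below follow the ones listed in the paper's Appendix A (with the extraction-mangled "Rponse" restored to "Réponse"); the [P]/[H] slots are rewritten as format fields.

```python
# Constructing the (original, augmented) cloze-question pair used by PCT.
import random

TEMPLATES = {
    "en": "<s>{p}</s></s>Question: {h}? Answer: <mask></s>",
    "fr": "<s>{p}</s></s>Question: {h}? Réponse: <mask></s>",
    "de": "<s>{p}</s></s>Frage: {h}? Antwort: <mask></s>",
}

def make_question(premise: str, hypothesis: str, lang: str) -> str:
    """Fill the [P] and [H] slots of a language's template."""
    return TEMPLATES[lang].format(p=premise, h=hypothesis)

def make_training_pair(premise: str, hypothesis: str, src: str = "en"):
    """Original question from the source template, plus an augmented question
    from a template sampled uniformly from the other languages."""
    original = make_question(premise, hypothesis, src)
    other = random.choice([lang for lang in TEMPLATES if lang != src])
    return original, make_question(premise, hypothesis, other)

orig_q, aug_q = make_training_pair("I hope to hear from you soon.",
                                   "I look forward to your reply")
print(orig_q)
print(aug_q)
```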
"For every training triple (premise, hypothesis, label) in English, PCT first constructs a cloze-style question by filling the template in English, then samples a predefined template from another language such as Chinese to construct an augmented question.", "Both the original question and the augmented question are fed into a pre-trained cross-lingual model to calculate the answer distributions of the mask token, through the masked language modeling (MLM) layer in the pre-trained cross-lingual model.", "The entire model is trained by minimizing the cross-entropy loss for classification accuracy and the Kullback-Leibler divergence (KLD) loss for answer consistency.", "The training phase of PCT is formalized in Algorithm 1. For every training triple $(P_i, H_i, Y_i)$ in English, where $P_i = \{w_j^P\}_{j=1}^m$ denotes the word sequence of the premise, $H_i = \{w_j^H\}_{j=1}^n$ the word sequence of the hypothesis, and $Y_i \in \mathcal{Y}$ the index of the NLI label, PCT first constructs a cloze-style question $X_i$ by filling the English template, and then randomly samples a template from other languages to construct an augmented question $X'_i$.", "A template in an arbitrary language is a textual string with three unfilled slots: an input slot [P] to fill the input premise, an input slot [H] to fill the input hypothesis and an answer slot [Z] that allows language models to fill label words.", "[Z] is usually filled by the mask token [MASK] when using pretrained language models.", "For instance, the English template is expressed as <s>[P]</s></s>Question: [H]? Answer: [MASK]</s>, where <s> and </s> are special tokens in XLM-R to separate sentences.", "Algorithm 1 (the training phase of PCT). Require: the number of epochs E and the training set $D = \{(P_i, H_i, Y_i)\}_{i=1}^M$ in English.
1: Reform D to a set of tuples $S = \{(X_i, Y_i)\}_{i=1}^M$ by filling the English template.
2: Extend S to $T = \{(X_i, X'_i, Y_i)\}_{i=1}^M$ by filling a randomly sampled template from other languages for each $(P_i, H_i)$.
3: Divide T into a set of mini-batches B.
4: for epoch from 1 to E do
5: Shuffle B.
6: for each mini-batch $\{(X_i, X'_i, Y_i)\}_{1 \le i \le N}$ in B do
7: Compute total loss L by Eq. (5).
8: Update parameters by gradient descent.
9: end for
10: end for", "The verbalizer $M: \mathcal{Y} \to V$ is a function to map NLI labels to indices of answer words in the given vocabulary.", "Let l denote the size of the given vocabulary and d the dimension of the contextualized representation of a token, output by a pre-trained cross-lingual language model with an MLM layer, such as XLM-R (Conneau et al., 2020).", "The answer probability distribution is calculated by: $y_i = \mathrm{softmax}(W_{lm} h_i^{[MASK]})$ (1), where $W_{lm} \in \mathbb{R}^{l \times d}$ denotes the parameters of the pre-trained MLM layer and $h_i^{[MASK]} \in \mathbb{R}^d$ denotes the contextualized representation of the [MASK] token of the i-th training triple.", "Compared with the standard fine-tuning method, no extra parameters are required to be initialized, therefore the model can be optimized with fewer samples.", "Given a mini-batch $(X_i, X'_i, Y_i)_{1 \le i \le N}$ of N triples, the two cross-entropy losses for the original question and the augmented question are respectively calculated by: $\mathcal{L}_X = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{l} \mathbb{I}(j = M(Y_i)) \log y_{i,j}^{X_i}$ (2) and $\mathcal{L}_{X'} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{l} \mathbb{I}(j = M(Y_i)) \log y_{i,j}^{X'_i}$ (3), where $y_{i,j}^{X_i}$ (resp. $y_{i,j}^{X'_i}$) denotes the j-th element of $y_i \in \mathbb{R}^l$ for the input $X_i$ (resp. the input $X'_i$).",
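To make Eqs. (1)-(5) concrete, here is a numpy sketch of the answer distribution, the two cross-entropy terms and the symmetric KL consistency loss. It is an illustration only: the shapes, the toy verbalizer indices and the random hidden states are made up, and the paper's actual implementation trains a real MLM head (in TensorFlow) on top of XLM-R / INFOXLM.

```python
# Toy end-to-end computation of the PCT loss terms (Eqs. 1-5).
import numpy as np

rng = np.random.default_rng(0)
l, d, N = 8, 4, 3                      # vocab size, hidden size, batch size
W_lm = rng.normal(size=(l, d))         # stand-in for the pre-trained MLM head
M = {0: 5, 1: 2, 2: 7}                 # verbalizer: label index -> vocab index

def answer_distribution(h_mask):
    # Eq. (1): y_i = softmax(W_lm h_i^[MASK]); h_mask has shape (N, d).
    logits = h_mask @ W_lm.T
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y, labels):
    # Eqs. (2)/(3): negative log-probability of the verbalized answer word.
    idx = np.array([M[lab] for lab in labels])
    return -np.mean(np.log(y[np.arange(len(labels)), idx]))

def symmetric_kl(y_a, y_b):
    # Eq. (4): KL(y_a || y_b) + KL(y_b || y_a), averaged over the mini-batch.
    return np.mean(np.sum(y_a * np.log(y_a / y_b)
                          + y_b * np.log(y_b / y_a), axis=1))

labels = [0, 2, 1]
y_orig = answer_distribution(rng.normal(size=(N, d)))  # original questions
y_aug = answer_distribution(rng.normal(size=(N, d)))   # augmented questions
total = (cross_entropy(y_orig, labels) + cross_entropy(y_aug, labels)
         + symmetric_kl(y_orig, y_aug))                # Eq. (5), equal weights
print(total)
```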
"More generally, all cross-lingual templates are obtained from the English template by translating prompt words in English to prompt words in other languages using Google translator (https://translate.google.com/), where for Arabic and Urdu, the prompt part is written from right to left rather than from left to right as in English and other languages.", "We observe that, given the same input premise and hypothesis, the answer probability distribution of the question constructed by a cross-lingual template may evidently deviate from that of the question constructed from the English template.", "Such a deviation may lead to an increase of errors when applying cross-lingual templates to examples in other languages.", "Our ablation study in Section 4 confirms this phenomenon.", "To eliminate the negative effect of this deviation, we propose a consistency loss function to regularize the answer probability distributions.", "More precisely, we employ the symmetric Kullback-Leibler divergence (KLD) loss to enforce the answer probability distributions $y_i^{X_i}$ and $y_i^{X'_i}$ to be as similar as possible, which is formally defined below: $\mathcal{L}_{KLD} = \frac{1}{N} \sum_{i=1}^{N} (\mathrm{KL}(y_i^{X_i} \| y_i^{X'_i}) + \mathrm{KL}(y_i^{X'_i} \| y_i^{X_i})) = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{l} (y_{i,j}^{X_i} \log \frac{y_{i,j}^{X_i}}{y_{i,j}^{X'_i}} + y_{i,j}^{X'_i} \log \frac{y_{i,j}^{X'_i}}{y_{i,j}^{X_i}})$ (4).", "The entire model is trained by minimizing the total loss L formally defined as: $\mathcal{L} = \mathcal{L}_X + \mathcal{L}_{X'} + \mathcal{L}_{KLD}$ (5), where we simply apply the same weight to the three loss terms.", "Since the English template may not conform to the grammar of other languages such as Arabic and Urdu, PCT uses the cross-lingual template in the target language for predicting test examples in the target language.", "For instance, every Chinese test example is reformed to a Chinese cloze-style question by filling the Chinese template <s>[P]</s></s> : [H]?
: [MASK]</s>, which is obtained from the English template by translating prompt words in English to prompt words in Chinese, where the slots [P] and [H] are filled by the premise and the hypothesis in Chinese, respectively.", "By considering that the English label words have been fine-tuned to work for different languages during the training phase, we use the same English verbalizer M for all languages in the inference phase.", "To evaluate the effectiveness of the proposed PCT framework, we applied PCT to enhance several pretrained cross-lingual language models including XLM-R base , XLM-R large and INFOXLM large .", "We call the enhanced models PCT-X, where X denotes the original pre-trained cross-lingual model.", "XNLI : The XNLI (Conneau et al., 2018) benchmark 2 extends the MultiNLI (Williams et al., 2018) benchmark (in English) to 15 languages through translation and comes with manually annotated development set and test set.", "For each language, the training set comprises 393K annotated sentence pairs, whereas the development set and the test set comprises 2.5 K and 5K annotated sentence pairs, respectively.", "PAWS-X : The PAWS-X (Yang et al., 2019) is a cross-lingual paraphrase identification benchmark 3 , which extends the Wikipedia portion of the PAWS (Zhang et al., 2019) dataset to 7 languages through translation.", "For each language, the training set comprises 49.5K annotated sentence pairs, whereas both the development set and the test set comprise 2K annotated sentence pairs each.", "We implemented our enhanced models by Tensor-flow 2.4.0 and trained all the models with 8 TPUs on the Google Colab platform 4 .", "PCT-XLM-R base was initialized by the pretrained XLM-R base model with 12 transformer layers, which outputs 768-dimensional token embed-dings.", "The transformer encoder was built with 12 heads.", "We applied dropout (Srivastava et al., 2014) to each layer by setting the dropout rate to 2 http://www.nyu.edu/projects/bowman/ xnli/ 3 https://github.com/ google-research-datasets/paws 4 https://colab.research.google.com/ 0.1.", "The model was trained by Adam (Kingma and Ba, 2015) with the warmup mechanism (Devlin et al., 2019) and two training epochs, where the initial learning rate was set to 5e-5, the warmup proportion to 10%, and the mini-batch size to 64.", "PCT-XLM large and PCT-INFOXLM large were respectively initialized by the pre-trained XLM-R large and INFOXLM large models with 24 transformer layers, both of which output 1024-dimensional token embeddings.", "The transformer encoder was built with 16 heads.", "The models were trained by RMSProp (Dauphin et al., 2015) with one training epoch, where the initial learning rate was set to 5e-6, the mini-batch size to 32, and the dropout rate to 0.1.", "We used RMSProp instead of Adam for these large models since the training memory is limited by the Google Colab platform.", "For all the above models, the input sentence pairs were truncated to maximum 128 tokens.", "Code and data about our implementations are available at https://github.com/qikunxun/PCT .", "We compared our models with the following pretrained cross-lingual language models: (1) multilingual BERT (mBERT; Devlin et al. (2019)) is a BERT model pre-trained on Wikipedia with 102 languages; (2) XLM (Conneau and Lample, 2019) is pre-trained for two tasks (MLM and TLM) on Wikipedia with 100 languages; (3) XLM-R (Con-neau et al., 2020) extends XLM with larger corpora (i.e. 
the CC-100 corpora with 100 languages) and more training epochs; (4) UNICODER (Huang et al., 2019) continues training XLM by introducing several post-training tasks using parallel corpora; (5) INFOXLM (Chi et al., 2021a) enhances XLM-R by introducing the cross-lingual contrastive learning task using 42 GB parallel corpora; (6) XLM-ALIGN (Chi et al., 2021b) enhances XLM-R by introducing the denoising word alignment pre-training task using several parallel corpora; (7) The work (Dong et al., 2021) proposes an adversarial data augmentation strategy for XNLI based-on XLM-R; (8) UXLA (Bari et al., 2021) extends XLM-R with data augmentation and unsupervised sample selection.", "(9) The work (Zhao and Schtze, 2021) proposes three prompt-learning methods for few-shot XNLI, including DP (direct prompting), SP (soft prompting) and MP (mixed prompting).", "We conducted experiments on both XNLI and PAWS-X under the cross-lingual transfer setting, where models are trained on data in the source language (usually English) and tested on data in the target language.", "This setting is commonly used to evaluate XNLI models.", "It can be further divided into two sub-settings: the full-shot setting using the whole training set, and the few-shot setting using a fixed number of training samples.", "For both XNLI and PAWS-X we evaluated models under the full-shot setting, whereas for XNLI we additionally evaluated models under the few-shot setting.", "Table 1 reports the results for comparing PCT-enhanced models with other models on XNLI under the full-shot setting.", "The results of compared models are taken from (Chi et al., 2021a) and (Liang et al., 2020).", "PCT-XLM-R base achieves 75.3% accuracy on the XNLI test set averaged by 15 target languages, significantly outperforming its basic model XLM-R base by an absolute gain of 1.1% accuracy on average.", "The difference between PCT-XLM-R base and XLM-R base in average accuracy is statistically significant with p-value 1.7e-6 by a two-tailed t-test.", "Meanwhile, PCT-XLM-R base outperforms the three prompt-learning approaches (i.e. DP-XLM-R base , SP-XLM-R base and MP-XLM-R base ) in (Zhao and Schtze, 2021) under the full-shot setting.", "PCT-XLM-R large achieves 81.3% accuracy on the XNLI test set averaged by 15 target languages, pushing XLM-R large by an absolute gain of 0.9% accuracy on average.", "The difference between PCT-XLM-R large and XLM-R large in average accuracy is statistically significant with p-value 2.5e-4 by a two-tailed t-test.", "Furthermore, it can be seen that the average accuracy of PCT-XLM-R large is close to that of the current state-of-the-art model INFOXLM large (i.e. 
81.4%), which is trained with additional data.", "To further verify the effectiveness of PCT, we also applied PCT to INFOXLM large, denoted by PCT-INFOXLM large.", "It can be seen that PCT-INFOXLM large achieves 82.0% accuracy on average, pushing INFOXLM large by an absolute gain of 0.6% on average.", "The difference between PCT-INFOXLM large and INFOXLM large in average accuracy is statistically significant with p-value 7.5e-3 by a two-tailed t-test.", "These results imply that PCT is able to further improve the cross-lingual transferability of state-of-the-art models.", "Table 3: Comparison results on XNLI under the few-shot setting. Columns: en fr es de el bg ru tr ar vi th zh hi sw ur | avg.
K=16: FT 34.7 33.8 33.8 34.3 33.5 33.8 34.1 34.1 33.6 34.0 33.1 33.5 33.1 33.7 33.2 | 33.7; DP 38.2 36.6 36.9 37.5 37.4 37.1 36.5 35.7 35.1 35.8 37.2 37.9 35.9 33.8 34.9 | 36.4; SP 39.5 40.9 39.4 40.2 40.4 40.6 40.6 36.3 38.9 38.5 39.5 37.4 36.9 37.1 35.9 | 38.8; MP 33.2 34.4 34.5 34.0 32.6 33.0 33.9 34.7 32.5 33.3 33.5 35.7 34.3 33.3 32.7 | 33.7; PCT (this work) 46.5 44.3 41.5 36.9 45.7 40.8 42.4 43.7 43.6 44.7 43.9 44.8 44.8 40.1 42.5 | 43.1.
K=32: FT 36.6 36.5 36.0 36.0 36.1 36.3 35.7 35.9 35.8 36.1 35.7 35.7 36.2 35.3 34.8 | 35.9; DP 43.7 43.9 42.8 43.5 42.5 43.5 42.5 42.0 41.8 41.9 40.5 39.9 39.3 37.5 39.8 | 41.7; SP 44.7 42.3 42.3 42.1 42.3 43.4 43.8 38.8 40.3 42.1 40.0 39.6 38.9 37.5 38.8 | 41.1; MP 45.5 44.7 41.2 42.6 42.3 42.2 42.2 41.2 41.0 41.7 40.2 40.9 40.2 36.5 40.5 | 41.5; PCT (this work) 49.6 48.8 45.5 44.4 47.4 45.4 45.5 44.3 45.7 46.7 41.6 45.6 46.7 40.3 42.9 | 45.4.
K=64: FT 41.7 39.5 40.3 40.1 39.9 39.6 38.3 39.5 40.2 40.9 39.2 39.6 39.5 39.6 39.2 | 39.8; DP 48.9 48.0 45.0 48.1 46.9 47.6 44.9 45.7 45.6 47.3 45.7 45.2 41.6 41.0 43.3 | 45.7; SP 49.0 46.1 45.8 46.0 43.7 43.8 44.5 41.9 43.5 45.3 44.7 44.2 40.9 40.5 40.1 | 44.0; MP 51.8 48.3 46.6 48.2 46.8 46.0 44.8 44.8 43.9 48.3 45.0 43.0 40.1 37.8 44.0 | 45.3; PCT (this work) 51.5 51.3 50.9 49.3 50.6 50.2 49.1 47.4 48.1 49.7 47.3 48.2 47.6 44.6 44.0 | 48.6.
K=128: FT 46.9 46.0 45.8 45.6 44.4 45.5 44.9 43.7 43.5 44.8 43.3 44.8 43.0 41.4 41.8 | 44.4; DP 53.7 49.3 48.5 51.0 47.4 50.5 46.9 49.6 46.2 48.9 44.8 49.6 44.8 42.0 44.2 | 48.0; SP 49.5 46.4 45.8 45.0 46.3 46.2 45.0 41.9 44.8 45.0 45.6 45.7 43.3 41.2 41.2 | 44.9; MP 52.6 50.3 49.7 49.0 49.1 48.0 46.4 48.5 46.5 48.2 48.1 50.5 47.0 42.9 44.0 | 48.0; PCT (this work) 55.0 53.3 53.8 52.8 53.4 51.9 51.7 50.9 50.4 51.7 50.0 51.2 51.5 47.0 47.9 | 51.5.
K=256: FT 57.8 55.4 55.9 54.4 54.0 54.6 52.9 52.3 52.1 54.2 51.2 52.1 50.7 50.0 48.6 | 53.1; DP 60.1 54.4 50.6 55.4 55.1 55.6 51.4 50.8 53.2 55.1 53.4 52.7 46.1 45.3 48.4 | 52.5; SP 60.6 55.8 54.8 53.0 53.1 56.0 52.5 52.1 52.3 54.5 54.5 54.6 49.4 47.3 48.5 | 53.3; MP 60.1 55.3 51.6 50.7 54.6 54.0 53.5 51.3 52.8 52.3 53.4 53.8 49.6 45.3 47.2 | 52.4; PCT (this work) 60.3 58.3 58.3 56.3 57.9 56.7 55.2 54.6 54.7 57.4 55.6 55.8 54.6 51.6 52.6 | 56.0.", "Table 2 reports the comparison results on PAWS-X under the full-shot setting.", "The results of compared models are taken from (Hu et al., 2020).", "Since the work (Hu et al., 2020) has not reported the result of XLM-R base, we produced the result of XLM-R base ourselves (denoted by XLM-R base*).", "PCT-XLM-R base achieves 85.4% accuracy on the test set averaged by 7 languages, pushing XLM-R base* by an absolute gain of 1.1% accuracy on average.", "The difference between PCT-XLM-R base and XLM-R base* in average accuracy is statistically significant with p-value 3.2e-3 by a two-tailed t-test.", "PCT-XLM-R large achieves 88.3% average accuracy on the PAWS-X test set, pushing XLM-R large by an absolute gain of 1.9% accuracy on average.",
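The p-values quoted throughout this section come from two-tailed t-tests over per-run average accuracies (the appendix reports five runs per model with standard deviations). The paper does not spell out whether the tests are paired or independent, so the sketch below uses an independent two-sample test on made-up run scores.

```python
# Two-tailed significance test over per-run accuracies (toy numbers).
from scipy.stats import ttest_ind

pct_runs = [85.2, 85.5, 85.3, 85.6, 85.4]   # hypothetical per-run averages
base_runs = [84.2, 84.4, 84.3, 84.1, 84.5]  # hypothetical baseline runs

t_stat, p_value = ttest_ind(pct_runs, base_runs)  # two-sided by default
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.1e}")
```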
"The difference between PCT-XLM-R large and XLM-R large in average accuracy is statistically significant with p-value 3.2e-3 by a two-tailed t-test.", "Table 3 reports the results for comparing PCT-XLM-R base with all approaches proposed in (Zhao and Schütze, 2021).", "Note that all compared models are based on XLM-R base and we evaluated PCT-XLM-R base using the same split of data from (Zhao and Schütze, 2021).", "The training and validation data are randomly sampled by (Zhao and Schütze, 2021) with K ∈ {16, 32, 64, 128, 256} shots per class from the English training data in XNLI.", "Results show that PCT-XLM-R base statistically outperforms all baselines in all experiments.", "In particular, PCT-XLM-R base outperforms the fine-tuning baseline by an absolute gain of 9.4% accuracy on average in the 16-shot experiments.", "It can also be seen that the difference between PCT-XLM-R base and the fine-tuning baseline becomes larger as K decreases, implying that the PCT framework becomes more effective when training data are fewer.", "We also evaluated PCT on XNLI under the TRANSLATE-TRAIN-ALL setting, where all translated data are used in training, to see how well PCT is adapted to this setting.", "We construct an original question from the template of each of the 15 languages and an augmented question from a sampled template of other languages.", "Table 4 reports the comparison results.", "PCT-XLM-R large under this setting achieves significantly better performance than under the cross-lingual transfer setting, but fails to outperform its original model XLM-R large.", "This inferiority may be caused by the relatively low quality of examples in source languages.", "Note that an example in a source language other than English is translated from an English example and may have translation errors.", "As future work, we will go on studying whether using training data in multiple languages helps to improve XNLI by collecting more real-world data in other languages.", "Table 5 reports the ablation study results for PCT-XLM-R base.", "Table 5: Ablation study results for PCT-XLM-R base. Columns: en fr es de el bg ru tr ar vi th zh hi sw ur | avg | p-value.
Original PCT-XLM-R base: 84.9 79.4 79.7 77.7 76.6 78.9 76.9 74.3 72.9 76.0 72.0 74.9 71.7 65.9 67.3 | 75.3 | n/a
(1) W/o the consistency loss: 84.6 79.6 79.5 76.7 76.3 78.1 76.0 73.9 72.1 75.0 72.3 73.9 71.1 63.9 66.8 | 74.7 | 1.2e-3
(2) W/o the PCT framework: 83.9 78.1 78.5 76.1 75.7 77.1 75.3 73.2 71.6 74.7 70.9 73.4 70.2 63.6 65.5 | 73.9 | 1.5e-9
(3) Using cross-lingual templates in (2): 83.9 77.4 78.3 75.6 75.1 76.5 74.8 72.3 70.8 74.3 70.6 72.0 69.7 63.7 65.0 | 73.3 | 3.0e-10
(4) W/o the cross-lingual templates: 84.8 79.5 79.6 77.9 76.4 78.2 76.7 74.2 72.5 76.0 71.9 74.6 71.6 64.8 66.9 | 75.0 | 3.0e-2
(5) Using substitute word templates: 84.6 79.0 79.6 77.1 76.5 77.9 75.9 73.8 72.0 75.5 71.5 73.9 70.6 66.3 65.8 | 74.7 | 4.2e-4", "For the variant (1), we omit the consistency loss in the course of training.", "Results show that the usage of the consistency loss achieves better performance on average.", "For (2), we omit the whole PCT framework in the course of training.", "Results show that the usage of PCT pushes XLM-R base with standard prompt-learning by an absolute gain of 1.4%.", "For (3), we apply the cross-lingual templates to the variant (2).", "Results show that the performance drops about 0.6% on average when applying only the cross-lingual templates.", "For (4), we use only the English template in the inference
phase.", "Results show that PCT-XLM-R base achieves better performance on average when the cross-lingual templates are used in inference.", "For (5), we use the substitute word templates for Arabic and Urdu as for other languages, i.e., the templates for Arabic and Urdu are also left-to-right written.", "Results show that PCT-XLM-R base is able to capture certain language-specific characteristics in the target language to achieve better performance.", "To clarify why the proposed PCT framework improves accuracy in predicting NLI labels, we visually compared the representations of the [MASK]", "token generated by standard prompt-learning based XLM-R base (denoted by PL-XLM-R base ) with that generated by PCT-XLM-R base , by using t-SNE (Laurens and Hinton, 2008) to reduce the dimension.", "The results are shown in Figure 3.", "For the sub-figures", "(a) and", "(d), the points marked with x\", +\" and o correspond to examples with the label entailment, contradiction and neutral, respectively.", "The points with different colors correspond to examples in different languages.", "The figures were obtained by randomly selecting 200 examples for each language from the XNLI test set.", "It can be seen in", "(a) that a group of red points (for Urdu) and purple points (for Arabic) are dissociated while all points from different languages are mutually overlapped in", "(d).", "Considering that the the points from Arabic and Urdu are quite different, we further analyzed them.", "For the sub-figures", "(b),", "(c),", "(e) and", "(f), the points marked with o\" and +\" respectively correspond to examples in English and in either Arabic or Urdu. The points with blue, red and green color correspond to examples with the label entailment, neutral and contradiction, respectively.", "Sub-figures", "(b) and", "(e) (resp.", "sub-figures", "(c) and", "(f)) were obtained by randomly selecting 1000 examples in English and 1000 in Arabic (resp. 
in Urdu) from the XNLI test set.", "Compared with PL-XLM-R base, PCT-XLM-R base yields clearer distinction between different labels and more confusion between English and the target language (Arabic or Urdu).", "These results imply that the PCT framework tends to align contextualized representations in different languages into the same space, which helps to improve the prediction accuracy in the XNLI task.", "We also conducted experiments to show how different strategies for template selection impact the performance.", "The results are reported in Table 6.", "We compared the default uniform strategy with two different selection strategies, where one sets the probabilities for selecting XX directly proportional to and the other inversely proportional to the XX-En BLEU scores, which are taken directly from Table 3 in (Conneau et al., 2018) and can be considered as similarity degrees between the target languages XX and English.", "Results show that the performances of both PCT-XLM-R base and PCT-XLM-R large slightly drop when using the \"directly proportional\" strategy.", "It can also be seen that PCT-XLM-R base with the \"inversely proportional\" strategy achieves the same average accuracy as with the
uniform strategy, while PCT-XLM-R large with the inversely proportional\" strategy is lightly better than with the uniform strategy. This implies that the inversely proportional\" strategy is able to improve the performance by selecting more templates in target languages that are less similar to English. However, the improvements are not significant as p-value > 0.05 by two-tailed t-tests. By considering that XX-En BLEU scores are not available in most practical scenarios, we recommend to use the uniform strategy for template selection. 5 Conclusions In this paper we have proposed a prompt-learning based framework named PCT for cross-lingual natural language inference. PCT enhances pre-trained cross-lingual language models by augmenting data from cross-lingual templates and by introducing the consistency loss to regularize the answer probability distributions. Experimental results on large-scale benchmarks XNLI and PAWS-X show that PCT pushes existing models by a significant absolute gain in accuracy under both the full-shot and few-shot cross-lingual transfer settings. Our ablation study and visualization analysis further confirm the contributions of different enhancements introduced by PCT. Future work will study PCT further under the TRANSLATE-TRAIN-ALL setting with real-world data in different languages. Acknowledgements This paper was supported by the National Natural Science Foundation of China (No. 61976232 and 61876204), Guangdong Basic and Applied Basic Research Foundation (No.2022A1515011355 and 2020A1515010642), Guizhou Science Support Project (No. 2022-259), Humanities and Social Science Research Project of Ministry of Education (18YJCZH006). 1918 References M. Saiful Bari, Tasnim Mohiuddin, and Shafiq R. Joty. 2021. UXLA: A robust unsupervised data augmentation framework for zero-resource cross-lingual NLP. In ACL , pages 19781992. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In NeurIPS , pages 18771901. Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021a. Infoxlm: An information-theoretic framework for cross-lingual language model pre-training. In NAACL-HLT , pages 35763588. Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, Xian-Ling Mao, Heyan Huang, and Furu Wei. 2021b. Improving pretrained cross-lingual language models via self-labeled word alignment. In ACL , pages 3418 3430. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmn, Edouard Grave, Myle Ott, Luke Zettle-moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In ACL , pages 84408451. Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In NeurIPS , pages 70577067. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: evaluating cross-lingual sentence representations. In EMNLP , pages 24752485. Yann N. Dauphin, Harm de Vries, and Yoshua Bengio. 2015. 
Equilibrated adaptive learning rates for non-convex optimization. In NIPS , pages 15041512. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT , pages 41714186. Xin Dong, Yaxin Zhu, Zuohui Fu, Dongkuan Xu, and Gerard de Melo. 2021. Data augmentation with adversarial training for cross-lingual NLI. In ACL , pages 51585167. Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021. PTR: prompt tuning with rules for text classification. CoRR , abs/2105.11259. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation. In ICML , pages 44114421. Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, and Ming Zhou. 2019. Unicoder: A universal language encoder by pretraining with multiple cross-lingual tasks. In EMNLP , pages 24852494. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR . Van der Maaten Laurens and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research , 9(11). Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In ACL , pages 45824597. Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark datasetfor cross-lingual pre-training, understanding and generation. In EMNLP , pages 6008 6018. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. GPT understands, too. CoRR , abs/2103.10385. Kunxun Qi and Jianfeng Du. 2020. Translation-based matching adversarial network for cross-lingual natural language inference. In AAAI , pages 86328639. Timo Schick and Hinrich Schtze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In EACL , pages 255269. Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In EMNLP , pages 42224235. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research (JMLR) , 15(1):19291958. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT , pages 11121122. Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. In EMNLP , pages 36853690. 1919 Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: paraphrase adversaries from word scrambling. In NAACL-HLT , pages 12981308. Mengjie Zhao and Hinrich Schtze. 2021. Discrete and soft prompting for multilingual models. In EMNLP , pages 85478555. A Cross-lingual Templates Here we introduce the cross-lingual templates that we used in our experiments. We used the same English template defined by (Zhao and Schtze, 2021) for XNLI and used the English template defined by (Brown et al., 2020) for PAWS-X. 
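Before the template listing, here is a minimal sketch of PCT's inference step: fill the target-language template, score the masked position once, and read off the label whose English answer word (the shared verbalizer) is most likely. The French template follows the paper's Appendix A (with the mangled "Rponse" restored to "Réponse"); the verbalizer indices and `dummy_scorer` are hypothetical stand-ins for a real tokenizer and the pre-trained model's MLM forward pass.

```python
# Inference with a target-language template and the shared English verbalizer.
import numpy as np

TEMPLATES = {"fr": "<s>{p}</s></s>Question: {h}? Réponse: <mask></s>"}
VERBALIZER = {"entailment": 5, "neutral": 2, "contradiction": 7}  # toy ids

def predict(premise, hypothesis, lang, score_mask_position):
    """Fill the target-language template, get a probability distribution over
    the vocabulary at the masked position, and return the argmax label."""
    question = TEMPLATES[lang].format(p=premise, h=hypothesis)
    vocab_probs = score_mask_position(question)        # shape: (vocab_size,)
    labels = list(VERBALIZER)
    label_ids = np.array([VERBALIZER[lab] for lab in labels])
    return labels[int(np.argmax(vocab_probs[label_ids]))]

rng = np.random.default_rng(1)
dummy_scorer = lambda question: rng.random(10)         # fake MLM output
print(predict("J'espère avoir de tes nouvelles.", "J'attends une réponse",
              "fr", dummy_scorer))
```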
The cross-lingual templates are generated by translating the English template to target languages using Google translator. The cross-lingual templates for XNLI are given in Figure 4. The cross-lingual templates for PAWS-X are given in Figure 5. The slots [P] and [H] are filled by the premise and the hypothesis, respectively. B Results with Standard Deviations Here we report the complete experimental results taken from five runs with standard deviations. The means and the standard deviations are reported in the row avg. and s.d., respectively. For XNLI under the full-shot cross-lingual transfer setting, the experimental results are reported in Table 7, including the results for all five runs achieved by PCT-XLM-R base , PCT-XLM-R large and INFOXLM-R base . For PAWS-X under the full-shot cross-lingual transfer setting, the experimental results are reported in Table 8, including the results for all five runs achieved by PCT-XLM-R base , PCT-XLM-R large and INFOXLM-R base . For XNLI under the few-shot cross-lingual transfer setting, the experimental results with K { 16 , 32 , 64 , 128 , 256 } shots per class are reported in Table 9, including the results for all five runs achieved by PCT-XLM-R base . 1920 Template Language <s>[P]</s></s>Question: [H]? Answer: <mask></s> English (en) <s>[P]</s></s>Question: [H]? Rponse: <mask></s> French (fr) <s>[P]</s></s>Pregunta: [H]? Respuesta: <mask></s> Spanish (es) <s>[P]</s></s>Frage: [H]? Antwort: <mask></s> German (de) <s>[P]</s></s>: [H]? : <mask></s> Greek (el) <s>[P]</s></s>: [H]? : <mask></s> Bulgarian (bg) <s>[P]</s></s>: [H]? : <mask></s> Russian (ru) <s>[P]</s></s><mask> : : [H] : </s> Arabic (ar) <s>[P]</s></s>Soru: [H]? Cevap: <mask></s> Turkish (tr) <s>[P]</s></s>Cu hi: [H]? Tr li: <mask></s> Vietnamese (vi) <s>[P]</s></s> : [H]? : <mask></s> Thai (th) <s>[P]</s></s> : [H]? : <mask></s> Chinese (zh) <s>[P]</s></s> !\" : [H]?" ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "other" ]
[ "Hindi grapheme-to-phoneme (G2P) conversion is mostly trivial, with one exception: whether a schwa represented in the orthography is pronounced or unpronounced (deleted).", "Previous work has attempted to predict schwa deletion in a rule-based fashion using prosodic or phonetic analysis.", "We present the first statistical schwa deletion classifier for Hindi, which relies solely on the orthography as the input and outperforms previous approaches.", "We trained our model on a newly-compiled pronunciation lexicon extracted from various online dictionaries.", "Our best Hindi model achieves state of the art performance, and also achieves good performance on a closely related language, Punjabi, without modification.", "Hindi is written in the Devanagari script, which is an abugida, an orthographic system where the basic unit consists of a consonant and an optional vowel diacritic or a single vowel.", "Devanagari is fairly regular, but a Hindi word's actual pronunciation can differ from what is literally written in the Devanagari script.", "1 For instance, in the Hindi word (cid:112)(cid:3)(cid:112)(cid:114) pep@R@ paper', there are three units (cid:112)(cid:3) pe , (cid:112) p@ , and (cid:114) R@ , corresponding to the pronounced forms [pe] , [p@] , and [r] .", "The second unit's inherent schwa is retained in the pronounced form, but the third unit's inherent schwa is deleted.", "Predicting whether a schwa will be deleted from a word's orthographic form is generally difficult.", "Some reliable rules can be stated, e.g. delete any schwa at the end of the word', but these do not perform well enough for use in an application that requires schwa deletion, like a text-to-speech synthesis system.", "1 Throughout this paper, we will adopt the convention of using angle brackets to describe how a word is literally spelled, and [square brackets] to describe how a word is actually pronounced.", "This work approaches the problem of predicting schwa deletion in Hindi with machine learning techniques, achieving high accuracy with minimal human intervention.", "We also successfully apply our Hindi schwa deletion model to a related language, Punjabi.", "Our scripts for obtaining machine-readable versions of the Hindi and Punjabi pronunciation datasets are published to facilitate future comparisons.", "2 2 Previous Work Previous approaches to schwa deletion in Hindi broadly fall into two classes.", "The first class is characterized by its use of rules given in the formalism of The Sound Pattern of English (Chomsky and Halle, 1968).", "Looking to analyses of schwa deletion produced by linguists (e.g., Ohala, 1983) in this framework, others built schwa deletion systems by implementing their rules.", "For example, this is a rule used by Narasimhan et al. (2004), describing schwa deletion for words like (cid:106)(cid:92)(cid:103)(cid:108)(cid:70) dZ@Ng@li: : C V C C a C V C V C C C V dZ @ N g @ l i: dZ @ N g l i: Paraphrasing, this rule could be read, if a schwa occurs with a vowel and two consonants to its left, and a consonant and a vowel to its right, it should be deleted.", "A typical system of this class would apply many of these rules to reach a word's output form, sometimes along with other information, like the set of allowable consonant clusters in Hindi.", "These systems were able to achieve fair accuracy (Narasimhan et al. 
achieve 89%), but were ill-equipped to deal with cases that seemed to rely on detailed facts about Hindi morphology and prosody.", "2 All of the code, models, and datasets for this research are publicly available at https://github.com/aryamanarora/ schwa-deletion .", "situation such as this, both light syllables together heavy foot. Equally important, the second light cannot combine with the second foot because a foot L H is impossible (see Sect. 2 for more details). are two heavy feet, and the nonfinal foot is the stressed", "Figure 1: A representative example of the linguistic representations used by Tyson and Nagar (2009).", "Proceeding from top to bottom, a prosodic word (PrWd) consists of feet, syllables (which have weights), and syllable templates.", "Similar research has been undertaken in other Indo-Aryan languages that undergo schwa-deletion, albeit to a lesser extent than Hindi.", "Wasala et al. (2006), for example, proposed a rigorous rule-based G2P system for Sinhala.", "since there is a tie between syllable weights 1, Step 4). Within the stressed foot, there are two syllables, so stress falls on the first syllable as indicated frame in the next diagram (Algorithm 1, Step 5): PrWd !! ! \"\" \" F !", "!! \"\" ! light CV ka light CV ra F heavy CVV nA In this word, there is only one unstressed schwa.", "The algorithm deletes it (Algorithm 1, Step 6), and as a result, the consonant [r] is in its own syllable as shown here: 2 Dollar signs denote syllable boundaries, double quotation marks indicate primary lexical stress and uppercase vowels show lengthened vowels.", "Systems of the second class make use of linguistically richer representations of words.", "Typical of this class is the system of Tyson and Nagar (2009), which analyzes each word into a hierarchical phonological representation (see figure 1).", "These same representations had been used in linguistic analyses: Pandey (1990), for instance, as noted by Tyson and Nagar (2009), claimed that schwas in Hindi cannot appear between a strong and weak rhyme 3 within a prosodic foot.", "Systems using prosodic representations perform fairly well, with Tyson and Nagar's (2009) system achieving performance ranging from 86% to 94% but prosody proved not to be a silver bullet; Tyson and Nagar (2009) remark, it appears that schwa deletion is a phenomenon governed by not only prosodic information but by the observance of the phonotactics of consonant clusters.", "Previous work has shown that even with rich linguistic representations of words, it is difficult to discover categorical rules that can predict schwa deletion.", "This led us to approach the problem with machine learning, which we felt would stand a better chance at attaining high performance.", "We obtained training data from digitized dictionaries hosted by the University of Chicago Digital Dictionaries of South Asia project.", "The Hindi data, comprised of the original Devanagari orthography and the phonemic transcription, was parsed out of McGregor (1993) and Bahri (1989) and transcribed into an ASCII format.", "The Punjabi data was similarly processed from Singh (1895).", "Table 1 gives an example entry from the McGregor Hindi dataset.", "There are other approaches to subsets of the schwa-deletion problem.", "One is the diachronic analysis applied by Choudhury et al. 
, "To find all instances of schwa retention and schwa deletion, we force-aligned orthographic and phonemic representations of each dictionary entry using a linear-time algorithm.", "In cases where force-alignment failed due to idiosyncrasies in the source data (typos, OCR errors, etc.), we discarded the entire word.", "We provide statistics about our datasets in Table 2.", "We primarily used the dataset from McGregor in training our Hindi models due to its comprehensiveness and high quality.", "Machine learning had not been applied to schwa deletion in Hindi prior to our work.", "Johny and Jansche (2018) used neural networks to model schwa deletion in Bengali (where, unlike in Hindi, it is not a binary classification problem) and achieved great advances in accuracy.", "We employ a similar approach to Hindi, but go further by applying gradient-boosted decision trees to the problem, which are more easily interpreted in linguistic terms.", "3 The rhyme in Hindi (not pictured in Figure 1) is the part of the syllable that begins with the vowel and includes any consonants that come after the vowel.", "Its weight is determined by vowel length and by whether any consonants appear in it.", "Each schwa instance was an input in our training set.", "The output was a boolean value indicating whether the schwa was retained.", "Our input features were a one-hot encoding of a variable window of phones to the left (c−n, ..., c−1) and right (c+1, ..., c+m) of the schwa instance (c0) under consideration (see the sketch after these sentences).", "The length of the window on either side was treated as a hyperparameter and tuned.", "We also tested whether including phonological features (for vowels: height, backness,", "roundedness, and length; for consonants: voice, aspiration, and place of articulation) of the adjacent graphemes affected the accuracy of the model.", "We trained three models on each dataset: logistic regression from scikit-learn, MLPClassifier (a multilayer perceptron neural network) from scikit-learn, and XGBClassifier (gradient-boosted decision trees) from XGBoost (see the training sketch after the labels below).", "We varied the size of the window of adjacent phonemes and trained with and without the phonological feature data.", "Table 3 tabulates the performance of our various models.", "We obtained a maximum of 98.00% accuracy on all schwa instances in our test set from the McGregor dataset with gradient-boosted decision trees from XGBoost.", "We used a window of 5 phonemes to the left and right of the schwa instance, phonological features, 200 estimators, and a maximum tree depth of 11.", "Any model with at least 200 estimators and a depth of at least 5 obtains comparable accuracy, but accuracy gradually degrades as the number of estimators increases further, due to overfitting.", "Without the phonological feature data, the model consistently achieves a slightly lower accuracy of 97.93%.", "Logistic regression with the same features achieved 97.19% accuracy.", "An MLP classifier with a single hidden layer of 250 neurons and a learning rate of 10^-4 achieved 97.83% accuracy.", "On the Singh dataset for Punjabi, the same XGBoost model (without phonological features) achieved 94.66% accuracy.", "This shows the extensibility of our system to other Indo-Aryan languages that undergo schwa deletion.", "We were unable to obtain evaluation datasets or code from previous work (Narasimhan et al., 2004; Tyson and Nagar, 2009) for a direct comparison of our system with previous ones."
, "However, we were able to port and test the Hindi transliteration code written in Lua used by Wiktionary (2018), an online freely editable dictionary operated by the Wikimedia Foundation, the parent of Wikipedia.", "That system obtains 94.94% word-level accuracy on the McGregor dataset, which we outperform consistently.", "Our system achieved higher performance than any other.", "The schwa instances which our model did not correctly predict tended to fall into two classes: borrowings from Persian, Arabic, or European languages, and compounds of native or Sanskrit-borrowed morphemes.", "Of the 150 Hindi words from our McGregor test set for which our best model incorrectly predicted schwa deletion, we sampled 20 instances and tabulated their source languages.", "10 were native Indo-Aryan terms descended through the direct ancestors of Hindi, 4 were learned Sanskrit borrowings, 5 were Perso-Arabic borrowings, and 1 was a Dravidian borrowing.", "9 were composed of multiple morphemes.", "Borrowings are overrepresented relative to the baseline rate for Hindi; in one frequency list, only 8 of the 1,000 top words in Hindi were of Perso-Arabic origin (Ghatage 1964).", "Notably, some of the Perso-Arabic borrowings that the model failed on actually reflect colloquial pronunciation; e.g. अमन ⟨@m@n@⟩ is [@mn] in McGregor, yet our model predicts [@m@n], which is standard in most speech.", "We qualitatively analyzed our system to investigate what kind of linguistic representations it seemed to be learning.", "To do this, we inspected several decision trees generated in our model, and found that our system was learning both prosodic and phonotactic information. (4 We were able to obtain code from Roy (2017) but were unable to run it on our machines.)", "Some trees very clearly encoded phonotactic information.", "One tree we examined had a subtree that could be paraphrased like so, where c_n indicates the phone n characters away from the schwa being considered: if c+1 is beyond the end of the word, and c−2 is not beyond the beginning of the word, and c−2 is a ⟨t⟩, then if c−1 is a ⟨j⟩, penalize deleting this schwa; 5 otherwise, if c−1 is not a ⟨j⟩, prefer deleting this schwa.", "Put another way, this subtree penalizes deleting a schwa if it comes at the end of a word, the preceding two characters are exactly ⟨tj⟩, and the word extends beyond the preceding two characters.", "This is just the kind of phonetic rule that systems like Narasimhan et al. (2004) were using.", "The extent to which our system encodes prosodic information was less clear.", "Our features were phonetic, not prosodic, but some prosodic information can be captured, to a degree, in phonetic terms.", "Take, for instance, this subtree that we found in our model, paraphrasing as before: if c−3 is beyond the beginning of the word, and c−2 is ⟨a:⟩, then if c+2 is ⟨@⟩, prefer deletion; otherwise, if c+2 is not ⟨@⟩, penalize deletion.", "Consider this rule as it would apply to the first schwa in the Hindi word आमदनी ⟨a:m@d@ni:⟩, aligning the phones to offsets around that schwa: a: = −2, m = −1, @ = 0, d = +1, @ = +2, n = +3, i: = +4, with offset −3 falling beyond the beginning of the word. The rule decides that deleting the first schwa should be penalized, and it decided this by using criteria that entail that the preceding rhyme is heavy and the following rhyme is light.", "Obviously, though, this same rule would not work for other heavy and light syllables: if any of the vowels had been different, or at different offsets, a non-deletion rather than a deletion would have been preferred, which is not what the rule ought to do if it is emulating the prosodic generalization.", "It is expected that our model is only able to capture ungeneralized, low-level patterns like this, since it lacks the symbolic vocabulary to capture elegant linguistic generalizations. (5 'Penalize deleting' and not 'delete', because this tree only contributes towards the final decision, along with all the other trees.)", "(6 Actually, this is not exactly true, since if the following syllable had any consonants in the rhyme, it would become heavy, even if there were a schwa present.", "But this is an error that could be corrected by other decision trees.)", "It is perhaps surprising that our system is able to achieve the performance it does even with this limitation.", "In future work, it would be interesting to give our system more directly prosodic representations, like the moraic weights of the surrounding syllables and syllabic stress.", "Another limitation of our system is that it assumes all schwas are phonologically alike, which may not be the case.", "While most schwas are at all times either pronounced or deleted, there are less determinate cases where a schwa might or might not be deleted according to sociolinguistic and other factors.", "McGregor (1993, p. xi) calls these 'weakened' schwas, describing them as 'weakened by Hindi speakers in many phonetic contexts, and dropped in others', and orthographically indicating them with a breve.", "For example, सत्य is transcribed ⟨satyă⟩.", "Our best model correctly classified 80.4% of the weakened schwas present in our test set taken from McGregor.", "Improving our performance on this class of schwas may require us to treat them differently from other schwas.", "Further research is needed on the nature of weakened schwas.", "We have presented the first statistical schwa deletion classifier for Hindi, which achieves state-of-the-art performance.", "Our system requires no hard-coded phonological rules, instead relying solely on pairs of orthographic and phonetic forms for Hindi words at training time.", "Furthermore, this research presents the first schwa-deletion model for Punjabi, and has contributed several freely accessible scripts for scraping Hindi and Punjabi pronunciation data from online sources." ]
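As a concrete reading of the feature scheme described in the record above (a one-hot window of phones on each side of the schwa, with the window length tuned as a hyperparameter), here is a small Python sketch. The phone inventory, the PAD sentinel, and the window_features name are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch: windowed one-hot features around a schwa instance c0.
# Positions beyond the word edges map to an explicit PAD symbol, so a
# classifier can condition on word boundaries, as the paraphrased trees do.

PHONES = ["@", "a:", "i:", "e", "d", "g", "j", "l", "m", "n", "p", "r", "t", "N"]
PAD = "<#>"                      # sentinel for out-of-word positions (assumed)
VOCAB = PHONES + [PAD]

def window_features(phones, schwa_idx, n=5, m=5):
    """One-hot encode c_-n..c_-1 and c_+1..c_+m around c_0 = phones[schwa_idx]."""
    vec = []
    for off in list(range(-n, 0)) + list(range(1, m + 1)):
        i = schwa_idx + off
        phone = phones[i] if 0 <= i < len(phones) else PAD
        vec.extend(1 if v == phone else 0 for v in VOCAB)
    return vec

# The first schwa of a:m@d@ni: sits at index 2 of its phone sequence.
feats = window_features(["a:", "m", "@", "d", "@", "n", "i:"], 2)
assert len(feats) == 10 * len(VOCAB)
```

With n = m = 5 and this toy inventory, each schwa instance maps to a 150-dimensional binary vector; phonological features (height, backness, voicing, and so on) would be appended analogously.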
[ "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain" ]